Futures and promises
In computer science, futures, promises, delays, and deferreds are constructs used for synchronizing program execution in some concurrent programming languages. Each is an object that acts as a proxy for a result that is initially unknown, usually because the computation of its value is not yet complete.
The term promise was proposed in 1976 by Daniel P. Friedman and David Wise,[1] while Peter Hibbard called the concept eventual.[2] The somewhat similar concept of a future was introduced in 1977 in a paper by Henry Baker and Carl Hewitt.[3]
The terms future, promise, delay, and deferred are often used interchangeably, although some differences in usage between future and promise are treated below. Specifically, when usage is distinguished, a future is a read-only placeholder view of a variable, while a promise is a writable, single assignment container which sets the value of the future. Notably, a future may be defined without specifying which specific promise will set its value, and different possible promises may set the value of a given future, though this can be done only once for a given future. In other cases a future and a promise are created together and associated with each other: the future is the value, the promise is the function that sets the value – essentially the return value (future) of an asynchronous function (promise). Setting the value of a future is also called resolving, fulfilling, or binding it.
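When usage is not distinguished, a single object can play both roles. Python's concurrent.futures.Future (one of the implementations listed later in this article) illustrates the split described above: set_result acts as the promise (writable, single-assignment) side, and result as the future (read-only placeholder) side. A minimal sketch:

```python
import threading
from concurrent.futures import Future, InvalidStateError

fut = Future()                      # one object plays both roles

def producer():
    fut.set_result(42)              # "promise" side: resolve exactly once

threading.Thread(target=producer).start()
print(fut.result())                 # "future" side: blocks until resolved, then 42

try:
    fut.set_result(99)              # single assignment: a second resolution fails
except InvalidStateError:
    print("already resolved")
```

The single-assignment rule corresponds to "this can be done only once for a given future" above: once resolved, the future's value is fixed.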
Applications
Futures and promises originated in functional programming and related paradigms (such as logic programming) to decouple a value (a future) from how it was computed (a promise), allowing the computation to be done more flexibly, notably by parallelizing it. Later, they found use in distributed computing, where they reduce the latency of communication round trips. Later still, they gained further use by allowing asynchronous programs to be written in direct style, rather than in continuation-passing style.
Implicit vs. explicit
Use of futures may be implicit (any use of the future automatically obtains its value, as if it were an ordinary reference) or explicit (the user must call a function to obtain the value, such as the get method of java.util.concurrent.Future in Java). Obtaining the value of an explicit future can be called stinging or forcing. Explicit futures can be implemented as a library, whereas implicit futures are usually implemented as part of the language.
The original Baker and Hewitt paper described implicit futures, which are naturally supported in the actor model of computation and pure object-oriented programming languages like Smalltalk. The Friedman and Wise paper described only explicit futures, probably reflecting the difficulty of efficiently implementing implicit futures on stock hardware. The difficulty is that stock hardware does not deal with futures for primitive data types like integers. For example, an add instruction does not know how to deal with 3 + future factorial(100000). In pure actor or object languages this problem can be solved by sending future factorial(100000) the message +[3], which asks the future to add 3 to itself and return the result. Note that the message passing approach works regardless of when factorial(100000) finishes computation and that no stinging/forcing is needed.
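The forcing step can be seen with Python's explicit futures (concurrent.futures, listed among the implementations later in this article): the add operator cannot consume a future directly, so the programmer must force it with result() first. A sketch, with factorial(10) standing in for the large computation:

```python
from concurrent.futures import ThreadPoolExecutor

def factorial(n):
    result = 1
    for i in range(2, n + 1):
        result *= i
    return result

with ThreadPoolExecutor() as pool:
    fut = pool.submit(factorial, 10)   # explicit future for the result
    # 3 + fut would raise TypeError: stock operators do not understand
    # futures, so the value must be forced (stung) explicitly first.
    total = 3 + fut.result()           # result() is the forcing operation
print(total)   # 3628803
```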
Promise pipelining
The use of futures can dramatically reduce latency in distributed systems. For instance, futures enable promise pipelining,[4][5] as implemented in the languages E and Joule, which was also called call-stream[6] in the language Argus.
Consider an expression involving conventional remote procedure calls, such as:
t3 := ( x.a() ).c( y.b() )
which could be expanded to
t1 := x.a(); t2 := y.b(); t3 := t1.c(t2);
Each statement needs a message to be sent and a reply received before the next statement can proceed. Suppose, for example, that x, y, t1, and t2 are all located on the same remote machine. In this case, two complete network round-trips to that machine must take place before the third statement can begin to execute. The third statement will then cause yet another round-trip to the same remote machine.
Using futures, the above expression could be written
t3 := (x <- a()) <- c(y <- b())
which could be expanded to
t1 := x <- a(); t2 := y <- b(); t3 := t1 <- c(t2);
The syntax used here is that of the language E, where x <- a() means to send the message a() asynchronously to x. All three variables are immediately assigned futures for their results, and execution proceeds to subsequent statements. Later attempts to resolve the value of t3 may cause a delay; however, pipelining can reduce the number of round-trips needed. If, as in the prior example, x, y, t1, and t2 are all located on the same remote machine, a pipelined implementation can compute t3 with one round-trip instead of three. Because all three messages are destined for objects which are on the same remote machine, only one request need be sent and only one response need be received containing the result. The send t1 <- c(t2) would not block even if t1 and t2 were on different machines to each other, or to x or y.
Promise pipelining should be distinguished from parallel asynchronous message passing. In a system supporting parallel message passing but not pipelining, the message sends x <- a() and y <- b() in the above example could proceed in parallel, but the send of t1 <- c(t2) would have to wait until both t1 and t2 had been received, even when x, y, t1, and t2 are on the same remote machine. The relative latency advantage of pipelining becomes even greater in more complicated situations involving many messages.
Promise pipelining also should not be confused with pipelined message processing in actor systems, where it is possible for an actor to specify and begin executing a behaviour for the next message before having completed processing of the current message.
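The round-trip saving can be simulated in ordinary code. The sketch below uses hypothetical RemoteRef and Machine classes (not a real RPC library): pipelined sends are queued and the whole batch is resolved in one simulated round trip, mirroring the E example above.

```python
class RemoteRef:
    """A pipelined reference to an object on a simulated remote machine.
    send() returns a new RemoteRef (a future) immediately; nothing executes
    until the machine flushes, which models one request/response round trip."""
    def __init__(self, machine, obj=None):
        self.machine = machine
        self.obj = obj          # resolved value, once known
        self.pending = None     # (target ref, method name, argument refs)

    def send(self, method, *args):
        result = RemoteRef(self.machine)
        result.pending = (self, method, args)
        self.machine.queue.append(result)
        return result           # a future for the eventual result

class Machine:
    def __init__(self):
        self.queue = []
        self.round_trips = 0

    def flush(self, ref):
        self.round_trips += 1   # one round trip covers every queued send
        for r in self.queue:    # earlier results resolve before later uses
            target, method, args = r.pending
            r.obj = getattr(target.obj, method)(*(a.obj for a in args))
        self.queue.clear()
        return ref.obj

class Num:                      # a stand-in remote object
    def __init__(self, v): self.v = v
    def a(self): return Num(self.v + 1)
    def b(self): return Num(self.v * 2)
    def c(self, other): return Num(self.v + other.v)

m = Machine()
x, y = RemoteRef(m, Num(1)), RemoteRef(m, Num(2))
t1 = x.send("a")                # t1 := x <- a()
t2 = y.send("b")                # t2 := y <- b()
t3 = t1.send("c", t2)           # t3 := t1 <- c(t2), sent before t1/t2 resolve
result = m.flush(t3)
print(result.v, m.round_trips)  # 6 1
```

All three messages travel together, so the counter records a single round trip where the unpipelined expansion needed three.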
Read-only views
In some programming languages such as Oz, E, and AmbientTalk, it is possible to obtain a read-only view of a future, which allows reading its value when resolved, but does not permit resolving it:
- In Oz, the !! operator is used to obtain a read-only view.
- In E and AmbientTalk, a future is represented by a pair of values called a promise/resolver pair. The promise represents the read-only view, and the resolver is needed to set the future's value.
- In C++ (since C++11), a std::future provides a read-only view. The value is set directly by using a std::promise, or set to the result of a function call using std::packaged_task or std::async.
- In the Dojo Toolkit's Deferred API as of version 1.5, a consumer-only promise object represents a read-only view.[7]
- In Alice ML, futures provide a read-only view, whereas a promise contains both a future and the ability to resolve the future.[8][9]
- In .NET, System.Threading.Tasks.Task<T> represents a read-only view. Resolving the value can be done via System.Threading.Tasks.TaskCompletionSource<T>.
Support for read-only views is consistent with the principle of least privilege, since it enables the ability to set the value to be restricted to subjects that need to set it. In a system that also supports pipelining, the sender of an asynchronous message (with result) receives the read-only promise for the result, and the target of the message receives the resolver.
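A promise/resolver pair in the E/AmbientTalk style can be sketched in Python (the make_promise helper is hypothetical): the consumer receives only the read side, and the resolver capability goes to whoever computes the value, in line with the principle of least privilege.

```python
import threading

def make_promise():
    """Return a (read-only view, resolver) pair for a single value (a sketch)."""
    event = threading.Event()
    box = []                            # holds the value once resolved

    class ReadOnlyView:
        def get(self, timeout=None):
            if not event.wait(timeout):
                raise TimeoutError("not yet resolved")
            return box[0]

    def resolver(value):
        if event.is_set():
            raise RuntimeError("already resolved")
        box.append(value)
        event.set()

    return ReadOnlyView(), resolver

view, resolve = make_promise()
resolve("done")          # only the holder of `resolve` can set the value
print(view.get())        # "done"; `view` alone cannot resolve anything
```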
Thread-specific futures
Some languages, such as Alice ML, define futures that are associated with a specific thread that computes the future's value.[9] This computation can start either eagerly when the future is created, or lazily when its value is first needed. A lazy future is similar to a thunk, in the sense of a delayed computation.
Alice ML also supports futures that can be resolved by any thread, and calls these promises.[8] This use of promise is different from its use in E as described above. In Alice, a promise is not a read-only view, and promise pipelining is unsupported. Instead, pipelining naturally happens for futures, including ones associated with promises.
Blocking vs non-blocking semantics
If the value of a future is accessed asynchronously, for example by sending a message to it, or by explicitly waiting for it using a construct such as when in E, then there is no difficulty in delaying until the future is resolved before the message can be received or the wait completes. This is the only case to be considered in purely asynchronous systems such as pure actor languages.
However, in some systems it may also be possible to attempt to immediately or synchronously access a future's value. Then there is a design choice to be made:
- the access could block the current thread or process until the future is resolved (possibly with a timeout). This is the semantics of dataflow variables in the language Oz.
- the attempted synchronous access could always signal an error, for example throwing an exception. This is the semantics of remote promises in E.[10]
- potentially, the access could succeed if the future is already resolved, but signal an error if it is not. This would have the disadvantage of introducing nondeterminism and the potential for race conditions, and seems to be an uncommon design choice.
As an example of the first possibility, in C++11, a thread that needs the value of a future can block until it is available by calling the wait() or get() member functions. A timeout can also be specified on the wait using the wait_for() or wait_until() member functions to avoid indefinite blocking. If the future arose from a call to std::async then a blocking wait (without a timeout) may cause synchronous invocation of the function to compute the result on the waiting thread.
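Python's futures offer the same choice as the C++ calls above: result() with no argument blocks indefinitely, while result(timeout=...) bounds the wait. A sketch (the timings are illustrative):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def slow():
    time.sleep(0.5)
    return "ready"

with ThreadPoolExecutor() as pool:
    fut = pool.submit(slow)
    try:
        fut.result(timeout=0.05)      # bounded blocking wait
    except FutureTimeout:
        print("not resolved yet")     # the timeout avoids blocking forever
    value = fut.result()              # unbounded wait: blocks until resolved
print(value)   # ready
```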
Related constructs
Futures are a particular case of the synchronization primitive "events," which can be completed only once. In general, events can be reset to initial empty state and, thus, completed as many times as desired.[11]
An I-var (as in the language Id) is a future with blocking semantics as defined above. An I-structure is a data structure containing I-vars. A related synchronization construct that can be set multiple times with different values is called an M-var. M-vars support atomic operations to take or put the current value, where taking the value also sets the M-var back to its initial empty state.[12]
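An M-var's take/put semantics can be sketched with a condition variable (the MVar class here is a hypothetical illustration): take empties the container and blocks while it is empty; put fills it and blocks while it is full.

```python
import threading

class MVar:
    """Sketch of an M-var: can be set (put) and emptied (take) repeatedly."""
    def __init__(self):
        self._cond = threading.Condition()
        self._full = False
        self._value = None

    def put(self, value):
        with self._cond:
            while self._full:                       # block while full
                self._cond.wait()
            self._value, self._full = value, True
            self._cond.notify_all()

    def take(self):
        with self._cond:
            while not self._full:                   # block while empty
                self._cond.wait()
            value = self._value
            self._value, self._full = None, False   # atomically back to empty
            self._cond.notify_all()
            return value

m = MVar()
m.put(1)
first = m.take()    # 1; the M-var is empty again
m.put(2)            # so it can be filled anew, unlike a single-assignment I-var
second = m.take()   # 2
print(first, second)
```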
A concurrent logic variable [citation needed] is similar to a future, but is updated by unification, in the same way as logic variables in logic programming. Thus it can be bound more than once to unifiable values, but cannot be set back to an empty or unresolved state. The dataflow variables of Oz act as concurrent logic variables, and also have blocking semantics as mentioned above.
A concurrent constraint variable is a generalization of concurrent logic variables to support constraint logic programming: the constraint may be narrowed multiple times, indicating smaller sets of possible values. Typically there is a way to specify a thunk that should run whenever the constraint is narrowed further; this is needed to support constraint propagation.
Relations between the expressiveness of different forms of future
Eager thread-specific futures can be straightforwardly implemented in non-thread-specific futures, by creating a thread to calculate the value at the same time as creating the future. In this case it is desirable to return a read-only view to the client, so that only the newly created thread is able to resolve this future.
Implementing implicit lazy thread-specific futures (as provided by Alice ML, for example) in terms of non-thread-specific futures requires a mechanism to determine when the future's value is first needed (for example, the WaitNeeded construct in Oz[13]). If all values are objects, then the ability to implement transparent forwarding objects is sufficient, since the first message sent to the forwarder indicates that the future's value is needed.
Non-thread-specific futures can be implemented in thread-specific futures, assuming that the system supports message passing, by having the resolving thread send a message to the future's own thread. However, this can be viewed as unneeded complexity. In programming languages based on threads, the most expressive approach seems to be to provide a mix of non-thread-specific futures, read-only views, and either a WaitNeeded construct, or support for transparent forwarding.
Evaluation strategy
The evaluation strategy of futures, which may be termed call by future, is non-deterministic: the value of a future will be evaluated at some time between when the future is created and when its value is used, but the precise time is not determined beforehand and can change from run to run. The computation can start as soon as the future is created (eager evaluation) or only when the value is actually needed (lazy evaluation), and may be suspended part-way through, or executed in one run. Once the value of a future is assigned, it is not recomputed on future accesses; this is like the memoization used in call by need.
A lazy future is a future that deterministically has lazy evaluation semantics: the computation of the future's value starts when the value is first needed, as in call by need. Lazy futures are of use in languages whose evaluation strategy is not lazy by default. For example, in C++11 such lazy futures can be created by passing the std::launch::deferred launch policy to std::async, along with the function to compute the value.
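A lazy future reduces to a memoized thunk. The sketch below (a hypothetical LazyFuture class, not a standard API) runs the computation only on the first force and caches the result, as in call by need:

```python
class LazyFuture:
    """Sketch of a lazy future: a thunk with memoization."""
    _UNSET = object()

    def __init__(self, thunk):
        self._thunk = thunk
        self._value = LazyFuture._UNSET

    def force(self):
        if self._value is LazyFuture._UNSET:
            self._value = self._thunk()     # runs at most once (call by need)
        return self._value

calls = []
lf = LazyFuture(lambda: calls.append("run") or 21 * 2)
print(len(calls))   # 0: nothing computed yet
print(lf.force())   # 42: computed on first access
print(lf.force())   # 42 again: memoized, the thunk is not re-run
print(len(calls))   # 1
```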
Semantics of futures in the actor model
In the actor model, an expression of the form future <Expression> is defined by how it responds to an Eval message with environment E and customer C as follows: The future expression responds to the Eval message by sending the customer C a newly created actor F (the proxy for the response of evaluating <Expression>) as a return value concurrently with sending <Expression> an Eval message with environment E and customer C. The default behavior of F is as follows:
- When F receives a request R, it checks whether it has already received a response (either a return value or a thrown exception) from evaluating <Expression>, proceeding as follows:
  - If it already has a response V, then
    - If V is a return value, then it is sent the request R.
    - If V is an exception, then it is thrown to the customer of the request R.
  - If it does not already have a response, then R is stored in the queue of requests inside F.
- When F receives the response V from evaluating <Expression>, then V is stored in F, and
  - If V is a return value, then all of the queued requests are sent to V.
  - If V is an exception, then it is thrown to the customer of each of the queued requests.
However, some futures can deal with requests in special ways to provide greater parallelism. For example, the expression 1 + future factorial(n) can create a new future that will behave like the number 1+factorial(n). This trick does not always work. For example, the following conditional expression:
if m>future factorial(n) then print("bigger") else print("smaller")
suspends until the future for factorial(n) has responded to the request asking if m is greater than itself.
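The queuing behaviour of F described above can be sketched in Python (the ActorFuture class and its reply callbacks are hypothetical stand-ins for actor message sends): requests arriving before the response are queued; once the response arrives, queued requests are forwarded to the value, or the stored exception is delivered to each customer.

```python
class ActorFuture:
    """Sketch of the actor-model future behaviour described above."""
    def __init__(self):
        self._response = None
        self._resolved = False
        self._queue = []   # pending (request, reply callback) pairs

    def request(self, method, reply):
        if self._resolved:
            self._deliver(method, reply)       # respond immediately
        else:
            self._queue.append((method, reply))  # queue until resolved

    def resolve(self, response):
        self._response = response
        self._resolved = True
        for method, reply in self._queue:      # drain the queued requests
            self._deliver(method, reply)
        self._queue.clear()

    def _deliver(self, method, reply):
        if isinstance(self._response, Exception):
            reply(self._response)                     # throw to the customer
        else:
            reply(getattr(self._response, method)())  # forward request to V

out = []
f = ActorFuture()
f.request("bit_length", out.append)   # queued: no response yet
f.resolve(1024)                       # V arrives; 1024.bit_length() == 11
print(out)                            # [11]
```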
History
The future and/or promise constructs were first implemented in programming languages such as MultiLisp and Act 1. The use of logic variables for communication in concurrent logic programming languages was quite similar to futures. These began in Prolog with Freeze and IC Prolog, and became a true concurrency primitive with Relational Language, Concurrent Prolog, guarded Horn clauses (GHC), Parlog, Strand, Vulcan, Janus, Oz-Mozart, Flow Java, and Alice ML. The single-assignment I-var from dataflow programming languages, originating in Id and included in Reppy's Concurrent ML, is much like the concurrent logic variable.
The promise pipelining technique (using futures to overcome latency) was invented by Barbara Liskov and Liuba Shrira in 1988,[6] and independently by Mark S. Miller, Dean Tribble and Rob Jellinghaus in the context of Project Xanadu circa 1989.[14]
The term promise was coined by Liskov and Shrira, although they referred to the pipelining mechanism by the name call-stream, which is now rarely used.
Both the design described in Liskov and Shrira's paper, and the implementation of promise pipelining in Xanadu, had the limit that promise values were not first-class: an argument to, or the value returned by a call or send could not directly be a promise (so the example of promise pipelining given earlier, which uses a promise for the result of one send as an argument to another, would not have been directly expressible in the call-stream design or in the Xanadu implementation). It seems that promises and call-streams were never implemented in any public release of Argus,[15] the programming language used in the Liskov and Shrira paper. Argus development stopped around 1988.[16] The Xanadu implementation of promise pipelining only became publicly available with the release of the source code for Udanax Gold[17] in 1999, and was never explained in any published document.[18] The later implementations in Joule and E support fully first-class promises and resolvers.
Several early actor languages, including the Act series,[19][20] supported both parallel message passing and pipelined message processing, but not promise pipelining. (Although it is technically possible to implement the last of these features in the first two, there is no evidence that the Act languages did so.)
After 2000, a major revival of interest in futures and promises occurred, due to their use in responsiveness of user interfaces, and in web development, due to the request–response model of message-passing. Several mainstream languages now have language support for futures and promises, most notably popularized by FutureTask in Java 5 (announced 2004)[21] and the async/await constructions in .NET 4.5 (announced 2010, released 2012)[22][23] largely inspired by the asynchronous workflows of F#,[24] which dates to 2007.[25] This has subsequently been adopted by other languages, notably Dart (2014),[26] Python (2015),[27] Hack (HHVM), and drafts of ECMAScript 7 (JavaScript), Scala, and C++ (2011).
List of implementations
Some programming languages support futures, promises, concurrent logic variables, dataflow variables, or I-vars, either through direct language support or in the standard library.
List of concepts related to futures and promises by programming language
- ABCL/f[28]
- Alice ML
- AmbientTalk (including first-class resolvers and read-only promises)
- C++, starting with C++11, via std::future and std::promise
- Compositional C++
- Crystal (programming language)
- Dart (with Future/Completer classes[29] and the keywords await and async[26])
- Elm (programming language) via the Task module[30]
- Glasgow Haskell (I-vars and M-vars only)
- Id (I-vars and M-vars only)
- Io[31]
- Java via java.util.concurrent.Future or java.util.concurrent.CompletableFuture
- JavaScript as of ECMAScript 2015,[32] and via the keywords async and await since ECMAScript 2017[33]
- Lucid (dataflow only)
- Some Lisps
- .NET via System.Threading.Tasks.Task
- Kotlin, although kotlin.native.concurrent.Future is usually used only when writing Kotlin intended to run natively[35]
- Nim
- Oxygene
- Oz version 3[36]
- Python concurrent.futures, since 3.2,[37] as proposed by PEP 3148, with async and await added in Python 3.5[38]
- R (promises for lazy evaluation, still single threaded)
- Racket[39]
- Raku[40]
- Rust (future as std::future::Future, with the value obtained via .await)[41]
- Scala via the scala.concurrent package
- Scheme
- Squeak Smalltalk
- Strand
- Swift (only via third-party libraries)
- Visual Basic[clarification needed] 11 (via the keywords Async and Await)[23]
Languages also supporting promise pipelining include:
List of library-based implementations of futures
- For Common Lisp:
- For C++:
- For C# and other .NET languages: The Parallel Extensions library
- For Groovy: GPars[54]
- For JavaScript:
- Cujo.js'[55] when.js[56] provides promises conforming to the Promises/A+[57] 1.1 specification
- The Dojo Toolkit supplies promises[58] and Twisted style deferreds
- MochiKit[59] inspired by Twisted's Deferreds
- jQuery's Deferred Object is based on the CommonJS Promises/A design.
- AngularJS[60]
- node-promise[61]
- Q, by Kris Kowal, conforms to Promises/A+ 1.1[62]
- RSVP.js, conforms to Promises/A+ 1.1[63]
- YUI's[64] promise class[65] conforms to the Promises/A+ 1.0 specification.
- Bluebird, by Petka Antonov[66]
- The Closure Library's promise package conforms to the Promises/A+ specification.
- See Promise/A+'s list for more implementations based on the Promise/A+ design.
- For Java:
- For Lua:
- The cqueues module contains a Promise API.
- For Objective-C: MAFuture,[69][70] RXPromise,[71] ObjC-CollapsingFutures,[72] PromiseKit,[73] objc-promise,[74] OAPromise,[75]
- For OCaml: Lazy module implements lazy explicit futures[76]
- For Perl: Future,[77] Promises,[78] Reflex,[79] Promise::ES6,[80] and Promise::XS[81]
- For PHP: React/Promise[82]
- For Python:
- For R:
- For Ruby:
- For Rust:
- futures-rs[93]
- For Scala:
- For Swift:
- Async framework, implements C#-style async/non-blocking await[95]
- FutureKit,[96] implements a version for Apple GCD[97]
- FutureLib, pure Swift 2 library implementing Scala-style futures and promises with TPL-style cancellation[98]
- Deferred, pure Swift library inspired by OCaml's Deferred[99]
- BrightFutures[100]
- SwiftCoroutine[101]
- For Tcl: tcl-promise[102]
Coroutines
Futures can be implemented in coroutines[27] or generators,[103] resulting in the same evaluation strategy (e.g., cooperative multitasking or lazy evaluation).
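For instance, Python's asyncio (cited above) drives its futures with coroutines and cooperative multitasking; awaiting a task suspends the coroutine until the underlying future resolves:

```python
import asyncio

async def compute():
    await asyncio.sleep(0)                  # cooperatively yield to the event loop
    return 42

async def main():
    task = asyncio.create_task(compute())   # a future driven by the event loop
    return await task                       # awaiting forces the value

result = asyncio.run(main())
print(result)   # 42
```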
Channels
Futures can easily be implemented in channels: a future is a one-element channel, and a promise is a process that sends to the channel, fulfilling the future.[104][105] This allows futures to be implemented in concurrent programming languages with support for channels, such as CSP and Go. The resulting futures are explicit, as they must be accessed by reading from the channel, rather than only by evaluation.
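A sketch in Python, using queue.Queue as a stand-in for a native channel (an assumption; CSP and Go provide channels directly): the one-element channel is the future, and the sending thread plays the promise.

```python
import queue
import threading

def future_via_channel(compute):
    """Sketch: a future as a one-element channel, fulfilled by a sender thread."""
    chan = queue.Queue(maxsize=1)       # the future: a one-element channel
    # the promise: a process that sends the computed value into the channel
    threading.Thread(target=lambda: chan.put(compute())).start()
    return chan

fut = future_via_channel(lambda: sum(range(10)))
value = fut.get()    # explicit: the value must be read from the channel
print(value)         # 45
```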
See also
- Fiber (computer science)
- Futex
- Pyramid of doom (programming), a design antipattern avoided by promises
References
- ^ Friedman, Daniel; David Wise (1976). The Impact of Applicative Programming on Multiprocessing. International Conference on Parallel Processing. pp. 263–272.
Preliminary version of: Friedman, Daniel; Wise, David (April 1978). "Aspects of Applicative Programming for Parallel Processing". IEEE Transactions on Computers. C-27 (4): 289–296. CiteSeerX 10.1.1.295.9692. doi:10.1109/tc.1978.1675100. S2CID 16333366.
- ^ Hibbard, Peter (1976). Parallel Processing Facilities. New Directions in Algorithmic Languages, (ed.) Stephen A. Schuman, IRIA, 1976.
- ^ Henry Baker; Carl Hewitt (August 1977). The Incremental Garbage Collection of Processes. Proceedings of the Symposium on Artificial Intelligence Programming Languages. ACM SIGPLAN Notices 12, 8. pp. 55–59. Archived from the original on 4 July 2008. Retrieved 13 February 2015.
- ^ Promise Pipelining at erights.org
- ^ Promise Pipelining on the C2 wiki
- ^ a b Barbara Liskov; Liuba Shrira (1988). "Promises: Linguistic Support for Efficient Asynchronous Procedure Calls in Distributed Systems". Proceedings of the SIGPLAN '88 Conference on Programming Language Design and Implementation; Atlanta, Georgia, United States. ACM. pp. 260–267. doi:10.1145/53990.54016. ISBN 0-89791-269-1. Also published in ACM SIGPLAN Notices 23(7).
- ^ Robust promises with Dojo deferred, Site Pen, 3 May 2010
- ^ a b "Promise", Alice Manual, DE: Uni-SB, archived from the original on 8 October 2008, retrieved 21 March 2007
- ^ a b "Future", Alice manual, DE: Uni-SB, archived from the original on 6 October 2008, retrieved 21 March 2007
- ^ Promise, E rights
- ^ 500 lines or less, "A Web Crawler With asyncio Coroutines" by A. Jesse Jiryu Davis and Guido van Rossum says "implementation uses an asyncio.Event in place of the Future shown here. The difference is an Event can be reset, whereas a Future cannot transition from resolved back to pending."
- ^ Control Concurrent MVar, Haskell, archived from the original on 18 April 2009
- ^ WaitNeeded, Mozart Oz, archived from the original on 17 May 2013, retrieved 21 March 2007
- ^ Promise, Sunless Sea, archived from the original on 23 October 2007
- ^ Argus, MIT
- ^ Liskov, Barbara (26 January 2021), Distributed computing and Argus, Oral history, IEEE GHN
- ^ Gold, Udanax, archived from the original on 11 October 2008
- ^ Pipeline, E rights
- ^ Henry Lieberman (June 1981). "A Preview of Act 1". MIT AI Memo 625.
- ^ Henry Lieberman (June 1981). "Thinking About Lots of Things at Once without Getting Confused: Parallelism in Act 1". MIT AI Memo 626.
- ^ Goetz, Brian (23 November 2004). "Concurrency in JDK 5.0". IBM.
- ^ a b "Async in 4.5: Worth the Await – .NET Blog – Site Home – MSDN Blogs". Blogs.msdn.com. Retrieved 13 May 2014.
- ^ a b c "Asynchronous Programming with Async and Await (C# and Visual Basic)". Msdn.microsoft.com. Retrieved 13 May 2014.
- ^ Tomas Petricek (29 October 2010). "Asynchronous C# and F# (I.): Simultaneous introduction".
- ^ Don Syme; Tomas Petricek; Dmitry Lomov (21 October 2010). "The F# Asynchronous Programming Model, PADL 2011".
- ^ a b Gilad Bracha (October 2014). "Dart Language Asynchrony Support: Phase 1".
- ^ a b "PEP 0492 – Coroutines with async and await syntax".
- ^ Kenjiro Taura; Satoshi Matsuoka; Akinori Yonezawa (1994). "ABCL/f: A Future-Based Polymorphic Typed Concurrent Object-Oriented Language – Its Design and Implementation.". In Proceedings of the DIMACS workshop on Specification of Parallel Algorithms, number 18 in Dimacs Series in Discrete Mathematics and Theoretical Computer Science. American Mathematical Society. pp. 275–292. CiteSeerX 10.1.1.23.1161.
- ^ "Dart SDK dart async Completer".
- ^ "Task".
- ^ Steve Dekorte (2005). "Io, The Programming Language".
- ^ "Using promises". Mozilla Developer Network. Retrieved 23 February 2021.
- ^ "Making asynchronous programming easier with async and await". Mozilla Developer Network. Retrieved 23 February 2021.
- ^ Rich Hickey (2009). "changes.txt at 1.1.x from richhickey's clojure". GitHub.
- ^ "Future – Kotlin Programming Language".
- ^ Seif Haridi; Nils Franzen. "Tutorial of Oz". Mozart Global User Library. Archived from the original on 14 May 2011. Retrieved 12 April 2011.
- ^ Python 3.2 Release
- ^ Python 3.5 Release
- ^ "Parallelism with Futures". PLT. Retrieved 2 March 2012.
- ^ "class Promise". raku.org. Retrieved 19 August 2022.
- ^ "Future in std::future - Rust". doc.rust-lang.org. Retrieved 16 December 2023.
- ^ Common Lisp Blackbird
- ^ Common Lisp Eager Future2
- ^ Lisp in parallel – A parallel programming library for Common Lisp
- ^ Common Lisp PCall
- ^ "Chapter 30. Thread 4.0.0". Retrieved 26 June 2013.
- ^ "Dlib C++ Library #thread_pool". Retrieved 26 June 2013.
- ^ "GitHub – facebook/folly: An open-source C++ library developed and used at Facebook". GitHub. 8 January 2019.
- ^ "HPX". 10 February 2019.
- ^ "Threads Slides of POCO" (PDF).
- ^ "QtCore 5.0: QFuture Class". Qt Project. Archived from the original on 1 June 2013. Retrieved 26 June 2013.
- ^ "Seastar". Seastar project. Retrieved 22 August 2016.
- ^ "stlab is the ongoing work of what was Adobe's Software Technology Lab. The Adobe Source Libraries (ASL), Platform Libraries, and new stlab libraries are hosted on github". 31 January 2021.
- ^ Groovy GPars Archived 12 January 2013 at the Wayback Machine
- ^ Cujo.js
- ^ JavaScript when.js
- ^ Promises/A+ specification
- ^ promises
- ^ JavaScript MochKit.Async
- ^ JavaScript Angularjs
- ^ JavaScript node-promise
- ^ "JavaScript Q". Archived from the original on 31 December 2018. Retrieved 8 April 2013.
- ^ JavaScript RSVP.js
- ^ YUI JavaScript class library
- ^ YUI JavaScript promise class
- ^ JavaScript Bluebird
- ^ Java JDeferred
- ^ Java ParSeq
- ^ Objective-C MAFuture GitHub
- ^ Objective-C MAFuture mikeash.com
- ^ Objective-C RXPromise
- ^ ObjC-CollapsingFutures
- ^ Objective-C PromiseKit
- ^ Objective-C objc-promise
- ^ Objective-C OAPromise
- ^ OCaml Lazy
- ^ Perl Future
- ^ Perl Promises
- ^ Perl Reflex
- ^ Perl Promise::ES6
- ^ "Promise::XS – Fast promises in Perl – metacpan.org". metacpan.org. Retrieved 14 February 2021.
- ^ PHP React/Promise
- ^ Python built-in implementation
- ^ pythonfutures
- ^ "Twisted Deferreds". Archived from the original on 6 August 2020. Retrieved 29 April 2010.
- ^ R package future
- ^ future
- ^ Concurrent Ruby
- ^ Ruby Promise gem
- ^ Ruby libuv
- ^ "Ruby Celluloid gem". Archived from the original on 8 May 2013. Retrieved 19 February 2022.
- ^ Ruby future-resource
- ^ futures-rs crate
- ^ Twitter's util library
- ^ "Swift Async". Archived from the original on 31 December 2018. Retrieved 23 June 2014.
- ^ Swift FutureKit
- ^ Swift Apple GCD
- ^ Swift FutureLib
- ^ bignerdranch/Deferred
- ^ Thomvis/BrightFutures
- ^ belozierov/SwiftCoroutine
- ^ tcl-promise
- ^ Does async/await solve a real problem?
- ^ "Go language patterns Futures". Archived from the original on 4 December 2020. Retrieved 9 February 2014.
- ^ "Go Language Patterns". Archived from the original on 11 November 2020. Retrieved 9 February 2014.
In contemporary programming, futures and promises have evolved into versatile tools for managing concurrency across diverse paradigms. They support operations like chaining (e.g., then or flatMap), error handling, and cancellation, making them essential for scalable applications in distributed systems and reactive programming.[5] Languages such as Scala provide immutable futures as read-only views of promises, ensuring thread-safe access and composability.[6] Similarly, Java's CompletableFuture class, introduced in Java 8, extends these concepts to enable asynchronous pipelines with functional-style transformations. Their adoption underscores a shift toward non-blocking, event-driven architectures that enhance performance in multicore and networked environments.
Fundamentals
Definition and Overview
In computer science, a future is an object that acts as a placeholder for the eventual result of an asynchronous computation, allowing a program to continue execution without blocking while the computation proceeds in parallel or on another thread.[7] A promise, in contrast, represents the writable counterpart to a future, providing a mechanism for the producer of the computation to assign a value (or an error) to the future once the operation completes.[8] This duality enables synchronization and communication between concurrent parts of a program, where the future serves as the read-only interface for consumers awaiting the result.[9]
The primary motivation for futures and promises arises from the need to manage concurrency, non-blocking I/O, and parallelism in systems where operations like network requests, file reads, or complex calculations may introduce significant latency.[8] By decoupling the initiation of a task from its completion, these constructs prevent program halting, allowing threads or event loops to handle multiple tasks efficiently and improving overall system responsiveness.[9] For instance, in distributed systems, they facilitate asynchronous procedure calls, enabling callers to proceed immediately while results are computed remotely.[8]
The basic lifecycle of a future begins with its creation, typically by spawning an asynchronous task that binds to the future; it remains in a pending state until the computation resolves successfully with a value or fails with an exception.[7] Resolution updates the future's state, after which consumers can access the result through methods like a blocking get() (which waits if necessary) or a non-blocking await (which may use callbacks or chaining).[9] Key benefits include enhanced UI responsiveness by avoiding freezes during I/O, greater server scalability through concurrent request handling, and easier composition of asynchronous tasks via chaining or pipelining.[8]
A simple pseudocode example illustrates future creation and resolution:
future = createFuture()
spawnAsyncTask(computeValue) { result ->
  resolve(future, result) // or reject(future, error)
}
value = get(future) // Blocks until resolved
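The same lifecycle can be sketched in runnable Python using the standard concurrent.futures module, where submit returns a pending Future that resolves when the worker function finishes (compute_value here is an illustrative stand-in for any computation):

```python
from concurrent.futures import ThreadPoolExecutor

def compute_value():
    # Stand-in for a slow or remote computation (I/O, network call, etc.)
    return 42

with ThreadPoolExecutor() as pool:
    future = pool.submit(compute_value)  # future starts out pending
    value = future.result()              # blocks until resolved
print(value)  # -> 42
```

A failure inside compute_value would instead be stored in the future and re-raised from result(), mirroring the reject path in the pseudocode above.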
Implicit vs. Explicit
In implicit futures and promises, the language runtime automatically generates the future object upon invocation of an asynchronous operation, eliminating the need for upfront explicit creation by the programmer. This approach integrates seamlessly with the language's concurrency model, where the future serves as a transparent proxy for the eventual value. For instance, in JavaScript, declaring an async function implicitly returns a Promise without requiring manual instantiation, allowing the runtime to handle the asynchronous execution and resolution.[10]
Explicit futures, by contrast, demand that the programmer manually instantiate the future object before commencing the computation, providing a visible handle for managing the asynchronous task. In Scala, this is typically done via the Future.apply method, which accepts a code block representing the computation and schedules it on an execution context, returning a Future[T] immediately.[6] This explicit step ensures the developer has direct oversight from the outset.
The choice between implicit and explicit mechanisms involves key trade-offs in usability and control. Implicit creation minimizes boilerplate and enhances code readability by hiding the machinery of asynchrony, but it can obscure latency points, fulfillment stages, and resource overheads, potentially leading to unintended blocking or inefficient resource use.[11] Explicit futures mitigate these issues by offering precise control over scheduling, error propagation, and synchronization, though at the cost of increased verbosity and type complexity in the code.[11][9]
To illustrate, consider the following pseudocode examples:
Implicit (e.g., JavaScript-style async function):
async function fetchData() {
  const result = await apiCall(); // Runtime creates a Promise implicitly
  return process(result);
}
// Caller receives a Promise without explicit future handling
Explicit (e.g., Scala-style Future):
import scala.concurrent.Future

val future = Future.apply { // Programmer creates the Future explicitly
  val result = apiCall()
  process(result)
}
// Computation starts asynchronously; future holds the result
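The explicit pattern can also be sketched in Python by constructing the Future handle first and then starting the computation, loosely mirroring Scala's promise-backed Future.apply (explicit_future and fn are illustrative names, not a standard API):

```python
import threading
from concurrent.futures import Future

def explicit_future(fn):
    # Explicit style: create the Future handle up front, then start the work.
    fut = Future()
    def run():
        try:
            fut.set_result(fn())        # producer resolves the future...
        except Exception as exc:
            fut.set_exception(exc)      # ...or fails it with an exception
    threading.Thread(target=run).start()
    return fut

fut = explicit_future(lambda: 2 + 2)
print(fut.result())  # -> 4
```

Having the handle before the computation runs is what gives the explicit style its extra control over scheduling and error propagation, at the cost of the visible boilerplate above.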
Variations and Features
Promise Pipelining
Promise pipelining is a technique for composing asynchronous operations by sequencing promises, where the resolution of one promise serves as the input to the next, allowing for efficient chaining without intermediate blocking. Introduced as "call-streams" in the Argus system, this mechanism enables a caller to issue a series of remote procedure calls in a stream, deferring resolution until results are needed, thereby supporting concurrency and reducing latency in distributed environments.[12] In modern implementations, such as those in JavaScript's ECMAScript standard, pipelining manifests through methods like .then(), which return new promises that link operations sequentially.[13]
The primary benefits of promise pipelining include improved code readability by avoiding deeply nested callbacks—often termed "callback hell"—and seamless error propagation across the chain, where rejections in any step can be caught uniformly at the end or intermediate points. This approach facilitates dataflow-style programming, where operations proceed as dependencies resolve, minimizing round-trip delays in networked systems. For instance, in distributed computing, pipelining allows multiple messages to be sent before prior responses arrive, optimizing throughput.[14][15]
Mechanically, pipelining distinguishes between transformation operations like map (which applies a function to the resolved value and wraps it in a new promise) and flatMap (which unwraps the inner promise from the transformation result to continue the chain seamlessly). Success paths forward the resolved value through the sequence, while failure paths propagate exceptions or rejections, often short-circuiting the chain unless handled. In the original call-stream model, streams ensure ordered delivery of calls and replies, with promises acting as placeholders that block only upon explicit claiming. Error handling integrates exception propagation, allowing a single catch to manage failures from any linked operation, though unhandled rejections may lead to silent failures if not configured properly.[12][13]
Consider a pseudocode example of chaining asynchronous operations, such as fetching, processing, and storing data:
let dataPromise = fetchData(); // Asynchronous fetch returning a promise
let processedPromise = dataPromise.then(processData); // Apply transformation, returns a new promise
let storePromise = processedPromise.then(storeData); // Chain storage, handles success/failure from prior
storePromise.catch(handleError); // Unified error handling for the pipeline
Each call to .then() creates a dependent promise that resolves only after the previous one, enabling overlap in distributed settings.[15]
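An equivalent pipeline can be written in Python with asyncio, where each await plays the role of .then() and an exception at any step propagates down the chain like a rejection (fetch_data, process_data, and store_data are illustrative stand-ins):

```python
import asyncio

async def fetch_data():
    return {"raw": 3}          # stand-in for an asynchronous fetch

async def process_data(data):
    return data["raw"] * 2     # transformation step

async def store_data(value):
    return f"stored:{value}"   # final storage step

async def pipeline():
    # Each await consumes the previous step's result, mirroring .then() chaining.
    data = await fetch_data()
    processed = await process_data(data)
    return await store_data(processed)

result = asyncio.run(pipeline())
print(result)  # -> stored:6
```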
Despite these advantages, promise pipelining has limitations, including the risk of stack overflow in implementations with deep recursive chaining if not optimized for tail calls, and challenges in debugging rejected promises that propagate unexpectedly through long chains. In distributed contexts, network partitions can disrupt streams, requiring recovery mechanisms like restarts, while composing dynamic numbers of operations may introduce complexity in ensuring proper termination.[12][9]
Read-only Views
Read-only views in the context of futures and promises provide a proxy mechanism for accessing the state and eventual value of an underlying promise without permitting modifications to it, treating the future as an immutable placeholder that represents a computation's outcome and allowing consumers to query readiness or retrieve results once available. The core purpose of these views is to facilitate safe sharing of asynchronous results across multiple threads or program modules in concurrent settings, thereby avoiding race conditions associated with direct access to the writable promise. By enforcing read-only access, they ensure that only the designated producer can resolve the value, while multiple observers can independently monitor or consume it without interfering with the shared state.
In implementation, read-only views typically expose query methods such as get() for value retrieval and isDone() or wait() for status checks, while deliberately omitting mutation operations like set() or complete(). This interface design promotes thread safety through internal synchronization on the shared state, often using atomic operations or locks to handle concurrent queries.
A representative example appears in C++'s standard library, where std::future offers a single-use read-only view tied to exclusive ownership, suitable for one-to-one producer-consumer scenarios, whereas std::shared_future extends this to multiple concurrent accesses via copyable references, at the cost of minor performance overhead from atomic reference counting on the shared state.[16]
Such views find application in parallel frameworks, where they enable the distribution of computation results to numerous worker threads without ownership transfer, supporting efficient result broadcasting in task-parallel systems like those using directed acyclic graphs of dependencies.[17]
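Python's standard library has no built-in read-only view, but the idea can be sketched with a minimal wrapper that forwards queries while hiding the mutating methods (ReadOnlyFuture is an illustrative class, not a stdlib API):

```python
from concurrent.futures import Future

class ReadOnlyFuture:
    """Read-only view: exposes queries, hides mutation (no set_result etc.)."""
    def __init__(self, future):
        self._future = future
    def result(self, timeout=None):
        return self._future.result(timeout)   # query: retrieve the value
    def done(self):
        return self._future.done()            # query: check readiness

promise = Future()                 # writable side, kept by the producer
view = ReadOnlyFuture(promise)     # read-only side, handed to consumers
promise.set_result("ready")        # only the producer can resolve
print(view.done(), view.result())  # -> True ready
```

Multiple consumers can safely hold copies of the view, since none of them can resolve or overwrite the shared state.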
Thread-specific Futures
Thread-specific futures are asynchronous computation constructs bound to a particular thread, scheduler, or executor, ensuring that the associated computation executes within a designated execution context rather than migrating across arbitrary threads. This binding enforces locality, where the future's value is produced by a thread dedicated to that task, often representing the thread's lifecycle directly. For instance, in languages like Alice ML, futures are explicitly tied to specific threads, with computations initiating either eagerly upon creation or lazily upon access, allowing the thread to handle the evaluation in isolation.[18]
The primary advantages of thread-specific futures include reduced overhead from context switching, as the computation remains confined to its assigned thread, minimizing synchronization costs when accessing thread-local storage or resources. This design simplifies debugging by localizing execution traces to a single thread, making it easier to monitor and profile without tracing cross-thread interactions. Additionally, it supports efficient resource management, such as direct thread interruption for cancellation, which halts the computation and reclaims resources more predictably than in thread-agnostic models. In contrast to thread-agnostic futures, which allow flexible scheduling across any available thread, thread-specific variants prioritize predictability and efficiency in environments with constrained threading models.[19]
Examples of thread-specific futures appear in several languages and libraries. In Java, the CompletableFuture class can be configured with a custom Executor, such as a single-threaded executor, to bind the computation to a specific thread or pool, ensuring execution context locality for tasks like I/O-bound operations.
Similarly, Scala's libraries like cats-effect and ZIO use "fibers" as lightweight, thread-specific futures that tie computations to managed threads, enabling composable asynchronous workflows while maintaining isolation. These differ from more general promise-futures, where the producing thread is decoupled from the consuming one, potentially requiring additional messaging for resolution.[20][19]
Despite these benefits, thread-specific futures introduce challenges, such as potential performance bottlenecks when overused on a limited number of threads, leading to serialization of tasks that could otherwise parallelize across a thread pool. Migration to broader thread pools may require refactoring to detach futures from strict bindings, complicating scalability in dynamic workloads. In relation to the actor model, thread-specific futures provide a lightweight form of isolation similar to actor threads, confining state mutations without full message-passing semantics, though they lack inherent actor guarantees like supervision.[19]
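The binding can be sketched in Python with a single-worker executor, which pins every submitted computation to one dedicated thread, loosely analogous to supplying a single-threaded Executor to CompletableFuture:

```python
import threading
from concurrent.futures import ThreadPoolExecutor

# A single-worker pool pins every submitted task to one dedicated thread.
pool = ThreadPoolExecutor(max_workers=1, thread_name_prefix="bound")

f1 = pool.submit(lambda: threading.current_thread().name)
f2 = pool.submit(lambda: threading.current_thread().name)
assert f1.result() == f2.result()   # both futures ran on the same bound thread
print(f1.result())
pool.shutdown()
```

This also illustrates the serialization risk noted above: with one worker, submitted tasks queue up rather than running in parallel.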
Semantics and Behavior
Blocking vs Non-blocking Semantics
In futures and promises, blocking semantics refer to operations where the calling thread suspends execution until the future's value is available, ensuring synchronous access to the result.[21] This approach, exemplified by methods like get() in early Java futures, ties the thread's progress directly to the completion of the asynchronous computation, potentially leading to resource inefficiency if the underlying task involves prolonged waits such as I/O operations.[9] Blocking is particularly straightforward in single-threaded or sequential contexts but can propagate delays across the system.
Non-blocking semantics, in contrast, allow the calling thread to continue execution without suspension, typically through mechanisms like polling the future's state or registering callbacks to handle completion asynchronously.[13] For instance, a future might support an onComplete handler that invokes a provided function only when the result is ready, preserving thread availability for other tasks and enhancing concurrency in multi-threaded environments.[9] This model aligns with event-driven architectures, where selectors or reactors manage multiple futures without dedicated threads per operation, promoting scalability in high-throughput systems like web servers.
The trade-offs between these semantics are significant: blocking simplifies program flow by mimicking synchronous code, reducing the cognitive load on developers, but it risks deadlocks—especially in systems with limited threads—and underutilizes CPU resources during waits.[22] Non-blocking avoids such issues, enabling better scalability and responsiveness, yet it introduces complexity in managing asynchronous flows, such as chaining callbacks or handling exceptions across non-linear execution paths. These concerns often tie into broader evaluation strategies, where blocking may occur only upon explicit demand from the consumer.
To illustrate, consider pseudocode for a blocking approach that awaits multiple futures sequentially:
Future<String> f1 = computeAsync("task1");
Future<String> f2 = computeAsync("task2");
String result1 = f1.get(); // Blocks until resolved
String result2 = f2.get(); // Blocks until resolved
process(result1 + result2);
A non-blocking version instead registers callbacks, processing the combined result once both futures have completed:
Future<String> f1 = computeAsync("task1");
Future<String> f2 = computeAsync("task2");
f1.onComplete(value1 -> {
  if (f2.isCompleted()) {
    process(value1 + f2.get()); // Non-blocking: runs only once both are done
  }
});
f2.onComplete(value2 -> {
  if (f1.isCompleted()) {
    process(f1.get() + value2);
  }
});
Hybrid approaches also exist, such as get(timeout), which suspends only up to a specified duration before yielding control or throwing an exception, or cooperative yielding where the thread periodically checks progress without full suspension.[9] These methods balance simplicity with non-blocking benefits, though they require careful tuning to avoid indefinite waits or excessive polling overhead.[22]
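The timeout-based hybrid can be sketched in Python, where result(timeout=...) blocks only briefly before handing control back to the caller (slow is an illustrative long-running task):

```python
import time
from concurrent.futures import ThreadPoolExecutor, TimeoutError

def slow():
    time.sleep(0.5)   # stand-in for a long-running computation
    return "done"

with ThreadPoolExecutor() as pool:
    future = pool.submit(slow)
    try:
        # Hybrid semantics: block, but only up to the given duration.
        value = future.result(timeout=0.05)
    except TimeoutError:
        value = "not ready yet"  # regain control instead of waiting indefinitely
print(value)  # -> not ready yet
```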
Evaluation Strategy
Futures and promises employ various evaluation strategies that determine when the underlying computation begins, influencing resource utilization and parallelism. These strategies primarily contrast lazy and eager evaluation, with ties to broader paradigms like strict and non-strict evaluation in functional programming.[23] In lazy evaluation, the computation associated with a future is deferred until its value is explicitly demanded, such as through a retrieval operation like get(). This approach conserves computational resources by avoiding execution of futures whose results are never accessed, thereby reducing unnecessary CPU cycles and memory allocation for discarded computations.[23] For instance, in the R programming language's future package, lazy futures are created with the %lazy% TRUE directive, where evaluation only occurs upon value resolution, freezing globals at creation to maintain consistency without immediate execution.[24]
Conversely, eager evaluation initiates the computation immediately upon the future's creation, often in a separate thread or process to enable early parallelism. This strategy suits scenarios where the result is likely needed, as it overlaps computation with other program activities, potentially improving overall throughput in concurrent settings. In Java's CompletableFuture, methods like supplyAsync start asynchronous execution right away using the default ForkJoinPool, promoting immediate progress on the task without waiting for demand.[25]
The distinction between strict and non-strict evaluation further contextualizes these strategies, particularly in functional languages. Strict evaluation, akin to eager, requires arguments to be fully computed before function application, while non-strict evaluation delays this until necessity arises. Futures facilitate call-by-need—a form of non-strict evaluation—in such languages by using thunks to memoize results, avoiding redundant computations once evaluated, which enhances efficiency in lazy contexts like Haskell.[23] In Haskell, parallel strategies via the parallel package leverage the language's inherent lazy evaluation, sparking computations with rpar for parallelism while deferring full sparking until demanded, ensuring composable and deterministic behavior.[26]
These strategies impact memory usage, as lazy approaches minimize allocation for unused futures; CPU efficiency, where eager starts parallel work sooner but risks waste; and composability, enabling modular chaining of futures in parallel environments without premature evaluation.[23] For example, Haskell's lazy futures support call-by-need for resource-efficient parallelism, contrasting with Java's eager CompletableFuture designed for immediate asynchronous dispatch.[26][25]
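The contrast can be sketched in Python: an executor-backed future evaluates eagerly on creation, while a hand-rolled memoizing thunk defers work until first demand, approximating call-by-need (LazyFuture is an illustrative class, not a stdlib API):

```python
from concurrent.futures import ThreadPoolExecutor

calls = []

def expensive():
    calls.append("ran")
    return 99

# Eager: the computation starts as soon as the future is created.
pool = ThreadPoolExecutor(max_workers=1)
eager = pool.submit(expensive)
eager.result()                    # by now the eager task has run

# Lazy: a thunk runs only on first demand and memoizes the result.
class LazyFuture:
    def __init__(self, fn):
        self._fn, self._done, self._value = fn, False, None
    def result(self):
        if not self._done:
            self._value, self._done = self._fn(), True
        return self._value

lazy = LazyFuture(expensive)      # nothing executes yet
assert len(calls) == 1            # only the eager future has run
assert lazy.result() == 99        # first demand triggers evaluation
assert lazy.result() == 99        # memoized: not re-executed
assert len(calls) == 2
pool.shutdown()
```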
Semantics in the Actor Model
In the actor model, actors serve as the fundamental units of computation, operating in isolation and communicating exclusively through asynchronous message passing, where futures embody the expectation of a future reply from a recipient actor.[27] This design ensures that actors maintain independence, avoiding shared state and enabling scalable concurrency without locks or direct synchronization.[27] Futures, originally introduced as specialized actors for parallel execution, allow computations to proceed concurrently while deferring access to results until they become available.[27] Within actor systems, futures integrate seamlessly with message-passing primitives, distinguishing between fire-and-forget sends using the ! operator, which dispatches messages without awaiting responses, and request-reply patterns using the ? or !? operator (often termed "ask"), which returns a future representing the anticipated reply. In the ask pattern, the sending actor provides a promise—essentially the writable counterpart to the future—as the reply destination, allowing the recipient actor to complete it with a value or exception via sender() ! reply or Status.Failure(e). This duality of futures and promises facilitates non-blocking interactions, where the future acts as a read-only handle for the pending result.
The incorporation of futures enhances fault tolerance and scalability in distributed actor systems, as exemplified by the Akka framework, where supervision hierarchies propagate failures from unresolved or errored futures, enabling automatic restarts and location transparency across nodes. However, challenges arise in managing timeouts, where unfulfilled asks trigger AskTimeoutException after a specified duration (e.g., 5 seconds), and in supervision trees, which require explicit strategies to escalate or contain failures from incomplete futures without cascading system-wide disruptions.
For instance, the ask pattern in pseudocode might appear as follows, yielding a future for the reply:
implicit val timeout = 5.seconds
val future = actor ? RequestMessage
future.onComplete {
  case Success(result) => process(result)
  case Failure(error) => handleError(error)
}
Applications and Expressiveness
Applications
Futures and promises find extensive use in web development, where they facilitate asynchronous HTTP requests without blocking the main thread, enabling efficient handling of dynamic content loading. For instance, in JavaScript, the Promise API allows developers to initiate network requests via XMLHttpRequest or the Fetch API, resolving with the response data upon completion or rejecting on errors, which supports seamless integration in both client-side and server-side environments.[28] This non-blocking behavior is particularly beneficial for server-side rendering in Node.js, where promises ensure that rendering tasks proceed concurrently with I/O operations, preventing delays in response times for user requests.[29]
In graphical user interface (GUI) programming, futures and promises maintain responsiveness by offloading computationally intensive tasks, such as image processing, to background threads or processes. Python's concurrent.futures module, for example, provides ThreadPoolExecutor and ProcessPoolExecutor classes that submit tasks like applying filters to images, returning Future objects to retrieve results without freezing the UI thread in frameworks like Tkinter or PyQt.[30] This approach ensures that user interactions, such as button clicks or window resizing, remain fluid even during prolonged operations, enhancing overall application usability.[31]
For data processing in big data frameworks, futures enable parallel execution of map-reduce style tasks across distributed clusters, scaling computations beyond single-machine limits.
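The map-and-gather pattern underlying such frameworks can be sketched with Python's standard concurrent.futures module (transform and the sample records are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def transform(record):
    # Stand-in for per-record work (cleaning, parsing, feature extraction, ...)
    return record * record

records = [1, 2, 3, 4]
with ThreadPoolExecutor() as pool:
    futures = [pool.submit(transform, r) for r in records]  # "map" phase
    total = sum(f.result() for f in futures)                # "gather"/reduce phase
print(total)  # -> 30
```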
Dask, a flexible library for Python, leverages its futures interface—extending concurrent.futures—to submit functions for mapping over large datasets, such as transforming terabyte-scale arrays, and gathering results asynchronously for aggregation in reduce phases.[32] This parallelism reduces processing times for tasks like data cleaning or statistical analysis, making it suitable for environments handling massive volumes without exhaustive memory usage.[33]
In distributed systems, futures and promises coordinate asynchronous calls among microservices, allowing services to communicate without the synchronous waiting that could propagate failures. The Akka toolkit in Scala utilizes futures to pipe results between actors in a distributed actor model, enabling resilient orchestration of microservices for tasks like event sourcing or request aggregation across nodes. By completing futures with values from remote invocations, systems achieve fault-tolerant coordination, where timeouts and retries handle network latencies inherent in microservice architectures.
Performance case studies demonstrate significant throughput gains from futures and promises in event-driven environments like Node.js. In one optimization effort for a billing engine processing hundreds of thousands of records, refactoring sequential API fetches to use Promise.all enabled parallel execution, reducing runtime from over five hours to under five minutes by minimizing idle wait times on independent async operations.[34] Such improvements highlight how promises enhance event loop efficiency, boosting overall system throughput in high-concurrency scenarios without altering core logic.[35]
Relations between Expressiveness of Different Forms
Futures and promises can be taxonomized based on several dimensions that influence their computational expressiveness. Single-future variants, where a future represents a single pending value, are common in languages like Java and Scala, allowing straightforward synchronization for individual asynchronous operations. In contrast, multi-future constructs, such as those in C# Tasks or JavaScript's Promise.all, enable aggregation of multiple pending results, supporting more complex parallel compositions but introducing challenges in error propagation across the set. Cancellable futures, exemplified by C#'s CancellationToken-integrated Tasks, permit explicit termination of computations to manage resources, enhancing expressiveness for long-running or speculative tasks, whereas non-cancellable forms like JavaScript Promises lack this, limiting their utility in dynamic environments where operations may need interruption. Composite futures, including pipelined or chained variants, further extend this taxonomy by allowing sequential or parallel assembly of operations, as seen in E's promise pipelining or Scala's flatMap on Futures.[9][36]
An expressiveness hierarchy emerges among these forms, with explicit futures providing greater control over synchronization points compared to implicit ones. Implicit futures, such as those in Alice ML or MultiLisp delays, automatically resolve upon access, promoting transparent integration into sequential code but obscuring control flow and complicating debugging of nested dependencies. Explicit futures, requiring deliberate operations like get() in Java, offer finer-grained control, enabling programmers to observe and manage computation stages, which is crucial for optimizing resource usage in concurrent settings.
Pipelining enhances compositionality across both, allowing asynchronous operations to be linked without blocking, as in Scala's Future chains, which build on implicit resolution while adding explicit chaining for scalable async workflows; this elevates expressiveness by reducing callback proliferation and supporting declarative pipelines over imperative sequencing.[9][11]
Formally, futures and promises exhibit mutual simulation capabilities, though with trade-offs in efficiency and feature support. Promises, as writable single-assignment containers, can simulate read-only futures by restricting fulfillment to once, as in Scala's Promise completing a paired Future; conversely, futures can emulate promises through callback registration on resolution, effectively allowing deferred writes via event-driven completion. These encodings often rely on boxing mechanisms: data-flow (implicit) futures can be implemented atop control-flow (explicit) ones by wrapping values in future containers, and vice versa, preserving semantics but potentially incurring runtime overhead from additional indirection. Limitations arise in handling exceptions and cancellations; for instance, non-cancellable futures struggle to propagate interruptions uniformly in composites, leading to partial failures without coordinated cleanup, while exception handling in pipelined promises may require explicit try-catch chaining to avoid swallowing errors in asynchronous flows.[9][11][36]
Theoretically, futures and promises connect to monads in functional programming, modeling asynchronous composition as a monadic structure for sequencing effects. In Scala, Futures form a monad under flatMap (bind) and map (fmap), enabling pure functional composition of async computations akin to the IO monad, where unit wraps values into pending states and bind chains resolutions while handling failures via recover.
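A minimal unit/bind pair can be sketched over Python's concurrent.futures to illustrate this monadic structure; note that blocking on result() inside bind is a deliberate simplification, whereas production libraries chain via callbacks to stay non-blocking:

```python
from concurrent.futures import ThreadPoolExecutor

pool = ThreadPoolExecutor(max_workers=2)

def unit(value):
    # Monadic return: wrap a plain value in a (trivially resolved) future.
    return pool.submit(lambda: value)

def bind(future, fn):
    # Monadic bind: feed the resolved value to fn, which returns a new future.
    return fn(future.result())

result = bind(bind(unit(3), lambda x: unit(x + 1)), lambda x: unit(x * 10))
print(result.result())  # -> 40
pool.shutdown()
```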
This monadic view abstracts away low-level concurrency details, allowing expressive definitions of complex workflows, such as parallel map-reduce patterns, through lawful composition that preserves referential transparency.[22][37]
Despite these strengths, gaps persist in what futures and promises can express without supplementary primitives. They inherently model asynchrony and lazy evaluation but cannot natively achieve true parallelism for CPU-bound tasks in single-threaded environments, such as JavaScript's event loop, where futures simulate concurrency via non-blocking I/O but serialize execution, necessitating threads or processes for genuine multi-core utilization. Similarly, distributed expressiveness, as in actor models, requires additional messaging semantics beyond basic futures to handle network latencies and failures reliably.[9]
Related Constructs and Implementations
Related Constructs
Coroutines represent a form of cooperative multitasking where functions can suspend and resume execution explicitly, often through yield points, enabling structured concurrency without the need for explicit result placeholders like futures.[38] Unlike futures, which focus on representing the eventual result of an asynchronous computation and decoupling producer from consumer until resolution, coroutines emphasize control flow suspension for ongoing tasks, such as in generators or async functions.[39] This distinction makes coroutines suitable for scenarios requiring fine-grained interleaving of cooperative routines, whereas futures prioritize non-blocking result retrieval in event-driven systems.[38]
Channels serve as buffered or unbuffered queues facilitating communication between concurrent producers and consumers in a many-to-many or point-to-multipoint manner, enabling streaming data flows with built-in synchronization.[40] In contrast to futures, which handle point-to-point delivery of a single result from an asynchronous operation, channels support ongoing producer-consumer interactions, such as pipelining multiple values without blocking on individual completions.[40] For instance, channels are ideal for decoupling components in distributed systems where data arrives continuously, while futures excel in one-off computations like remote API calls.[41]
Observables, as defined in reactive streams, model asynchronous data streams that can emit zero or more values over time, supporting operators for transformation, filtering, and backpressure to manage producer-consumer rates.[42] This extends beyond futures and promises, which typically resolve to a single value or error, by allowing subscription to potentially infinite sequences with cancellation support.[43] Tasks in task-based parallelism abstract units of work scheduled for parallel execution, often returning futures to track completion, but emphasize workload distribution across threads or processes rather than pure asynchrony.[44] Continuations, meanwhile, capture the state of a computation to resume it later, providing a lower-level mechanism for composing asynchronous flows that futures build upon by implicitly chaining handlers.
Futures suit one-shot asynchronous results, such as computing a value in the background, while channels and observables better handle streaming or multi-value scenarios like event processing pipelines, and coroutines or tasks fit iterative or parallel control flows requiring suspension points.[43] Actor model messaging resembles channels in enabling decoupled communication but focuses on isolated state management across actors.[40]
Hybrids like async generators combine coroutines' suspension with promise-like yielding of multiple values, allowing iterable asynchronous sequences that bridge generator simplicity and future resolution.[39] These constructs overlap in enabling non-blocking code but differ in granularity: async generators facilitate lazy, on-demand production akin to coroutines, yet integrate promise chaining for error propagation and completion.[38]
Implementations by Language and Library
Futures and promises have been integrated into numerous programming languages as built-in features or through standard libraries, enabling asynchronous programming in diverse ecosystems. These implementations vary in their support for chaining operations, error handling, and integration with concurrency models, often building on the core concepts of deferred computation and value resolution. Early adoptions focused on callback-based or polling mechanisms, while modern ones emphasize composability and syntactic sugar like async/await.
In JavaScript, the Promises/A+ specification defines promises as objects representing the eventual completion or failure of an asynchronous operation, with a standardized then method for chaining handlers.[13] This became a native feature in ECMAScript 2015, promoting interoperability across libraries without native cancellation support. Java's CompletableFuture, introduced in Java 8, extends the Future interface to allow explicit completion and acts as a CompletionStage for functional-style composition via methods like thenApply and handle.[25] It supports cancellation through cancel and exception propagation.
Python's asyncio module provides Future as an awaitable object for representing asynchronous results, integrated with the event loop for non-blocking I/O; it is not thread-safe and supports cancellation via cancel().[45] Scala's scala.concurrent.Future serves as a placeholder for a value computed asynchronously, often created via Future.apply and composed with flatMap or onComplete, relying on an ExecutionContext for scheduling but lacking built-in cancellation.[6] In C++, std::future from the <future> header accesses results of asynchronous operations launched via std::async or std::packaged_task, using get() to block and retrieve values, with no direct cancellation but support for shared states via std::shared_future.
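A minimal sketch of the asyncio.Future behavior described above: the future is created on the running event loop, resolved by one party, and awaited by another, with await suspending the coroutine until set_result is called:

```python
import asyncio

async def main():
    loop = asyncio.get_running_loop()
    fut = loop.create_future()  # awaitable placeholder tied to this loop

    # Producer: schedule resolution of the future after a short delay.
    loop.call_later(0.01, fut.set_result, "done")

    # Consumer: await suspends here, without blocking the event loop,
    # until the future is resolved (or cancelled via fut.cancel()).
    return await fut

print(asyncio.run(main()))  # -> done
```

Because the future is bound to the event loop, it must be resolved from the same thread; this is the non-thread-safety caveat noted above.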
| Language | Implementation | Key Features | Cancellability |
|---|---|---|---|
| JavaScript | Native Promise | Chaining with then, rejection handling | No |
| Java | CompletableFuture | Functional composition, explicit completion | Yes |
| Python | asyncio.Future | Awaitable, event loop integration | Yes |
| Scala | scala.concurrent.Future | Monadic composition, callbacks | No |
| C++ | std::future | Polling with wait(), shared state | No |
Rust implements futures through the Future trait from the standard library, where async blocks and functions generate futures polled by executors like Tokio; the await keyword simplifies usage without altering the underlying future mechanics. Go lacks native futures but achieves similar semantics through goroutines and channels, where a channel acts as a single-value future by sending results asynchronously, as recommended for request-response patterns in the Effective Go guide.[46]
C#'s Task and Task<T>, part of the Task-based Asynchronous Pattern (TAP) introduced with .NET Framework 4.5 around 2012, underpin async and await for non-blocking code, supporting cancellation via CancellationToken and continuation via ContinueWith.[47] In the 2020s, WebAssembly's JavaScript interfaces have trended toward promise-based async operations, such as WebAssembly.instantiate returning a promise for module loading, aligning with browser APIs for efficient wasm execution.[48]
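The CancellationToken idea, a token passed into asynchronous work that the work polls cooperatively, can be sketched in Python with a threading.Event standing in for the token (an illustration of the pattern, not the .NET API):

```python
import threading
import time

def long_running(cancel: threading.Event, out: list):
    # Cooperative cancellation: the work checks the token between
    # units of work, rather than being killed from outside.
    for _ in range(1000):
        if cancel.is_set():
            out.append("cancelled")
            return
        time.sleep(0.001)
    out.append("finished")

cancel = threading.Event()
out = []
worker = threading.Thread(target=long_running, args=(cancel, out))
worker.start()
cancel.set()    # analogous to CancellationTokenSource.Cancel()
worker.join()
print(out)  # -> ['cancelled']
```

Cancellation is cooperative in TAP for the same reason: the running task must observe the token and exit cleanly, releasing any resources it holds.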
Library implementations often target domain-specific needs like networking or actors. Boost.Asio in C++ uses boost::asio::use_future as a completion token to return std::future from asynchronous I/O operations, enabling integration with C++11 concurrency.[49] Twisted, a Python event-driven framework, employs Deferred objects—promise-like structures for asynchronous callbacks and errbacks—to manage deferred execution in network protocols.[50] Akka, built on Scala, leverages the standard Future within its actor model, allowing futures to represent replies to messages dispatched to actors (as in the ask pattern) for distributed resilience.
| Library | Language | Implementation | Key Features |
|---|---|---|---|
| Boost.Asio | C++ | use_future | Async I/O to std::future |
| Twisted | Python | Deferred | Callback chaining for events |
| Akka | Scala | Future (extended) | Actor-integrated async operations |
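The Deferred idea described above, callbacks chained before the result exists and fired in order when it arrives, can be sketched with a minimal Python class (a toy model for exposition, not Twisted's actual implementation, which also handles errbacks and nested deferreds):

```python
class MiniDeferred:
    """Toy model of a Twisted-style Deferred: handlers are chained in
    order, and each callback's return value feeds the next one."""

    def __init__(self):
        self._callbacks = []
        self._fired = False
        self._result = None

    def addCallback(self, fn):
        if self._fired:
            # Result already available: run the handler immediately.
            self._result = fn(self._result)
        else:
            self._callbacks.append(fn)
        return self  # allows fluent chaining

    def callback(self, value):
        # Fire the chain: each handler transforms the running result.
        self._fired = True
        self._result = value
        for fn in self._callbacks:
            self._result = fn(self._result)
        return self._result

d = MiniDeferred()
d.addCallback(lambda x: x + 1).addCallback(lambda x: x * 2)
print(d.callback(20))  # (20 + 1) * 2 -> 42
```

This mirrors the distinction drawn earlier in the article: the Deferred is the writable promise side, while code that only adds callbacks sees it as a read-only future.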
