Libratus

from Wikipedia

Libratus is an artificial intelligence computer program designed to play poker, specifically heads-up no-limit Texas hold 'em.[1] Libratus' creators intend for it to be generalisable to other, non-poker-specific applications. It was developed at Carnegie Mellon University, Pittsburgh.

Background

While Libratus was written from scratch, it is the nominal successor of Claudico. Like its predecessor, its name is a Latin expression and means 'balanced'.

Libratus was built with more than 15 million core hours of computation, compared with 2-3 million for Claudico. The computations were carried out on the new 'Bridges' supercomputer at the Pittsburgh Supercomputing Center. According to one of Libratus' creators, Professor Tuomas Sandholm, Libratus does not have a fixed built-in strategy but an algorithm that computes the strategy. The technique involved is a new variant of counterfactual regret minimization,[2] namely the CFR+ method introduced in 2014 by Oskari Tammelin.[3] On top of CFR+, Libratus used a new technique that Sandholm and his PhD student, Noam Brown, developed for the problem of endgame solving. Their new method dispenses with "action mapping", the prior de facto standard in poker programming.

Concurrently with Libratus's development, an international team from Charles University, Czech Technical University, and the University of Alberta developed DeepStack, which in December 2016 became the first computer program to defeat professional poker players in heads-up no-limit Texas hold 'em.[4] While both systems used counterfactual regret minimization, they differed in their approaches: DeepStack employed neural networks for evaluating game states, whereas Libratus did not use neural networks for leaf evaluation.

As Libratus plays only against one other human or computer player, the special 'heads up' rules for two-player Texas hold 'em are enforced.

2017 humans versus AI match

From January 11 to 31, 2017, Libratus played a tournament ("Brains vs. Artificial Intelligence: Upping the Ante") against four top-class human poker players,[5] namely Jason Les, Dong Kim, Daniel McAulay and Jimmy Chou. To obtain results of greater statistical significance, 120,000 hands were to be played, a 50% increase over the previous tournament that Claudico played in 2015. To manage the extra volume, the duration of the tournament was increased from 13 to 20 days.

The four players were grouped into two subteams of two players each. One of the subteams was playing in the open, while the other subteam was located in a separate room nicknamed 'The Dungeon' where no mobile phones or other external communications were allowed. The Dungeon subteam got the same sequence of cards as was being dealt in the open, except that the sides were switched: The Dungeon humans got the cards that the AI got in the open and vice versa. This setup was intended to nullify the effect of card luck.

The prize money of $200,000 was shared exclusively among the human players. Each player received a minimum of $20,000, with the rest distributed in relation to their success playing against the AI. As stipulated in advance by the tournament rules, the AI itself did not receive prize money even though it won the tournament against the human team.

During the tournament, Libratus competed against the players during the day. Overnight, it refined its strategy on its own by analysing the day's gameplay and results, particularly its losses. It was thus able to continually patch the imperfections that the human team had discovered in their extensive analysis, producing an ongoing arms race between the humans and Libratus. Libratus used another 4 million core hours on the Bridges supercomputer for the competition's purposes.

During the tournament, there was an active betting market among poker players on who would win. Odds started at 4:1 against the bot, but by the eighth day the market had shifted to betting on which human would lose the least.[6]: 07:30

Strength of the AI

Libratus had been leading against the human players from day one of the tournament. The player Dong Kim was quoted on the AI's strength as follows: "I didn’t realize how good it was until today. I felt like I was playing against someone who was cheating, like it could see my cards. I’m not accusing it of cheating. It was just that good."[7]

On the 16th day of the competition, Libratus broke through the $1,000,000 barrier for the first time. At the end of that day, it was ahead $1,194,402 in chips against the human team. At the end of the competition, Libratus was ahead $1,766,250 in chips and thus won resoundingly. As the big blind in the matches was set to $100, Libratus' winrate is equivalent to 14.7 big blinds per 100 hands. This is considered an exceptionally high winrate in poker and is highly statistically significant.[8]
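
The winrate follows directly from the chip totals reported above: converting chips to big blinds and normalizing per 100 hands,

$$\frac{\$1{,}766{,}250}{\$100\ \text{per big blind}} = 17{,}662.5\ \text{bb}, \qquad \frac{17{,}662.5\ \text{bb}}{120{,}000\ \text{hands}} \times 100 \approx 14.7\ \text{bb}/100.$$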

For comparison, DeepStack achieved a winrate of 49 big blinds per 100 hands in its study against 11 professionals over 44,000 hands, though the different testing conditions (number of hands, opponent skill levels, and tournament format) make direct comparisons difficult.

Of the human players, Dong Kim came first, Daniel McAulay second, Jimmy Chou third, and Jason Les fourth.

Name Rank Results (in chips)
Dong Kim 1 -$85,649
Daniel McAulay 2 -$277,657
Jimmy Chou 3 -$522,857
Jason Les 4 -$880,087
Total: -$1,766,250

The bot displayed an unconventional betting style: in specific situations it would bet $20,000 into a pot of just $100. After its win, this strategy was analyzed and adopted by the poker community.[6]: 55:25

Other possible applications

While Libratus' first application was to play poker, its designers had a much broader mission in mind for the AI.[9] The investigators designed the AI to be able to learn any game or situation in which incomplete information is available and "opponents" may be hiding information or even engaging in deception. Because of this, Sandholm and his colleagues propose to apply the system to other, real-world problems as well, including cybersecurity, business negotiations, and medical planning.[10]

from Grokipedia
Libratus is an artificial intelligence program developed by researchers at Carnegie Mellon University, designed to play heads-up no-limit Texas hold'em poker at a superhuman level, which it demonstrated by defeating four top professional human players in a 20-day competition consisting of 120,000 hands. Developed by Noam Brown and Tuomas Sandholm in the Computer Science Department at Carnegie Mellon, Libratus was created without relying on expert human knowledge or handcrafted heuristics, instead using advanced algorithms to solve the complex imperfect-information game of poker. The program competed in the "Brains vs. Artificial Intelligence: Upping the Ante" event held at Rivers Casino in Pittsburgh from January 11 to 30, 2017, where it faced off against professional players Dong Kim, Jimmy Chou, Daniel McAulay, and Jason Les. Libratus emerged victorious, winning by the equivalent of $1,766,250 in chips and achieving a statistically significant margin of 147 milli-big-blinds per game, with 99.98% confidence that the result was not due to chance.

At its core, Libratus employed a three-pronged methodology: a precomputed "blueprint" strategy derived from counterfactual regret minimization to approximate equilibrium play, real-time solving of subgames during gameplay to handle the vast action space of no-limit betting, and an end-game self-improvement phase that analyzed opponent actions to patch exploitable weaknesses. Powered by the Bridges supercomputer at the Pittsburgh Supercomputing Center, the AI processed computations equivalent to solving billions of subgames over the course of the match. This breakthrough marked the first time an AI had beaten top professional players in this variant of poker, building on prior successes in perfect-information games like chess and Go, and highlighting advancements in handling bluffing, deception, and incomplete information.

The significance of Libratus extends beyond poker, as its techniques for solving imperfect-information games have implications for real-world applications involving strategic decision-making under uncertainty, such as negotiations, auctions, and cybersecurity. The program's success was detailed in a paper published in Science, underscoring its role in advancing AI toward more generalizable strategic reasoning.

Development and Background

Historical Context in Poker AI

The development of artificial intelligence in games has historically progressed from perfect-information domains, where all relevant details are fully observable to both players, to more challenging imperfect-information settings that introduce hidden elements and uncertainty. A landmark in perfect-information games was IBM's Deep Blue, which defeated world chess champion Garry Kasparov in 1997, demonstrating the power of brute-force search and evaluation functions in deterministic environments. However, poker represents a significant shift to imperfect-information games, where players must contend with concealed private cards, opponent intentions, and probabilistic outcomes, making it a key benchmark for testing AI's ability to handle deception, bluffing, and strategic depth. This transition highlighted poker's role as a rigorous testbed for game-theoretic principles, contrasting with chess's exhaustive tree search.

Poker AI research began in the late 1970s and 1980s with rudimentary rule-based systems that relied on hardcoded heuristics for hand evaluation and basic betting decisions, often limited to simplified variants. By the 1990s, the field evolved toward game-theoretic approaches, incorporating concepts like Nash equilibria to model optimal play under uncertainty; for instance, the University of Alberta's Loki program in 1997 used probabilistic simulations and hand-strength calculations to play limit Hold'em, performing at a competitive level against human players on online poker servers. This era emphasized abstraction techniques to manage poker's enormous state space, paving the way for more adaptive strategies that balanced exploitation and equilibrium play.

Key milestones in the 2000s included the University of Alberta's Polaris in 2007, the first AI to challenge professional players in a formal man-versus-machine event at the AAAI conference, where it competed in heads-up limit Texas Hold'em and demonstrated competitive performance despite a narrow loss. Advancing to no-limit variants, Carnegie Mellon's Claudico in 2015 represented a direct step forward, engaging four top professionals in heads-up no-limit Texas Hold'em over 80,000 hands; while it lost slightly, yielding the humans a $732,713 profit, it showcased near-expert play through adaptation and introduced tactics like limping to probe opponents. Libratus emerged as Claudico's successor, building on these foundations to address remaining gaps in no-limit mastery.

No-limit Texas Hold'em poses profound challenges for AI due to its imperfect information (opponents' hole cards remain hidden, forcing inferences based on incomplete data) and the necessity of bluffing to represent strength deceptively. The game's decision space is astronomically large, encompassing approximately 10^161 possible states, far exceeding the state space of chess (roughly 10^47 positions) and demanding scalable abstractions and counterfactual reasoning to approximate equilibria without exhaustive search.

Team and Computational Resources

Libratus was developed at Carnegie Mellon University (CMU) as a successor to the earlier poker AI Claudico, with primary work led by Professor Tuomas Sandholm and PhD student Noam Brown in the Computer Science Department. Sandholm, a leading expert in game theory and AI, served as the project lead, while Brown acted as the primary developer, handling much of the implementation and algorithmic innovation.

The name "Libratus" derives from Latin, meaning "balanced", which reflects the program's focus on approximating equilibrium strategies in imperfect-information games like poker. This etymology underscores the core game-theoretic principle of maintaining a balanced strategy that cannot be exploited, a hallmark of Libratus's design philosophy.

The development required substantial computational resources, with Libratus trained using over 15 million core hours on the Bridges supercomputer at the Pittsburgh Supercomputing Center. This scale represented a significant increase from the 2-3 million core hours used for Claudico, enabling deeper exploration of the game's vast strategy space. Funding and institutional support came from multiple sources, including National Science Foundation grants (IIS-1617590, IIS-1718457, CCF-1733556) and a U.S. Army Research Office award (W911NF-17-1-0082), which facilitated access to computing resources via the Extreme Science and Engineering Discovery Environment (XSEDE). Additional backing was provided by CMU's Computer Science Department and broader AI research initiatives, along with sponsorships from entities such as Rivers Casino for the associated Brains vs. AI challenge.

Technical Framework

Core Algorithms

Libratus employs CFR+ as its primary algorithm, a variant of counterfactual regret minimization (CFR) developed by Oskari Tammelin in 2014, which iteratively minimizes regret over possible actions to approximate a Nash equilibrium strategy in imperfect-information games like no-limit Texas hold'em poker. CFR+ enhances traditional CFR by using a regret-matching+ procedure that bounds regrets to non-negative values, enabling faster convergence to low-exploitability strategies without requiring extensive averaging of past iterations.

In CFR+, strategies are updated through repeated traversals of the game tree, where at each information set, representing a player's partial knowledge of the game state, the algorithm computes counterfactual regrets. These regrets quantify the difference between the expected value of taking a specific action and the expected value of the current strategy, weighted by the counterfactual value of the information set (its expected value computed as if the player had played to reach it, accounting for hidden information such as opponents' cards). The new strategy for the next iteration is then derived by normalizing the positive counterfactual regrets for each action, averaging these regrets over many traversals to refine the policy toward equilibrium. This process handles the imperfect information inherent in poker by grouping indistinguishable states into information sets and computing regrets conditionally on reaching those sets, ensuring the strategy remains robust against hidden elements.

Unlike earlier poker AIs that relied on coarse action abstractions with fixed bet sizes to reduce computational complexity, Libratus eliminates traditional action abstraction by dynamically computing strategies for arbitrary bet sizes during gameplay. When an opponent selects a bet outside the precomputed abstraction, Libratus constructs and solves a new subgame in real time using CFR+, allowing flexible, precise responses without predefined limits on action granularity.

Libratus integrates CFR+ with real-time game-tree search for post-flop endgame scenarios, solving nested subgames to refine strategies beyond the initial precomputation. This involves applying CFR+ to finer-grained abstractions in these subgames, ensuring decisions incorporate the latest board information while maintaining equilibrium guarantees. The approach is supported by high-performance computing clusters that enable the millions of CFR+ iterations required for these computations.
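
The regret-matching+ update at the heart of CFR+ is compact enough to sketch directly. The following Python fragment is an illustrative sketch only, not Libratus's code (the class and method names are invented here); it shows the property that distinguishes CFR+ from vanilla CFR: accumulated regrets are clipped at zero after every update, and the current strategy is the normalization of those non-negative regrets.

```python
import numpy as np

class RegretMatchingPlus:
    """Regret-matching+ at a single information set (illustrative sketch).

    In a full CFR+ solver, one of these would exist per information set,
    and the counterfactual values would come from game-tree traversals.
    """

    def __init__(self, n_actions: int):
        self.regret_sum = np.zeros(n_actions)    # clipped regrets, kept >= 0
        self.strategy_sum = np.zeros(n_actions)  # accumulates the average strategy

    def strategy(self) -> np.ndarray:
        """Current strategy: normalize the non-negative regrets."""
        total = self.regret_sum.sum()
        if total > 0:
            return self.regret_sum / total
        return np.full(len(self.regret_sum), 1.0 / len(self.regret_sum))

    def update(self, cf_values: np.ndarray, reach_weight: float = 1.0) -> None:
        """Add instantaneous regrets, then clip at zero (the '+' in CFR+)."""
        sigma = self.strategy()
        expected = sigma @ cf_values              # value of the current strategy
        instantaneous = cf_values - expected      # per-action regret
        self.regret_sum = np.maximum(self.regret_sum + instantaneous, 0.0)
        self.strategy_sum += reach_weight * sigma

# Toy demonstration with fixed counterfactual values: the average strategy
# concentrates on the highest-value action (index 0).
rm = RegretMatchingPlus(3)
for _ in range(1000):
    rm.update(np.array([1.0, 0.0, -0.5]))
print(rm.strategy_sum / rm.strategy_sum.sum())
```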

Strategy Generation and Adaptation

Libratus began by computing an initial blueprint strategy through extensive pre-match training using an enhanced version of counterfactual regret minimization (CFR+), conducted over millions of iterations to approximate a Nash equilibrium within a highly abstracted version of the game. This abstraction reduced the immense complexity of heads-up no-limit hold'em, which features over 10^161 information sets, by grouping similar hands and limiting action choices to common bet sizes derived from professional play patterns. The training process, which refined strategies across abstracted decision points numbering in the billions, leveraged the Pittsburgh Supercomputing Center's Bridges platform for approximately 15 million core hours, establishing a robust baseline that performed near-optimally against itself.

During the 20-day match, Libratus refined its blueprint strategy daily through overnight self-improvement cycles, analyzing hands from the previous day's play to detect opponent tendencies, such as frequent use of specific bet sizes. These updates incorporated opponent modeling indirectly by expanding the action abstraction, adding up to three new bet sizes tailored to observed behaviors, and recomputing equilibria with CFR+, ensuring adaptations exploited deviations without over-exploiting. This process consumed around 4 million core hours on Bridges across the competition, allowing Libratus to iteratively strengthen its universal strategy against the rotating team of professionals.

In real time during each hand, Libratus discarded its precomputed blueprint and employed a nested endgame solver to generate actions from the current board state, constructing finer-grained abstractions on the fly and solving them with CFR+ in seconds using 50 nodes on Bridges. This approach incorporated opponent modeling by safely constraining solutions to remain consistent with the blueprint while adapting to off-script moves, such as unusual bet sizes, thereby avoiding exploitable rounding errors. For instance, if an opponent bet $101 instead of a standard $100, Libratus would create an augmented subgame to compute an optimal response rather than simply rounding to the nearest abstracted action.

Libratus's handling of bluffing and bet sizing emerged naturally from these equilibrium computations, eschewing predefined mappings in favor of context-specific equilibria that often produced unconventional plays. The AI frequently generated overbets or irregular sizes, such as raising $20,000 into a $100 pot, when the math of the subgame indicated value, blending bluffs seamlessly with value bets to maintain unexploitable balance. This dynamic sizing, informed by real-time solves, pressured humans by deviating from intuitive patterns while staying grounded in Nash-optimal principles.
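
The rounding problem that motivates this re-solving can be seen in a toy single-street calculation. The sketch below is a hedged illustration using standard bluff-indifference arithmetic, not Libratus's actual solver: the equilibrium defending frequency against a pure bluff depends on the exact bet size, so translating an observed $101 bet to an abstracted $100 produces a slightly wrong answer at every such decision, and those small errors compound.

```python
def equilibrium_call_frequency(pot: float, bet: float) -> float:
    """Calling frequency that makes a pure bluff zero-EV for the bettor:
    bluff EV = (1 - c) * pot - c * bet = 0  =>  c = pot / (pot + bet)."""
    return pot / (pot + bet)

pot = 100.0
for bet in (100.0, 101.0):
    c = equilibrium_call_frequency(pot, bet)
    print(f"bet ${bet:.0f} into ${pot:.0f}: defend with frequency {c:.4f}")

# Action translation answers the $101 bet with the $100 frequency (0.5000)
# instead of the correct 0.4975; re-solving an augmented subgame that
# contains the exact $101 action removes this systematic error.
```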

2017 Brains vs. AI Challenge

Event Organization and Participants

The Brains vs. AI challenge, formally titled "Brains Vs. AI: Upping the Ante," was organized by Carnegie Mellon University (CMU) in partnership with Rivers Casino and the Pittsburgh Supercomputing Center, with sponsorship from entities including GreatPoint Ventures, Avenue4Analytics, TNG Technology Consulting, the journal Artificial Intelligence, Intel, and Optimized Markets. The event took place at Rivers Casino in Pittsburgh, Pennsylvania, from January 11 to January 30, 2017, spanning 20 days.

The competition featured 120,000 hands of heads-up no-limit Texas Hold'em poker, played at a rate of approximately 6,000 hands per day across two tables to facilitate parallel matches. Standard poker rules applied, including no-limit betting where players could wager any amount up to their available chips, with duplicate matches used to minimize the role of luck by ensuring the humans faced the same card distributions as the AI opponent. Libratus operated in real time during play, averaging about 13 seconds of computation per hand, while human players were permitted extended deliberation time as needed.

The human participants were four leading professional poker players renowned for their expertise in heads-up no-limit Texas Hold'em: Jason Les, Dong Kim, Daniel McAulay, and Jimmy Chou, all ranked among the top specialists in the discipline. Although they competed individually against Libratus, each playing 30,000 hands, the professionals collaborated outside of matches by analyzing hand histories to identify potential weaknesses in the AI's strategy. The prize structure awarded the human players a shared pool of $200,000, distributed based on their relative performance against Libratus regardless of the overall outcome, while any "winnings" accrued by the AI remained virtual chips with no monetary value to its creators.

Match Results and Performance Metrics

In the 2017 Brains vs. AI: Upping the Ante challenge, Libratus emerged victorious after 120,000 hands of heads-up no-limit Texas Hold'em against four top human professionals, amassing a total of $1,766,250 in chips. This equated to an average win rate of 14.7 big blinds per 100 hands (bb/100), a decisive margin that surpassed the threshold for statistical significance (p = 0.0002). The humans, who collaborated to develop a combined strategy aimed at exploiting perceived weaknesses in Libratus, were unable to do so effectively over the 20-day event.

Libratus's performance varied against each opponent, with individual chip losses for the humans as follows: Jason Les at -$880,087, Jimmy Chou at -$522,857, Daniel McAulay at -$277,657, and Dong Kim at -$85,649, making Kim the closest competitor. These results highlighted Libratus's robustness, as no single human could mount a sustained challenge despite rotating play and strategic adjustments.

The match began with Libratus establishing a small lead in the early days, but the humans narrowed the gap significantly on day six, winning $108,775 in chips after a modest loss of $8,189 the previous day. However, Libratus pulled ahead decisively in the later stages, particularly as its adaptive updates refined its strategy against the opponents' evolving tactics, ending with a commanding overall advantage. This outcome markedly exceeded the performance of the prior AI, Claudico, which had lost to a similar team of humans at approximately -9 bb/100 over 80,000 hands in 2015. Post-event analysis by the organizers and independent review confirmed the results' integrity, with no evidence of cheating or rule violations, attributing the victory solely to Libratus's algorithmic superiority.

Innovations and Analysis

Key Technical Innovations

Libratus introduced a novel endgame-solving method based on nested subgame solving, which computes Nash equilibrium strategies for the remaining subgame in real time without relying on action abstractions. This approach begins as early as the third betting round and responds to every subsequent opponent action, constructing finer-grained subgames that account for the specific cards and history of the hand, while ensuring the strategy remains provably safe and no more exploitable than the precomputed blueprint strategy. Unlike prior methods that applied coarse abstractions throughout, this technique treats each hand individually post-flop, enabling precise adaptation to no-limit variability.

To avoid opponent exploitation, Libratus employed "safe" subgame solving with nested reasoning, which enlarges the strategy space based on observed opponent errors without introducing detectable patterns that humans could counter. By using reach-based margin maximization and lower bounds on "gifts" from opponent mistakes, the method guarantees that any deviation from equilibrium does not increase overall exploitability, focusing on robust game-theoretic play rather than aggressive opponent modeling. This mechanism prevented the AI from over-exploiting transient human errors, maintaining unexploitable strategies even under scrutiny.

Libratus achieved bet-sizing flexibility through real-time subgame solving, generating arbitrary bet amounts tailored to the current subgame rather than rounding to predefined abstraction sizes. This allowed creative bluffs and value bets by recomputing strategies from the point of deviation, such as responding to an off-tree opponent bet like $101 by solving a new subgame with customized sizes, outperforming action translation by over an order of magnitude in experimental exploitability.

The integration of abstractions in Libratus involved information-set-based techniques refined iteratively during training, combining no card abstraction in the early betting rounds with coarser abstractions later to reduce the immense state space from approximately 10^161 information sets to a manageable 10^12 decision points without sacrificing near-optimality. An improved Monte Carlo Counterfactual Regret Minimization (MCCFR) procedure accelerated convergence, layering these abstractions atop foundational CFR+ algorithms for blueprint strategy computation.

In comparison to predecessors like Claudico, which relied on fixed abstractions across the entire game tree, leading to suboptimal handling of no-limit dynamics, Libratus's dynamic real-time refinement via nested solving and self-improvement modules enabled superior performance against variable strategies.
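
The margin bookkeeping behind this safety guarantee can be illustrated numerically. The following toy sketch uses invented values and loosely follows the definitions in Brown and Sandholm's safe-solving work: each opponent hand's margin compares what the blueprint promised that hand for entering the subgame with what the re-solved strategy now gives it, plus any "gift" the opponent forfeited through earlier mistakes; non-negative margins mean the re-solve cannot raise exploitability.

```python
import numpy as np

# Illustrative values only, one entry per opponent hand type.
blueprint_value = np.array([0.40, 0.10, -0.25])  # value promised by the blueprint
resolved_value  = np.array([0.35, 0.05, -0.20])  # value under the re-solved subgame
gifts           = np.array([0.00, 0.00, 0.10])   # value forfeited by earlier mistakes

# margin >= 0 for every hand means entering the subgame gained the
# opponent nothing relative to the blueprint's guarantee.
margins = blueprint_value - resolved_value + gifts
print(margins)                 # [0.05 0.05 0.05]
assert (margins >= 0).all()    # safety condition holds in this toy example
```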

Post-Match

Following the Brains vs. AI challenge, professional poker players provided detailed feedback on Libratus's gameplay, highlighting its unpredictable betting patterns and its absence of the exploitable tells common in human opponents. Players noted the AI's frequent use of massive overbets and bluffs in unconventional situations, which disrupted their ability to read its intentions effectively. For instance, Dong Kim, the highest-performing participant, described the experience as "extremely tough" due to Libratus's consistent improvement and superhuman precision, stating it felt as though the AI could almost see his cards. Jason Les echoed this, calling the AI's performance "incredibly challenging and demoralising," as its wide range of bet sizes, from tiny probes to aggressive overbets, made it impossible to pattern-match against its tendencies.

Observations of Libratus's behavior during the match revealed a sophisticated capacity for rapid adaptation to human styles, often exploiting aggressive tendencies by countering with timely bluffs or value-maximizing bets in spots where humans rarely would. Jimmy Chou remarked that the AI improved noticeably each day, adjusting its strategy overnight based on observed human plays, which forced the players to continually revise their approaches without gaining a lasting edge. This adaptability manifested in frequent bluffs from unlikely ranges, such as weak hands in multi-street pots, contributing to the AI's overall edge despite not dominating every confrontation.

In terms of statistical breakdowns, Libratus secured victory in the 120,000-hand match by winning approximately half the pots contested but excelling in value extraction through optimal bet sizing, achieving a net gain of $1,766,250 in chips against the human team. Its equilibrium play maintained an exploitability error rate below 1 milli-big blind per hand in tested subgames, far surpassing prior AIs and rendering it nearly unexploitable within the time constraints of live play.

Post-match evaluations identified theoretical limitations in Libratus, noting that while it remained vulnerable to extreme, coordinated exploits in abstracted game states, the human players could not identify or capitalize on these in real time during the event. The professionals attempted daily to probe weaknesses, such as early-round reliance on precomputed strategies, but the AI's nightly self-improvement module patched these gaps effectively, preventing sustained human advantages.

Media and expert reactions portrayed the match as a pivotal breakthrough in AI for imperfect-information games, though not a complete solve of no-limit hold'em given the game's vast strategic depth. Coverage in major technology outlets emphasized the AI's superhuman performance against top professionals, with a decisive margin of 147 milli-big blinds per hand at 99.98% confidence. Commentators hailed it as a landmark in AI game-play, underscoring its implications while acknowledging that poker remains unsolved, as even Libratus approximated equilibria rather than computing perfect ones. Experts like Tuomas Sandholm, Libratus's co-creator, described the win as historic but cautioned that further advancements were needed for multi-player variants.

Impact on AI Research

Contributions to Imperfect Information Games

Libratus represented a pivotal milestone in solving imperfect-information games, achieving the first superhuman performance by defeating four top human professionals in heads-up no-limit Texas hold'em poker, a benchmark challenge in artificial intelligence since the late 1990s. This accomplishment, demonstrated in the 2017 Brains vs. AI challenge through 120,000 hands played over 20 days, underscored the feasibility of AI systems handling vast decision spaces (on the order of 10^161 information sets) with hidden information and adversarial opponents.

The system's algorithmic advancements centered on enhancements to counterfactual regret minimization (CFR), particularly CFR+, an optimized variant that accelerated convergence by pruning suboptimal actions and incorporating positive regret weighting. Complementing this, Libratus introduced safe and nested subgame solving for endgame scenarios, enabling real-time computation of low-exploitability strategies without poker-specific abstractions, which reduced overall exploitability by more than an order of magnitude compared to prior methods. These techniques, detailed in the seminal 2017 paper by Noam Brown and Tuomas Sandholm, have garnered over 1,000 citations by 2025, reflecting their foundational impact.

Libratus's methods generalized beyond poker to other imperfect-information domains, such as security games and auctions, due to their domain-independent reliance on regret minimization for scalable equilibrium approximation under uncertainty. This work influenced subsequent AI developments, paving the way for Pluribus, the same team's 2019 system that achieved superhuman play in six-player no-limit poker, and serving as a contemporary benchmark alongside rival DeepStack, which reached expert-level play in heads-up no-limit poker around the same period. Overall, Libratus demonstrated the scalability of regret-based algorithms for real-world decision-making in uncertain, information-asymmetric environments, advancing the broader AI landscape.

Broader Applications

The techniques underlying Libratus, particularly counterfactual regret minimization (CFR+), have been adapted to model cybersecurity scenarios as imperfect-information games, where defenders must anticipate hidden attacker intentions, such as in intrusion detection systems facing unknown threats. For instance, real-time strategic decision-making against zero-day vulnerabilities can leverage Libratus-inspired search methods to compute balanced defenses, as highlighted by creator Tuomas Sandholm during discussions on applying poker AI to thwart hackers.

In business negotiations and auctions, these methods inform equilibrium strategies for scenarios involving private information, such as combinatorial auctions or negotiations over contracts. Sandholm has noted their relevance to auctions and negotiations, where agents must randomize actions to avoid exploitation while maximizing outcomes. For spectrum auctions managed by regulatory bodies like the FCC, game-theoretic approaches derived from imperfect-information solvers help design mechanisms that encourage truthful bidding under uncertainty about competitors' valuations. Beyond these, the core methods apply to military strategy, where the U.S. Department of Defense contracted Sandholm's team for up to $10 million to integrate poker AI into war games, simulating opponent deception in imperfect-information environments.

Despite this potential, Libratus's techniques demand substantial computation, originally requiring a supercomputer for training, which limits scalability in resource-constrained settings. Post-2017 extensions, such as the Pluribus AI, mitigate this by enabling multiplayer scenarios on modest hardware, like a single 64-core server, facilitating applications in group negotiations or multi-agent systems.
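
As a concrete illustration of this transferability, the sketch below runs regret matching on a tiny zero-sum defender-versus-attacker matrix game; the payoff matrix and iteration count are invented for illustration. The average strategies converge to a mixed Nash equilibrium, the same kind of balanced, randomized play that Libratus-style solvers compute at vastly larger scale.

```python
import numpy as np

# Defender's payoff for monitoring system i while the attacker targets j.
payoff = np.array([[ 1.0, -0.6],
                   [-0.4,  1.0]])

def regret_matching(payoff: np.ndarray, iters: int = 20000):
    n, m = payoff.shape
    r_def, r_atk = np.zeros(n), np.zeros(m)   # cumulative regrets
    s_def, s_atk = np.zeros(n), np.zeros(m)   # cumulative strategies
    for _ in range(iters):
        p = np.maximum(r_def, 0); p = p / p.sum() if p.sum() > 0 else np.full(n, 1 / n)
        q = np.maximum(r_atk, 0); q = q / q.sum() if q.sum() > 0 else np.full(m, 1 / m)
        value = p @ payoff @ q
        r_def += payoff @ q - value            # defender regrets per action
        r_atk += value - p @ payoff            # attacker (minimizer) regrets
        s_def += p; s_atk += q
    return s_def / iters, s_atk / iters        # average strategies -> Nash

p, q = regret_matching(payoff)
print("defender mix:", p.round(3), "attacker mix:", q.round(3))
```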
