Peyton Young
Hobart Peyton Young (born March 9, 1945) is an American game theorist and economist known for his contributions to evolutionary game theory and its application to the study of institutional and technological change, as well as the theory of learning in games. He is James Meade Professor of Economics Emeritus at the University of Oxford and a professorial fellow of Nuffield College, Oxford.
Peyton Young was named a fellow of the Econometric Society in 1995, a fellow of the British Academy in 2007, and a fellow of the American Academy of Arts and Sciences in 2018. He served as president of the Game Theory Society from 2006 to 2008 and as Centennial Professor at the London School of Economics from 2015 to 2021. He has published widely on learning in games, the evolution of social norms and institutions, cooperative game theory, bargaining and negotiation, taxation and cost allocation, political representation, voting procedures, and distributive justice.
In 1966, he graduated cum laude in general studies from Harvard University. He completed a PhD in mathematics at the University of Michigan in 1970, where he was awarded the Sumner B. Myers thesis prize for his work in combinatorial mathematics.
His first academic post was at the graduate school of the City University of New York, where he was assistant professor and then associate professor from 1971 to 1976. From 1976 to 1982, Young was research scholar and deputy chairman of the Systems and Decision Sciences Division at the International Institute for Applied Systems Analysis in Laxenburg, Austria. He was professor of Economics and Public Policy in the School of Public Affairs at the University of Maryland, College Park from 1992 to 1994, and Scott & Barbara Black Professor of Economics at Johns Hopkins University from 1994 until moving to Oxford as James Meade Professor of Economics in 2007. In 2004 he was a Fulbright Distinguished Chair at the University of Siena. He was Centennial Professor at the London School of Economics from 2015 to 2021 and remains a professorial fellow of Nuffield College, Oxford.
Conventional concepts of dynamic stability, including the evolutionarily stable strategy concept, identify states from which small once-off deviations are self-correcting. These stability concepts are not appropriate for analyzing social and economic systems, which are constantly perturbed by idiosyncratic behavior, mistakes, and individual and aggregate shocks to payoffs. Building upon Freidlin and Wentzell's (1984) theory of large deviations for continuous-time processes, Dean Foster and Peyton Young (1990) developed the more powerful concept of stochastic stability: "The stochastically stable set [SSS] is the set of states such that, in the long run, it is nearly certain that the system lies within every open set containing S as the noise tends slowly to zero" [p. 221]. This solution concept had a major impact in economics and game theory after Young (1993) developed a more tractable version of the theory for general finite-state Markov chains. A state is stochastically stable if it attracts positive weight in the stationary distribution of the perturbed Markov chain as the noise tends to zero. Young developed powerful graph-theoretic tools for identifying the stochastically stable states.
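The mechanics can be illustrated with a minimal two-state example (the construction and numbers here are illustrative assumptions, not drawn from Young's papers). Suppose escaping state A requires two simultaneous mistakes, each of probability eps, while escaping state B requires only one. As eps tends to zero, the stationary distribution concentrates all of its weight on A, making A the unique stochastically stable state:

```python
# Illustrative sketch (not Young's own construction): a two-state perturbed
# Markov chain. Leaving A takes two simultaneous mistakes (prob eps**2);
# leaving B takes one (prob eps). The stationary distribution of a
# two-state chain is pi_A = p_BA / (p_AB + p_BA), so here
# pi_A = eps / (eps**2 + eps) = 1 / (1 + eps) -> 1 as eps -> 0.

def stationary(p_ab, p_ba):
    """Stationary distribution (pi_A, pi_B) of a two-state chain with
    transition probabilities P(A->B) = p_ab and P(B->A) = p_ba."""
    total = p_ab + p_ba
    return p_ba / total, p_ab / total

for eps in (0.1, 0.01, 0.001):
    pi_a, pi_b = stationary(p_ab=eps**2, p_ba=eps)
    print(f"eps={eps}: pi_A={pi_a:.4f}, pi_B={pi_b:.4f}")
```

As the noise shrinks, virtually all long-run weight sits on A, even though both states are strict equilibria of the unperturbed process.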
In an influential book, Individual Strategy and Social Structure, Young provides a clear and compact exposition of the major results in the field of stochastic evolutionary game theory, which he pioneered. He introduces his model of social interactions called 'adaptive play.' Agents are randomly selected from a large population to play a fixed game. They choose a myopic best response, based upon a random sample of past plays of the game. The evolution of the (bounded) history of play is described by a finite Markov chain. Idiosyncratic behavior or mistakes constantly perturb the process, so that every state is accessible from every other. This means that the Markov chain is ergodic, so there is a unique stationary distribution which characterizes the long-run behavior of the process. Recent work by Young and coauthors finds that evolutionary dynamics of this and other kinds can transit rapidly to stochastically stable equilibria from locally stable ones, when perturbations are small but nonvanishing (Arieli and Young 2016, Kreindler and Young 2013, Kreindler and Young 2014).
The theory is used to show that in 2×2 coordination games, the risk-dominant equilibrium will be played virtually all the time, as time goes to infinity. It also yields a formal proof of Thomas Schelling's (1971) result that residential segregation emerges at the social level even if no individual prefers to be segregated. In addition, the theory "demonstrates how high-rationality solution concepts in game theory can emerge in a world populated by low-rationality agents" [p. 144]. In bargaining games, Young demonstrates that the Nash (1950) and Kalai-Smorodinsky (1975) bargaining solutions emerge from the decentralized actions of boundedly rational agents without common knowledge.
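A small simulation conveys the flavor of this selection result (the game and all parameter values below are illustrative choices, not taken from the book). In a stag hunt where stag pays 4 against stag and 0 against hare, while hare pays 3 regardless, hare is risk-dominant even though stag is payoff-dominant. Under a stylized version of adaptive play, play started at the all-stag convention spends most of the long run at the risk-dominant hare convention:

```python
# Stylized adaptive play in a symmetric stag hunt (illustrative parameters).
# Each period an agent samples past plays from a bounded history,
# best-responds to the empirical frequencies, and trembles with prob eps.
import random

# PAYOFF[(own action, opponent action)]: S = stag, H = hare.
PAYOFF = {("S", "S"): 4, ("S", "H"): 0, ("H", "S"): 3, ("H", "H"): 3}

def best_response(sample):
    """Myopic best response to the empirical frequencies in a sample of past plays."""
    freq = {a: sample.count(a) / len(sample) for a in ("S", "H")}
    def expected(action):
        return sum(freq[opp] * PAYOFF[(action, opp)] for opp in ("S", "H"))
    return max(("S", "H"), key=expected)

def adaptive_play(periods=20000, memory=10, sample_size=5, eps=0.05, seed=0):
    """Run adaptive play from the all-S (payoff-dominant) convention and
    return the long-run frequency of the risk-dominant action H."""
    rng = random.Random(seed)
    history = ["S"] * memory
    h_count = 0
    for _ in range(periods):
        if rng.random() < eps:                  # idiosyncratic mistake
            action = rng.choice(("S", "H"))
        else:                                   # sample history, best-respond
            action = best_response(rng.sample(history, sample_size))
        history.pop(0)
        history.append(action)
        h_count += action == "H"
    return h_count / periods

print(f"long-run frequency of risk-dominant H: {adaptive_play():.3f}")
```

With no trembles (eps = 0) the all-stag convention would persist forever; small persistent noise is what lets the process escape to, and then remain near, the risk-dominant convention.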
Whereas evolutionary game theory studies the behavior of large populations of agents, the theory of learning in games focuses on whether the actions of a small group of players end up conforming to some notion of equilibrium. This is a challenging problem, because social systems are self-referential: the act of learning changes the thing to be learned. There is a complex feedback between a player's beliefs, their actions and the actions of others, which makes the data-generating process exceedingly non-stationary. Young has made numerous contributions to this literature. Foster and Young (2001) demonstrate the failure of Bayesian learning rules to learn mixed equilibria in games of uncertain information. Foster and Young (2003) introduce a learning procedure in which players form hypotheses about their opponents' strategies, which they occasionally test against their opponents' past play. By backing off from rationality in this way, Foster and Young show that there are natural and robust learning procedures that lead to Nash equilibrium in general normal form games.
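The hypothesis-testing idea can be caricatured in a few lines of code (a loose sketch under strong simplifying assumptions; the game, the test, and all parameters are illustrative and not the construction in Foster and Young 2003). In matching pennies, each player best-responds to a hypothesis about the opponent's mixing probability and periodically tests it against the opponent's recent play, redrawing the hypothesis when it is rejected; the long-run empirical frequencies then hover near the mixed equilibrium of one half:

```python
# Caricature of hypothesis-testing learning (illustrative, not the paper's
# construction). Matching pennies: player 0 wants to match, player 1 to
# mismatch. Each player holds a hypothesis about P(opponent plays heads),
# best-responds to it, and tests it every `window` periods.
import random

def hypothesis_testing_play(periods=20000, window=50, tol=0.2, seed=0):
    """Returns each player's long-run frequency of heads (1 = heads)."""
    rng = random.Random(seed)
    hyp = [0.5, 0.5]                 # hypothesized P(opponent plays heads)
    recent = [[], []]                # opponent's recent observed actions
    heads = [0, 0]
    for t in range(periods):
        a0 = 1 if hyp[0] >= 0.5 else 0      # matcher: copy the likelier action
        a1 = 0 if hyp[1] >= 0.5 else 1      # mismatcher: play the opposite
        for i, (own, opp) in enumerate([(a0, a1), (a1, a0)]):
            heads[i] += own
            recent[i].append(opp)
            if len(recent[i]) > window:
                recent[i].pop(0)
        if (t + 1) % window == 0:           # periodic hypothesis test
            for i in range(2):
                emp = sum(recent[i]) / len(recent[i])
                if abs(emp - hyp[i]) > tol:  # reject: draw a fresh hypothesis
                    hyp[i] = rng.random()
    return heads[0] / periods, heads[1] / periods

f0, f1 = hypothesis_testing_play()
print(f"heads frequencies: player 0 = {f0:.2f}, player 1 = {f1:.2f}")
```

Because matching pennies has no pure equilibrium, no pair of hypotheses survives testing forever; the constant rejection and redrawing keeps play moving, and its time average approximates the mixed equilibrium, loosely echoing the paper's message that occasional hypothesis testing, rather than full rationality, can deliver equilibrium behavior.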
