Stochastic parrot
In machine learning, the term stochastic parrot is a metaphor, introduced by Emily M. Bender and colleagues in a 2021 paper, that frames large language models as systems that statistically mimic text without real understanding. The term carries a negative connotation.
The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell"). They argued that large language models (LLMs) present dangers such as environmental and financial costs, inscrutability leading to unknown dangerous biases, and potential for deception, and that LLMs cannot understand the concepts underlying what they learn.
The word "stochastic" – from the ancient Greek "στοχαστικός" (stokhastikos, "based on guesswork") – is a term from probability theory meaning "randomly determined". The word "parrot" refers to parrots' ability to mimic human speech, without understanding its meaning.
In their paper, Bender et al. state that LLMs are "stitching together sequences of linguistic forms... observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning." Hence the label "stochastic parrots". According to the machine learning researchers Lindholm, Wahlström, Lindsten, and Schön, the analogy highlights two vital limitations: an LLM is constrained by the data it was trained on, stochastically repeating the contents of its datasets, and because it merely generates output from that data, it cannot tell whether what it says is incorrect or inappropriate.
Lindholm et al. noted that, with poor quality datasets and other limitations, a learning machine might produce results that are "dangerously wrong".
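The mechanism described above, stitching together linguistic forms according to probabilistic information about how they co-occur, can be illustrated with a toy bigram Markov chain. This is a hedged sketch for intuition only, not the method analyzed in the paper; the corpus and the `parrot` function are invented for the example.

```python
import random
from collections import defaultdict

# A tiny training "corpus"; real LLMs train on vastly larger data.
corpus = "the parrot repeats the phrase and the parrot repeats it again".split()

# Count which word follows which: the "probabilistic information
# about how they combine", with no representation of meaning.
follows = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev].append(nxt)

def parrot(start, length=8, seed=0):
    """Generate text by sampling each next word from observed successors."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        successors = follows.get(out[-1])
        if not successors:  # dead end: this word was never followed by anything
            break
        out.append(rng.choice(successors))
    return " ".join(out)

print(parrot("the"))
```

Every word the generator emits was observed in training, and each choice is random within the observed statistics, which is the sense in which such a system is both a "parrot" and "stochastic".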
Gebru was asked by Google to retract the paper or remove the names of Google employees from it. According to Jeff Dean, the lead of Google AI at the time, the paper "didn't meet our bar for publication". In response, Gebru listed conditions to be met, stating that otherwise they could "work on a last date". Dean wrote that one of these conditions was for Google to disclose the reviewers of the paper and their specific feedback, which Google declined. Shortly after, she received an email saying that Google was "accepting her resignation". Her firing sparked a protest by Google employees, who believed the intent was to censor Gebru's criticism.
The phrase has been used by AI skeptics to signify that LLMs lack understanding of the meaning of their outputs.
Sam Altman, CEO of OpenAI, used the term shortly after the release of ChatGPT, tweeting "i am a stochastic parrot, and so r u". The American Dialect Society selected the term as its 2023 AI-related Word of the Year.
