LessWrong
LessWrong (also written Less Wrong) is a community blog and forum focused on discussion of cognitive biases, philosophy, psychology, economics, rationality, and artificial intelligence, among other topics. It is associated with the rationalist community.

LessWrong describes itself as an online forum and community aimed at improving human reasoning, rationality, and decision-making, with the goal of helping its users hold more accurate beliefs and achieve their personal objectives. The best-known posts on LessWrong are "The Sequences", a series of essays that aim to describe how to avoid the typical failure modes of human reasoning, with the goal of improving decision-making and the evaluation of evidence. One suggestion is the use of Bayes' theorem as a decision-making tool. There is also a focus on psychological barriers to good decision-making, such as fear conditioning and the cognitive biases studied by the psychologist Daniel Kahneman. LessWrong is also concerned with artificial intelligence, transhumanism, existential threats, and the singularity.
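The Bayesian updating advocated in the paragraph above can be illustrated with a short sketch. This is a generic illustration of Bayes' theorem, not code taken from LessWrong; the function name and the example numbers (a 1% prior, a 90% true-positive rate, a 5% false-positive rate) are assumptions chosen for the example:

```python
def bayes_update(prior, likelihood, false_positive_rate):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]
    """
    evidence = likelihood * prior + false_positive_rate * (1 - prior)
    return likelihood * prior / evidence

# A hypothesis with a 1% prior, updated on evidence from a fairly
# reliable test (90% true-positive rate, 5% false-positive rate):
posterior = bayes_update(prior=0.01, likelihood=0.9, false_positive_rate=0.05)
print(round(posterior, 3))  # ≈ 0.154
```

Note that even strong evidence for a very unlikely hypothesis leaves the posterior well below 50% here, which is the kind of base-rate reasoning the Sequences emphasize.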

LessWrong developed from Overcoming Bias, an earlier group blog focused on human rationality, which began in November 2006, with artificial intelligence researcher Eliezer Yudkowsky and economist Robin Hanson as the principal contributors. In February 2009, Yudkowsky's posts were used as the seed material to create the community blog LessWrong, and Overcoming Bias became Hanson's personal blog. In 2013, a significant portion of the rationalist community shifted focus to Scott Alexander's Slate Star Codex.

Discussions of AI within LessWrong include AI alignment, AI safety, and machine consciousness. Articles posted on LessWrong about AI have been cited in the news media. LessWrong and its surrounding movement's work on AI are the subject of the 2019 book The AI Does Not Hate You, written by former BuzzFeed science correspondent Tom Chivers.

LessWrong played a significant role in the development of the effective altruism (EA) movement, and the two communities are closely intertwined. In a survey of LessWrong users in 2016, 664 out of 3,060 respondents, or 21.7%, identified as "effective altruists". A separate survey of effective altruists in 2014 revealed that 31% of respondents had first heard of EA through LessWrong, though that number had fallen to 8.2% by 2020.

In July 2010, LessWrong contributor Roko posted a thought experiment to the site in which an otherwise benevolent future AI system tortures anyone who heard of the AI before it came into existence but failed to work tirelessly to bring it about, in order to incentivise such work. The idea came to be known as "Roko's basilisk", based on Roko's suggestion that merely hearing about it would give the hypothetical AI system an incentive to attempt such blackmail.

After LessWrong split from Overcoming Bias, it attracted some individuals affiliated with neoreaction with discussions of eugenics and evolutionary psychology. However, Yudkowsky has strongly rejected neoreaction. Additionally, in a survey among LessWrong users in 2016, only 28 out of 3060 respondents (0.92%) identified as "neoreactionary".

Ana Teixeira Pinto, writing for the journal Third Text in 2019, describes Roko's Basilisk and the ethno-nationalist blog "More Right", founded by a LessWrong participant, as phenomena related to a "new configuration of fascist ideology taking shape under the aegis of, and working in tandem with, neoliberal governance".
