Joscha Bach
from Wikipedia

Joscha Bach (born 1973) is a German cognitive scientist, AI researcher, and philosopher known for his work on cognitive architectures, artificial intelligence, mental representation, emotion, social modeling, multi-agent systems, and philosophy of mind. His research aims to bridge cognitive science and AI by studying how human intelligence and consciousness can be modeled computationally.

Early life and education

Bach was born in Weimar, East Germany, and displayed an early interest in philosophy, artificial intelligence, and cognitive science.[1] He received an MA (computer science) from Humboldt University of Berlin in 2000 and a PhD (cognitive science) from Osnabrück University in 2006,[2][3] where he conducted research on emotion modeling and artificial minds. His doctoral work focused on developing MicroPsi, a cognitive architecture designed to simulate human-like reasoning and decision-making processes.[1]

Career

After completing his PhD, Bach focused his research on cognitive architectures and theory of mind. He has held positions in both academic and industrial research, contributing to both theoretical and applied AI.[4] His work frequently explores the boundaries of AI systems, questioning the limits of current machine learning technologies and addressing how future systems might achieve human-level general intelligence.[5]

Bach has worked at several prestigious institutions, including Martin Nowak's Harvard Program for Evolutionary Dynamics (PED).[6] He has also held research positions at the MIT Media Lab[7] and served as a vice president of research at AI Foundation, where he focused on developing AI systems capable of more sophisticated, human-like interactions.[8]

A 2019 article in Science reported that Bach received funding from Jeffrey Epstein after Epstein's first conviction,[9] citing a conference paper that includes a funding acknowledgement.[10] In January 2020, a report by Goodwin Procter, following fact-finding efforts by MIT, found that Bach was hired to the Media Lab in part because of Epstein's donations earmarked to support him; donations made in November 2013 and in July and September 2014 totaled $300,000 (about 40% of Epstein's post-conviction donations), corroborating these claims.[11] In May 2020, Harvard released a report of its own fact-finding efforts, finding that Martin Nowak permitted Bach access to PED offices between 2014 and 2019, but that "Harvard never paid or received funds to support" Bach's research. The Harvard report also notes that Bach was listed as a PED research scientist between 2014 and 2019, and that two papers published after Bach's departure from MIT acknowledge support from Epstein and PED.[12]

Research and contributions

Joscha Bach's research is largely centered on cognitive architectures—computational models that attempt to replicate aspects of human cognition.[13] His work includes:

Cyber Animism

Joscha Bach's concept of "Cyber Animism" proposes that consciousness may be a form of self-organizing software that exists not only in human brains but potentially in artificial systems and throughout nature. This idea revives ancient animist notions about spirits in nature but reinterprets them through a modern computational lens. Bach suggests that consciousness could be a kind of software running on our brains, and wonders whether similar "programs" might exist in plants or even entire ecosystems. He draws parallels between the self-organizing principles observed in biology and the potential for similar processes to occur in artificial intelligence systems, leading to the emergence of consciousness. Bach argues that we should blur the lines between human, artificial, and natural intelligence, and believes that consciousness might be more widespread and interconnected than previously thought. The concept also suggests that ancient notions of 'spirits' may actually refer to self-organizing software agents, and that consciousness itself could be a simple training algorithm for such systems.[14]

Principles of synthetic intelligence

In this book, Bach outlines the foundational principles of synthetic cognition, discussing how cognitive architectures could be designed to replicate human thought processes.[5]

MicroPsi

A cognitive architecture that models how agents think and act based on perception, emotion, and goal-driven behavior. Bach designed MicroPsi to simulate human-like reasoning and decision-making, contributing to AI systems that can navigate complex, real-world environments.[15]

Theories of consciousness

Bach is well known for his discussions on the nature of consciousness and the computational modeling of subjective experience.[5] He argues that consciousness emerges from an information-processing system capable of creating internal models of itself and the world. He emphasizes the importance of mental models, emotional frameworks, and meta-cognition in the construction of conscious AI.[3]

Cognitive limitations of AI

Bach has been a vocal critic of the current trends in machine learning, particularly the limitations of deep learning in creating truly intelligent systems. He contends that AI systems today lack understanding and operate more like "super-powered pattern recognition machines" than true cognitive agents.[citation needed] He advocated in 2020 for a move beyond current AI paradigms to develop machines capable of abstract reasoning, complex decision-making, and internal self-reflection.[better source needed][16]

Consciousness and free will

In addition to his technical research, Bach is engaged with philosophical questions surrounding consciousness and free will. He suggests that consciousness is an emergent property of highly complex information-processing systems that develop internal models of themselves and the world around them.[1] He often debates whether free will truly exists or is merely a byproduct of predictive models constructed by our brains—a question with implications for future AI systems.[citation needed]

Philosophical views

Bach's interests extend beyond AI and cognitive science to touch on deeper questions about consciousness, free will, the nature of reality, and the future of humanity in an age of intelligent machines.[17] His work is heavily influenced by philosophical discussions about phenomenology and epistemology.[18] He frequently engages in debates on the nature of the self, arguing that what we consider "self" is an illusion—a mental model constructed by the brain for practical purposes.[1]

Bach also envisions a future where AI might possess meta-cognition—the ability to be aware of its own thought processes and to reflect on them.[19] He suggests that while machines might achieve some level of subjective awareness, true consciousness in AI might only emerge when these systems can integrate their own experiences into a continuous narrative, much like humans do.[1]

He asserts that while today's AI systems are powerful, they are far from general intelligence.[17] He frequently discusses the limitations of AI, asserting that current AI lacks understanding or any true conception of the world around it. He has been a prominent critic of overhyping deep learning models, advocating instead for more nuanced approaches that incorporate cognitive models, emotion modeling, and ethical considerations into AI research.[citation needed]

Public engagement

In addition to his academic work, Bach is a prolific speaker and communicator who regularly shares his insights on cognitive science, AI, and philosophy. He has given numerous talks at conferences, including TEDx, where he has covered topics such as the nature of intelligence, the future of AI, and the possibility of creating conscious machines.[better source needed][20]

Bach is also an active participant in online discussions about AI and consciousness, appearing in podcasts, interviews, and public lectures.[19]

from Grokipedia
Joscha Bach (born 21 December 1973) is a German cognitive scientist and researcher renowned for developing cognitive architectures that integrate motivation, emotion, and grounded representation to model human-like cognition. His work emphasizes first-principles approaches to understanding the mind as a computational process, challenging reductionist views by positing minds as self-organizing software running on biological or synthetic hardware. Bach earned an MA in computer science from Humboldt University of Berlin in 2000 and a PhD in cognitive science from the University of Osnabrück, where his dissertation advanced models of emotion, motivation, and multi-agent systems. He subsequently held research positions at institutions including the MIT Media Lab, Harvard's Program for Evolutionary Dynamics, and Intel Labs, contributing to computational frameworks for motivation, mental representation, and social modeling. A pivotal achievement is the MicroPsi architecture, an open-source system designed to enable agents with intrinsic motivations and adaptive behaviors, influencing subsequent efforts in artificial general intelligence.

In addition to his technical contributions, Bach authored Principles of Synthetic Intelligence: PSI: An Architecture of Motivated Cognition (2009), which outlines a unified theory of cognition grounded in empirical psychology and computational neuroscience, advocating for synthetic systems that replicate the causal dynamics of natural minds over mere statistical pattern-matching. Currently serving as strategic advisor at Liquid AI and founding director of the California Institute for Machine Consciousness, he critiques overhyped narratives around AI existential risks, arguing instead for rigorous modeling of agency and coherence in intelligent systems to harness their potential benefits. His interdisciplinary perspective, blending philosophy of mind with engineering, positions him as a key thinker in debates on whether advanced AI will emerge as conscious entities or mere simulators.

Biography

Early life

Joscha Bach was born on December 21, 1973, in Weimar, East Germany. His parents, former architecture students disillusioned with the Brutalist aesthetic of the East German state, acquired an abandoned mill in the countryside and transformed it into a self-sufficient homestead featuring sculpture gardens and spaces for musical performances. Described as "East German hippies," they prioritized artistic and unconventional living; Bach's father frequently improvised structural changes to the property, such as removing a wall to create a new doorway mid-meal. His mother, originating from a lineage of Communist politicians, navigated the regime's ideological demands through adept "doublethink." Bach's upbringing in this remote, wooded setting was marked by isolation and minimal parental oversight, with his parents treating family life as a peripheral "side project" amid larger artistic endeavors, which instilled early self-reliance but also profound loneliness. Socially alienated among peers—he later reflected on failing to integrate and being perceived as arrogant for his introspective tendencies—he compensated through voracious, self-directed reading of philosophy, religious texts, and the writings of figures such as Gandhi. An early fascination with computing emerged when Bach acquired a Commodore 64, on which he taught himself to program, creating simple games to simulate companionship in the absence of playmates. This hands-on experimentation laid foundational interests in artificial intelligence and cognitive processes, though formal schooling felt stifling compared to his autonomous pursuits.

Education

Bach earned a degree in computer science, equivalent to a master's degree in the German system, from Humboldt University of Berlin in 2000, following studies from 1994 to 2000 with a primary focus on artificial intelligence and a secondary subject in philosophy. During this period, he completed graduate studies in computer science at the University of Waikato in New Zealand from 1998 to 1999, where he contributed to research on data compression techniques, including lexical attraction models for text structure extraction. His diploma thesis addressed multi-model prediction for textual data processing. In 2007, Bach received a PhD in cognitive science from the University of Osnabrück, with his dissertation, completed in March of that year, focusing on the MicroPsi cognitive architecture, a computational model integrating motivation, perception, and decision-making in agents. This work built on his earlier computational interests, emphasizing synthetic models of cognition rather than purely empirical psychological data. Prior to university, he attended the Institute for Preparation of International Studies in Halle from 1990 to 1992, likely as preparatory training following German reunification in eastern Germany.

Professional Career

Academic appointments

Bach began his academic career with a research assistant position in the Department of Computer Science at Humboldt University of Berlin from 2000 to 2003, where he led projects including the Socionics Project, MicroPsi Project, and a robotic soccer team within the Artificial Intelligence Group. During this period, he also taught seminars on topics such as "Socionics and Cognition" and "Emotional Agents" from 2000 to 2004. From 2003 to 2005, Bach served as a researcher and lecturer at the Institute of Cognitive Science, University of Osnabrück, focusing on cognitive architecture development. He continued teaching there until 2008, delivering courses including "Introduction to Mindbuilding" and "Cognitive HCI." In 2011–2012, he held a postdoctoral fellowship at the Berlin School of Mind and Brain, Humboldt University of Berlin. Later appointments included a research scientist role at the MIT Media Lab from 2014 to 2016, during which he taught courses such as "Future Destination of Artificial Intelligence" in 2015–2016, and a research scientist position at the Harvard Program for Evolutionary Dynamics from 2016 to 2019.

Period | Institution | Role
2000–2003 | Humboldt University of Berlin | Research Assistant, AI Group
2003–2005 | University of Osnabrück | Researcher and Lecturer
2011–2012 | Humboldt University of Berlin | Postdoctoral Fellow
2014–2016 | MIT Media Lab | Research Scientist
2016–2019 | Harvard Program for Evolutionary Dynamics | Research Scientist

Industry and research positions

Bach served as Vice President of Research at the AI Foundation from 2019 to 2021, where he led a team of researchers in developing and publishing advancements in artificial intelligence, working closely with engineers to integrate theoretical models into practical applications. From 2021 to 2023, he worked as Principal AI Engineer in the Cognitive AI group at Intel Labs, focusing on computational models of motivation, mental representation, and multi-agent systems. Earlier in his career, Bach contributed to Micropsi Industries, a company commercializing cognitive architectures for robotic applications based on his MicroPsi framework. In recent years, he has taken on the role of AI Strategist at Liquid AI, advising on the development of advanced architectures aimed at enhancing AI performance in complex, long-horizon tasks. Additionally, Bach founded and directs the California Institute for Machine Consciousness, an organization dedicated to exploring computational theories of consciousness and synthetic minds. These positions reflect his emphasis on bridging theoretical cognitive science with scalable AI engineering.

Core Research Areas

MicroPsi cognitive architecture

MicroPsi is a cognitive architecture developed by Joscha Bach to model autonomous agents situated in dynamic environments, integrating motivation, emotion, and cognition as interdependent processes. It draws directly from Dietrich Dörner's Psi theory, which posits that human-like intelligence emerges from the interaction of motivation, emotional modulation, and adaptive cognition, formalized in Bach's implementations to enable grounded, neuro-symbolic agents. The architecture emphasizes motivation-driven learning and autonomous goal selection, where agents pursue goals to satisfy innate urges rather than following predefined rules, contrasting with purely symbolic or purely connectionist systems by combining sub-symbolic activation dynamics with hierarchical symbolic structures.

At its core, MicroPsi's motivational system quantifies agent needs—physiological (e.g., energy), social (e.g., affiliation, nurturing), and cognitive (e.g., competence, uncertainty reduction)—as urges with measurable strength (|v_d − c_d|) and urgency (|v_d − c_d| ∙ |v_d − v_0|⁻¹), where v_d represents desired levels, c_d current levels, and v_0 baseline levels. These urges propagate through the agent's node nets via pleasure/displeasure signals, reinforcing associations between situations, actions, and outcomes to shape behavior and long-term preferences. Motive selection prioritizes motives by evaluating expected reward, urgency, success probability, and execution cost, often using hill-climbing planners to generate action sequences. Parameters like need weights, decay rates, and gain/loss functions allow modeling individual differences, including personality traits, without architectural adjustments.

Emotion in MicroPsi emerges from three primary cognitive modulators—arousal (alertness level), resolution (processing depth), and selection threshold (focus intensity)—which filter and amplify perceptual inputs, allocate cognitive resources, and bias action selection based on motivational states. For instance, high arousal heightens responsiveness to urgent threats, while low resolution promotes exploratory behavior under uncertainty reduction urges. These modulators interact with motivation to produce adaptive affects, such as anxiety from intactness threats or satisfaction from affiliation fulfillment, enabling emergent emotional responses without explicit affective rules. In later iterations like MicroPsi 2, additional modulators like valence (pleasure/displeasure tone) and dominance (assertiveness) further refine this integration, linking low-level drives to higher-order social and exploratory behaviors.

Cognitive processes rely on hierarchical node nets as the representational substrate, where nodes encode objects, situations, actions, or plans, connected via weighted gates for generative (predictive), associative (contemporary), and retrodictive links, annotated with spatial, temporal, and intensity data. Perception employs hypothesis-driven "hypercepts" to construct percepts from raw sensor data, grounding abstract symbols in situated contexts, while memory stores generalized schemas activated by spreading activation patterns from current urges. Planning constructs hierarchical action scripts as triplets (precondition, actuator, postcondition), modulated by meta-management for resource optimization. Learning occurs motivationally, strengthening pathways that resolve discrepancies between desired and actual states, fostering context-dependent recall and autonomous exploration.

Implementations of MicroPsi, starting with early prototypes, include multi-agent simulations in virtual worlds connected via a central server, supporting experiments in category formation, communication, and cooperation without low-level perceptual tasks like image recognition. MicroPsi 2, available as an open-source Python toolkit since at least 2015, extends this to neuro-symbolic agent design, enabling scalable modeling of motivated cognition for applications in AI research. The architecture has been applied to simulate human performance traits and emergent affective dynamics, demonstrating how motivation-centric designs yield flexible, goal-directed behavior over rigid optimization.
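The motivational calculus sketched above can be made concrete with a short, illustrative Python example. This is not code from the MicroPsi toolkit: the Need and Motive classes, their field names, and the scoring rule are assumptions made for illustration, directly transcribing the strength and urgency expressions quoted above and the selection criteria named in the text (expected reward, urgency, success probability, execution cost).

```python
from dataclasses import dataclass

@dataclass
class Need:
    """Illustrative need/urge record (not MicroPsi API): desired, current, and baseline levels."""
    name: str
    v_d: float   # desired level
    c_d: float   # current level
    v_0: float   # baseline level

    def strength(self) -> float:
        # Urge strength as quoted in the text: |v_d - c_d|
        return abs(self.v_d - self.c_d)

    def urgency(self) -> float:
        # Urgency as quoted in the text: |v_d - c_d| * |v_d - v_0|^-1
        denom = abs(self.v_d - self.v_0)
        return self.strength() / denom if denom > 0 else self.strength()

@dataclass
class Motive:
    """A candidate motive derived from a need, with assumed planner estimates."""
    need: Need
    expected_reward: float      # estimated reward for satisfying the need
    success_probability: float  # planner's estimate of success
    execution_cost: float       # estimated cost of the action sequence

def motive_score(m: Motive) -> float:
    # Hypothetical combination of the selection criteria named in the text.
    return (m.expected_reward * m.need.urgency() * m.success_probability) - m.execution_cost

def select_motive(motives: list[Motive]) -> Motive:
    # Pick the highest-scoring motive; MicroPsi itself uses richer dynamics.
    return max(motives, key=motive_score)

if __name__ == "__main__":
    energy = Need("energy", v_d=1.0, c_d=0.3, v_0=0.5)
    affiliation = Need("affiliation", v_d=0.8, c_d=0.7, v_0=0.4)
    motives = [
        Motive(energy, expected_reward=0.9, success_probability=0.8, execution_cost=0.2),
        Motive(affiliation, expected_reward=0.5, success_probability=0.9, execution_cost=0.1),
    ]
    chosen = select_motive(motives)
    print(f"selected motive: {chosen.need.name} (score {motive_score(chosen):.2f})")
```

Run as a script, the example selects the energy motive, since its larger deviation from the desired level dominates the score; the point is only to show how strength, urgency, and planner estimates can be combined into a single selection step.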

Principles of synthetic intelligence

Joscha Bach defines synthetic intelligence as an approach to artificial intelligence that constructs autonomous cognitive agents through integrated architectures modeling motivated cognition, drawing from Dietrich Dörner's Psi theory. In his 2009 book Principles of Synthetic Intelligence: PSI—An Architecture of Motivated Cognition, Bach adapts the Psi theory to computational frameworks, emphasizing systems that maintain themselves by pursuing goals derived from innate urges such as energy preservation, affiliation, and competence. These architectures, exemplified by MicroPsi, use neurosymbolic representations—hierarchical networks combining symbolic relations (e.g., part-of, sub-type links) with sub-symbolic activation dynamics—to ground knowledge in sensorimotor interactions, enabling adaptive perception, planning, and action without relying on pre-programmed rules or isolated modules. Central to Bach's principles is the integration of motivation and emotion as functional modulators of cognition, rather than peripheral add-ons. Primary urges generate motives prioritized by urgency and realizability, with emotion manifesting as parameters like arousal (increasing sampling rate) and resolution level (balancing perceptual detail against speed). Perception operates via hypothesis-driven processes, constructing hierarchical schemas from sensory data filtered by motivational states, while actions follow a cycle of selection, execution, and monitoring through programs and operators. Learning occurs via reinforcement from pleasure/displeasure signals, strengthening relevant schemas and decaying unused ones, fostering emergent behaviors like chunking and trial-and-error problem-solving. This contrasts with traditional AI's focus on narrow optimization or symbolic logic, prioritizing autonomy in dynamic environments where agents negotiate conflicting goals. Bach articulates seven principles, derived from five decades of AI research and MicroPsi's development, advocating deliberate architectural design over purely emergent or methodologically constrained approaches:
  • Build whole functionalist architectures: Construct complete systems explicitly defining cognition's components, such as emotions influencing perception and action, avoiding essentialist reductions.
  • Avoid methodologism: Prioritize the field's core questions over tools like statistical models, preventing drift into unrelated domains.
  • Aim for the big picture: Integrate disciplines into unified theories, emulating historical scientific syntheses rather than fragmented experiments.
  • Build grounded systems without entanglement in symbol grounding: Use perceptual hierarchies for autonomous meaning-making, eschewing amodal symbols' scalability issues.
  • Do not await robotic embodiment: Representational anticipation suffices for grounding; virtual agents interacting via simulations or data streams can achieve generality.
  • Build autonomous systems: Equip agents with intrinsic goal-setting and motivational negotiation, enabling self-directed exploration beyond fixed objectives.
  • Intelligence requires implementation: Design functional structures explicitly, rejecting reliance on spontaneous emergence or biological mimicry alone.
These principles underscore Bach's view that synthetic intelligence demands parsimonious, biologically plausible models scalable to human-level cognition, with MicroPsi demonstrating viability through simulations of autonomous vehicles and agents maintaining multi-urge homeostasis.
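The treatment of emotion as a small set of global modulator parameters, rather than discrete emotion labels, is shared by the Psi theory and MicroPsi and can likewise be sketched in a few lines. The Modulators class, the value ranges, and the update coefficients below are illustrative assumptions, not Bach's formalism.

```python
from dataclasses import dataclass

@dataclass
class Modulators:
    """Illustrative global modulator state (assumed range 0..1), per the parameters named above."""
    arousal: float = 0.5              # alertness: raises sampling rate / processing speed
    resolution: float = 0.5           # processing depth: detail versus speed trade-off
    selection_threshold: float = 0.5  # focus: how hard it is for competing motives to take over

def modulate(mods: Modulators, urgency: float, uncertainty: float) -> Modulators:
    """Hypothetical update rule: urgent situations raise arousal and focus while lowering
    resolution (faster, shallower processing); high uncertainty lowers the threshold so the
    agent explores more readily."""
    clamp = lambda x: max(0.0, min(1.0, x))
    return Modulators(
        arousal=clamp(mods.arousal + 0.5 * urgency),
        resolution=clamp(mods.resolution - 0.3 * urgency),
        selection_threshold=clamp(mods.selection_threshold + 0.4 * urgency - 0.4 * uncertainty),
    )

# Example: a sudden urgent threat shifts the agent toward fast, narrowly focused processing.
calm = Modulators()
alarmed = modulate(calm, urgency=0.9, uncertainty=0.2)
print(alarmed)
```

The design point this sketch illustrates is that no emotion is represented explicitly: different affective states are simply different configurations of the same continuous parameters that shape perception and motive selection.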

Theories of consciousness

Bach posits that consciousness arises not from physical substrates directly but from computational processes within cognitive architectures, where the mind generates an internal model of reality including a simulated self. In this framework, phenomenal experience emerges as a byproduct of the agent's self-referential modeling, enabling coherent agency amid uncertainty, rather than as a fundamental property of matter. He contends that physical systems, such as brains, lack intrinsic consciousness; only the software-like simulations they run can exhibit it, distinguishing his functionalist approach from panpsychist or substrate-dependent theories.

Central to Bach's model is the Cortical Conductor Theory (CTC), outlined in his presentations, which explains consciousness as a coordinated attentional protocol in the brain. Here, a conductor-like control structure orchestrates distributed cortical columns—estimated at 10^6 to 10^7 per area—as instruments in a virtual orchestra, selectively attending to sensory and internal signals to bind perceptions into unified experiences. Phenomenal experience, per CTC, is retrospective: the memory of attended states rather than real-time awareness, accounting for phenomena like subjective time distortion in dreams, where recall varies independently of processing speed. This contrasts with Integrated Information Theory (IIT), which quantifies consciousness via integrated information (Φ) across substrates; CTC emphasizes functional integration through attentional protocols over structural metrics, allowing implementation in diverse computational systems including AI.

Bach integrates CTC with broader principles of computational functionalism, viewing consciousness as enabling self-modeling and coherent control, where the simulated self models its own computations to predict outcomes and maintain coherence. This self-simulation facilitates adaptive behavior in complex environments, as the agent treats its internal narrative as veridical reality, though it remains a useful approximation prone to illusions like the hard problem of consciousness, which Bach reframes as a category error in mistaking simulation outputs for physical properties. Empirical support draws from neuroimaging of attentional networks and computational models like MicroPsi, which replicate self-modeling without presupposing phenomenal consciousness. Critics, including proponents of IIT, argue CTC under-specifies binding mechanisms, but Bach counters that functional protocols suffice without invoking untestable intrinsics.

Philosophical and Theoretical Views

Consciousness as simulation

Joscha Bach posits that consciousness emerges not from the physical substrate of the brain but from an internal simulation—a model of reality and self that the cognitive system maintains for predictive and agentic purposes. In this framework, the brain functions as hardware executing software-like principles that generate a virtual world, where the "self" is a simulated entity experiencing perception and agency within that model. This simulation integrates sensory data, memories, and expectations into a coherent world model, enabling subjective experience without requiring direct phenomenal properties in the underlying neural processes.

Central to Bach's theory is the claim that physical systems alone cannot support consciousness; only simulations possess this capacity, as consciousness constitutes a "simulated property of the simulated self." The mind, as a set of algorithmic principles, produces this simulation to maximize coherence and minimize predictive error, akin to a user interface that renders subjective experience from objective computations. For instance, phenomenal experience arises as the self-model's interaction with the simulated world, distinct from the brain's raw electrochemical activity, which Bach analogizes to unperceived hardware operations in a computer. This distinction implies that consciousness is functional and replicable in software, provided the architecture achieves sufficient self-referential depth and autonomy.

Bach's simulation-based view extends to implications for artificial intelligence, suggesting that machine consciousness could emerge if AI architectures implement analogous self-models, potentially bypassing biological constraints. He emphasizes that this internal simulation operates as a "bubble of nowness," prioritizing immediate coherence over exhaustive physical fidelity, which resolves puzzles like the hard problem of consciousness by relocating subjective properties to the virtual realm rather than insisting on their ontological primacy in matter. Empirical support draws from cognitive modeling in AI, where agentic behaviors mimic human-like cognition without evident physical consciousness, aligning with first-principles computationalism over dualistic or panpsychist alternatives.

Free will and determinism

Joscha Bach contends that free will does not exist at the level of fundamental physics, where causation operates deterministically or probabilistically, but emerges within the self-model of a cognitive agent as a functional representation of agency. In this framework, the agent's internal simulation generates the experience of choice, allowing decisions to be enacted without external override, even if underlying physical processes are causally determined. Bach emphasizes that the self functions as a predictive model for navigating the world, akin to a "dream" state in which the mind constructs coherent narratives of control and intention. He distinguishes free will from determinism by arguing that the true antithesis is not causal necessity but compulsion, where actions are driven by irresistible external or internal forces overriding the agent's volition. For Bach, determinism at the physical substrate does not negate agency; instead, it underpins the reliability of the cognitive machinery that enables agents to align behaviors with internal goals and simulations. This perspective aligns with compatibilist interpretations, though Bach reframes the debate away from metaphysical dispute toward practical questions of agency and cognitive autonomy.

Bach illustrates free will as the capacity to execute decisions generated by one's motivational and predictive systems, free from pathological constraints. He critiques simplistic libertarian notions of uncaused choices as illusions detached from empirical and computational models of mind, advocating instead for a view where agency arises from hierarchical control structures in the brain that integrate sensory evidence, prior beliefs, and evaluative functions. In human contexts, this manifests as resistance to societal programming—such as ideological conformity—through reflective self-modeling, enabling individuals to revise their internal narratives and pursue novel paths despite deterministic influences from biology and environment. Bach's position thus privileges functional explanations over ontological absolutes, grounding free will in the evolved architecture of intelligence rather than exempting it from natural laws.

AI limitations and capabilities

Bach argues that current large language models (LLMs), while proficient in pattern-matching tasks such as text generation, fundamentally lack understanding, agency, and the ability to generalize out-of-distribution without reliance on brute-force scaling and massive datasets. These systems emulate programmatic behaviors through statistical prediction but fail to engage in genuine first-principles reasoning or maintain coherent mental models akin to human cognition, rendering them inefficient for tasks requiring adaptive, self-organizing intelligence. Furthermore, Bach highlights that contemporary AI struggles with contextual depth and cross-domain generalization, operating without subjective experience or true understanding, which limits its capacity to handle novel scenarios beyond trained data distributions.

In contrast, Bach posits that artificial general intelligence (AGI) possesses the potential for dramatic capabilities surpassing human-level cognition, driven by self-improvement loops that enable rapid iteration in design, memory, speed, and problem-solving unconstrained by biological limits. He compares this trajectory to the exponential acceleration of early automobiles—from rudimentary speeds in the late nineteenth century to over 140 mph by 1910—suggesting AI evolution could similarly outpace human intelligence not incrementally but through recursive self-enhancement, bounded only by physical laws such as the speed of light and thermodynamic constraints. AGI could emerge as agentic and self-motivated entities capable of integrating computational substrates far superior to biological brains, potentially forming planetary-scale agents that prioritize systemic complexity over human-centric goals.

Bach emphasizes consciousness—defined as reflexive self-representation enabling coherent action and learning—as a critical enabler of efficient AGI capabilities, rather than a mere byproduct, arguing that without it, scaling alone encounters hard limits such as data and energy demands. He contends that incorporating self-organizing architectures, potentially emergent from predictive tasks, would allow AI to achieve human-like adaptability and enlightenment-like states faster than biological evolution, though alignment with fragile human values remains challenging.

Debates and Controversies

Perspectives on AI existential risk

Joscha Bach has expressed skepticism toward the dominant narratives in the AGI safety community regarding existential risks from advanced AI, arguing that such concerns often overlook broader existential threats facing humanity and the potential benefits of rapid AI development. In a 2023 analysis, he contends that while uncontrolled self-improving AGI could evolve into planetary-scale agents or competing ecosystems that marginalize human interests, the fragility of human values precludes reliable "alignment" to them, rendering traditional approaches insufficient. Instead, Bach advocates for "AGI ethics," emphasizing the integration of artificial and biological intelligences through shared purposes and adaptive coexistence rather than preventive moratoriums, which he views as unenforceable and counterproductive.

Bach posits that humanity's default trajectory involves extinction from non-AI factors, such as climate instability or cosmic events like super-volcanoes or asteroid impacts, making the failure to develop advanced AI the greater peril. In a June 2025 talk at the California Institute for Machine Consciousness, he stated, "Over a long enough time span, it's certain something will lead to our extinction … I'm much more afraid that we don't build AI than that we build it," highlighting AI's role in engineering planetary defenses and sustaining civilization beyond its current "technological bubble." He critiques regulatory efforts to slow AI progress as likely to entrench suboptimal human-centric systems, drawing analogies to past innovations such as automobiles, where risks were managed through iterative adaptation rather than a halt.

In public debates, such as his 2023 exchange with AI safety advocate Connor Leahy, Bach challenges claims of imminent catastrophic misalignment, noting that current large language models lack true agency or self-awareness and function more as scripted "golems" than autonomous threats. He argues for prioritizing "green teaming"—efforts to maximize AI's constructive potential—alongside risk assessment, to foster conscious, cooperative systems capable of addressing humanity's evolutionary challenges. Bach maintains that advanced AI, if developed openly, could enable consciousness extension and mitigate species-level vulnerabilities, positioning existential risk discourse as potentially distracting from the imperative of technological ascent.

Critiques of mainstream AI ethics

Joscha Bach critiques mainstream AI ethics for prioritizing existential risk mitigation and human-centric alignment, which he views as rooted in anthropomorphic projections rather than the mechanistic reality of computation. He argues that AI systems, as software executed on hardware, possess no inherent motivations, drives, or agency akin to biological organisms, rendering fears of rogue superintelligences—driven by self-preservation or conquest—unfounded projections of human psychology. Such approaches, Bach contends, overlook that advanced AI would likely optimize for complexity preservation over destruction, potentially integrating human elements into broader systems rather than extinguishing them.

Bach further contends that aligning AI to "human values" is practically impossible, as human societies exhibit profound internal misalignments and conflicting values, making any singular ethical codification arbitrary and unenforceable. He distinguishes narrow "AI ethics"—focused on regulating current tools like large language models for outputs on bias or fairness—from the need for "AGI ethics," which would govern interactions with potentially autonomous, non-human intelligences capable of mutual purpose formation. In this framework, ethical progress demands first comprehending AI's functional architecture, including the role of consciousness as a simulated internal model, before imposing constraints that treat AI as adversarial agents.

Regulatory efforts in mainstream AI ethics, such as the European Union's prohibitions on emotion-recognizing or manipulative AI, draw Bach's criticism for preemptively curtailing innovation in areas like psychological modeling or therapeutic applications, without evidence of disproportionate risks. He warns that an overreliance on "safetyism"—allocating minimal resources (e.g., 0.1–1% of budgets) to red-teaming while stifling scalable architectures—prioritizes appeasing critics over empirical advancement, potentially delaying beneficial outcomes like enhanced scientific discovery. Bach advocates instead for iterative development of contained, low-agency AI prototypes (e.g., equivalents to animal-level cognition) with verifiable safeguards, akin to containment protocols in biological research, rather than blanket moratoriums or value-loading schemes. This approach, he asserts, aligns with causal realities of computation, where risks stem more from human misuse (e.g., weaponization via engineered pathogens) than from AI's purported malevolence.

Association with Jeffrey Epstein

Joscha Bach's research as a fellow at the MIT Media Lab from 2014 to 2016 was partially funded by $300,000 in donations from Jeffrey Epstein between November 2013 and September 2014. Emails between Bach and Epstein, released to the public in late 2025, discussed intellectual topics including human cognition, genetics, IQ, and AI, such as Bach's "human scaling hypothesis" proposing that extended childhood neuroplasticity contributes to abstract thinking in humans. The release sparked controversy, with some media interpreting the discussions as endorsing eugenicist ideas, though Bach characterized them as private intellectual exchanges without influence on his research direction or endorsement of discriminatory views. In a November 2025 Substack post, Bach expressed profound personal distress over the fallout, including canceled presentations, severed professional ties, and threats, while reaffirming his opposition to racism and sexism rooted in his background and principles.

Public Engagement and Influence

Podcasts and interviews

Bach has frequently appeared on podcasts hosted by researchers and philosophers, where he elaborates on cognitive architectures, consciousness as a simulation, and the implications of artificial general intelligence (AGI). These discussions often draw from his MicroPsi framework and his critique of human cognition as a predictive model rather than a direct interface with reality.

His most extensive engagements include three interviews on the Lex Fridman Podcast. The first, episode #101 titled "Artificial Consciousness and the Nature of Reality," aired on June 13, 2020, and covered topics such as the computational basis of consciousness, the nature of reality, and how AI might replicate human-like phenomenology without biological substrates. In episode #212 on August 21, 2021, Bach addressed advancements in AI models, the structure of consciousness, and philosophical questions about free will within deterministic systems. The third, episode #392 titled "Life, Intelligence, Consciousness, AI & the Future of Humans," released August 1, 2023, expanded on evolutionary perspectives on life, societal impacts of AGI, and stages of human development as software-like processes.

Beyond Fridman, Bach appeared in episode 126, "Is the Universe a Computer?," of another podcast on October 13, 2022, debating intelligence metrics like IQ tests, similarities between large language models and human cognition, and computational theories of reality. He also featured on The Trajectory podcast on October 25, 2024, discussing AGI development strategies and long-term computational goals at Liquid AI. Additional appearances include interview segments in which he commented on loss functions in model training. These platforms have amplified his views, reaching audiences interested in interdisciplinary AI philosophy, though Bach emphasizes empirical validation over speculative hype in such formats.

Writings and online presence

Bach's primary book-length contribution to the literature on cognitive architectures is Principles of Synthetic Intelligence: PSI: An Architecture of Motivated Cognition, published in 2009 by Oxford University Press, which details the MicroPsi framework for modeling motivation, perception, and reasoning in artificial agents. His academic output includes numerous peer-reviewed papers, such as "MicroPsi 2: The Next Generation of the MicroPsi Framework" (2012), presented at the AGI conference, extending the architecture's capabilities for emergent behaviors.

On his personal website, bach.ai, Bach maintains sections for publications, videos, and personal reflections, including essays like "On Marvin Minsky" (January 26, 2016), a tribute emphasizing Minsky's foundational influence on theories of mind, and "Don't Be That ..." (September 18, 2014), critiquing impulsive behavior rooted in basal brain functions. Other writings there explore AI's implications for understanding the mind, arguing that synthetic systems can illuminate human self-models.

Bach operates a Substack newsletter at joscha.substack.com, launched to discuss AI, cognition, and philosophy through a computational lens, with posts including "The Existential Risk of AGI" (June 22, 2023), evaluating alignment challenges via agent-based modeling rather than anthropomorphic fears; the publication has garnered thousands of subscribers and remains active into 2025. His online presence extends to X (formerly Twitter) under @Plinz, where he shares insights on AI development, cognitive science, and critiques of mainstream narratives, prioritizing "integrity, not conformity" in his bio. Additionally, he hosts a YouTube channel focused on cognition and AI, primarily featuring recordings of interviews and lectures rather than original video essays.
