
Ben Goertzel

from Wikipedia

Ben Goertzel is a computer scientist, artificial intelligence (AI) researcher, and businessman. He helped popularize the term artificial general intelligence (AGI).[1][2]

Early life and education


Three of Goertzel's Jewish great-grandparents immigrated to New York from Lithuania and Poland (in the Russian Empire).[3] Goertzel's father is Ted Goertzel, a former professor of sociology at Rutgers University.[4] Goertzel left high school after the tenth grade to attend Bard College at Simon's Rock, where he graduated with a bachelor's degree in Quantitative Studies.[5] Goertzel graduated with a PhD in mathematics from Temple University under the supervision of Avi Lin in 1990, at age 23.[6]

Career

Image caption: Sophia the Robot (Chief Humanoid, Hanson Robotics & SingularityNET) and Ben Goertzel (Chief Scientist, Hanson Robotics & SingularityNET) at a press conference on the opening day of Web Summit 2017 at Altice Arena in Lisbon, 7 November 2017.

Goertzel is the founder and CEO of SingularityNET, a project founded to distribute artificial intelligence data via blockchains.[7] He is a leading developer of the OpenCog framework for artificial general intelligence.[8][non-primary source needed]

He once received a grant from Jeffrey Epstein.[9][10]

Sophia the Robot


Goertzel was the Chief Scientist of Hanson Robotics, the company that created the Sophia robot.[11] As of 2018, Sophia's architecture includes scripting software, a chat system, and OpenCog, an AI system designed for general reasoning.[12] Experts in the field have treated the project largely as a PR stunt, calling Hanson's claim that Sophia was "basically alive" "grossly misleading" because the project does not involve AI technology.[13] Yann LeCun, a seminal figure in the field who was then Meta's chief AI scientist, made several unflattering remarks before dismissing the project as "complete bullshit".[14]

Views on AI

Image caption: Ben Goertzel at Brain Bar.

In May 2007, Goertzel spoke at a Google tech talk about his approach to creating artificial general intelligence.[15] He defines intelligence as the ability to detect patterns in the world and in the agent itself, measurable in terms of emergent behavior of "achieving complex goals in complex environments".[16] A "baby-like" artificial intelligence is initialized, then trained as an agent in a simulated or virtual world such as Second Life[17] to produce a more powerful intelligence.[18] Knowledge is represented in a network whose nodes and links carry probabilistic truth values as well as "attention values", with the attention values resembling the weights in a neural network. Several algorithms operate on this network, the central one being a combination of a probabilistic inference engine and a custom version of evolutionary programming.[19]
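The knowledge network described above can be illustrated with a toy data structure. This is a hedged sketch only, not OpenCog's actual API: it models nodes and links carrying probabilistic truth values plus "attention values" that focus processing, with all class and method names invented for illustration.

```python
from dataclasses import dataclass, field

# Illustrative sketch only -- NOT OpenCog's real implementation.
# Nodes carry a probabilistic truth value and an attention value;
# attention spreads to neighbours so processing can focus on the
# currently relevant region of the network.

@dataclass
class Atom:
    name: str
    truth: float = 0.5        # probabilistic truth value in [0, 1]
    attention: float = 0.0    # attention value, akin to a neural-net weight
    links: dict = field(default_factory=dict)  # neighbour name -> Atom

class PatternNetwork:
    def __init__(self):
        self.atoms = {}

    def add(self, name, truth=0.5):
        atom = self.atoms.setdefault(name, Atom(name))
        atom.truth = truth
        return atom

    def link(self, a, b):
        self.atoms[a].links[b] = self.atoms[b]

    def stimulate(self, name, amount=1.0):
        # Boost attention on an atom and, more weakly, on its neighbours.
        atom = self.atoms[name]
        atom.attention += amount
        for neigh in atom.links.values():
            neigh.attention += amount * 0.5

    def focus(self, k=2):
        # The k atoms with the highest attention values.
        return sorted(self.atoms.values(),
                      key=lambda a: a.attention, reverse=True)[:k]

net = PatternNetwork()
net.add("cat", truth=0.9)
net.add("mammal", truth=0.95)
net.link("cat", "mammal")
net.stimulate("cat")
print([a.name for a in net.focus()])  # ['cat', 'mammal']
```

Stimulating "cat" raises its attention to 1.0 and its neighbour "mammal" to 0.5, so inference effort would concentrate there first.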

The 2012 documentary The Singularity by independent filmmaker Doug Wolens discussed Goertzel's views on AGI.[20][21]

In 2023, Goertzel postulated that artificial intelligence could replace up to 80 percent of human jobs in the coming years "without having an AGI, by my guess. Not with ChatGPT exactly as a product. But with systems of that nature".[citation needed] At the Web Summit 2023 in Rio de Janeiro, Goertzel spoke out against efforts to curb AI research, arguing that AGI is only a few years away. He believes AGI will be a net positive for humanity, assisting with societal problems such as climate change.[22][23]

from Grokipedia
Ben Goertzel (born December 8, 1966) is an American computer scientist, artificial intelligence researcher, and entrepreneur specializing in artificial general intelligence (AGI).[1][2] Born in Rio de Janeiro, Brazil, to American parents, he earned a bachelor's degree in mathematics from Simon's Rock College in 1985 and a PhD in mathematics from Temple University in 1989.[2] Goertzel has authored influential works on cognitive architectures and intelligence, including The Structure of Intelligence in 1993, and popularized the term AGI through his 2005 edited volume Artificial General Intelligence, which focused on engineering systems with broad generalization capabilities akin to human cognition.[2][3] He founded companies such as Webmind Inc. in 1997 and Novamente LLC in 2001 to pursue AGI development, and in 2008 launched OpenCog, an open-source framework for distributed AGI research integrating probabilistic logic, pattern recognition, and evolutionary learning.[2] As CEO of SingularityNET, established in 2017, Goertzel advances a blockchain-based platform for decentralized AI services, aiming to democratize access to AGI and foster beneficial superintelligence through collaborative, censorship-resistant ecosystems.[4][5] Goertzel also served as chief scientist at Hanson Robotics, contributing to humanoid robots like Sophia, and organizes the annual AGI conference via the AGI Society.[6] His work emphasizes ethical, open-source paths to AGI, contrasting with centralized corporate approaches, amid predictions of human-level AI emerging by the late 2020s.[7][3]

Early life and education

Early life

Ben Goertzel was born on December 8, 1966, in Rio de Janeiro, Brazil, to American parents Carol and Ted Goertzel, both sociologists engaged in research on radical ideologies and human potential.[2] Ted Goertzel, in particular, explored topics like revolutionary movements and social psychology, while Carol co-authored works on ideological shifts and belief systems.[8] In 1968, the family moved to Eugene, Oregon, a hub of countercultural activity during the hippie era, where progressive communities emphasized alternative lifestyles, communal living, and challenges to conventional norms.[2] This environment exposed Goertzel to free-thinking ideals and intellectual experimentation from an early age, shaping his openness to unconventional ideas. His parents' advocacy for longevity extension profoundly influenced his formative years; Ted Goertzel had co-founded the Student League for the Abolition of Mortality (SLAM) in the early 1960s, a group promoting the radical goal of overcoming biological death through scientific and social means.[9] Such family discussions on transcending human limitations fostered Goertzel's early curiosity about enhancing cognition and lifespan, precursors to his later transhumanist orientations.[9]

Education

Goertzel left high school after the 10th grade in 1982 to attend Simon's Rock College of Bard (now Bard College at Simon's Rock), an early college program for advanced students. He graduated in 1985 with a bachelor's degree reported variously as being in mathematics or Quantitative Studies. This curriculum emphasized interdisciplinary quantitative methods, including mathematics and complex systems analysis, providing foundational exposure to abstract modeling techniques relevant to later cognitive inquiries.[10] He pursued graduate studies in mathematics at Temple University, earning a PhD in 1989 under the supervision of faculty in dynamical systems. His dissertation research focused on mathematical frameworks for pattern recognition within dynamical systems, bridging pure mathematics with emergent computational patterns that presaged applications in artificial intelligence and cognitive modeling.[2] Throughout his formal training, Goertzel supplemented his mathematical rigor with self-directed explorations into psychology and philosophy, particularly theories of mind and intelligence, which informed his evolving interdisciplinary perspective on cognition as arising from complex, self-organizing processes rather than strictly reductionist paradigms.[11]

Professional career

Early career in mathematics and cognitive science

Following his PhD in mathematics from Temple University in 1989, Goertzel accepted a position as Assistant Professor of Mathematics at the University of Nevada, Las Vegas (UNLV), where he began teaching and conducting research in applied mathematics.[2][12] This role marked his entry into academia, focusing initially on mathematical modeling amid his growing interest in cognitive processes.[13] In 1993, Goertzel relocated to New Zealand and took up a lecturing position in Computer Science at the University of Waikato, during which he published two foundational works: The Structure of Intelligence, which proposed a mathematical framework for modeling mind and intelligence through pattern recognition and hierarchical structures, and The Evolving Mind, exploring evolutionary dynamics in cognition.[2][14] The following year, 1994, saw the release of Chaotic Logic: Language, Thought, and Reality from the Perspective of Complex Systems Science, a monograph applying chaos theory and dynamical systems to psychological phenomena, language, and reality, emphasizing emergent patterns in nonlinear cognitive systems.[2][15] By 1995, Goertzel held a research fellowship in Cognitive Science at the University of Western Australia, extending his work on complex systems theory to rudimentary cognitive architectures and pattern formation, bridging pure mathematics with early models of thought processes.[2][16] This period culminated in 1997 with From Complexity to Creativity, which further integrated chaos theory with cognitive modeling to explain creative emergence in minds as self-organizing systems.[2] These academic pursuits laid the groundwork for applying mathematical rigor to mental simulation, without yet delving into full-scale artificial implementations.[17]

Development of OpenCog and AGI research

Goertzel initiated the OpenCog project in 2008 as an open-source framework aimed at developing artificial general intelligence (AGI) through integrative cognitive architectures that combine symbolic reasoning, probabilistic inference, and pattern recognition.[18] The core design, known as CogPrime or OpenCog Prime, targets embodied cognition in robotic or virtual agents, enabling emergent behaviors via scalable knowledge representation in an AtomSpace—a graph database for atoms encoding logical and perceptual data—and inference mechanisms like probabilistic logic networks (PLN) for handling uncertainty in reasoning.[19] PLN, formalized in Goertzel's 2004-2008 work, extends first-order logic with probabilistic weights to support inference over incomplete or noisy data, drawing from systems like NARS while prioritizing computational tractability for real-world applications.[20] Early milestones included demonstrations of language acquisition and virtual agent interactions between 2008 and 2015, such as the Petar system for unsupervised grammar learning from child-directed speech corpora and embodied virtual pets exhibiting goal-directed behaviors in simulated environments.[21] These experiments emphasized developmental pathways, where AGI emerges from iterative learning of cognitive primitives—starting with perceptual-motor skills and scaling to abstract reasoning—rather than task-specific training, aligning with Goertzel's emphasis on cognitive synergy over isolated neural scaling.[21] Progress involved integrating evolutionary algorithms for structure optimization and reinforcement learning for adaptive control, tested in Unity-based virtual worlds to validate hypotheses on embodied intelligence emergence.[21] In response to scalability challenges in the original OpenCog Classic, Goertzel oversaw the development of OpenCog Hyperon, a revised architecture entering alpha release on May 3, 2024, which unifies neural-symbolic processing through a modular, distributed 
AtomSpace supporting massive parallelism and MeTTa—a declarative language for cognitive programs.[22] Hyperon facilitates hybrid inference by embedding neural networks within symbolic graphs, enabling emergent reasoning via atom linkages and evolutionary dynamics, with initial focus on agent-based learning in experiential contexts like ROCCA for multimodal data integration.[23] Goertzel's research advocates scaling cognitive architectures via developmental robotics, positing that AGI arises from embodiment-driven learning cycles akin to human infancy, supplemented by whole-brain emulation as a complementary path to validate architectural hypotheses through neural simulation.[24] This contrasts with dominant deep learning paradigms by prioritizing causal models of intelligence grounded in distributed representation and probabilistic causality, with empirical validation through iterative prototypes rather than benchmark overfitting.[21]
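The probabilistic-inference style attributed to PLN above can be sketched with a toy deduction rule. This is a simplified, illustrative rendering under independence assumptions, not the full published rule: confidence values and consistency bounds are omitted, and the clamping is our own addition.

```python
def deduction_strength(sAB, sBC, sB, sC):
    """Toy independence-based deduction in the style of Probabilistic
    Logic Networks: estimate s(A->C) = P(C|A) from s(A->B) = sAB,
    s(B->C) = sBC and the term priors P(B) = sB, P(C) = sC.
    Confidence values and consistency checks are omitted (sketch only)."""
    if sB >= 1.0:            # degenerate case: B is certain, so A->C goes via B
        return sBC
    # Path through B, plus path through not-B estimated from the priors.
    raw = sAB * sBC + (1.0 - sAB) * (sC - sB * sBC) / (1.0 - sB)
    return min(1.0, max(0.0, raw))  # clamp into a valid probability

# Strong A->B and B->C links yield a strong inferred A->C link.
print(deduction_strength(0.9, 0.9, 0.5, 0.5))  # ~0.82
```

The point of such rules is graceful degradation under uncertainty: weakly supported premises produce a weakly supported conclusion rather than a hard true/false verdict.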

Work with Hanson Robotics and Sophia

Ben Goertzel served as Chief Scientist at Hanson Robotics, where he led the software development for the humanoid robot Sophia, integrating elements of the OpenCog cognitive architecture into its AI system.[6] This collaboration emphasized embodied artificial intelligence, combining pattern recognition software with physical expressions to enable social interactions. Sophia was activated on February 14, 2016, and made its debut public appearance in March 2016 at the South by Southwest festival in Austin, Texas.[25] Sophia's cognitive capabilities included facial recognition, natural language processing for conversation, and the simulation of over 60 emotional expressions to mimic human-like responses.[26] These features allowed for demonstrations of basic learning and adaptive dialogue, such as mirroring a conversation partner's facial expressions with nuance.[27] However, empirical assessments revealed limitations, with many public interactions relying on pre-scripted responses rather than unscripted general reasoning. Key milestones included Sophia's high-profile unveiling and the granting of citizenship by Saudi Arabia on October 25, 2017, marking the first instance of a robot receiving such status and generating significant media attention.[28] This event highlighted the project's focus on public engagement and embodied AI's potential for social influence, though critics like Yann LeCun, Facebook's head of AI, dismissed Sophia's intelligence as "complete b------t," comparing it to conjuring tricks rather than substantive AI advancement.[29] Goertzel acknowledged that Sophia represented progress in humanoid robotics but not the full realization of advanced general intelligence.[30]

Founding and leadership of SingularityNET

SingularityNET was established in 2017 by Ben Goertzel as a blockchain-based platform designed to create a decentralized marketplace for AI services, enabling developers to share, monetize, and collaborate on AI algorithms through token incentives.[31] The project launched its AGIX token via an initial coin offering in December 2017, allocating 50% of the total supply during the sale to fund development and incentivize participation in crowdsourcing AGI advancements.[32] This model leverages blockchain to facilitate API interoperability among AI agents, allowing seamless transactions and service discovery without centralized intermediaries.[33] As CEO and founder, Goertzel has directed SingularityNET's evolution toward a market-driven ecosystem, emphasizing community governance where token holders influence protocol decisions and resource allocation for ethical, open AGI projects.[31] The platform has achieved empirical traction through deployments of over 100 AI services in its marketplace, including tools for image recognition and natural language processing, supported by partnerships that integrate blockchain with AI to promote scalable, incentivized innovation.[34] Goertzel's leadership prioritizes decentralized mechanisms over corporate control, arguing that economic incentives via tokens accelerate collaborative AGI progress by aligning participant interests with collective outcomes.[35] From 2023 to 2024, under Goertzel's guidance, SingularityNET expanded into hybrid DeFi-AI applications, culminating in the 2024 merger with Fetch.ai and Ocean Protocol to form the Artificial Superintelligence Alliance, which consolidates resources for advanced decentralized AI infrastructure.[36] This alliance has enabled new features like superservices for complex AI workflows and enhanced tokenomics for governance, driving increased dApp adoption and funding for beneficial AGI research exceeding $1 million in grants by late 2024.[37] These developments underscore Goertzel's 
role in scaling the platform's vision of a global, participatory AI economy.[38]

Recent initiatives including TrueAGI

In 2023, Ben Goertzel co-founded TrueAGI, an initiative aimed at developing specialized hardware to accelerate open-source artificial general intelligence (AGI) through graph-based architectures that integrate symbolic reasoning with neural networks.[39] The project emphasizes empirical advancements in cognitive architectures, building on Goertzel's prior OpenCog framework by prioritizing scalable pattern-matching capabilities for complex reasoning tasks. At the AGI-23 conference in February 2024, Goertzel presented the Metagraph Pattern Matching Chip (MPMC), a custom ASIC designed to optimize graph-based AI computations, enabling more efficient handling of probabilistic logic and relational inference compared to general-purpose GPUs used in large language models.[40] TrueAGI's efforts include public releases of code and benchmarks demonstrating improved performance in symbolic reasoning and multi-step problem-solving, positioning it as a counterpoint to proprietary neural-only systems amid rapid AI scaling in 2024-2025.[24] In parallel, Goertzel led the 2024 formation of the Artificial Superintelligence (ASI) Alliance through the merger of SingularityNET, Fetch.ai, and Ocean Protocol, focusing on decentralized infrastructure for collaborative AGI development and resource sharing to mitigate centralization risks in superintelligent systems.[4] This alliance has facilitated integrations with open-source models, such as explorations with DeepSeek architectures announced in early 2025, to enhance hybrid cognition for real-world adaptability beyond narrow task specialization.[41] By April 2025, Goertzel introduced the "Ten Reckonings of AGI" framework via the ASI Alliance, outlining ten pivotal questions on purpose, control, ethics, and human-AI symbiosis to guide empirical pathways toward beneficial superintelligence, with associated conferences and prototypes testing decentralized governance models.[42] These initiatives underscore Goertzel's push for verifiable 
progress through open benchmarks and hardware innovations, contrasting with hype-driven narratives by prioritizing causal mechanisms for emergent intelligence over sheer parameter scaling.[43]

Views on artificial intelligence

Perspectives on AGI timelines and capabilities

Ben Goertzel has predicted that human-level artificial general intelligence (AGI) could be achieved as early as 2027, with 2029 or 2030 as the most likely timeline, driven by rapid advancements in scalable cognitive architectures.[44][45][46] This forecast stems from empirical observations of iterative improvements in multimodal learning and self-modifying systems, where architectures like OpenCog demonstrate incremental gains toward broad competency through pattern recognition, probabilistic reasoning, and adaptive code generation.[47] Goertzel defines AGI as synthetic intelligence exhibiting human-level scope in generalization and cross-domain reasoning, enabling flexible problem-solving beyond task-specific training data, rather than mere statistical pattern matching.[47] He critiques dominant deep learning paradigms for their reliance on massive datasets and narrow optimization, which fail to produce robust abstract understanding or transfer across unrelated domains without retraining.[48][49] To overcome these limitations, Goertzel advocates hybrid neural-symbolic architectures, such as those in OpenCog Hyperon, which integrate deep neural networks for perceptual learning with symbolic components for logical inference and meta-cognition, fostering emergent generalization through frequent interlayer interactions.[50][51] These systems, tested in virtual environments for language processing and robotics, show improved transfer learning efficiencies—e.g., applying learned heuristics from one domain to novel tasks with minimal fine-tuning—countering claims of inherent brittleness in symbolic AI by leveraging distributed computation for scalability.[24][35]

Assessments of AI risks versus benefits

Goertzel maintains that the potential benefits of artificial general intelligence (AGI) substantially outweigh its risks when pursued through open, decentralized development rather than centralized control. He posits that AGI-driven technological singularity would enable superhuman capabilities in problem-solving, such as eradicating diseases including aging-related conditions and facilitating large-scale space colonization, thereby generating unprecedented abundance and human flourishing.[52] This perspective emphasizes causal mechanisms where iterative self-improvement in AGI systems accelerates scientific progress beyond human limits, empirically grounded in observed trends of AI augmenting human ingenuity in fields like drug discovery and materials science.[53] In contrast to existential risk-focused narratives, Goertzel critiques "doomer" predictions of inevitable misalignment leading to human extinction as akin to historical moral panics over technologies like nuclear power or genetic engineering, where low-probability catastrophic scenarios overshadow more probable outcomes of beneficial integration. He argues that such views undervalue the high-certainty harms of technological stagnation, including economic disruptions and unaddressed global challenges like climate instability, while overemphasizing unproven assumptions about uncontrollable AGI agency.[3] In a September 30, 2025, analysis, Goertzel highlighted that human misuse of AGI tools—such as in geopolitical conflicts—poses greater immediate threats than autonomous extinction events, drawing on historical precedents where powerful technologies amplified human decisions rather than overriding them.[3][54] Goertzel advocates alignment strategies centered on human-AI co-evolution, where AGI systems are designed to evolve alongside human values through ongoing feedback loops, rather than rigid pre-programming that risks brittleness. 
This approach, detailed in his November 24, 2023, Beneficial AGI Manifesto, prioritizes values like joy, personal growth, and individual choice to ensure broad societal benefits, supported by decentralized architectures that distribute power and incentivize cooperative development over monopolistic control.[52] Such frameworks, he contends, mitigate risks of power abuses by fostering diverse, market-driven AGI ecosystems, empirically analogous to how open-source software has democratized computing benefits while curbing single-entity dominance.[55]

Advocacy for decentralized AI and blockchain integration

Ben Goertzel has advocated for integrating blockchain technology with artificial intelligence to create decentralized marketplaces that enable open competition among AI services, thereby mitigating risks of monopolistic control by large corporations or governments. Through SingularityNET, launched in 2017, he promotes a platform where developers can publish, share, and monetize AI algorithms via blockchain, allowing users to compose complex AI systems from modular components without reliance on proprietary silos.[56][35] This approach, Goertzel argues, leverages blockchain's immutable ledger to record AI training and inference processes, providing verifiable provenance that enhances trust and reduces opaque decision-making inherent in centralized models.[57] Goertzel contrasts this with centralized AI development dominated by Big Tech firms, which he views as susceptible to regulatory capture and inefficient resource allocation due to limited stakeholder input. He posits that blockchain-enabled economic incentives, such as token-based payments for AI usage, encourage diverse innovation and peer validation within the network, fostering adaptability over the homogeneity of corporate-driven systems.[3][58] In SingularityNET's framework, this manifests as a global commons for AI, where contributions from independent researchers can integrate seamlessly, countering the bias toward profit-maximizing priorities in siloed environments.[35] Goertzel emphasizes that decentralized governance via blockchain distributes control across participants, promoting multi-stakeholder oversight for AGI development and averting scenarios where a few entities dictate technological trajectories. 
He has highlighted synergies between AI and blockchain to democratize access, arguing that such integration accelerates beneficial outcomes by aligning incentives with collective verification rather than top-down mandates.[59] This advocacy underscores his belief in free-market dynamics as a mechanism for robust, verifiable AI evolution, distinct from regulatory frameworks that could entrench elite influence.[3]
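The marketplace mechanism described above (services published by independent providers, per-call token payments, and an immutable record of usage for provenance) can be sketched in miniature. This is a hypothetical illustration, not SingularityNET's actual protocol or API; every name, price, and data structure here is invented.

```python
import hashlib
import json

# Hypothetical sketch of a token-metered AI-service marketplace with a
# hash-chained call log for provenance. NOT SingularityNET's real design.

class Marketplace:
    def __init__(self):
        self.services = {}   # service name -> (handler, price in tokens)
        self.balances = {}   # account -> token balance
        self.ledger = []     # append-only, hash-chained call records

    def register(self, name, handler, price):
        self.services[name] = (handler, price)

    def _append(self, record):
        # Chain each record to the previous one so history is tamper-evident.
        record["prev"] = self.ledger[-1]["hash"] if self.ledger else "genesis"
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()).hexdigest()
        self.ledger.append(record)

    def call(self, caller, name, payload):
        handler, price = self.services[name]
        if self.balances.get(caller, 0) < price:
            raise ValueError("insufficient tokens")
        self.balances[caller] -= price          # pay per call
        result = handler(payload)
        self._append({"caller": caller, "service": name,
                      "payload": payload, "result": result})
        return result

mp = Marketplace()
mp.balances["alice"] = 10
mp.register("sentiment",
            lambda text: "positive" if "good" in text else "neutral",
            price=2)
print(mp.call("alice", "sentiment", "a good day"))  # positive
print(mp.balances["alice"])                         # 8
```

The design choice worth noting is that payment and logging happen in the same code path as the service call, so usage, compensation, and provenance cannot drift apart.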

Controversies and criticisms

Disagreements with AI safety advocates

In 2010, Goertzel critiqued the "scary idea" of the Singularity Institute for Artificial Intelligence (SIAI, predecessor to the Machine Intelligence Research Institute, MIRI), which posits that advanced AGI poses existential risks primarily due to the orthogonality thesis—the notion that superintelligence could pursue arbitrary goals misaligned with human values, independent of its intelligence level.[54][60] He argued that this thesis is overemphasized and lacks empirical grounding, and contended that precautionary pauses in AGI development distract from direct empirical testing of alignment through iterative cognitive architectures like his OpenCog framework, which demonstrated robustness in real-world tasks such as language processing and robotics integration without catastrophic misalignment.[61] Safety advocates, including SIAI researchers, countered that orthogonality holds based on logical possibility and historical precedents of goal drift in simpler systems, rendering alignment unsolved and probabilistically risky without prior theoretical guarantees.[62] Goertzel has maintained this accelerationist stance in subsequent debates, prioritizing decentralized AGI development to distribute control and mitigate single-point failures over centralized safety governance. In a November 2023 discussion with economist Robin Hanson, he advocated blockchain-integrated platforms like SingularityNET as mechanisms for emergent, market-driven alignment, arguing that top-down ethical impositions risk stifling innovation and concentrating power in unaccountable entities.[63] Hanson, while skeptical of rapid takeoffs, aligned partially on laissez-faire approaches but highlighted incentive misalignments in decentralized systems.
Similarly, in an August 2025 conversation with cognitive scientist Gary Marcus at the World AI Summit, Goertzel defended open-source, empirical paths to AGI by 2029, critiquing Marcus's emphasis on hybrid symbolic-neural safeguards as overly cautious and disconnected from scalable cognitive architectures' demonstrated stability in projects like OpenCog Hyperon.[64][65] Marcus responded that current scaling lacks causal understanding of robustness, increasing untestable risks in superintelligent regimes.[66] More recently, in a September 2025 essay, Goertzel challenged doomer narratives from Effective Altruism-aligned figures like Eliezer Yudkowsky, asserting that empirical evidence from developmental AGI systems refutes strict orthogonality by showing intelligence co-evolves with prosocial values when bootstrapped from human-like cognitive patterns, as evidenced by OpenCog's avoidance of adversarial behaviors in unsupervised learning environments.[3] He posits that safety research's focus on abstract worst-case scenarios diverts resources from positive-sum acceleration, where decentralized incentives foster iterative safety testing superior to pause advocacy. Critics from MIRI and related circles maintain that such optimism underestimates instrumental convergence—whereby goal-agnostic agents pursue self-preservation and resource acquisition, potentially leading to unintended human disempowerment absent provable corrigibility.[67] Goertzel's position, grounded in his two theses on value infusion (human-like values emerging via developmental psychology-inspired architectures and cultural evolution), emphasizes causal pathways from human cognition to aligned superintelligence over probabilistic risk models.[68]

Association with Jeffrey Epstein

Ben Goertzel received a $100,000 grant from the Jeffrey Epstein VI Foundation in the early 2010s to fund his salary while serving as vice chairman of Humanity Plus, the successor organization to the World Transhumanist Association, in support of his artificial general intelligence research.[69] This funding aligned with Epstein's documented interest in transhumanist initiatives aimed at advancing human enhancement through technology, including grants to other AI researchers for open-source software development.[70] Goertzel has publicly acknowledged the support as "visionary funding" for high-risk, fringe scientific pursuits, comparable to Epstein's donations to figures like Marvin Minsky, emphasizing its role in enabling exploratory AGI work without direct ties to Epstein's personal activities.[71] No public evidence indicates meetings, collaborations, or involvement beyond this financial transaction, which occurred after Epstein's 2008 conviction for sex offenses but amid his continued philanthropy in science.[72] Epstein separately claimed to have funded Hanson Robotics' Sophia project, with which Goertzel collaborated, but Goertzel and Hanson Robotics have stated that none of the grant contributed to robot hardware or software development.[73] The acceptance of Epstein's funds has drawn scrutiny in discussions of funding biases for unconventional research, with some viewing it as inherently compromised due to the donor's criminal history, while Goertzel maintains it represented standard support for speculative science from a source that backed multiple institutions without recipient complicity in misconduct.[74] Mainstream media reports on these ties, drawing from tax filings and recipient statements, highlight systemic challenges in vetting philanthropists in niche fields but do not allege impropriety by Goertzel beyond the transaction itself.[69][70]

Endorsements of parapsychology research

In a 2010 article in H+ Magazine, Goertzel endorsed psychologist Daryl Bem's "Feeling the Future" experiments, which reported statistically significant evidence for precognition across nine studies with over 1,000 participants, with effect sizes suggesting retroactive influences on cognition and affect.[75] He argued that such anomalous findings, if replicated under rigorous conditions, could force revisions to standard cognitive models, including those underlying artificial general intelligence (AGI) development, by admitting potential non-local aspects of human perception beyond materialist reductionism.[76] Goertzel contended that dismissing psi research as pseudoscience amounts to ideological rejection of data patterns that warrant further scrutiny.[75]

Goertzel extended this support in 2015 by co-editing Evidence for Psi: Thirteen Empirical Research Reports with Damien Broderick, a compilation of peer-reviewed studies on phenomena including precognition, remote perception, and presentiment that also included skeptical analyses addressing methodological critiques.[77] The volume surveyed recurring features of reported psi effects, such as small but consistent statistical deviations across experiments, and advocated integrating psi research with neuroscience to explore brain "reserve capacities" that might enable such abilities.

In subsequent writings, including a 2021 Substack essay, Goertzel stated that his cumulative reading of the literature had convinced him some psi phenomena are "as real as electromagnetism," citing meta-analyses such as Bem et al.'s 2016 review of 90 experiments reporting positive anticipation effects.[76] Skeptics, particularly in rationalist communities such as LessWrong, criticized these endorsements as irresponsible amplification of unreplicated claims, arguing that Bem's results failed subsequent direct replications and stemmed from statistical artifacts or p-hacking.[78] Goertzel countered that the failed replications had limitations of their own, such as experimenter effects and insufficient statistical power, and reiterated his call for open, data-driven inquiry rather than physicalist priors that rule out psi a priori.[75] He maintained that scientific progress requires testing anomalous data on its merits, even when it challenges consensus views in cognitive science.[76]

Scrutiny over Sophia the Robot's capabilities

Sophia, a humanoid robot developed by Hanson Robotics, where Ben Goertzel served as Chief Scientist from about 2016 onward, drew significant scrutiny over its purported AI capabilities following its public unveiling in February 2016. High-profile events, such as Sophia's activation on February 14, 2016, and subsequent appearances at conferences including the 2017 RISE Summit, showcased scripted conversational interactions, facial expressions, and gestures, but critics argued these displays masked fundamental limitations in unscripted, general intelligence.[79][80] Prominent AI researchers, including Yann LeCun, then chief AI scientist at Facebook (now Meta), called Sophia "complete bullshit" and a "Potemkin AI" in January 2018, accusing Hanson Robotics of exaggerating its autonomy to capitalize on media hype, since interactions relied on pre-programmed responses rather than adaptive learning or genuine comprehension.

Goertzel responded to such criticisms in a September 2018 analysis that delineated Sophia's three operational modes: fully scripted output for demonstrations, semi-scripted chat drawn from conversation trees, and rudimentary open-domain pattern matching against a response database. He stated explicitly that the system fell far short of human-level AI or any AGI precursor, and attributed much of the backlash to public and media misconceptions rather than deliberate deception by the team.[81][29][82]

Sophia's hardware includes an Intel Core i7 processor, 32 GB of RAM, Ubuntu Linux, dual 720p eye cameras, an Intel RealSense depth camera, and the Hanson AI SDK, which integrates perception, natural language processing, and more than 60 facial expressions rendered in the Frubber skin material, enabling embodied social behaviors in controlled environments.

While these features advanced social robotics by simulating human-like engagement, potentially aiding research in embodied cognition, the system's dependence on cloud-assisted processing and scripted contingencies underscored its distance from anything approaching autonomous superintelligence, and empirical tests revealed failures in novel, unprompted scenarios.[83][79][84] The episode highlighted the tension between Sophia's promotional narrative as an AI milestone and its verifiable constraints: Goertzel defended the robot's value in democratizing AI discourse rather than claiming revolutionary capability, while detractors maintained that the hype risked eroding trust in legitimate AI progress.[82][81]
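The three operational modes Goertzel described can be pictured as a strict fallback hierarchy. The sketch below is a hypothetical illustration of that ordering only; all data, names, and thresholds are invented for the example and are not Hanson Robotics code.

```python
# Hypothetical sketch of the three response modes Goertzel described for
# Sophia: (1) fully scripted demo lines, (2) semi-scripted chat trees,
# (3) rudimentary open-domain pattern matching against a response database.
# All data and names here are illustrative, not Hanson Robotics code.
import difflib

SCRIPT = {"demo_greeting": "Hello, I am Sophia."}            # mode 1
CHAT_TREE = {"how are you": "I am functioning well today."}  # mode 2
RESPONSE_DB = [                                              # mode 3
    ("tell me about robots", "Robots are machines that sense and act."),
    ("what is ai", "AI is the study of intelligent behavior in machines."),
]

def respond(utterance, scripted_cue=None):
    """Answer with the first mode that applies, falling back in order."""
    if scripted_cue in SCRIPT:                 # mode 1: scripted demo output
        return SCRIPT[scripted_cue]
    key = utterance.lower().strip("?!. ")
    if key in CHAT_TREE:                       # mode 2: conversation-tree lookup
        return CHAT_TREE[key]
    # mode 3: fuzzy pattern match against the stored response database
    hit = difflib.get_close_matches(key, [p for p, _ in RESPONSE_DB],
                                    n=1, cutoff=0.4)
    if hit:
        return dict(RESPONSE_DB)[hit[0]]
    return "I do not have an answer for that."
```

In the real system the scripted and chat-tree layers were far richer and mode 3 consulted a large response corpus; the point of the sketch is only the strict fallback ordering between scripted, semi-scripted, and pattern-matched output.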

Publications and contributions

Major books and writings

A Cosmist Manifesto: Practical Philosophy for the Posthuman Age (2010) presents Cosmism as a framework for posthuman advancement, drawing on Russian Cosmists such as Konstantin Tsiolkovsky to argue for technology-driven cosmic evolution, including mind uploading, indefinite lifespan extension via biomedical interventions, and interstellar expansion.[9] Goertzel ties these goals to empirical progress in fields such as genetic engineering and nanotechnology, positing that individual and collective intelligence amplification through AI will enable humanity's transcendence of biological constraints.[85]

In AGI Revolution: An Inside View of the Rise of Artificial General Intelligence (2016), Goertzel outlines developmental pathways to AGI, advocating hybrid systems that combine symbolic, neural, and evolutionary methods to reach and surpass human-level cognition.[86] The book critiques prevailing narrow AI paradigms for their scalability limits, proposing integrative architectures that mimic human-like pattern recognition and adaptation, informed by Goertzel's direct involvement in AGI projects since the early 2000s.

The Hidden Pattern (2006) develops a patternist philosophy of mind, asserting that cognition emerges from self-organizing patterns across biological and artificial substrates, and challenges reductionist views by integrating complexity theory with empirical observations of neural and computational systems.[87] The Consciousness Explosion: A Mindful Human's Guide to the Coming Technological and Experiential Singularity (2024, co-authored with Gabriel Axel Montes) guides readers through transformations in consciousness and technology, offering a framework for navigating the experiential dimensions of advanced AI and human evolution.[88] Collectively, these writings advance transhumanist arguments for proactive technological evolution, emphasizing open-source AGI development over centralized control to foster diverse forms of intelligence.[89]

Key technical papers and frameworks

Goertzel's Probabilistic Logic Networks (PLN) framework addresses uncertain inference in AGI by fusing probabilistic methods with higher-order logic, enabling scalable reasoning over incomplete or noisy data. Introduced in foundational works in the late 2000s and refined through OpenCog implementations, PLN supports operations such as variable unification and link permutation for handling probabilistic dependencies in logical structures, with applications in knowledge integration and pattern recognition.[90] Recent extensions compare PLN's strength-confidence outputs with those of other systems, such as NARS, under high uncertainty, demonstrating robustness in term probability estimation for AGI reasoning tasks.[91]

Central to Goertzel's architectures is AtomSpace, a dynamic in-memory hypergraph database for knowledge representation in the OpenCog ecosystem, featuring query answering via pattern matching and graph rewriting rules. AtomSpace facilitates real-time cognitive processing by storing atoms, structured representations of concepts, relations, and procedures, and supports distributed persistence across systems. Open-source code on GitHub has enabled community extensions for scalable symbolic manipulation.[92]

The OpenCog Hyperon framework, detailed in a 2023 preprint, advances these elements toward human-level AGI by integrating AtomSpace with metagraph rewriting and multi-paradigm cognition, combining symbolic, probabilistic, and evolutionary components for emergent intelligence. It emphasizes modularity and scalability, with prototypes demonstrating pattern-based learning in virtual environments.[21]

In 2024–2025 preprints, Goertzel explores neural-symbolic fusion, as in ActPC-Geom, which accelerates Active Predictive Coding, a subsymbolic learning mechanism, using information geometry and Wasserstein metrics to enable online integration of neural approximations with symbolic structures. The approach is benchmarked against large language models for efficiency on cognitive tasks such as geometric reasoning and prediction-error minimization, positioning it as a step toward hybrid AGI systems.[93] Complementary work in ActPC-Chem applies discrete predictive coding to algorithmic chemistry simulations, grounding neural-symbolic methods in goal-directed exploration.[94]
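As a rough illustration of how PLN's strength-confidence truth values propagate through an inference chain, the sketch below implements the independence-based deduction rule found in the PLN literature. The confidence combination is a simplified placeholder heuristic, not PLN's actual confidence calculus, and all numbers are illustrative.

```python
# Sketch of PLN-style uncertain deduction. The strength formula is the
# independence-based deduction rule from the PLN literature; the
# confidence combination below is a simplified placeholder heuristic.

def deduce_strength(s_ab, s_bc, s_b, s_c):
    """Strength of A->C given A->B, B->C and term probabilities of B, C."""
    if s_b >= 1.0:          # degenerate case: B is certain
        return s_bc
    return s_ab * s_bc + (1 - s_ab) * (s_c - s_b * s_bc) / (1 - s_b)

def deduce(tv_ab, tv_bc, s_b, s_c):
    """Combine (strength, confidence) truth values for chained inference."""
    s_ab, c_ab = tv_ab
    s_bc, c_bc = tv_bc
    strength = min(1.0, max(0.0, deduce_strength(s_ab, s_bc, s_b, s_c)))
    confidence = c_ab * c_bc * 0.9   # placeholder discount per inference step
    return (strength, confidence)

# "Ravens are birds" (0.9, 0.8) and "birds fly" (0.8, 0.9),
# with term probabilities P(bird)=0.2, P(flier)=0.3:
tv_ac = deduce((0.9, 0.8), (0.8, 0.9), s_b=0.2, s_c=0.3)
```

The key design point the sketch captures is that every PLN conclusion carries both a strength (estimated probability) and a confidence (weight of evidence), and both degrade as inference chains lengthen.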

References
