Technological singularity

The technological singularity—often simply called the singularity—is a hypothetical future event in which technological growth accelerates beyond human control, producing unpredictable changes in human civilization. According to the most popular version of the singularity hypothesis, I. J. Good's 1965 intelligence explosion model, an upgradable intelligent agent could eventually enter a positive feedback loop of successive self-improvement cycles: each more intelligent generation would appear more rapidly than the last, causing a rapid increase in intelligence that culminates in a powerful superintelligence far surpassing human intelligence.
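The feedback loop at the core of the intelligence explosion hypothesis can be illustrated with a toy numerical model (a hypothetical sketch for intuition only; the parameters and the proportional-improvement rule are assumptions, not part of the hypothesis itself): if each generation's ability to improve its successor grows with its own intelligence, growth is not merely exponential but self-accelerating.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: each generation improves the next in proportion to its
# own capability, so the growth factor itself increases over time.
def intelligence_explosion(initial=1.0, gain=0.1, generations=10):
    levels = [initial]
    for _ in range(generations):
        current = levels[-1]
        # The more capable the agent, the larger the improvement
        # it can design into its successor.
        levels.append(current * (1 + gain * current))
    return levels

levels = intelligence_explosion()
# Each successive growth ratio levels[i+1] / levels[i] is larger than
# the one before it: the hallmark of a positive feedback loop.
```

Under these assumptions the per-generation growth ratio rises every cycle, which is the pattern the hypothesis labels an "explosion"; whether real AI development follows any such rule is precisely what the critics below dispute.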

Some scientists, including Stephen Hawking, have expressed concern that artificial superintelligence could result in human extinction. The consequences of a technological singularity and its potential benefit or harm to the human race have been intensely debated.

Prominent technologists and academics dispute the plausibility of a technological singularity and the associated artificial intelligence "explosion", including Paul Allen, Jeff Hawkins, John Holland, Jaron Lanier, Steven Pinker, Theodore Modis, Gordon Moore, and Roger Penrose. One claim is that artificial intelligence growth is likely to run into diminishing returns rather than accelerating ones. Stuart J. Russell and Peter Norvig observe that in the history of technology, improvement in a particular area tends to follow an S curve: it begins with accelerating improvement, then levels off without continuing upward into a hyperbolic singularity.
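The contrast Russell and Norvig draw can be made concrete with a toy comparison (an illustrative sketch with assumed parameters, not a model of any actual technology): logistic (S-curve) growth looks like accelerating growth early on but saturates at a ceiling, whereas hyperbolic growth diverges in finite time.

```python
import math

# Illustrative comparison of growth regimes (parameters assumed).
def logistic(t, capacity=100.0, rate=1.0, x0=1.0):
    # Closed-form solution of the logistic ODE x' = rate * x * (1 - x/capacity):
    # accelerates at first, then levels off as x approaches the capacity.
    return capacity / (1 + (capacity / x0 - 1) * math.exp(-rate * t))

def hyperbolic(t, x0=1.0):
    # Closed-form solution of x' = x**2: blows up as t approaches 1/x0,
    # the pattern a "singularity" in progress would require.
    return x0 / (1 - x0 * t)

# The logistic curve approaches but never exceeds its ceiling of 100,
# while the hyperbolic curve grows without bound before t = 1.
```

The two curves are nearly indistinguishable in their early accelerating phase, which is why, on this view, extrapolating past growth cannot by itself establish that a hyperbolic singularity lies ahead.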

Alan Turing, often regarded as the father of modern computer science, laid a crucial foundation for contemporary discourse on the technological singularity. His pivotal 1950 paper "Computing Machinery and Intelligence" argued that a machine could, in theory, exhibit intelligent behavior equivalent to or indistinguishable from that of a human.

The Hungarian–American mathematician John von Neumann (1903–1957) is the first known person to discuss a coming "singularity" in technological progress. Stanislaw Ulam reported in 1958 that an earlier discussion with von Neumann "centered on the accelerating progress of technology and changes in human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue". Subsequent authors have echoed this viewpoint.

In 1965, I. J. Good speculated that superhuman intelligence might bring about an "intelligence explosion":

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion', and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.

— Speculations Concerning the First Ultraintelligent Machine (1965)
