Singularity Timing Predictions
Among those who take the concept seriously, opinions vary widely about the likelihood and timing of the technological singularity (a generalized, human-surpassing artificial intelligence).
Some theorists, such as Brandon Carter, Richard Gott, John Leslie, Nick Bostrom, and Robin Hanson, consider the coming transition essentially a matter of chance and choice, a test that we might easily not pass, given either poor decision-making or simply a run of bad luck in a chaotic universe (see the "Great Filter" or the "Doomsday Argument"). Others, such as myself, suspect the arrival of autonomous technological intelligence to be a virtually statistically inevitable development (i.e., extremely probable as a physical event) but propose that the manner and timing of the transition remain key choices under the influence of human beings.
We'll skip further discussion of the probability of the technological singularity for now. Assuming its likelihood, we will next consider the range of existing predictions on the approximate time of arrival of the event. A few brief but fundamental observations should be made before we begin.
First, we must recognize that the "technological singularity" can only be an aggregation of a chain of smaller singularities (human-surpassing modules of machine intelligence), many of which have already arrived on the planet. For example, a "calculational singularity" occurred in isolated Earth environments in the late 19th century when a few human beings began using complex mechanical calculators whose inner workings were not, for all practical purposes, comprehensible to their slow-switching human brains. A generalized calculational singularity then gradually occurred on the planet during the mid-to-late 20th century, when digital calculating devices became ubiquitous.
Today, primitive neural network software programs, functionally replacing human eyes, drive most modern research telescopes in the search for supernovae in the night sky (a relatively simple pattern recognition problem), so a generalized "supernova scanning singularity" has also occurred, one that took roughly a decade to unfold. Thus, using John Koza's valuable definition of instances or modules of "human-competitive machine intelligence," we can see that the arrival of human-surpassing machine intelligence must be a very broadly incremental and "modular" (in the cognitive science sense) emergence. A long chain and large collection of specific singularities will subtly lead us to a condition where many, then eventually most, higher human capacities are being functionally represented in real time within our planetary machine intelligence.
As Ray Kurzweil and others have noted, this will be a surprisingly subtle process. Some time this century we will start arguing over just how smart our machine intelligences are, and as Jane Stevenson has observed, the emergence of generalized human-surpassing artificial intelligence will very likely be, for most people, a truly "silent singularity": something the vast majority of us don't care or think about as we go about our daily lives, something only very mildly disruptive, and a very natural systems transition.
Second, we can see that any particular definition of the technological singularity's arrival (e.g., the Turing Test) must be arbitrary, imprecise, and incomplete, as it will be based on an understanding of human and machine intelligence that can only be highly rudimentary from the human perspective. Nevertheless, such definitions are never trivial, given the broad implications of this transition for the nature and capabilities of Earth's dominant form of local intelligence.
Third, it is worth noting that tomorrow's machines will increasingly possess many "alien intelligences" and idiot-savant-like features whose human equivalency cannot be easily measured. I have argued elsewhere that these intelligences will be very unlikely to disrupt the nature of the local environment as seen from our perspective, due to the intrinsically interdependent, resilient, and self-balancing features of all emergent higher intelligence. It seems very likely that the greater the "plasticity" of physical intelligence, and the lower the cost of understanding and modeling other universal systems (as well as one's own alternative mindsets), the more integrative and ethical all intelligent systems become. Fortunately, we'll have plenty of time to see whether this hypothesis is correct, as humans engage in "artificial selection for symbiosis" with our increasingly powerful, helpful, and inscrutable (except in postmortem caricature) machine intelligence systems in coming decades.
Technological singularity timing predictions have a general strategic value, but I would argue that they are probably much less important, both in terms of their specificity and their ultimate impact on human society (and human economics), than is commonly considered by lay futurists. By contrast, successfully predicting any of the specific singularities that will lead us incrementally closer to the tech singularity, such as the arrival of affordable broadband or a functional conversational user interface, will usually have great economic value in a world where appropriate timing is everything, and while planetary innovation is still largely driven by biological human beings.
This said, today's leading tech singularity timing predictions can be usefully grouped into three camps: short-, mid-, and longer-range prediction groups, spanning roughly 30-, 50-, and 70-year periods respectively.
Tech Singularity Timing Predictions: Short-Range (Now to 2029)
In the short-range prediction group (human-surpassing autonomous intelligence arriving now to 2029), we find technology historian Henry Adams, apparently the first technological singularity theorist, who in 1909 proposed a phase change to instantaneous progress and physical thought some time between 1921 and 2025. In this realm we also find Nick Hoggard ("Evolution and the Feigenbaum Number," 2000), who advanced an interesting paper and mathematical model suggesting that this global "phase change" would arrive as early as 2001-2004. Also on the hyperaccelerated end are Millennialists, in various religions, who expect technology to imminently trigger some form of scriptural transformation or Armageddon. This group includes students of Mayan calendrics (e.g., the "Novelty Theory" of Terence McKenna and associates) who have proposed a December 21, 2012 singularity.
Mathematician and science fiction author Vernor Vinge belongs to this group, as he has stated ("The Coming Technological Singularity", 1993) that he would be surprised to see this event occur "before 2005 or after 2030." Vinge's 2003 update of this essay (Whole Earth Review, Spring 2003) reiterates this time period as reasonable, though he leaves open the possibility that human inability to help machines discover valuable bottom-up, embryologic-developmentalist designs might lead to a significant delay, or even to no technological singularity at all, due to the "large project software problem." For a response to this perspective, see our critiques page.
The transhumanist philosopher Nick Bostrom also belongs mostly to the short-range group, as he has made a case for superhuman machine intelligence arriving some time in the "first third" of the 21st century, most particularly 2004-2024 (see "How Long Before Superintelligence?," 1997). Still others with "singularity is very near" projections include a number of "singularitarians," such as Eliezer Yudkowsky, who has predicted the event will occur some time between 2005 and 2020. I find it revealing that, with a few notable exceptions, those who propose extreme nearness of the event are most commonly either 1) in the throes of the predictable radicalism of youth ("the world depends on me"), or 2) of an advanced age and hoping to see the transcendental event occur before their demise ("no world will exist after me").
Tech Singularity Timing Predictions: Mid-Range (2030-2080)
In the mid-range prediction group (2030 to 2080), we find the majority of present predictions, including technology analysts and futurists such as Vernor Vinge (circa 2030), Hans Moravec and Ray Kurzweil (circa 2040), myself (circa 2060), artificial intelligence pioneer Marvin Minsky ("Will Robots Inherit the Earth?", 1994) (circa 2070), and physicist James Wesley (Ecophysics, 1974) (circa 2075). Two additional analyses in this group are especially noteworthy, as they employ valuable mathematical models of finite-time singularity development.
In 2000, Laurent Nottale (an astrophysicist), Jean Chaline (a paleontologist), and Pierre Grou (an economist) published an admirably interdisciplinary paper, "On the Fractal Structure of Evolutionary Trees," which applies log-periodic analysis to the main crises of evolutionary civilizations. This was followed by a groundbreaking book, Les Arbres de l'Evolution (Trees of Evolution), 2000, which models universal, biological, and economic development all on a fractal, log-periodic acceleration. Their acceleration model reaches a macro-scale singularity, a global critical time, at 2080 ± 30 years (2050-2110). The trio continue to publish (note this 2002 essay) on their fractal model for acceleration, and with luck their interesting ideas will gain wider critical consideration in coming years.
In 2001, Didier Sornette (a complex systems researcher) and Anders Johansen (a physicist) published a paper, "Significance of log-periodic precursors to financial crashes." They noted that hierarchical emergence to new regimes often involves an accelerating approach to a finite-time singularity, followed by a phase transition, which may or may not be locally "catastrophic," as in a financial crash. They came to believe that this pattern could be used to predict some stock market crashes months before they occur. This led Dr. Sornette to publish a fascinating work, Why Stock Markets Crash, 2003, which gives a tour of the theory of critical phenomena and then applies a log-periodic model to historical economic crashes. Sornette and Johansen's model gives a critical time for global phase change at 2050 ± 10 years (2040-2060), and they offer three scenarios for the meaning of this change: 1) economic collapse, 2) a transition to economic sustainability, or, most interestingly, 3) superhumanity. More discussion of their work can be found in our Brief History of Intellectual Discussion of the Singularity.
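The finite-time, log-periodic pattern both groups describe can be sketched in a few lines: event times accelerate toward a critical time tc, with the gap to tc shrinking by a constant factor between successive events, so tc can be recovered from any three consecutive events. The sketch below is illustrative only; the starting date, scaling ratio, and event count are arbitrary choices, not parameters from either paper.

```python
def crisis_dates(t0, tc, g, n_events):
    """Event times accelerating log-periodically toward a critical time tc:
    the gap to tc shrinks by a constant factor g between successive events,
    i.e. tc - t_n = (tc - t0) / g**n."""
    return [tc - (tc - t0) / g ** n for n in range(n_events)]

def estimate_tc(events):
    """Recover tc from three consecutive event times. If the gaps to tc
    shrink geometrically, then tc = (t1^2 - t0*t2) / (2*t1 - t0 - t2)."""
    t0, t1, t2 = events[-3], events[-2], events[-1]
    return (t1 * t1 - t0 * t2) / (2 * t1 - t0 - t2)

# Illustrative parameters: a sequence converging on tc = 2080 (Nottale et
# al.'s central value), each inter-crisis interval ~1.73x shorter than the last.
events = crisis_dates(t0=1800.0, tc=2080.0, g=1.73, n_events=8)
print(round(estimate_tc(events), 1))  # -> 2080.0
```

Note the algebraic identity in estimate_tc: writing a = tc - t0, b = tc - t1, c = tc - t2 with b² = ac (geometric shrinkage), the numerator reduces to tc(a + c - 2b) and the denominator to (a + c - 2b), so the critical time falls out exactly.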
Tech Singularity Timing Predictions: Longer-Range (2081-2150+)
In the longer-range prediction group (2081 to 2150+), we find the systems theorist Richard Coren (The Evolutionary Trajectory, 1998), who projects a singularity in 2140, and the economist Robin Hanson, who makes a similar prediction with regard to an "economic singularity" circa 2150. It is quite possible that a number of additional credible longer-range predictions will be proposed by those who presently consider the idea very speculative, once better methodologies are brought to bear on this complex predictive challenge.
Today, most estimates in the singularity discussion community predict a generalized human-surpassing machine intelligence emerging in the mid-range period, 2030-2080. Many singularitarians remain in the short-range group, and some of the older, more conservative prognosticators (like Marvin Minsky, Didier Sornette, Laurent Nottale and myself) are either in the upper end of the mid-range or in the longer-range groups.
In 1999, I originally considered 2040 ± 20 years as a broadly reasonable range, placing me in the early part of the mid-range period. But in subsequent inquiry, I have revised my estimate to 2060 ± 20 years, placing me in the latter half of this group.
Motivating this revision is a better understanding that a mature, distributed, planetwide network of semiautonomous, evolutionary developmental hardware and software processes is likely to be required for the technological singularity "phase change" to occur on Earth. Evolutionary and genetic programming pioneers such as John Koza (see Genetic Programming IV: Routine Human-Competitive Machine Intelligence, 2004) are showing us that a new era of computer-aided creativity is already upon us in a number of domains. This work is very important and should be actively supported by the mainstream programming community. Nevertheless, the tech singularity is very unlikely to be precipitated by today's early and limited forms of evolutionary computing; it will more likely require tomorrow's massively parallel, cyclical, highly interactionist, and mostly bottom-up evolutionary developmental processes. Today, dedicated metal-oxide-semiconductor ASIC systems are the dominant computer manufacturing paradigm. As the International Technology Roadmap for Semiconductors (ITRS) notes, these dramatically miniaturizable but rather brittle and simplistic systems are very likely to retain their dominance until at least 2020.
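Koza's genetic programming evolves program trees, which is beyond a short sketch, but the underlying evolutionary loop it shares with simpler genetic algorithms (fitness-based selection, crossover, mutation) can be illustrated with a textbook toy. Everything below, including the OneMax task and the parameter values, is a minimal classroom example, not Koza's system.

```python
import random

def evolve(bits=32, pop_size=40, generations=200, mutation=0.02, seed=1):
    """Minimal genetic algorithm on OneMax: evolve a bitstring toward all 1s.
    Illustrates the selection/crossover/mutation loop that Koza-style genetic
    programming applies to program trees rather than bitstrings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    fitness = sum  # fitness = number of 1 bits in the string
    for _ in range(generations):
        def select():
            # Tournament selection: the fitter of two random individuals.
            return max(rng.sample(pop, 2), key=fitness)
        next_pop = []
        for _ in range(pop_size):
            a, b = select(), select()
            cut = rng.randrange(bits)  # one-point crossover
            child = a[:cut] + b[cut:]
            # Flip each bit with a small independent probability.
            child = [g ^ 1 if rng.random() < mutation else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(map(fitness, pop))

print(evolve())  # reliably approaches the optimum of 32
```

The point of the sketch is the loop structure, not the task: replace the bitstring with a program tree and the bit-flip with subtree mutation and you have the skeleton of genetic programming.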
Circa 2020, I expect a highly useful set of Conversational User Interfaces (CUIs), built on top of an increasingly parallel but still only weakly biologically inspired set of computer architectures, to begin to emerge. The CUI is a preliminary step before high-level machine intelligence; understanding and measuring the process of CUI emergence can therefore give us insight into the dynamics of the technological singularity to follow.
Why might a distributed system transition be necessary for the CUI to emerge? See On Phase Change Singularities: The Nature and Timing of CUI Emergence for more on this fascinating topic.
My confidence interval presently remains rather wide, at 20 to 40 human years (1-2 standard deviations) on either side of 2060, as I realize local progress is never certain in a fault-prone world. I suspect the actual arrival time depends quite substantially, within a human generation or two either way, on the quality of the choices we make in our lifetimes. To significantly accelerate its arrival, most important may be our political, economic, social, and personal choices in regard to science and technology education, innovation, research, and development. To significantly delay its arrival, we have many more possibilities, none of which we need reiterate here.
If any of these latter time ranges are even approximately true, our ongoing technological acceleration is most definitely a central issue for our generation to consider. Unlike growth curves in particular physical systems, which always saturate, computational trend curves have been shown to be consistently hyperexponential and substrate independent in the known history of the universe. We overlook them at the risk of our own ignorance and misunderstanding of our apparently universally constrained future: the local emergence of postbiological intelligence.
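To make the hyperexponential claim concrete: in an exponential trend the doubling time is constant, while in a hyperexponential trend the doubling time itself shrinks over time. The toy model below (the rate parameters r0 and a are arbitrary illustrations, not fitted values from any of the cited work) makes the distinction explicit.

```python
import math

def capacity(t, r0=0.5, a=0.02):
    """Toy hyperexponential curve: the growth rate itself grows.
    d(ln N)/dt = r0 * exp(a*t)  =>  ln N(t) = (r0/a) * (exp(a*t) - 1).
    A plain exponential is the a -> 0 limit (constant doubling time)."""
    return math.exp((r0 / a) * (math.exp(a * t) - 1.0))

def doubling_time(t, r0=0.5, a=0.02):
    """Instantaneous doubling time, ln(2) / growth rate: shrinks as t grows."""
    return math.log(2) / (r0 * math.exp(a * t))

# The doubling time falls from ~1.39 to ~0.51 time units between t=0 and t=50,
# the signature that distinguishes hyperexponential from exponential growth.
print(round(doubling_time(0), 2), round(doubling_time(50), 2))  # -> 1.39 0.51
```

For a plain exponential, capacity(2t) equals capacity(t) squared; for this curve it is strictly greater, which is one simple way to test a trend series for hyperexponential behavior.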
We can no more prevent the coming singularity than we could stop using mathematics, language, or electricity, or uninvent the computer. We now see the outline of a major transition we can either accelerate or delay, approach wisely or foolishly, but one we seemingly cannot avoid.
Here are a few groups that have conducted some interesting inquiries into the phenomenon of accelerating change and the hypothesis of the technological singularity.
Introductory Links (alpha by author)
Michael Anissimov's thoughtful Accelerating
AtomJack's unique artistic-analytic "Singularity"
Johnathan Bethel, Omega Point Institute (on the interface between consciousness and the technological singularity)
Dan Clemmensen, "Paths to the Singularity"
Steve Edwards, "Surviving the Singularity"
Alan Kazlev, "The Singularity"
Marvin Minsky, Scientific American, 1994, "Will Robots Inherit the Earth?"
Charles Platt interviews Hans Moravec on the Future of Computers and Humanity, HotWired, 1995
Anders Sandberg, "The Singularity"
Please let us know if you discover other generalist introductory resources that should be on this page.