Singularity Watch

Singularity Timing Predictions

Among those who take the concept seriously, there is a wide variety of opinion about the likelihood and timing of the technological singularity (a generalized, human-surpassing artificial intelligence).

Some theorists, such as Brandon Carter, Richard Gott, John Leslie, Nick Bostrom, and Robin Hanson, consider the coming transition essentially a matter of chance and choice, a test that we might easily fail, given either poor decision making or simply a run of bad luck in a chaotic universe (see the "Great Filter" or the "Doomsday Argument"). Others, such as myself, suspect the arrival of autonomous technological intelligence to be statistically all but inevitable (i.e., extremely probable as a physical event), but propose that the manner and timing of the transition remain key choices under the influence of human beings.

We'll skip further discussion of the probability of the technological singularity for now. Assuming its likelihood, we will next consider the range of existing predictions on the approximate time of arrival of the event. A few brief but fundamental observations should be made before we begin.

First, we must recognize that the "technological singularity" can only be an aggregation of a chain of smaller singularities (human-surpassing modules of machine intelligence), many of which have already arrived on the planet. For example, a "calculational singularity" occurred in isolated Earth environments in the late 19th century when a few human beings began using complex mechanical calculators whose inner workings were not, for all practical purposes, comprehensible to their slow-switching human brains. A generalized calculational singularity then gradually occurred on the planet during the mid-to-late 20th century, when digital calculating devices became ubiquitous.

Today, primitive neural network software programs, functionally replacing human eyes, drive most modern research telescopes in the search for supernovas in the night sky (a relatively simple pattern recognition problem), so a generalized "supernova scanning singularity" has also occurred, one that took roughly a decade to unfold. Thus, using John Koza's valuable definition of instances or modules of "human-competitive machine intelligence," we can see that the arrival of human-surpassing machine intelligence must be a very broadly incremental and "modular" (in the cognitive science sense) emergence. A long chain and large collection of specific singularities will subtly lead us to a condition where many, then eventually most higher human capacities are being functionally represented in realtime within our planetary machine intelligence.

As Ray Kurzweil and others have noted, this will be a surprisingly subtle process. Some time this century we will start arguing over just how smart our machine intelligences are, and as Jane Stevenson has observed, the emergence of generalized human-surpassing artificial intelligence will very likely be, for most people, a truly "silent singularity": something the vast majority of us don't care or think about as we go about our daily lives, something only very mildly disruptive, and a very natural systems transition.

Second, we can see that any particular definition of the technological singularity's arrival (e.g., the Turing Test) must be arbitrary, imprecise, and incomplete, as it will be based on an understanding of human and machine intelligence that can only be highly rudimentary from the human perspective. Nevertheless, such definitions are never trivial, given the broad implications of this transition for the nature and capabilities of Earth's dominant form of local intelligence.

Third, it is worth noting that tomorrow's machines will increasingly possess many "alien intelligences" and idiot-savant-like features whose human equivalency cannot be easily measured. I have argued elsewhere that these intelligences will be very unlikely to disrupt the nature of the local environment when seen from our perspective, due to intrinsically interdependent, resilient, and self-balancing features of all emergent higher intelligence. It seems very likely that the greater the "plasticity" of physical intelligence, and the lower the cost of understanding and modelling other universal systems (as well as one's own alternative mindsets), the more integrative and ethical all intelligent systems become. Fortunately, we'll have plenty of time to see whether this hypothesis is correct, as humans engage in "artificial selection for symbiosis" with our increasingly powerful, helpful, and inscrutable (except in postmortem caricature) machine intelligence systems in coming decades.

Technological singularity timing predictions have a general strategic value, but I would argue that they are probably much less important, both in terms of their specificity and their ultimate impact on human society (and human economics), than is commonly considered by lay futurists. By contrast, successfully predicting any of the specific singularities that will lead us incrementally closer to the tech singularity, such as the arrival of affordable broadband or a functional conversational user interface, will usually have great economic value in a world where appropriate timing is everything, and while planetary innovation is still largely driven by biological human beings.

This said, today's leading tech singularity timing predictions can be usefully grouped into three camps: short-, mid-, and longer-range prediction groups, covering roughly 30-, 50-, and 70-year periods respectively.

Tech Singularity Timing Predictions — Short-Range (Now to 2029)

In the short-range prediction group (human-surpassing autonomous intelligence arriving now to 2029), we find technology historian Henry Adams, apparently the first technological singularity theorist, who in 1909 proposed a phase change to instantaneous progress and physical thought some time between 1921 and 2025. In this camp we also find Nick Hoggard ("Evolution and the Feigenbaum Number," 2000), who advanced an interesting paper and mathematical model suggesting that this global "phase change" would arrive as early as 2001-2004. Also on the hyperaccelerated end are Millennialists, in various religions, who expect technology to imminently trigger some form of scriptural transformation or Armageddon. This group includes students of Mayan calendrics (e.g., the "Novelty Theory" of Terence McKenna and associates) who have proposed a December 21, 2012 singularity.

Mathematician and science fiction author Vernor Vinge belongs to this group, as he has stated ("The Coming Technological Singularity", 1993) that he would be surprised to see this event occur "before 2005 or after 2030." Vinge's 2003 update of this essay (Whole Earth Review, Spring 2003) reiterates this time period as reasonable, though he leaves open the possibility that human inability to help machines discover valuable bottom-up, embryologic-developmentalist designs might lead to a significant delay, or even to no technological singularity at all, due to the "large project software problem." For a response to this perspective, see our critiques page.

The transhumanist philosopher Nick Bostrom also belongs mostly to the short-range group, as he has made a case for superhuman machine intelligence arriving some time in the "first third" of the 21st century, and most particularly 2004-2024 (see "How Long Before Superintelligence?," 1997). Still others with "singularity is very near" projections are a number of the "singularitarians," such as Eliezer Yudkowsky, who has predicted the event to occur some time between 2005 and 2020. I find it revealing that, with a few notable exceptions, those who propose extreme nearness of the event are most commonly either 1) in the throes of the predictable radicalism of youth ("the world depends on me"), or 2) of an advanced age and hoping to see the transcendental event occur before their demise ("no world will exist after me").

Tech Singularity Timing Predictions — Mid-Range (2030-2080)

In the mid-range prediction group (2030 to 2080), we find the majority of present predictions, including technology analysts and futurists such as Vernor Vinge (circa 2030), Hans Moravec and Ray Kurzweil (circa 2040), myself (circa 2060), artificial intelligence pioneer Marvin Minsky ("Will Robots Inherit the Earth?", 1994) (circa 2070), and physicist James Wesley (Ecophysics, 1974) (circa 2075). Two additional analyses in this group are especially noteworthy, as they employ valuable mathematical models of finite-time singularity development.

In 2000, Laurent Nottale (an astrophysicist), Jean Chaline (a paleontologist), and Pierre Grou (an economist) published an admirably interdisciplinary paper, "On the Fractal Structure of Evolutionary Trees," which applies log-periodic analysis to the major crises of biological and civilizational evolution. This was followed by a groundbreaking book, Les Arbres de l'Evolution (Trees of Evolution), 2000, which models universal, biological, and economic development on a single fractal, log-periodic acceleration. Their model reaches a macro-scale singularity, a global critical time, at 2080 ± 30 years (2050-2110). The trio continue to publish (note this 2002 essay) on their fractal model of acceleration, and with luck their interesting ideas will gain wider critical consideration in coming years.

In 2001, Didier Sornette (a complex systems researcher) and Anders Johansen (a physicist) published a paper, "Significance of log-periodic precursors to financial crashes." They noted that hierarchical emergence to a new regime often involves an accelerating approach to a finite-time singularity, followed by a phase transition, which may or may not be locally "catastrophic," as in a financial crash. They found evidence that this pattern can be used to anticipate some stock market crashes months in advance. This led Dr. Sornette to publish a fascinating work, Why Stock Markets Crash, 2003, which gives a tour of the theory of critical phenomena and then applies a log-periodic model to historical economic crashes. Sornette and Johansen's model gives a critical time for global phase change at 2050 ± 10 years (2040-2060), and they offer three scenarios for the meaning of this change: 1) economic collapse, 2) a transition to economic sustainability, or, most interestingly, 3) superhumanity. More discussion of their work can be found in our Brief History of Intellectual Discussion of the Singularity.
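Both analyses fit variants of the same functional family. The sketch below shows the general shape of a log-periodic power law: an accelerating trend toward a finite critical time, decorated with oscillations that crowd together as that time approaches. The parameter values are purely illustrative, not the fitted values from either group's papers.

```python
import math

def lppl(t, tc=2050.0, m=0.5, A=10.0, B=-4.0, C=0.3, omega=6.0, phi=0.0):
    """Log-periodic power law: accelerating trend toward a finite
    critical time tc, with oscillations whose period shrinks as t
    approaches tc. All parameter values are illustrative only, not
    the fitted values from the Nottale or Sornette analyses."""
    dt = tc - t  # time remaining before the critical point
    if dt <= 0:
        raise ValueError("model is only defined before the critical time tc")
    return A + B * dt**m * (1.0 + C * math.cos(omega * math.log(dt) + phi))

# The trend accelerates as t nears tc (sampled at three dates):
for year in (1900, 2000, 2049):
    print(year, round(lppl(year), 3))
```

Fitting tc to historical crisis data, rather than assuming it, is what lets such models produce a predicted critical date with an error bar, as in the 2080 ± 30 and 2050 ± 10 estimates above.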

Tech Singularity Timing Predictions — Longer-Range (2081-2150+)

In the longer-range prediction group (2081 to 2150+), we find the systems theorist Richard Coren (The Evolutionary Trajectory, 1998), who projects a singularity in 2140, and the economist Robin Hanson, who makes a similar prediction with regard to an "economic singularity" circa 2150. It is quite possible that a number of additional credible longer-range predictions will be proposed by those who presently consider the idea very speculative, once better methodologies are brought to bear on this complex predictive challenge.

Today, most estimates in the singularity discussion community predict a generalized human-surpassing machine intelligence emerging in the mid-range period, 2030-2080. Many singularitarians remain in the short-range group, and some of the older, more conservative prognosticators (like Marvin Minsky, Didier Sornette, Laurent Nottale and myself) are either in the upper end of the mid-range or in the longer-range groups.

In 1999, I originally considered 2040 ± 20 years as a broadly reasonable range, placing me in the early part of the mid-range period. But in subsequent inquiry, I have revised my estimate to 2060 ± 20 years, placing me in the latter half of this group.

Motivating this revision is a better understanding that a mature, distributed, planetwide network of semiautonomous, evolutionary developmental hardware and software processes is likely to be required for the technological singularity "phase change" to occur on Earth. Evolutionary and genetic programming pioneers such as John Koza (see Genetic Programming IV: Routine Human-Competitive Machine Intelligence, 2004) are showing us that a new era of computer-aided creativity is already upon us in a number of domains. This work is very important and should be actively supported by the mainstream programming community. Nevertheless, the tech singularity is very unlikely to be precipitated by today's early and limited forms of evolutionary computing, but by tomorrow's massively parallel, cyclical, highly interactionist, and mostly bottom-up evolutionary developmental processes. Today, dedicated metal-oxide-semiconductor ASIC systems are the dominant computer manufacturing paradigm. As the International Technology Roadmap for Semiconductors (ITRS) notes, these dramatically miniaturizable but rather brittle and simplistic systems are very likely to retain their dominance until at least 2020.
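The evolutionary-computation loop referred to above can be made concrete with a toy example. This is a generic mutation-plus-truncation-selection sketch (the function names are hypothetical, and nothing here reproduces Koza's actual genetic programming systems), evolving bit-strings toward a trivially simple fitness target:

```python
import random

def evolve(fitness, genome_len=20, pop_size=30, generations=200, seed=1):
    """Minimal evolutionary loop: rank a population by fitness, keep the
    better half, and refill with single-bit mutants of the survivors.
    A toy sketch of the general idea, not Koza's genetic programming."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)   # rank by fitness
        parents = pop[: pop_size // 2]        # truncation selection
        children = []
        for p in parents:                     # each parent spawns one mutant
            child = p[:]
            child[rng.randrange(genome_len)] ^= 1  # flip one random bit
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# "OneMax" toy problem: fitness is simply the number of 1-bits.
best = evolve(fitness=sum)
print(sum(best))
```

Real genetic programming evolves executable program trees with crossover rather than fixed-length bit-strings, but the select-vary-repeat cycle is the same.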

Circa 2020, I expect a highly useful set of Conversational User Interfaces (CUIs), built on top of an increasingly parallel but still weakly biologically inspired set of computer architectures, to begin to emerge. The CUI is a preliminary step toward high-level machine intelligence. Therefore, understanding and measuring the process of CUI emergence can give us insight into the dynamics of the technological singularity to follow.

Why might a distributed system transition be necessary for the CUI to emerge? See On Phase Change Singularities: The Nature and Timing of CUI Emergence for more on this fascinating topic.

My confidence interval presently remains rather wide, at 20 to 40 human years (1-2 standard deviations) on either side of 2060, as I realize local progress is never certain in a fault-prone world. I suspect the actual arrival time depends quite substantially, within a human generation or two either way, on the quality of the choices we make in our lifetimes. To significantly accelerate its arrival, most important may be our political, economic, social, and personal choices in regard to science and technology education, innovation, research, and development. To significantly delay its arrival, we have many more possibilities, none of which we need reiterate here.

If any of these latter time ranges are even approximately true, our ongoing technological acceleration is most definitely a central issue for our generation to consider. Unlike growth curves in particular physical systems, which always saturate, computational trend curves have been shown to be consistently hyperexponential and substrate independent in the known history of the universe. We overlook them at the risk of our own ignorance and misunderstanding of our apparently universally constrained future: the local emergence of postbiological intelligence.
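The contrast drawn here can be made concrete. Simple exponential or logistic growth never diverges, but a process whose growth rate scales with the square of its current size (dx/dt = x^2, with exact solution x(t) = x0/(1 - x0*t)) reaches infinity at the finite time t = 1/x0. A minimal numerical sketch, with arbitrary illustrative parameters:

```python
def hyperbolic(x0, t, steps=100_000):
    """Euler-integrate dx/dt = x**2. The exact solution, x0/(1 - x0*t),
    diverges at the finite 'singularity' time t = 1/x0, unlike
    exponential or logistic growth, which never blows up."""
    dt = t / steps
    x = x0
    for _ in range(steps):
        x += x * x * dt
    return x

x0 = 1.0                    # exact blow-up time is t = 1/x0 = 1.0
print(hyperbolic(x0, 0.5))  # exact solution gives 1/(1 - 0.5) = 2
print(hyperbolic(x0, 0.9))  # exact solution gives 10; growth steepens fast
```

Whether any real-world trend truly follows such a finite-time law, rather than an exponential that will eventually saturate, is of course the empirical crux of the timing debate above.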

We could as little prevent the coming singularity as we could stop using mathematics, or language, or electricity, or uninvent the computer. We now see the outline of a major transition we can either accelerate or delay, approach wisely or foolishly, but one we seemingly cannot avoid.


Discussion Groups

Here are a few groups that have conducted some interesting inquiries into the phenomenon of accelerating change and the hypothesis of the technological singularity.
A new nonprofit chaired by myself (John Smart). As our initial event we produced Accelerating Change 2003, the world's first multidisciplinary conference on accelerating change and the implications of an imminent technological singularity. More can be found at our site above.
Run by transhumanist editor Amara Angelica, this is Ray Kurzweil's excellent site. It has accumulated the largest collection of articles on the singularity at the present time. Be sure to read Ray's 60 page precis, "The Singularity is Near." When his book is published (est. Dec 2004), it will do much to increase public awareness of these topics.
The oldest and most far-sighted nanotechnology think tank. See Chris Peterson, Eric Drexler, and the Foresight Group's Spring 2000 Conference, "Confronting Singularity."
The World Transhumanist Association, the most important discussion community for those considering the way humanity will be transformed by tomorrow's technologies. See Nick Bostrom's "What is the Singularity?" and "How Long Before Superintelligence?" See Hans Moravec's "When Will Computer Hardware Match the Human Brain?" and Commentary.
The oldest transhumanist organization. See Robin Hanson's "A Critical Discussion of Vinge's Singularity Concept." See also Natasha Vita-More's "Vinge's View of the Singularity," which describes one of the rare attempts, by Vinge himself, to imagine scenarios in which the singularity does not occur. Such efforts are extremely difficult among those aware of modern progress in areas such as computational neuroscience and evolutionary computation, and the hundred-year pattern of computer hardware development. Other admirable but ultimately unconvincing future scenarios are Gunther Stent's The Coming of the Golden Age: A View of the End of Progress, 1969, and Owen Paepke's The Evolution of Progress: The End of Economic Growth and the Beginning of Human Transformation, 1992. Paepke's future allows a singularity, but in a postmaterialist regime of declining human economic progress, which just doesn't work for human beings. Paepke doesn't recognize that intangible economic value will continue to accelerate right up to and past the singularity, because human psychology sets those values (what is a Pez dispenser worth? whatever the market will bear).

Principia Cybernetica Web
An excellent resource for systems theory approaches to understanding complexity, hosted at the Free University of Brussels. See Francis Heylighen's "The Socio-Technological Singularity". Heylighen has also introduced a Global Brain Workshop to advance academic discussion of the emergent connectivity in the technological substrate, and to consider the larger implications for humanity.
John Brockman's excellent philosophy and futures site. See Danny Hillis, "Close to the Singularity." Hillis, a giant of computer science, developed a novel paradigm for emergent computation in the early 1980s, when he built a new type of massively parallel computer architecture at his company, Thinking Machines, Inc. (The Connection Machine, 1985). This was one of the first approaches to definitively show that connectionist computational systems can increase their own adaptive complexity, if given adequate hardware "evolutionary space."
The leading "Singularitarian" website. See Eliezer Yudkowsky's "An Introduction to the Singularity," "What is Seed AI?," and "What is General Intelligence?." Yudkowsky has web-published extensively on the topic of the singularity, and I concur with many of his basic positions. Furthermore, his opinions are thoughtfully revised from year to year. At the same time, it is my opinion that the Bayesian, top-down model of artificial intelligence construction which he advances as a method of "creating the singularity" is incomplete and incorrect on several counts. Such approaches, variations of which have been proposed for several decades now, seem to me to represent only a small fraction of the elements necessary for self-catalyzing technological complexity.

Top-down designs are quite valuable ways to make small performance gains in very limited computational domains. But in all top-down schemes to capture the "essence" of general A.I., I suspect the designer's understanding of the depth and creative necessity of evolutionary developmental processes must always be lacking, and the overestimation of the importance of human rational comprehension of the details of the process seems entirely unsupportable. Such schemes remind me of the 19th-century naturalists' complex and hierarchical top-down theories on the essence of life, advanced long before we discovered that life must be constructed bottom-up, nonlinearly, and contingently, in a massively parallel process of locally chaotic self-organization, guided only broadly by a DNA "recipe" that stores little information beyond the parametric fine-tuning constraints on the evolutionary developmental system.

I see efforts at understanding "general artificial intelligence" as often useful first generation philosophical efforts, but not representative of the bulk of ongoing A.I. development, which is occurring across the planet in an incremental, bottom up, and generally unforesighted manner in myriad software, hardware, robotics, instrumentation and automation efforts by a host of only weakly communicating scientists, engineers, and entrepreneurs. For more on this perspective, see Self-Organizing and Self-Replicating Paths to Autonomous Intelligence (A.I.): An Overview.


Introductory Links (alpha by author)

Michael Anissimov's thoughtful Accelerating Future.

AtomJack's unique artistic-analytic "Singularity"

Johnathan Bethel, Omega Point Institute, (interface between consciousness and the technological singularity).

Dan Clemmensen, "Paths to the Singularity"

Steve Edwards, "Surviving the Singularity"

Alan Kazlev, "The Singularity"

Marvin Minsky, Sci. American 1994, "Will Robots Inherit the Earth?"

Charles Platt interviews Hans Moravec on the Future of Computers and Humanity, HotWired, 1995

Anders Sandberg, "The Singularity"

Please let us know if you discover other generalist introductory resources that should be on this page.