Singularity Watch


No Apparent Limits: Addressing Common Arguments Against Continuous Computational Acceleration


Outline

No Miniaturization Limit: Moore's Law and the Developmental Spiral

No Resource Limit: Understanding STEM Compression


No Design Limit: Understanding Computational Autonomy

No Demand Limit: Autonomy and Incompleteness


No Miniaturization Limit: Moore's Law and The Developmental Spiral

First, let's consider the miniaturization limit argument against continual acceleration. Many of the stunning advances we have seen in computer technologies in the last century have been aided by continuous miniaturization of computational architecture. This miniaturization has been particularly dramatic in the computer manufacturing paradigm known as metal oxide semiconductor (particularly silicon dioxide) integrated circuit ("IC") design, a paradigm that has been operating since the early 1960's, and one that is still dominant today. In 1965, in a now-famous feat of prognostication, Gordon Moore, co-founder of Intel, made the observation now known as "Moore's law," a prediction that IC transistor density would continue to double roughly every 18 to 24 months, allowing a rough doubling in the price-performance of computational systems over the same period. Moore's law has held faithfully ever since, and recent data suggest that the price-performance of computation actually grows slightly faster than Moore's curve, though these data are controversial. But periodically, distinguished scientists and engineers propose that we are running into fundamental limits to miniaturization, and that encountering these limits may take us to a new and perhaps slower mode of computational development. For example, Andy Grove, Intel's longtime CEO, has recently proposed that gate current leakage in ICs may cause us to fall off Moore's curve circa 2010, and others have said we will run into quantum tunneling problems (electrons randomly jumping to neighboring circuits) around the same time, as some IC features, such as gate insulating layers, are now only a few molecules thick.
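
To make the doubling rule concrete, here is a minimal back-of-the-envelope sketch (the function and the two-year doubling period are illustrative assumptions; the 1971 starting figure is the commonly cited transistor count of the Intel 4004):

```python
# Illustrative sketch of Moore's law as simple exponential doubling.
# The two-year doubling period is an assumption chosen for the example.

def moores_law_projection(start_count, start_year, end_year, doubling_years=2.0):
    """Projected transistor count at end_year under a fixed doubling period."""
    return start_count * 2 ** ((end_year - start_year) / doubling_years)

# Sixteen doublings between 1971 and 2003 take ~2,300 transistors
# to roughly 150 million -- the right order of magnitude for chips of that era.
print(round(moores_law_projection(2300, 1971, 2003)))
```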

It is true that Moore's law has essentially been a committee expectation, a recurring project manager's deadline, since its inception. But what's truly interesting is the way that the physics of the microcosm has allowed this manufacturer's expectation to hold up over four decades of engineering history. In spite of, and perhaps as a direct result of, our engineers' incessant concern that physical and resource limits might dash Moore's law at any time, this doubling function has been permitted, by the confluence of human desire and the special structure of the physical universe, to continue through a long and relentless series of advances in solid state physics and materials science.

In the latest concern, today's chip designers worry that gate current leakage through silicon dioxide insulating layers threatens longstanding miniaturization trends. But we have recently discovered that hafnium oxide, a more exotic metal oxide, has a thousand times less leakage than silicon dioxide. Such are the surprising and ever more computationally permissive physical properties of the microcosm. Modern chip designers are also concerned about quantum tunneling. But have you heard of quantum dots, electron corrals, optical waveguides, quantum computers, or quantum cascade lasers? Such already functional and semi-functional systems show us the amazing ability of the microcosm to route around miniaturization problems. The physics of small devices, apparently due to preexisting, intrinsic universal structure, has continued to reveal surprising paths toward ever faster, more efficient, and more powerful computational architectures.

Perhaps even more importantly, several insightful observers have noted that when we finally do hit an intrinsic limit to circuit miniaturization, as we move inexorably closer to computing at the quantum level, we will be able to shift our miniaturization goals to process and architecture miniaturization, trends which may be a far more important measure for understanding the coming singularity. Once it is no longer cost effective to shrink logic gates and circuit designs, circa 2015, the fabrication industry can move en masse to shrinking higher-level computational systems, and thence to more biologically inspired forms of computing. Such systems will involve closely connected, massively parallel, but widely differentiated circuits, similar to the construction of the human brain, circuits that can internalize high-level computational algorithms and engage in increasingly sophisticated physical processes.

Massively parallel, biologically inspired computing systems are not yet economically feasible to develop in our present environment of accelerating circuit miniaturization, and may not be for a few decades to come, as long as miniaturization and automation at the circuit level continue to yield such easy and powerful performance gains. Few today have realized the importance of that developmental reality in limiting the forms of artificial intelligence that Earth's sociotechnological infrastructure can presently afford to produce. MEMS (micro-electro-mechanical systems), SOCs (systems on a chip, such as the coming cellphone-on-a-chip), molecular, optical, magnetic, and even quantum computing are all dramatic examples of tomorrow's broader and more complex types of miniaturization, as we begin to move beyond the "die size" (circuit miniaturization) paradigm in early but unmistakable ways, even today.

Considering this panoply of related miniaturization trends allows us to understand that a generalized Moore's law appears to be in operation. In other words, given past history, it presently seems most reasonable to assume that technological systems will continue to increase their computational capacities at Moore's amazing rate, or faster, as far ahead as we can see into our extraordinary future.

Just how long have miniaturization trends held, when they are generally defined to include the emergence of different physical computational systems? Moore's curve of exponential growth has been running since the mid 1960's, when it is conventionally defined as circuit miniaturization on metal oxide semiconductors. But futurists and systems theorists, beginning in the 1970's, subsequently noted that this exponentiation has held since at least the invention of the transistor, in 1947.

Perhaps the first popular global analysis of accelerating trends in computer technology, their exponential growth through a variety of computing substrates (manual, mechanical, electromechanical, vacuum tube, transistor, integrated circuit, and microprocessor), and their broad implications for the future of humanity, was made by Hans Moravec in Mind Children, 1988. This slim, revolutionary work introduced many computer scientists, technologists, and lay readers to the idea that biological intelligence may be a transitory phase in a universal program of exponential growth in computational capacity.

More recently, Ray Kurzweil (Age of Spiritual Machines, 1999) valuably updated Moravec's work by documenting a double exponential growth in computational capacity since at least 1890, through five separate computer engineering substrates (mechanical, electromechanical, vacuum tube, transistor, and integrated circuit). Kurzweil's data demonstrate that the doubling times themselves are gently shrinking with each doubling (i.e., the acceleration is a gently "double exponential," "hyperexponential," or perhaps even "asymptotic" process), a feature that had been suspected for decades by several observers of the Moore's law phenomenon.
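
To make the distinction concrete, here is a minimal numerical sketch (all parameters are invented for illustration) of the difference between a fixed doubling time and a doubling time that itself shrinks slightly with every doubling:

```python
# Simple exponential vs. gently double-exponential growth.
# The initial doubling time and the shrink factor are illustrative assumptions.

def years_for_doublings(n_doublings, first_doubling_time=2.0, shrink=1.0):
    """Total elapsed years for n_doublings; shrink=1.0 keeps the doubling
    time fixed, shrink<1.0 shortens each successive doubling time."""
    elapsed, dt = 0.0, first_doubling_time
    for _ in range(n_doublings):
        elapsed += dt
        dt *= shrink
    return elapsed

print(years_for_doublings(20, shrink=1.0))   # 40.0 years: simple exponential
print(years_for_doublings(20, shrink=0.95))  # ~25.7 years: double exponential
# A ~million-fold (2^20) capacity gain arrives some fourteen years sooner
# when each doubling period is just 5% shorter than the one before it.
```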

For Moravec and Kurzweil, one key metric of acceleration is the price performance of computation (computing power purchasable every year, in constant dollars). Yet while this is a valuable and accessible measure, one that shows the uniqueness of computing systems, perhaps an even more fundamental metric is the physical resource efficiency of computation. What is most curious about the developmental history of computation is that each new system discovers how to use much less matter, energy, space, and time to compute any equivalent information. This creates accelerating emergences of increasingly local (e.g., more miniaturized, at both the systems level and the logic gate level) and more resource efficient computational systems over universal time.

This idea of universal acceleration was first popularized by Carl Sagan (in Dragons of Eden, 1977) with his powerful metaphor, the Cosmic Calendar (a.k.a. the "accelerating cosmic timeline"), which highlights the ever-faster emergence of important physical-computational events across the entire history of the universe. More recently, Eric Chaisson (Cosmic Evolution, 2001) has cleverly given this cosmological acceleration a clear thermodynamic basis, in terms of the quantity Phi, the free energy rate density of emergent complex systems.
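
For readers who want the definition, Chaisson's metric is simply the rate of free energy flow through a system, normalized by the system's mass. A sketch of the formula follows; the representative values are order-of-magnitude figures recalled from Cosmic Evolution and should be treated as indicative only:

```latex
% Chaisson's free energy rate density
\Phi_m \;=\; \frac{F}{m}
\qquad \left[\frac{\text{erg}}{\text{s}\cdot\text{g}}\right],
\quad \text{where } F \text{ is the free energy flowing through a system of mass } m.
% Representative order-of-magnitude values: galaxies ~1, stars ~2,
% plants ~10^3, animal bodies ~10^4, human brains ~10^5, modern society ~10^5--10^6.
```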

The pioneering work of Moravec, Sagan, Kurzweil, Chaisson, and other systems theorists strongly implies that there is something about the construction of the universe itself, something about the nature and universal function of local computation, that permits, and may even statistically mandate, continuously accelerating computational development in our local environment. This fascinating process of accelerating emergence of increasingly localized and miniaturized systems, a phenomenon that has been called the developmental spiral, has been noted by an increasing number of careful thinkers since Darwin's age. An overview of the long history of discussion of accelerating change can be found in our Brief History of Intellectual Discussion of the Singularity, recommended for those seeking a broader context for these fascinating ideas.

While the developmental spiral metaphor is commonly known to systems theorists, it is somewhat psychologically disturbing to our slow-switching biological brains and still poorly understood, so the spiral remains, like a naked emperor, a phenomenon seen by many but discussed by few. This website and, with your help, our growing ASF community will seek to rectify that oversight in the coming years.

The acceleration is the most common feature noted when the developmental spiral is discussed. The trends toward miniaturization and localization are much less commonly noted, but are also fundamentally important, from my perspective.

Today, fat-fingered early 21st century human beings are running programs on a single atom of calcium. We have managed to create functional seven-qubit quantum computers, and to teleport the quantum state of a beam of light. These advances are breathtaking, and entirely unprecedented. The extreme permissiveness of the universe to accelerating computational miniaturization, as far down as we can probe, may not be accidental, but central to its special, self-organized structure. Ongoing miniaturization may be a necessary element of the larger purpose, or teleology, of computational intelligence in the universe. Popularizing and interpreting the trends and implications of accelerating miniaturization is one of my own contributions to the ongoing dialog.



No Resource Limit: Understanding STEM Compression

Resource limits are another proposed constraint on computer hardware development. In IC fabrication, both Arthur Rock and Gordon Moore have long observed ("Rock's law," or "Moore's second law") that the capital requirements for new chip fabrication plants experience their own exponential growth, with fabrication plant costs doubling approximately every four years. Thus the computer manufacturing industry requires significantly more capital each year to deliver our ever-more-miniaturized computational technology. Moore estimates that at historical cost growth rates, fabrication plants will become too expensive to build circa 2020, even with very optimistic growth projections for our IT economy.
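
The arithmetic behind that concern is easy to reproduce. A minimal sketch follows; the baseline cost and year are assumptions chosen for illustration, since actual fab costs vary widely by process and vendor:

```python
# Illustrative projection of fabrication plant cost under "Rock's law"
# (cost doubling roughly every four years). The $2B-circa-2000 baseline
# is an assumption for the example, not a quoted industry figure.

def fab_cost_billions(year, base_cost=2.0, base_year=2000, doubling_years=4.0):
    return base_cost * 2 ** ((year - base_year) / doubling_years)

for year in (2008, 2012, 2016, 2020):
    print(year, f"~${fab_cost_billions(year):.0f}B")
# 2008 ~$8B, 2012 ~$16B, 2016 ~$32B, 2020 ~$64B: costs that soon outgrow any
# single firm's capital budget, which is the heart of the resource-limit worry.
```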

There is further apparent support for the resource limits argument when we consider the nature of biological growth within any particular species. The classic pattern is called logistic or "sigmoidal" ("S curve") growth, where population growth is initially exponential, but matter, energy, or space limits and competitive species interaction (another form of resource limit) always slow this growth, leading to saturation of the population size in finite time. Other natural non-exponential dynamics in biological systems are oscillation around a mean, as in predator-prey relationships, and periodic "crashes" ("i curves") seen in biological catastrophes. Interestingly, such catastrophes never occur in the "average distributed complexity" of biological systems, only in particular species and environments.
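
For reference, here is the standard mathematical form of the saturating growth this argument invokes:

```latex
% Logistic ("S curve") growth toward a carrying capacity K
\frac{dN}{dt} \;=\; rN\left(1-\frac{N}{K}\right),
\qquad
N(t) \;=\; \frac{K}{1+\dfrac{K-N_0}{N_0}\,e^{-rt}} .
% For N << K growth is nearly exponential (dN/dt ~ rN); as N approaches K,
% the resource term (1 - N/K) drives growth to zero and the population saturates.
```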

As one might expect, the record of the developmental spiral provides early evidence that popular "limits to growth" arguments are astonishingly irrelevant to the developmental record of information processing itself, because information processing appears to be a phenomenon that is entirely independent of any specific matter-energy computational system. This is apparently because information arises out of, and controls, the continuous reorganization of matter-energy systems in the physical universe. In the known history of the universe, the most computationally complex local information processing systems have always discovered ever-more-clever ways to rearrange themselves using less space, less time, less energy, and less material resources (the phenomenon of STEM compression) during their ongoing evolutionary development. At the same time, they accelerate their local densities of space, time, energy, and matter flow in computational activities.

STEM compression appears to be a central mechanism, or physical "driver," of accelerating universal change. As mentioned in our brief history, Buckminster Fuller may have been the first to describe this eloquently, as early as 1938, in his enlightening concept of the "ephemeralization" of economic processes over time. The physicist Carver Mead, myself, and a few others have noted "the unreasonable efficiency of the microcosm," the fact that physical miniaturization has consistently led not only to accelerating computational power, but to astoundingly greater physical and computational efficiencies over time. Note, for example, this report (Physical Review Letters, 93, 2004) on holey fibers. This breakthrough, typical of advances in the microdomain, involves a million-fold greater efficiency, and promises new applications of lasers and optoelectronic systems dramatically more useful and flexible than previously possible.

Milan Cirkovic, among others, has observed this curious and counterintuitive effect in the theory of quantum computation, specifically in the notion of vanishing interaction energy, where reversible nonorthogonal quantum computations become faster the lower their average energy of interaction. Cirkovic's insights suggest our descendants will create a femtotechnological computational environment in which they do astronomical calculations at near-zero energy cost of information processing. This violates the expectations of standard information theory, but it is consistent with a long record of stunning, counterintuitive emergent efficiencies and accelerations in the microdomain, such as superconductivity.

With the recent advent of nano and quantum computational research devices, we see no end to this process of accelerating miniaturization in the 21st century. In other words, trends in STEM compression, efficiency, or density since the birth of computational machines have made the growth rate of computation as a general process effectively "matter-independent", or free of the specific limits to growth which must affect each particular material substrate and computational paradigm.

While STEM compression may be the fundamental physical driver of accelerating change, STEM dynamics are certainly not the whole story. Information, and the emergent attributes of information processing, appear to be an equally important "psychical" driver (e.g., a process concentrating intentionality; see Teilhard de Chardin, The Human Phenomenon, 1955/99), and these emergent informational properties can be seen as their own powerful constraint on the nature and future of universal change. As Matthew Fenton has remarked, we expect tomorrow's computers to be not only dependably smarter (intelligent) and smaller (matter- and space-efficient in computation), but also friendlier (human-interdependent). For more on these concepts, and the unresolved question of the relevance and future of "infodynamics" in a universe that we today understand only physically, not informationally, see Understanding STEM, STEM+I, and STEM Compression in Universal Change.

I would argue they will also be faster (time-efficient), leaner (energy-efficient), more ecological (planet-interdependent), more stable (immune, redundant, resilient), and more focused in their search for universal meaning (computationally closed, a feature of all intelligence) the more they learn about the world. At the same time, they will discover special problems for which they can find no local solutions (informationally incomplete). If history is any judge, these problems will drive them to seek new iterations of evolutionary developmental complexity in new, nonlocal environments.

All of these qualities are well summarized as "Meta-Trends" of technological and computational evolutionary development that have apparently accelerated across all human history: informational intelligence, interdependence, immunity, and STEM compression and efficiency. More on the history, implications, and future of these trajectories in my forthcoming book.

Unlike the growth of systems of relatively time-fixed complexity, such as replication occurring within a particular species population, we have never seen a general case of diminishing marginal returns in computational technology development for society as a whole.

This is because computational systems, as generally defined, are always able to discover new resource-efficient substrates and architectures that reliably increase their average distributed complexity (ADC, as measured by intelligence, interdependence, immunity, and STEM compression/efficiency), even as specific systems or paradigms, such as vacuum tube computers, follow "S" curves of logistic development or rarer "i" curves of catastrophe, and become increasingly quickly outmoded in a world of accelerating technology.
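
This claim can be illustrated numerically: each individual paradigm saturates along its own S curve, but if a new, roughly ten-fold more capable paradigm reliably arrives as the old one flattens, the envelope of the stacked curves keeps climbing. A toy sketch, with all parameters invented for illustration:

```python
import math

# Toy model of successive computing paradigms, each following a logistic "S" curve.
# Each paradigm arrives 5 time units after the last with a 10x higher ceiling.
# All parameters are invented; the point is the shape of the envelope, not the numbers.

def logistic(t, midpoint, ceiling, rate=1.5):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def total_capacity(t, n_paradigms=5):
    return sum(logistic(t, midpoint=5 * k, ceiling=10 ** k) for k in range(n_paradigms))

for t in range(0, 25, 5):
    print(t, f"{total_capacity(t):,.1f}")
# Each individual term flattens out ("S" curve), yet the running total grows
# roughly tenfold every 5 time units, as long as a new paradigm keeps arriving.
```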

Thus computation, as generally defined in all physical systems, involves the continual near-random evolutionary search for new, more complex, and more matter-energy and space-time efficient "computational substrates" (complex adaptive systems that can replicate, vary, and encode world models) over time. To the extent that our particular universe allows more STEM efficient architectures to emerge due to its particular physical laws, coupling constants, and boundary conditions, we can say that a preexisting universal developmental trajectory also exists, involving an ever-accelerating STEM density of computation.

This concept has to date been discussed by only a handful of careful observers of computational miniaturization. (Please let me know of any thinkers you know in this area). It will be explored in greater detail in my forthcoming book on accelerating change.

Consider carefully our planet's history of accelerating creation: first of pre-biological "evolutionary computational" systems (galactic-atomic and planetary-molecular-based), then genetic systems (DNA and cell-based), then neurologic systems (neuron-based), then memetic systems (linguistically communicable mental pattern-based), and presently, technologic systems (extra-cerebral pattern-based). Each of these evolutionary epochs has required substantially less matter, energy, space, and time to perform any standardized "computation," when defined as an encoded internal representation of specific laws or information gleaned from the external environment.

One example of the increasing efficiency of such comparative computation is the matter, energy, space, or time required to replicate an environmentally transmissive (i.e., "inherited," in the broadest sense) computational structure, be it a galaxy, a planetary type, a cell, a brain, a verbally communicated idea, a human written symbol, or an electronic representation of that symbol. Each successive form can be shown to be dramatically more STEM efficient in replication than the last. In other words, the structure of our universe has continually produced new, more local, more accelerated, more STEM efficient physical representational systems within which to encode and process the local environment.

The brief history of digital computers, which have themselves moved ever faster through five separate substrates over the last century, as Ray Kurzweil has observed, makes this process of "accelerating representational rearrangement" even clearer. In summary, the universal evolutionary development of information involves the continuous emergence of new physical-computational representational substrates, with the most computationally complex of these new emergent forms always exponentially increasing their information processing pace and universal understanding over time. There are occasional pauses and brakes on this progression, but again, the record shows that these pauses become rapidly briefer with time. Observing from a distance, we see a mostly smooth and unmistakable progression of exponential growth in local computational complexity.

But perhaps most amazingly, this growth process is most accurately described as gently hyperexponential (or "double exponential," or "hyperbolic"), because even the doubling times themselves gently shrink with each successive doubling. Surprisingly then, beyond Moore's law, or even Kurzweil's more generalized law of Accelerating Returns, present data suggest an undiscovered law of Locally Asymptotic Computation, pointing toward an imminent post-nanotechnological environment where local computational capacity becomes essentially (practically, proximally, but never "infinitely") unlimited, within a sharply finite amount of future time. We'll revisit this understandably controversial speculation, the developmental singularity, at a later time. These arguments help us to understand that we may continually evade miniaturization and resource limits as long as the universe is structured to permit such evasion. It is possible, though still not known, that we will see STEM efficiency gains all the way to the Planck scale, the apparent fundamental granularity of our universe's construction fabric, spacetime itself.
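
The "asymptotic" claim can be stated more precisely. If each doubling time is a fixed fraction of the previous one, the total time consumed by an unlimited number of doublings is a convergent geometric series, so the idealized model's capacity grows without bound within a finite horizon. A sketch of that reasoning, offered as an idealization rather than a calibrated prediction:

```latex
% If the n-th doubling takes time T_n = T_0 r^n, with 0 < r < 1, then the
% total time required for infinitely many doublings is finite:
\sum_{n=0}^{\infty} T_0\, r^{\,n} \;=\; \frac{T_0}{1-r} \;<\; \infty .
% Example: T_0 = 2 years and r = 0.95 give a horizon of 2/0.05 = 40 years.
% In any physical system the divergence is cut off early, which is why the text
% says "essentially" rather than literally unlimited.
```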



No Design Limit: Understanding Computational Autonomy

Third, there's the impressive design limit or "Complexity Wall" argument in software development, which proposes that while our software complexity exponentiates, human ability to understand and manage this complexity is rapidly nearing saturation, implying that current and future human-designed software projects will only be able to reach a relatively fixed, low level of complexity before they must undergo major and costly redesign. Anyone involved with software engineering on large projects is very familiar with the realities of this observation.

To understand the way that locally developing technological systems route around these problems, we must turn to another topic, computational autonomy. Just like the developmental spiral metaphor, the rapid growth of autonomy in technological systems is a topic that makes some systems theorists uncomfortable, so it has long been overlooked. Again, we at ASF hope to rectify that oversight in coming years.

In the known history of computer manufacture, every new generation of computational technology has become steadily more human-independent, autonomous, self-directing, self-improving, self-repairing, self-protecting, self-modelling, self-provisioning, self-replicating, and self-organizing than the last. In fact, since WWII and the advent of digital computers, humans are better characterized not as controllers, but as selective catalysts of accelerating technological development, as our increasingly automated systems learn to internalize the intelligence necessary to their own reproduction, variation, interaction, and selection in the natural environment.

For decades now, we've been designing technological systems that are, in real practice, beyond any single individual's ability to manage. Today it takes large teams of people not only to build, but to understand and operate, a 747 or a supercomputer. But even more importantly, in the last decade we've begun to move beyond the modular, reductionist, non-interactionist paradigm of the 747 into chaotic, evolutionary computational approaches to technological development. Systems like neural nets, genetic algorithms, and evolutionary programs, systems that increasingly direct their own self-improvement in both hardware and software, have begun to appear. The internal workings of such systems are as unclear to us as the workings of our own unconscious mind. Their continual improvement is fueled much less by human agency than by environmental and physical opportunity.
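
For readers unfamiliar with these techniques, here is a deliberately tiny genetic-algorithm sketch (the toy fitness target, population size, and mutation rate are invented for illustration) showing the basic loop of replication, variation, and selection running without a human-designed solution anywhere in the middle:

```python
import random

# Minimal genetic algorithm: evolve a bit string toward a toy target.
# All parameters here are arbitrary illustration values.

TARGET = [1] * 20                                   # toy "environment"

def fitness(genome):                                # match to the environment
    return sum(g == t for g, t in zip(genome, TARGET))

def crossover(a, b):                                # replication with recombination
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.05):                      # variation
    return [1 - g if random.random() < rate else g for g in genome]

population = [[random.randint(0, 1) for _ in range(20)] for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)      # selection
    if fitness(population[0]) == len(TARGET):
        break
    parents = population[:10]
    population = [mutate(crossover(random.choice(parents), random.choice(parents)))
                  for _ in range(50)]

best = max(population, key=fitness)
print("generations:", generation + 1, "best fitness:", fitness(best))
```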

Furthermore, while we have seen a "Complexity Wall" emerge in software, none has emerged in hardware design, again for still-poorly-understood reasons. Perhaps this is because much of today's software is human-designed, and thus subject to human conceptual limits. But intrinsic hardware capacities are perhaps better characterized as human-discovered, so they can take much more direct advantage of inherent STEM efficiency gradients built into universal structure. As computational autonomy increases, we are seeing human-designed hardware and software give way to machine-designed versions of these products, and such progress has been particularly impressive at the hardware level. For more on autonomy trends and the apparent future of accelerating design, read Increasingly Autonomous Technological Evolutionary Development Will Lead to the Singularity.

Autonomously replicating "complex adaptive systems" are intrinsically natural, and we can consider galaxies, suns, planets, plants, animals, and human beings to be among them. As futurist John McHale (The Future of the Future, 1969) reminds us, our technology is as organic as a snail's shell, a spider's web, or a dandelion's seed, and we are only now beginning to understand this. Likewise, our current human-computer symbiosis is as natural as the flower-bee symbiosis, where the pollinating bees, built on top of the far slower plant life, nevertheless stabilize their existence, and receive nectar for their effort. In our case, computers receive the "nectar" of artificial selection toward increasingly high level function, while humans receive the benefit of increasingly more powerful and fundamental solutions to human problems (as well as the creation of new technological problems that cannot be ignored). Computers, like busy bees, are developing their own complexity millions of times faster than us in this symbiotic relationship, and will very soon far exceed historical human capacity.

Furthermore, certain of these natural, bottom-up technological emergences (e.g., the plow, the wheel, the baked-mud brick, the telegraph, the telephone, the railroad, electricity, the internal combustion engine, the transistor, the integrated circuit, the internet) are responsible for profound improvements to the human condition. As the next great leap forward, we suggest that the coming Conversational Interface (CI), a transitional developmental emergence that we predict will arrive on Earth circa 2020, will usher in a human social environment so different from today's that it needs a new descriptive phrase. We propose that the conversational interface will move us from our present Information Age to what may best be called a "Symbiotic Age" of human-machine interaction.

As we argue elsewhere, it seems likely that a functional but primitive conversational interface network must arrive a number of decades before a true A.I. singularity can emerge. As for the coming A.I. itself, it is becoming increasingly clear that it will not be "artificial" in any relevant sense, but is perhaps better re-labeled as Autonomous Intelligence, a self-improving, self-stabilizing computational system rapidly developing its own volitional self-understanding within the technological substrate.


No Demand Limit: Autonomy and Incompleteness

Finally, there's the demand limit argument, which suggests that we are rapidly approaching a period where human society is just not going to need all that extra computational capacity, and as our demand for exponential price-performance slows down, so will the exponentiation. After all, aren't we all buying less computer power these days, as we find our latest computer systems increasingly sufficient for our limited human needs?

It is true that there are numerous specific instances where further computation has declining utility within a particular environment. Consider, for example, that once you have obtained a rough map of the Earth you'll pay much less for another, slightly more detailed map. Your primary computational interests will shift at that point to other domains. This controversial concept, computational closure, will be discussed further at a later time. But even given closure, local computational demand has never been observed to be exhausted as a general process within all computationally accessible local environments.

To date the record has indicated that the most successful local developmental complex adaptive systems can never saturate their need for local computation. So while it is likely true that, at present computational growth rates, human organisms may soon have access to more technological computational capacity than we will need to live comfortable lives, it appears entirely untrue that autonomous technological systems will exhaust their own need for this perennially scarce local resource. There will likely be a continually increasing, self-catalyzed demand for new computational capacity by such systems, as long as increased computational complexity provides a competitive advantage in the digital environment, in other words, as long as there are still computationally accessible elements of this universe which remain unknown. As the mathematician and logician Kurt Gödel demonstrated in 1931, informational incompleteness appears to be a fundamental feature of analytical reality. It is not something that will disappear when local intelligence is a million, or a trillion, times more sophisticated. The clarity and trajectory of our search, however, become ever more refined, which seems to be purpose enough for the quest.
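
The theorem being invoked here can be stated compactly (a standard modern paraphrase):

```latex
% Gödel's first incompleteness theorem (1931), informally stated:
\begin{gathered}
\text{If } T \text{ is a consistent, recursively axiomatizable theory that includes basic arithmetic,}\\
\text{then there is a sentence } G_T \text{ such that } T \nvdash G_T \text{ and } T \nvdash \lnot G_T .
\end{gathered}
% No amount of added computational power removes this limit; it only lets a system
% pose and settle ever more of the questions that are decidable for it.
```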

In summary, the human-dependent "Complexity Wall" and other proposed near-term limits may have great economic significance for several decades to come, while we remain grounded in primarily human-constructed, and only secondarily evolutionary, computational systems. These issues are currently very strategically important to information technology developers and society at large, and human factors and constraints will certainly form temporary barriers to the emergence of machine intelligence of a particular nature by a specific near-future date. But at the same time, it is becoming abundantly clear that such proximate factors will in no way block the smooth hyperexponential development of computational complexity as a general phenomenon of our local environment. There are deep and still poorly understood universal statistical developmental processes at work.

As the record of the developmental spiral shows, even the great catastrophes that have occurred in the past have only stimulated, rather than slowed, the development of global resiliency and the increase in average distributed complexity of the most sophisticated computational systems on our planet. We will explore this very optimistic realization in more depth at a later time.

The closer we look, the more we discover the astonishing, surprising, and (for some at least) alarming irrelevancy of all currently proposed limits to the ongoing acceleration of local computation. As we have said, something very curious appears to be occurring, and if true, ours is the generation that will no longer be able to ignore the phenomenon of continuing technological acceleration.

We, and more particularly, our technological creations, are on a wild ride to an interesting destination—the technological singularity—a local rate of computational change so fast and powerful that it must have a profound and as-yet-unclarified universal effect.

As a side effect of this hypergrowth, biological human beings will not be able to meaningfully understand the computer-driven world of the near future unless they are able to make some kind of transition to "transhumanity," an environment with greater-than-human computational capacity, and a new, as yet undetermined human-machine symbiosis. How this transition will and should occur, and how it is presently occurring, is a subject of spirited and insightful debate.