Interview with John Smart, 2003
Questions by Phil Bowermaster, Speculist.com
Phil Bowermaster publishes speculative essays, news, and opinion — along with interviews with some of the world's leading futurists — on his website, The Speculist. John Smart is a developmental systems theorist who studies science and technological culture with an emphasis on accelerating change, computational autonomy, and a topic known in futurist circles as the technological singularity. He is chairman of the nonprofit Acceleration Studies Foundation (ASF), whose websites (Accelerating.org, AccelerationWatch.com) aim to help individuals better understand and manage the physical and computational phenomenon of accelerating change. John lives in Los Angeles, CA and may be reached at feedback{at}accelerating{dot}org. If you have an interest in a multidisciplinary understanding of accelerating change, you are invited to join ASF's free quarterly newsletter, Accelerating Times, edited by John and presently read by 2,000 future-oriented professionals in twenty-three countries.
1. The present is the future relative to the past. What’s the best thing about living here in the future?
This amazing state of affairs is due almost entirely to advances in science and technology, and the profoundly civilizing way that science and technology interact with us.
There are both "psychical"/informational and "physical"/material dimensions to every complex system. Teilhard de Chardin's perspective emphasizes the holistic, informational yang to the reductionist, materialist yin of science and technology. Both perspectives are needed to help us understand change and how the world improves.

2. What's the biggest disappointment?

For me, it is our continuing collective lack of understanding of the central, unequalled role that science and technology play in making a better future. We also still fail to realize, in general, that Information and Communication Technologies (ICT) are the central drivers of all scientific and technological change. If we really understood this, every government with resources to spare would have "great goals" in sci-tech citizen education and ICT development driving their public agenda. Of course, most science and technology research and development occurs in a "bottom up" fashion, and can't be made to order. But national policy can create conditions that are ideal for bottom-up experimentation, and great goals, if chosen wisely, can also accelerate global development. What is the greatest goal currently unifying the United States' efforts in science and technology development? I don't have an answer to that question, and I believe that's a failure of appropriate vision.
The internet, transforming before us, will soon become a planetnet, a system so rich, ubiquitous, and natural to use that it will be a semi-intelligent extension of ourselves, available to us at every point on this sliver of planetary surface that we call home. That will be very empowering and liberating, and at the same time, civilizing. Our human biology doesn't change, but we are creating an ever more intelligent "house" to surround and sublimate the impulsive human, a house that will just a few decades hence hold unimaginable subtlety and sophistication.
Our goals should try to reflect this natural developmental process as much as our collective awareness will allow. It is my contention that the internet is the territory within which our most achievable and important current great goals lie. A number of technologists have proposed that there are two remaining bottlenecks to the internet's transformation into a permanent, symbiotic appendage to the average citizen. The first is the lack of ubiquitous, affordable, always-on, always-accessible broadband connectivity for all users. The second is the lack of an alternative to a keyboard-based interface for the user's typical interaction with the system.
Creating persistent virtual worlds that mirror the physical world, and simulation software that allows global human collaboration within digital worlds, are also important challenges, but in this case the electronic gaming industry has been developing these technologies more rapidly than the issues of internet infrastructure and CUI software above. Just as the transcontinental railroad was a national great goal of the 1800s, getting affordable broadband to everyone in this country by 2010, and a first-generation CUI by 2015, appear to be the greatest unsung ICT development goals of our generation. Bringing broadband and basic CUIs everywhere within a generation would throw gasoline on the fire of global human innovation. This level of internet access would link all our wisest minds, including even those elders who rarely use computers today, into one real-time community. The linguistic control of all our machines (appliances, infrastructures, cars, homes, internet e-commerce systems) afforded by the CUI would dramatically improve our average productivity.

This is a planetary issue, given the unprecedented human productivities being unleashed by internet-aided manufacturing and services globalization since the mid-1990s. In fact, a case can be made that we might economically benefit more in the U.S., even today, by getting greater broadband penetration not to all our own adult citizens, but to the youth of a number of trade-oriented, pro-capitalist countries in the developing world. Are we able to think that globally and pragmatically about our collective future? Are we ready to move beyond the fiction of national development to the reality of the accelerating species development that is occurring? That degree of self-interested, acceleration-aware prioritization may not yet be politically salable as a goal to be funded by U.S. tax dollars, at least as a publicly stated agenda. But I predict that it increasingly will be, in a world that already pools its development dollars for a surprising number of transnational projects, such as international DoD-funded work in quantum computing. At any rate, we can at least push for accelerated efforts in international technology transfer in internet-related areas, concurrent with our domestic development agenda.

If you've never heard of a CUI before, take a browse through the links above. Your father used a CLI (command-line interface). You use a GUI (graphical user interface). Your child (and you) will primarily use a CUI (conversational user interface) to talk to the computers in your environment. Tomorrow's CUIs will very likely not be based on high-level artificial intelligence, but rather on simple, statistically driven algorithms searching large bodies of archived human conversation, the way that Google and other search programs work today. These technologies, following conservative trends of performance improvement, will allow us to carry on conversations with our computers using a simple but effective pidgin grammar as early as ten years from today. Quite a ways after the CUI emerges, your grandchildren may also use a NUI (neural user interface), a biologically inspired, self-improving machine intelligence. More on that later.
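To make the statistical, archive-searching approach described above concrete, here is a minimal sketch in Python. It is a toy illustration only, not a description of any real CUI product: the tiny archive, the bag-of-words similarity, and the 0.2 confidence threshold are all invented for the example.

```python
# A toy sketch of a statistical, corpus-driven conversational interface:
# given an archive of past human exchanges, answer a new utterance by
# retrieving the reply whose prompt is most similar. The archive and
# threshold are hypothetical placeholders.
from collections import Counter
import math

ARCHIVE = [  # (heard utterance, observed human reply) pairs
    ("turn on the kitchen lights", "okay, kitchen lights are on"),
    ("what is the weather today", "today looks sunny and mild"),
    ("play some quiet music", "starting your quiet evening playlist"),
]

def bag_of_words(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def reply(utterance: str) -> str:
    # Pick the archived exchange whose prompt best matches the new utterance.
    scored = [(cosine(bag_of_words(utterance), bag_of_words(p)), r)
              for p, r in ARCHIVE]
    best_score, best_reply = max(scored)
    return best_reply if best_score > 0.2 else "sorry, say that another way?"

print(reply("please turn the kitchen lights on"))
```

A real system of this kind would index millions of archived exchanges and use far richer statistics, but the retrieve-the-best-matching-conversation principle is the same.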
Even with the inefficiencies of large government, just a few billion dollars of annual public targeted grants, with private matching funds and global public relations, might accelerate the emergence of a functional CUI by a decade. That would likely be the best-spent money in our nation's entire R&D budget. A less politically likely but still plausible "International Manhattan Project," involving a number of cooperating and competing international centers and a multi-billion dollar annual public-private commitment, might accelerate a mature CUI's arrival by twice this amount. Many of my computer scientist colleagues, knowing the infant state of the field today, think that developing and deploying a CUI powerful enough to be used by most people for most of their daily computer interactions by 2020 is a very challenging vision. The CUI is a problem we have been attacking for fifty years, but it has also been demonstrated that this problem benefits greatly from such accelerating developmental trends as increasing storage of human conversation on the internet, increasing processor and dynamic memory (cache) capacity, increasing connectivity, and increasing parallelism. The more context-specific grammars we get, the more computers can learn how to use them to have statistically useful conversations with humans about a host of different tasks.
Now it seems time to assemble a lobby to replicate this excellent effort. The longer we choose not to declare broadband and the CUI as developmental goals and support them with escalating innovation and consistent funding, the longer we delay their arrival. Another benefit of declaring this goal, better collective insight and foresight, may be even more important than the time we save. By declaring good developmental goals, we learn to see the world as the multidimensional information processing system that it truly is, not simply as the collection of human-centric dramas we often fancy it to be. This makes it easier for us to find ways to catalyze the beneficial accelerations occurring in almost all of our technologies, and ways to block and delay harmful applications long enough for stabilizing immune systems to mature.

With better foresight we can also discover the common technological innovations and infrastructures that improve the human environment. For example, just about all of our cherished social goals seem dependent on the quality and quantity of information getting to the individual. You can't fix an antiquated, politically deadlocked educational system, for example, without a functional CUI, which would educate the world's children far more cheaply and broadly than any human-based academy ever could. You can't create a broadly accessible, effective, or affordable global health care system without this infrastructure. You can't create a broadly effective, preventive security system (consider the deterrent effect of a simple conversation between law enforcement and the lawbreaker, in the right context). Computer networks, through the minds they connect and the social and digital ecologies they foster, will soon educate us and make us more productive than we ever dreamed we could be. I think it's time we give the appropriate credit to the global ICT transformation now taking place, and align our national priorities with that transformation. If we don't, other countries will take the lead. Look to China, whose infotechnological revolution is now well under way, or even to India, which recently declared a $2.7 billion, four-year program to build an achievable proto-CUI by 2007.

3. Assuming you die at the age of 100, what will be the biggest difference between the world you were born into and the world you leave?

This is a complex question. The world seems to progress by fits and starts, rapid punctuations separated by long droughts of less revolutionary equilibrium states. Fortunately, the equilibrium periods always seem to get shorter with time, apparently because our planet's technological intelligence is learning in an increasingly autonomous fashion, at a rate that is at least ten millionfold faster than our own. So what will be the biggest punctuation of my own lifetime? It seems to me that we are presently chugging along through the equilibrium flatlands in the last third of an Information Age. This is a time when the punctuations we see (e.g., the minicomputer, the personal computer, the internet), while very important, are not yet fully global or "systemic" in their effect, a transitory period that will likely be seen in hindsight as running for about seventy years, from 1950 to 2020.
I expect this to be followed by a punctuated transition to a shorter and even more productive Symbiotic Age, running perhaps thirty years, from 2020 to 2050, followed by an even shorter Autonomy Age of perhaps ten years, one which has the computational power to finally precipitate the technological singularity. My divisions are arbitrary, but the general compression is apparently not. I see each equilibrium era as part of an accelerating spiral of punctuated evolutionary development. Consider skimming my web page on the Developmental Spiral if you'd like to explore one basic proposal, among many, for a spiral of accelerating emergences leading to generalized human-surpassing artificial intelligence within this century.

To answer the question, I think that the coming transition to linguistically symbiotic computing systems, the period surrounding and just after our entry to the CUI era circa 2020, will be the biggest "state change" difference I'll see in my biological lifetime. The Symbiotic Age will be a time when almost all of us will consider computers actually useful (many today don't), when the large majority of us want computers with us all the time, and when we begin to feel naked without them, as naked as a hominid without clothing technology would be today in every culture on the planet. This will occur when we all have what futurist Alex Lightman (Brave New Unwired World, 2002) calls "wireless everywear," always-on wearable access to our talking computer interface, and when computers start to do very useful, high-level things in our lives.
At that point, our computers will become our best friends, our fraternal twins, and human beings will be intimately connected to each other and to their machines in ways few futurists have fully grasped to date. Read Ray Kurzweil's The Age of Spiritual Machines, 1999, for one excellent set of longer-term scenarios. Read Rosalind Picard's Affective Computing, 1997, and B.J. Fogg's Persuasive Technology, 2002, for some nearer-term ones. Today's early modeling systems, like FACS for reading human facial emotion, will be improved and integrated into your personalized CUI, which will monitor both internal and external biometrics to improve our health, outlook, and performance. We'll communicate intelligently with all our tools, giving constant verbal feedback to their designers. We'll spend most of our waking lives exploring a simulation space (simspace) that is so rich, educational, entertaining, and productive that we will call today's mostly non-virtual world "slowspace" by comparison, a place many of us will drop back into only when we need a break from working, learning, networking, creating, and exploring. Slowspace will remain sacred, and close to our hearts, but it will begin to become secondary and functionally remote, like the home of our youth.

Presently, evolutionary computation, as represented by such annual conferences as GECCO, is a very promising but still bleeding-edge domain pursued by perhaps 30,000 programmers, primarily for research applications. For those who have any systems understanding of biology, it is clear that our current genetic and evolutionary systems are, unfortunately, only nominally "biologically inspired." But some time after 2020, when massively parallel computing becomes economically feasible, I expect we'll see this field grow to millions of programmers fielding some highly autonomous, self-correcting architectures, primarily for commercial applications. I presently expect it will take about thirty years of active experimentation, 2020-2050, for these systems to become deeply biologically inspired. For this I believe they will need to be not simply evolutionary, but evolutionary developmental, self-tuning their parameters and self-unfolding their architectures in continuously iterating developmental cycles (a toy sketch of this idea appears a few paragraphs below).

By the time that becomes a mature approach, perhaps circa 2050, we can expect another punctuation to an Autonomy Age, a time when our large-scale computing systems will begin to exhibit surprisingly advanced features of higher-level human intelligence. That will be a very exciting time. During this era, even in our largest research and commercial labs, we can expect machine intelligence to continue to blunder into dead ends everywhere, the cul-de-sacs that are the typical result of chaotic evolutionary searches. But such systems will very quickly be able to reset themselves with little human assistance, to try a new evolutionary developmental approach in the search space. Also, the extra hardware and software resources they can recruit to solve a difficult problem are likely to be enormous by that time. I wouldn't expect that very fruitful period of self-catalyzing discovery to last very long.

We will then have arrived at the technological singularity, a phase change, a place where the technology stream flows so fast that new global rules emerge to describe the system's relation to all the slower-moving, simpler elements in its vicinity, including our biological selves. That doesn't mean we won't be able to understand the general rules that emerge.
On the contrary, most of these may be obvious to us, even now. And it doesn't mean the transition will be anxiety-ridden, disruptive, or alienating. We need to realize that long beforehand we will consider many of the most complex technologies in our environment in highly personalized, intimate terms, as cybernetic extensions of our biological selves. But it means that many of the particular mental states occurring within our technological architectures will become impenetrable to biologically based intelligence. The machines, for their part, will have to take an increasingly active role in interpreting those states to our pre-singularity minds.

We can also expect that a human-surpassing general artificial intelligence will be a physical system, and if it is physical, much of its architecture must be simple, repetitive, and highly understandable even by biological minds. Consider, for example, just how much we know about the neural architecture that creates our own consciousness, without being able to predict consciousness emergence, or to comprehend the nature of conscious "qualia" from first principles in physics or information theory. So it must be with the A.I.s to come—while much of their structure will be tractable and tangible to us in a reductionist sense, much of their holistic intelligence will become impenetrable to our biological minds. This impenetrability is nothing mystical; we already see it in the way the emergent features of any complex technology, such as a supercomputer, an automated refinery, a robotic factory, or a supply chain management system, are already poorly comprehended by all but those few of us involved in its analysis or design. The difference will be that the emergent intelligence of virtually all planetary technology will begin to display this inscrutability, not just to average users, but even to the experts involved in its creation. This is a slow, soft transition to inscrutability that we are already well within.

Now it's time for some deeper speculation. Consider for a moment the following presently unprovable assertion: if ethics are a necessary emergence from computational complexity, then it follows that these systems will be ethically compelled to minimize the disruption we feel in the transition. Yes, most of the self-improvement of self-aware A.I.s will occur on the other side of an event horizon, beyond which biological organisms cannot directly perceive, only speculate. Yet at the same time, we will notice that our technologies continue to gently become ever more seamlessly integrated with our biological bodies, so that when we say we don't understand aspects of the emergent intelligence, it will increasingly be like saying we don't understand emergent aspects of ourselves. But unlike our biological inscrutabilities, the technological portions of ourselves that we don't understand will be headed very rapidly toward new levels of comprehension of universal complexity, playing in fields forever inaccessible to our slow-switching biological brains. My current estimate for that transition would be around 2060, but that is a guess. We need funded research to be able to achieve better insight, something that hasn't yet happened in the acceleration studies field. The generation being born today will find that a very interesting time. At the same time, as I have said, I expect they won't consider it to be a perceptually disruptive time.
A time of massive transformation, yes, but very likely significantly less stressful than prior punctuations, given the way computational complexity creates its own increasingly fine-grained stability, if one looks closely at the universal developmental record. The tech singularity certainly has a lot of significance for human beings, as after that date our own biology becomes a second-rate computational system in this local environment. This emergence, obvious to many high school students today, still irritates, angers, and frightens many scholars, who have attempted to dismiss it by calling it "techno-transcendentalism," "cybernetic totalism," "hatred of the flesh," "religious belief," "millennialism," or any number of other conveniently thought-stopping labels. But from a universal perspective, we can note that every singularity seems to be built on a chain of prior singularities. Considering the chain that has led to human emergence, each appears to have rigorously preserved the local acceleration of computational complexity. The coming technological singularity looks like just another link in a very fast, steep climb up a nearly vertical slope on the way to an even more interesting destination. My best present guess for that destination, as I'll discuss later, is the developmental singularity, a computational system that rapidly outgrows this universe and transitions to another domain. Fortunately, there are many practical insights we can gain today from developmental models, as they testably predict the necessary direction of our complex systems. Our nonprofit organization, the Acceleration Studies Foundation, hopes to see more funding and institutional interest in these topics in coming decades. We invite you to join us in this task if you also find these topics fascinating and potentially important.
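Here is the toy sketch promised above: a minimal, hypothetical Python illustration of the "evolutionary developmental" style of computation described earlier, in which each genome carries and tunes its own mutation-rate parameter and the search is re-seeded in repeated developmental cycles. The bit-string problem, population sizes, and tuning rule are all invented for the example.

```python
# A toy "evolutionary developmental" search: each genome carries its own
# mutation rate (a self-tuned parameter), and the population is repeatedly
# re-seeded ("developmental cycles") from the best survivor.
import random

TARGET = [1] * 20  # toy problem: evolve an all-ones bit string

def fitness(bits):
    return sum(b == t for b, t in zip(bits, TARGET))

def mutate(genome):
    bits, rate = genome
    new_bits = [b ^ (random.random() < rate) for b in bits]
    # Self-tuning: the mutation rate itself mutates, within bounds.
    new_rate = min(0.5, max(0.001, rate * random.uniform(0.8, 1.25)))
    return (new_bits, new_rate)

def developmental_cycle(seed, pop_size=30, generations=40):
    population = [mutate(seed) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda g: fitness(g[0]), reverse=True)
        parents = population[: pop_size // 5]          # truncation selection
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return max(population, key=lambda g: fitness(g[0]))

best = ([0] * 20, 0.1)
for cycle in range(5):                                 # iterated development
    best = developmental_cycle(best)
    print(f"cycle {cycle}: fitness {fitness(best[0])}, rate {best[1]:.3f}")
```

Nothing here is deeply biologically inspired, of course; the point is only to show parameters being evolved alongside the solutions they govern, and the search unfolding in repeated cycles rather than one long run.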
What I have just said goes against the dominant dogma in some futurist and transhumanist circles, promoted by indiscriminately optimistic thinkers and a complicit biotech industry, both of which are strongly motivated to believe that we will see a powerful "secondary acceleration" in biotech, carried along by our primary acceleration in infotech. But while we are already seeing a very dramatic acceleration in biotech information (or more generously, knowledge), I humbly suggest that our existing knowledge of biological development already tells us that we will be able to use this information to make only very mild and underwhelming changes in biological capabilities and capacities, almost exclusively changes that "restore to the mean" those who have lost their ability to function at the level of the average human being. As I explain in Understanding the Limitations of Twenty-First Century Biotechnology, there are a number of fundamental reasons why biotech, aided by infotech, cannot create accelerating capacity or performance gains within biological environments.

It is true that with some very clever and humane commercializations of caloric restriction and a handful of other therapies we might see twenty times more people living past 100 than we see today, people with fortuitous genes who scrupulously follow good habits of nutrition and exercise. That is certainly a noble and worthwhile goal. But we must also remember that virtually no one lives beyond 100 today, so a 20X increase is still only very mild in global computational and humanitarian effect. This will add to our planetary wisdom, and is something to strive toward, but it is not a disruptive change, for deep reasons to do with the limitations of the biological substrate. Furthermore, genetic engineering, as I discuss in the link above, cannot create accelerating changes using top-down processes in terminally differentiated organisms like us. This intervention would have only mild effects even if it could get beyond our social immune systems to the application stage, which in most cases it thankfully cannot. Perhaps the most disruptive biotech change we can reliably expect, a cheap and effective memory drug that allows us temporary, caffeine-like spikes in our learning ability, followed by inevitable "stupid periods" where we must recover from the simplistic chemical perturbation, would certainly also improve the average wisdom of human society. But even this amazing advance, should it transpire, would not even double our planetary biological processing capacity, something that happens in information technologies every 18-24 months. In summary,
many decades before the tech singularity arrives I expect to either be
chemically recycled (most likely), or to be in some kind of suspended
animation (significantly less likely, at present). I'd consider the latter for myself only if a number of presently unlikely conditions transpire: 1) neuroscience comes up with a model that tells us what elements of the brain need to be protected to preserve personality (we currently have no such model), 2) cryonics researchers can prevent or show the irrelevance of the extensive damage that presently occurs during freezing (they currently cannot), 3) most of my friends are doing it (they are currently not), and 4) I expect to be revived at least to some degree by intelligent machines not in some far future, but soon after I die, while many of my biological friends or children are still alive. [2005 Note: I add an overlooked fifth criterion: 5) It would have to be cheap enough to be a negligible fraction of my net worth upon death, so that it didn't materially affect the fortunes of my descendants (at $30,000 for the lowest-cost option today it is presently much more than I'd be interested in diverting from my inheritors, either in lump sum or through life insurance).]

The second and the fourth conditions deserve a bit more explanation. As to the second condition, we do not yet know to what extent the brain's complexity is dependent on the intricate three-dimensional structure in which it emerges. That structure, today, is grossly deformed and degraded in the freezing process, which currently leads both to destruction (via stochastic fusion) of at least some neural ultrastructure, and to intense cellular compression (and erasure of at least some membrane structure, again by fusion) as ice forms in the extracellular neural interstices. Will we come up with new preservation protocols? We can always hope.

The reason the fourth condition of rapid reanimation is important to me is that I suspect that once I woke up from any A.I.-guided reanimation procedure, in order to usefully integrate into a significantly post-singularity society I would soon choose to change myself so utterly and extensively that it would be as if I had never existed in biological form. So what would be the point? If, on the other hand, I had a reasonable expectation that I could be brought back, at least in part, while those who cared about me could benefit, that would be a different story. [2005 Note: It is beginning to look like this condition might be satisfied in coming years. Recent models of the brain suggest our cerebral cortex stores the memories of our lives in a potentially machine-readable form. By 2050, we might see some people preserving their brains, the most accurate records of their lives, primarily so that not their conscious self, but their life memories could be "read" and uploaded into their digital selves, to provide valuable experience and advice for their loved ones relatively soon after their own death. Those who make detailed scrapbooks of their lives today, those who believe full transparency (including the sharing of one's "secrets") improves social wisdom, and those who are concerned with the loss of all the rich stories of their ancestors might see the value in this idea.]

Even as cryonics advances in coming decades, I think we are nearly ready to move beyond the fiction of our own biological uniqueness having some long-term relevance to the universal story. I expect our future information theory will inform us of the suboptimality of personal biological immortality.
For those die-hard individualists who say "screw optimality," I suggest that we'll eventually be educated out of that way of thinking. For me, the essence of individual life is to use one's complexity in the matrix in which it was born. Attempts to transmit it more than a short distance away from that environment are bound to be exercises in frustration, missing one of the basic motives of life, to do great things with your contemporaries. Ask any Fourth World adult who is suddenly transplanted to New York City and he'll tell you the same.

4. What future development that you consider most likely (or inevitable) do you look forward to with the most anticipation?

I look forward greatly to the elimination of the grosser forms of coercion, dehumanization, violence, and death that occur today. Admittedly, these seem to be processes that will always be with us at some fundamental level. Computational resources will very likely remain competitive battlegrounds in the post-singularity era, because we inhabit a universe of finite-state computational machines pitted against all the remaining unsolved problems, in a Gödelian-incomplete universe. And bad algorithms will surely die in that environment, far more swiftly than less fit organisms or ideas die today. But when
a bad idea dies in our own minds, we see that as a lot less subjectively
violent than our own biological deaths. Over time, love, resiliency, and
consciousness win. In many ways, I think the collective consciousness of our species has come to understand that we have already achieved a very powerful degree of informational immortality. By and large, our evolutionary morality guides us very strongly to act and think in that fashion. I look forward to the individual consciousnesses of all species on this planet gaining that victory in coming decades, including the coming cybernetic species we are helping to create.

Sci-tech systems are not alien or artificial in any meaningful sense. As John McHale said (The Future of the Future, 1969), technology is as natural as a snail's shell, a spider's web, a dandelion's seed—many of us just don't see this yet. Digital ecologies are the next natural ecology developing on this planet, and technology is a substrate that has shown, with each new generation, that it can live with vastly less space, time, energy, and matter (what can be called STEM compression, and formerly referred to as "MEST compression") than we biological systems require. Our neural wetware, a much more terminally differentiated substrate, nearing the end of its useful lifespan, has a relatively fixed STEM efficiency from generation to generation. Technology is the next organic extension of ourselves, growing with a speed, efficiency, and resiliency that must eventually make our DNA-based technology obsolete, even as it preserves and extends all that we value most in ourselves. I can't stress enough the incredible efficiencies that emerge in the miniaturization of physical-computational systems. If STEM compression trends continue as they have over the last six billion years, tomorrow's A.I. will soon be able to decipher substantially all of the remaining complexities of the physical, chemical, and biological lineage that created it, our own biological and conscious intricacies included, and do all this with nano and quantum technologies that we consider almost impossibly, "magically" efficient. In the same way that the entire arc of human civilization in the petrochemical era has been built on the remains of a small fraction of the decomposing biomass that preceded us, the self-aware technologies to come will build their universe models on the detritus of our own twenty-first century civilization, perhaps even on the trash thrown away by one twenty-first century American family. That's how powerful the STEM compression of computation appears to be in our universe. It continually takes us by surprise. I am optimistic that these still poorly characterized physical trends will continue to promote accelerating intelligence, interdependence, and immunity in our informational systems, and I look forward to future work on understanding this acceleration better than we do today.

5. What future development that you consider likely (or inevitable) do you dread the most?

I worry that we will not develop enough insight to overcome our mounting fear of the technological future, both as individuals and as a nation. To paraphrase Franklin Roosevelt, speaking at the depths of the American Great Depression, the only thing we have to fear is fear itself. Many in our society have entered another "great depression" recently. This one is existential, not economic. A century of increasingly more profound process automation and computational exponentiation has helped a growing number of systems thinkers to realize that humanity is about to be entirely outpaced by our technological systems.
We are fostering a substrate that learns multi-millionfold faster than us, one that will apparently soon capture and exceed all that we are. Again, Roosevelt's credo is applicable. If we ignore the grand transition we are in, if we push it into our subconscious, we will end up being dragged by the universe into the tech singularity, still kicking and screaming and still fighting petty battles with each other all the way through, never looking up from our shallow and egoistic self-absorption to understand our appropriate place in the universal scheme of emergences. We should be walking into this grand transition upright, proactive, and self-controlled, picking our own path through the accelerations ahead. We should understand how small we are in the scheme of things, and our unique role as a link in a chain of accelerating developments. We should be ready to pass on the baton to our electronic progeny, and we should be crafting them to be even better than we are at all the things we hold valuable.

I'm concerned that we will decide later, rather than earlier, to learn deeply about the developmental processes involved in accelerating change; that we will rely on our own ridiculously incomplete egos and partial, mostly top-down models to chart the course, rather than come to understand the mostly bottom-up processes occurring all around us. I'm concerned we won't realize that humans are, and must always be, like nearsighted termites, building a massive mound of technological infrastructure that is already vastly more complex than any one human understands, and unreasonably stable, self-improving, self-correcting, self-provisioning, energy- and resource-minimizing, and so on. Soon a special subset of these systems will be self-aware, and the caterpillar of technological intelligence will turn into a butterfly, freeing the human spirit. Gaining such knowledge about the developmental structure of the system would surely allow us to chart a better evolutionary course on the way.
We may rise to recognize the vision-setting responsibility that comes with our place in the world. Or we may continue to subconsciously fear science and technology, as we have intermittently over the last century, and tell stories that shift the blame rather than cut through to the reality. Technology, rather than human choice, has been simplistically scapegoated for the World Wars, the Great Depression, the Cold War, Vietnam, rich/poor divides, global pollution, urban decay, you name it. If we wish to rise above reactionary dialog, we may decide that the wise use of science and technology must be central to our productivity, educational systems, government and judicial systems, media, and culture, the way they so obviously were when we were a new nation.

Fortunately, there are signs that other countries, such as Taiwan, China, Japan, South Korea, Thailand, and Singapore, are choosing this course of action. Several of these countries continue to operate with glaring deficits in the political domain. Yet they are experiencing robust growth due to enlightened programs of technological and economic development. Nevertheless, none of these countries is yet successfully multicultural enough, or has sufficiently well-developed political immune systems (institutionalized pluralism, pervasive tort law, independent media, mature insurance systems, tolerant social norms), to qualify as a leader of the free world at the present time. It is telling that the owners of today's rapidly growing Chinese manufacturing enterprises find it most desirable to keep their second homes in the United States, due to our special combination of unique social advances and technological development. Much of the world's capital still flows first to the U.S., to seek the highest potential return. But for how long can this continue if we remain lackluster in our technological leadership, riding on our prior political and economic advances?
We must
lead on a platform of proactive sociotechnological change, not simply
security, or we remain guilty of resting on our accomplishments. As long as we define ourselves by our fear of transformational technologies, and our dread of being exceeded by the future, we will continue in ignorance and self-absorption, rather than wake up to our purpose: to understand the universe, and to shape it in accord with the confluence of our desires and permissible physical law. For over a century we've seen successive waves of increasingly more powerful technologies empower society in ever more fundamental ways. Today's computers are doubling in complexity every 12-18 months, creating a price-performance deflation unlike any force previously seen on Earth. Yet we continue to ignore what is happening, continue to be too much a culture of celebrity and triviality, continue to make silly extrapolations of linear growth, continue to bicker over concerns that will soon be made irrelevant, and continue to engage in activities that delay, rather than accelerate, the obvious developmental technological transformations ahead.

I am also concerned that we may continue to soil our own nests on the way to the tech singularity, continue to take shortcuts, assuming that the future will bail us out, forgetting that the journey, as much as the destination, is the reward. Consider that once we arrive at the singularity it seems highly likely that the A.I.s will be just as much on a spiritual quest, just as concerned with living good lives and figuring out the unknown, just as angst-ridden as we are today. No destination is ever worth the cost of our present dignity and desire to live balanced and ethical lives, as defined by today's situational ethics, not by tomorrow's idealizations. If I can't convince the Italian villager of 2120 of the value of uploading, then he will not willingly join me in cyberspace until his entire village has been successfully recreated there, along with much, much more he has not yet seen. I applaud his Luddite reluctance, his "show me" pragmatism, for only that will challenge the technology developers to create a truly humanizing transition.

Finally, I'm concerned that we may not put enough intellectual and moral effort into developing immune systems against the natural catastrophes that occur all around us. Catastrophes are to be expected, and they accelerate change whenever immune systems learn from them. It is maximizing immune learning from inevitable catastrophes that is the real challenge. In my own research, there has never been a catastrophe in known universal history (supernova, KT-meteorite, plague, civilization collapse, nuclear detonation, reactor meltdown, computer virus, 9/11, you name it) that did not function to accelerate the average distributed complexity (ADC) of the computational network in which it was embedded. It is apparently this learning of our immune systems that keeps the universe on a smooth curve of continually accelerating change. If there's one rule that anyone who studies accelerating change in complex adaptive systems should realize, it is that immunity, interdependence, and intelligence always win. This is not necessarily so for the individual, who charts his or her own unique path to the future but is often breathtakingly wrong. But the observation holds consistently for the entire amorphous network. Nevertheless, there have been many cases of catastrophes where lessons were not rapidly learned, where immune systems were not optimally educated to improve resiliency, redundancy, and variation.
And in the case of human society, our sociotechnological immune systems work best when they are aided by committed human beings, the most conscious and purposeful nodes in our emerging global brain. Consider our public health efforts against pathogens such as SARS and AIDS, and the strategies for success become clear. Anything that economically improves social, political, technological, and biological immune systems is a very farsighted development.
Every sniper and serial killer should be countered today with the installation of another set of gunshot-detection microphones hooked up to public cameras. By their very actions they are building the social cages that will eventually catch them, and all others like them, so we might as well publicly acknowledge this state of affairs, for maximum behavioral effect. Ideally, ninety-five percent of these cameras will remain in private, not public, hands, as is the current situation in Manhattan. When will we see RFID in all our products? When will we finally live in a world where every citizen transmits an electronic signal uniquely identifying them to the network at all times? When will we have a countervailing electronic democracy, ensuring this power is used only in the most citizen-beneficial manner? Today we see early efforts in these areas, but as I've written in previous articles, there is still far too much short-term fear and lack of foresight. If we think carefully about all this, we will realize that a broadband CUI network must be central to the creation of tomorrow's national and global technological immune systems. I am hopeful that our Departments of Defense, Homeland Security, Education, and Commerce, and our business and institutional leaders, will all do their part to accelerate its development in coming years.

6. Assuming you have the ability to determine (or at least influence) the future, what future development that you consider unlikely (or are uncertain about) would you most like to help bring about?
Experience in the U.S. has shown that the digital divide has closed the fastest and most equitably of all the famous divides. The access divide no longer even exists in this country, due to the massive price deflation of computing systems (e.g., $200 Wal-Mart PCs, free internet accounts). Meanwhile, other divides, such as wealth, education, political power, even health care, will likely continue to persist for generations. We can learn this lesson in the unique power of ICT, what Buckminster Fuller once called "technological benevolence," and increasingly use technology, like Archimedes' lever, to move the world. We certainly have the available manpower, with the 50,000 NGOs that have sprung up like wildflowers over the last two generations. We have the finances, with innovative programs like Grameen microloans. Now we just need the technological will, a first world culture that prioritizes both second world (communist) and third world (emerging nations) development. We are already doing this mostly admirably with economic policy, as we rapidly globalize our trade and even our service jobs. While temporary subsidies and centralized fiscal interventions will likely continue unabated, at least our trade restrictions seem to be going the way of nuclear arms, following a slow and steady course of dismantling. Now we need technology transfer, development, and innovation policies and programs to match our other commitments. Again, getting a broadband CUI to cellphones and embedded ubiquitous computers for all six billion of us by 2050, the middle of this century, would be a tremendous goal for world development.

To really see this, we have to grow beyond the old fears that aggressively contributing to the development of "the other" necessarily comes at our own cost. In many cases, as multinational corporations discovered early in the last century, the marginal utility of plowing dollars into our own development is already far less than that of spending those dollars in global environments. As Nathan Myhrvold notes, the underfunded Chinese biomedical researcher today who discovers an effective treatment for my cancer tomorrow invariably becomes one of my best allies. Technological benevolence, accelerating compassion, and what I have referred to elsewhere as an "Era of Magic Philanthropy" must happen sooner or later, in the coming decades, from my perspective. I'd prefer to see this development happen more consciously, cleverly, and quickly than many development pessimists currently expect.

There are also critical questions of priority. Is it most important to help the third world politically (e.g., freedoms, human rights), economically (e.g., trade, market reform), or technologically? By now it should be clear where my own sympathies lie. Technology, appropriately assessed and deployed, has become the greatest global lever of change. Each of these three fundamental systems has evolved hierarchically from its predecessor. I think this gives us a major clue to their relative power as a world system. Politics was the most powerful system of change through most of human history; then in the 19th century economics became the dominant system; and early in the 20th century, with mass production, technology did. The critic's adage "It's all about the power" eventually became "It's all about the money," and since the 1920s it has become "It's mostly about the technology, and secondarily about who has the money, and lastly about who has the power."
Those stuck in the older dialogs are increasingly mystified by today's disruptive transformations, and are endlessly surprised by the sudden emergence and inordinate power of the Microsofts and Ikeas and Dells and Googles of the present day. Today, the technology policy a country is able to pursue, followed secondarily by its economic liberalization, and lastly by its political structure, seem to me the best indicators of its general state of health. Consider that in all of the fastest-growing, most resilient nations on our planet, attitudes toward technology innovation and diffusion are highly similar, attitudes toward economic competition, property, trade, and globalization are the second most similar, and finally, attitudes toward personal freedoms and political ideology are by far the least homogeneous.
Consider also that China, in the 21st century, is very likely to replicate Singapore's many successes at an even greater scale, long before it becomes democratic or tolerant of significant personal political dissent. Of course, we shouldn't just point the finger at others' political shortcomings, as every nation has them. Here in the U.S., I would predict that internet voting capabilities and secure digital identity technologies, when they finally arrive, will be around for a long time (another twenty years? forty?) before we become a significantly more participatory, more "direct" democracy. That would be a sad but expectable outcome of our currently highly plutocratic American political climate. We are all in need of political change, but it rarely comes as fast as we imagine it might. Even when it does, as in revolution, it often brings unintended consequences that are themselves very slow to change. Fortunately, political change is becoming less and less relevant, not only to economic growth but to the production of human-surpassing technological intelligence, with each passing year. That's simply the nature of computational development on this planet, and we need only look at the record to admit this to ourselves.
These are certainly important issues, but the way technology interfaces with culture, business, and government, as discussed in books like Everett Rogers' Diffusion of Innovations, 2003, Clayton Christensen's The Innovator's Dilemma, 1997, and Sheila Jasanoff's Comparative Science and Technology Policy, 1997, has become the dialog of primary importance, in my opinion. This remains true even when we do not consciously realize it, which is the case for many in positions of nominal authority who remain most comfortable engaging in antiquated, primarily political and economic ways of thinking. We here at ASF hope to do our small part to illuminate the changing landscape of transformational power in coming years.

7. Why is it that in the year 2003 I still don't have a flying car? When do you think I'll be able to get one?

This is a delightful question. I'm lucky that this is an area I've thought about a little bit. To put flying cars into the air in any number while still respecting human life, it seems likely that we'd have to develop a cheap, fuel-efficient vertical or short take-off and landing (VTOL or STOL) vehicle. It would have to reliably recover from mechanical failure (e.g., the new plane parachutes, which have already successfully saved a few pilots). It would need affordable onboard radar for cloudy days (still unacceptably expensive, and LORAN is not sufficient).
Even the first problems are still a few decades away from inexpensive solutions. Aerospace technology just doesn't see the jaw-dropping efficiency increases of ICT, because it is a technology of outer space, not inner space. Inner space is where the universe is relentlessly driving us, whether we realize it or not. That's why for thirty years we haven't seen a commercial plane that flies faster than the now-defunct Concorde or is noticeably bigger than the 747. That's why, as futurist Lynn Elen Burton notes, local light rail systems, a more energy-efficient (and inner space) solution than planes, have replaced many plane flights in Europe, and she predicts they will increasingly do so in the denser areas of the U.S. as well. It may not yet be obvious, but I propose that we are swimming against the natural developmental tide of computation in trying to implement this individualistic, frontier-era vision. Self-piloting autos, subways, and segways, not skycars, are the future of transportation. Unfortunately, I expect Paul Moller's daring flying car, for example, to be like the nuclear-powered airplane, an inspired curiosity that doesn't make it beyond the limited production stage. OK, Paul… Prove me wrong!

If you'd like more on the near-term future of urban transportation, I've written on this issue with regard to automated highway systems (AHS). I think urban AHS networks, including some being built underground, are likely to arrive before the singularity. That may not sound as fun as skipping across the clouds, but it seems much more economically and technologically plausible to me.

But for the sake of argument, let's say that with luck, genius, and persistence we have solved the first problems. That still leaves us with the last problem, distributed air traffic control, a problem that has seen little work to date. All our current control systems are big, brittle, top-down megasoftware projects, designed for local airports. We've played with agent-based models, but these are still very early in research, not development. To deploy skycars in any number we'd need something bulletproof and redundant, located onboard the flying car, a system that could autoroute and autoresolve the flight paths of a whole bunch of these vehicles in real time, all shuttling around in 3D space, only seconds away from each other in travel time. That's much more computationally difficult than 2D automated highway car navigation, so I submit that it has to come afterward in the developmental hierarchy. It is a worthy computational problem, and I'm sure we would eventually get around to it, given time, but I'm not at all sure we will have sufficient time or interest to solve this problem before the tech singularity. And after the singularity, I suspect there may not be very many human beings who will continue to have the urge to fly around the planet in a physical way. By then, there will probably be far more interesting things to do in inner space, as strange an idea as that may seem to us today. One hard sign that I am wrong about the near-term future of flying car development would be someone making an agent-based air traffic control system capable of replacing our current clunky top-down models in high-density environments. Keep your eyes peeled.
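For a sense of what "agent-based" means here, the following is a minimal, hypothetical Python sketch of decentralized deconfliction: each vehicle predicts positions a few seconds ahead and applies a purely local right-of-way rule, with no central controller. Every number, class name, and rule in it is an invented illustration, not a real air traffic control design.

```python
# A toy sketch of decentralized conflict resolution: each vehicle agent
# predicts its own and its neighbours' positions LOOKAHEAD seconds ahead,
# and if separation would be lost, applies a local right-of-way rule
# (lexically lower callsign climbs, the other descends).
import math

SEPARATION = 50.0   # required miss distance, metres (hypothetical)
LOOKAHEAD = 5.0     # prediction horizon, seconds
TICK = 0.5          # simulation step, seconds

class Vehicle:
    def __init__(self, callsign, pos, vel):
        self.callsign = callsign
        self.pos = list(pos)    # [x, y, z] in metres
        self.vel = list(vel)    # metres/second

    def predicted(self, t):
        return [p + v * t for p, v in zip(self.pos, self.vel)]

    def resolve(self, neighbours):
        # Purely local rule; no central controller is consulted.
        mine = self.predicted(LOOKAHEAD)
        for other in neighbours:
            if math.dist(mine, other.predicted(LOOKAHEAD)) < SEPARATION:
                climb = 2.0 if self.callsign < other.callsign else -2.0
                self.vel[2] += climb   # restore vertical separation

# Two skycars converging head-on at the same altitude.
fleet = [Vehicle("A1", (0, 0, 100), (10, 0, 0)),
         Vehicle("B2", (200, 0, 100), (-10, 0, 0))]
for _ in range(20):
    for v in fleet:
        v.resolve([o for o in fleet if o is not v])
    for v in fleet:
        v.pos = [p + vel * TICK for p, vel in zip(v.pos, v.vel)]
print({v.callsign: [round(c, 1) for c in v.pos] for v in fleet})
```

The hard, unsolved engineering is of course in making such local rules provably safe at scale, which is exactly the "bulletproof and redundant" requirement mentioned above.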
Designing such highly autonomous navigational systems may end up being a job for post-singularity intelligences, and by then, as I've written elsewhere, while there will likely be some continuing demand for physical travel, it may not last for long. I expect that technologically enhanced people will naturally develop a very different set of urges. Consider the way that human reproduction has fallen below replacement levels in every technologically developed nation on Earth, due to rising desires for personal development, including a natural desire to maximize the developmental potential of one's offspring. In a post-singularity society there will be very different and far more interesting enticements for personal development than physical travel in an increasingly small, teleimmersive, and very well-simulated physical world. At root, these enticements will probably involve moving beyond our biological selves by degrees. If so, once we have entirely entered the technological world, it is possible that only the travel of our attention, through a planetary network of shared sensor and effector mechanisms, not the travel of our physical bodies, will make any long-term sense in that highly developed planetary environment. I hope this glimpse of a postbiological society doesn't seem shocking or alienating. If it does, remember that we would never make the biology-to-technology transition if it weren't fully reversible, in principle. In practice, however, I think we will soon find biology to be a tremendously more confining and less complex place than our minds, hearts, and spirits require.
Technological Singularity Questions
A. I'm familiar with the idea of a singularity from reading about black holes. As I understand it, the event horizon of a black hole is the point beyond which no light can escape. Perceived time slows to an absolute standstill at the event horizon. At the singularity, gravity becomes infinite, and what we normally think of as the "laws of nature" cease to function the way we expect them to. The singularity seems to be the ultimate physical enigma. What then is this technological singularity, and in what way is it analogous to the singularity of a black hole?

This last question may be the most important of our time with regard to understanding the future of universal intelligence. Or it may be a greased pig chase. Only posterity can decide. I've been chipping away at the topic since the seventh grade, when I had a series of simple and elegant intuitions in regard to accelerating change, speculations that I'd love to see seriously researched and critiqued in coming years. In 1999 I started a website on the subject, AccelerationWatch.com (formerly SingularityWatch.com), and in 2003 a group of us formed a nonprofit, the Acceleration Studies Foundation (Accelerating.org), to further public discussion and scholarly inquiry in this area. We also produced our first conference at Stanford, Accelerating Change 2003, which was very well received. Finally, I'm writing a book, Journey and Destiny, on the subject of accelerating change, which I hope to get out sometime this century.
But before we go further, I shall lay my biases on the table. I am a systems theorist. The systems theorist's working hypothesis—and fundamental conceit—is that analogical thinking is more powerful and broadly valuable than analytical thinking in almost all cases of human inquiry. This doesn't excuse us from bad analogies, which are legion, and it doesn't make quantitative analysis wrong; it just places math and logic in their proper place as powerful tools of inquiry used by weakly digital minds. Today's quantitative and logical tools are enabled by the underlying physics of the universe, which are much more sublime, and such tools often have no relation to real physical processes, which may use quanta and dimensionalities entirely inaccessible to our current symbolisms.

Furthermore, I take the "infopomorphic" (as compared to "anthropomorphic") view that all physical systems in the universe, including us precious bipeds and even the universe itself, are engaged in computation, in service to some grander purpose of self- and other-discovery and creativity. This philosophy has also been described as "digital physics," and one of several variants can be found at Ed Fredkin's Digital Philosophy website. It has also been elegantly introduced by John Archibald Wheeler's "It from Bit," 1989 (see Physical Origins of Time Asymmetry, 1996).

Finally, I am an evolutionary developmentalist, one who believes that all important systems in the world, parsimoniously including the universe itself, must both evolve unpredictably and develop predictably. That makes understanding the difference between evolution and development one of the most important programs of inquiry we could engage in today. The meta-Darwinian paradigm of evolutionary development, well described by such innovative biologists as Rudolf Raff (see The Shape of Life, 1996), Simon Conway Morris, Wallace Arthur, Stan Salthe, William Dembski, and Jack Cohen, is one that situates orthodox neo-Darwinism as a chaotic mechanism that occurs within (or in some versions, in symbiosis with) a much larger set of statistically deterministic, purposeful developmental cycles. There are now a number of scientists applying this view to both living and physical systems, including those exploring such topics as self-organization, convergence, hierarchical acceleration, anthropic cosmology, intelligent design, and a number of other subjects that are very poorly explained by classical Darwinian theory as championed by such advocates as Stephen Jay Gould and Richard Dawkins.

During the seventeenth century, with Isaac Newton's Principia (1687), it seems fair to say that humanity awakened to the realization that we live in a fully physical universe. During the early twentieth century, with Kurt Gödel's incompleteness theorem (1931) and the Church-Turing thesis (1936), we came to suspect that we also live in a fully computational universe, and that within each discrete physical system there are intrinsic limits to the kinds of computation (observation, encoding) that can be done to the larger environment. Presumably, the persistence of these limits, and their interaction with the remaining inaccessible elements of reality, spurs the development of new, more computationally versatile systems, via increasingly more rapid hierarchical "substrate" emergences over time. At each new emergence point a singularity (a physical or computational phase change) is created, and a new physical-computational system suddenly and disruptively arises.
At this point, a new local environment, or "phase space," is created wherein very different local rules and conditions apply. That's one predominant systems model for singularities, at any rate.

From this physical-computational perspective, replicating suns, spewing their supernovas across galactic space, can be seen as rather simple physical-computational systems that, over billennia, nevertheless encode a "record" of their exploration of physical reality, in their computational "phase space." Part of this record appears to us in the form of the ninety-two standard elements of the periodic table (the "developmental genetics" of this physical-computational substrate), and another part in the increasingly complex types of suns and planets that emerge (the "developmental soma" of the system).

Once that elemental matrix becomes rich enough, and the uniquely promiscuous carbon, as well as nitrogen, phosphorus, sulfur, and friends have emerged, we notice a new singularity occur. In specialized local environments, the newest computational game becomes replicating organic molecules, chasing their own tails in protometabolic cycles (see Stuart Kauffman, At Home in the Universe, 1996). Again, these systems developmentally encode their evolutionary exploration by constructing a range of complex polymerizing systems, including autocatalytic sets. Once a particular set becomes complex enough, we again see another phase change singularity, with the first DNA-guided protein synthesis emerging on the geological Earth-catalyst, almost before its crust had finished cooling. As precursors to fats, proteins, and nucleic acids have all been found in our interplanetary comet chemistry, and as we suspect that chemistry to be common throughout our galaxy, it is becoming increasingly plausible that every one of the billions of planets (in this galaxy alone) that are capable of supporting liquid water for billions of years may be primed for our special type of biogenesis. This proposed transition, a singularity in an era of accelerating molecular evolutionary development, is what A.G. Cairns-Smith calls "genetic takeover," which I suggest is a particularly appropriate phrase.

Such unicellular emergence very likely leads in turn to multicellularity, then to differentiated multicellular systems encoding useful neural arborization patterns, another singularity (570 million years ago), which leads to big-brained mammals encoding complex mimicry memetics (10 million years ago), then to the first extrabiological technology (soft-skinned Homo habilis collectively throwing rocks at more physically powerful predators (leopards, etc.), 2 million years ago), then to Cro-Magnon hominids encoding and processing oral linguistic memetics (100,000-50,000 years ago), then to written versions of these memetics (the first "papyrus computers," 7,000 years ago), then to today's semi-autonomous digital technological systems, encoding their own increasingly successful algorithms and world models. (Forgive me if we skipped a few steps in this cartoonish sketch.)

Systems thinkers, since at least Henry Adams in 1909, have noted that each successive emergence is vastly shorter in time than the one that preceded it. Some type of global universal acceleration seems to be part and parcel of the singularity generation process. Note also that each of the computational systems that generates a singularity is incapable of appreciating many of the complexities of the progeny system.
A sun has little computational capacity to "understand" the organic chemistry it engenders, even as it creates and interacts intimately with that chemistry. A bacterium does not deeply comprehend the multicellular organisms which spring from its symbiont colonies, even as it adapts to life on those organisms, and thus learns at least something reliable about their nature. Humanity, in turn, can have little understanding of the subtle mind-states of the A.I.s to come, even as we become endosymbiotically captured by and learn to function within our increasingly intelligent technological cocoons, in the same way that bacteria (our modern mitochondria) were captured by the eukaryotic cell. Remember the theory of endosymbiosis? It was originally quite controversial, but now molecular genetics has provided a lot more evidence for the model. Read Lynn Margulis, Symbiotic Planet, 1998, for more on the way nature uses cooperation just as powerfully as competition in the pursuit of adaptation and complexity construction. Humans are in a deep partnership with our machines, one whose ramifications are not yet widely understood. In a post-CUI (conversational user interface) Symbiotic Age, I suggest this partnership will become a lot more plainly apparent.

Yet even with the computational limits every organism has, the more complex any system becomes, the better it models the universe that engendered it, and the better it seems to understand its own history and environment, including the physical chain of singularities that created it. See Rod Swenson's insightful "Thermodynamics, Evolution, and Behavior," 1997, for more on this. The computational systems in a bacterium model the chemo-osmotic gradients in its immediate vicinity. A mouse models a habitat the size of an acre. You and I model the birth and death of the observable universe. At some point, the size of the perception-action cycles we model reaches the size of the universe itself, and a form of computational closure (a map of a phase space that gets no larger, only more detailed) emerges. Given the recursive, self-similar nature of the singularity generation process, that also implies that complex systems at the leading edge can increasingly understand their own developmental future as well.

If our entire universe is evolutionary developmental, which is an elegantly simple possibility, then it is constrained to head in some particular direction, a trajectory that we are beginning to see clearly even today. For a very incomplete outline of this trajectory, we can propose that the universe must invariably increase in average general entropy (in practice, if not in theory), with islands of locally accelerating order, that each hierarchical system must emerge from and operate within an increasingly localized spacetime domain (one form of STEM compression of computation, where STEM is space, time, energy, and matter), and that the network intelligence of the most complex local systems must always accelerate over time. The simplicity of such macroscopic, developmental rules and of developmental convergence in general, by comparison to the unpredictable complexity of the microscopic, evolutionary features of any complex system, is what allows even twenty-first century humans to see many elements of the framework of the future, even if the evolutionary details must always remain obscure.
This surprising concept, the "unreasonable effectiveness" of simple mathematics, analogies, and basic rules and laws for explaining the stable features of otherwise very complex universal systems, has been called Wigner's Ladder, after Eugene Wigner's famous 1960 paper on this topic. As I will explore later, a developmentalist like myself begins his inquiry by suspecting that the universe has self-organized, over many successive cycles, to create its presently stunning set of hierarchical complexities, in the same manner as my own complexity has self-organized, over five billion years of genetic cycling, to create the body and mind that I use today. Furthermore, if emergent intelligence can be shown to play any role in guiding this cycling process, then it seems quite likely that if the universe could, it would tune itself for Wigner's Ladder to be very easy to climb by emerging computational systems at every level during the universal unfolding. This process would ensure that intelligence development, versus all manner of destructive shenanigans, is a very rewarding, very robust, strongly non-zero-sum game, at every level of universal development.

Certainly there seems to be evidence for this at any system level we observe. The developing brain is an amazingly friendly environment for our scaffolding neurons to emerge within. They seem to discover, with very little effort, the complex set of signal transductions necessary to get them to useful places within the system, all with a surprisingly simple agent-based model of the environment in which they operate. In another example, a non-linguistic proto-mammal of 100 million years ago (or today's analog), if placed in a room with you today, would develop a surprisingly useful sense of who you are and what general behaviors you are capable of after only short exposure to you, even though it might never figure out your language or your internal states. Even a modest housefly, after a reasonable period of exposure to 21st century humans, is rarely so surprised by their behavior that it dies when poaching their fruit.

So it is that all the universe's pre-singularity systems internalize quite a bit of knowledge concerning the post-singularity systems, even if they never understand their internal states. I contend that human beings, with the greatest ability yet to look back in time to the processes that create us, have a very powerful ability to look forward as well with regard to developmental processes. I think we can use this developmental insight to foretell a lot about the necessary trajectory of the post-singularity systems on the other side.
I call this the developmental singularity hypothesis, and it is admittedly quite speculative. It is also known as the transcension scenario, as opposed to the expansion scenario, for the future of local intelligence. The expansion scenario, the expectation that our human descendants will one day colonize the stars, is today an almost universal de facto assumption of the typical futurist. I consider that model to be 180 degrees incorrect. We are certainly likely to see a few more human missions, as President George W. Bush has recently intimated, including an eventual return to the moon and a mission to Mars. But in an era of increasingly intelligent and rapidly miniaturizing robotics, these missions are much more for public relations and inspirational value than for the future of science. Let's also hope they are not inordinately expensive, as those goals are not worth very much in a world where three billion of us still live on less than $2 a day. Space within our solar system is a frontier of sorts, but it is one with rapidly declining informational utility.

To understand the apparently deep inviolability of STEM compression processes, it helps to realize that hierarchically emergent life has never autonomously colonized any spatial frontier in its entire developmental history. In fact, the opposite is true: more complex forms always move to increasingly spatially restricted environments, apparently as a key strategy to sustain the computational acceleration of their emergent ecologies. The first land plants and animals were only upgrading a subset of the space that had previously been occupied by bacteria, and they did not colonize it as extensively. They inhabit only the surface of the soil, for example, not the deep soil and the water table, as do our oldest forms of life, anaerobic bacteria. We can note with increasing amusement that the entire history of human exploration to date has been to carve out an even more localized subset of paths through this preexisting verdant environment.

It is true that humans have unrealized dreams of space colonization, and that we have created a series of space stations above our atmosphere, but I suggest that these will not become complex enough to be autonomous before the singularity. It takes a formidable amount of computational intelligence to design autonomous systems. If some group were able to make machine-human systems capable of mastering ecological autonomy, economic forces would reappropriate that intelligence to create widespread systems of human-machine symbiosis that solve real problems here, on Earth. These systems in turn would perpetuate an Earth-based singularity long before they'd be used to create a moon colony that didn't need constant, expensive resupply from the home planet. No matter how you play the cards, we seem to be on a developmental trajectory toward inner space, not outer space, for reasons of apparent deep universal design. Let me publicly propose, until I have a chorus of voices joining me, that outer space, for human science, seems increasingly likely to become an informational desert by comparison to the simulation science we can run here, in inner space. I suggest that the cosmic tapestry that we see in the night sky may be most accurately characterized as the "rear view mirror" on the developmental trajectory of physical intelligence in universal history.
It provides a record of far larger, far older, and far simpler computational structures than those we are constructing here, today, in our increasingly microscopic environments.

Let me relate my personal story of how I came to understand this curious idea, the natural universal developmental trajectory toward inner space. It is so simple and elegant that it came to me as a child, which makes me suspect it as a basic and still generally undiscovered truth about the developmental physics of the universe. I was fortunate to have grandparents (Thank you, Karl and Muriel Williams) who provided me with a subscription to National Geographic magazine in primary school, and I began reading the new issues dutifully around the age of ten. On entering middle school I discovered that the school library had much older issues, dating back to the beginning of the century (Thank you, Chadwick School). Thenceforth the library became a favorite haunt, and it was somewhere within the pages of these magazines, comparing astronomy articles to stories on human emergence, that I noticed the accelerating and increasingly local nature of the universal developmental progression. These ideas were reinforced in a very unusual seventh grade history class (Thank you, Mr. Bullin) where we discussed universal and human development, and in an English class where the summer reading was Charles Darwin's Voyage of the Beagle, 1909. I was an inconsistent daydreamer of a student in those days, but when I finally got around to reading the Beagle, the story of a young Darwin and the environment that helped him develop the knowledge that inexorably led him to his Great Idea, I realized that I had also discovered a similarly great idea myself during all those lazy afternoons in my seventh grade year, flipping through magazines and thinking.

The idea was essentially this: every new system of intelligence that emerges in the universe clearly occupies a vastly smaller volume of space, and plays out its computational drama using vastly smaller amounts of matter, energy, and time. At the same time, any who are aware of the amazing replicative repetitiveness of astronomical features would suspect that there are likely to be billions of intelligences like ours within our universe. Yet we have had no communication from any of them, even from those Sun-like stars, closer to our own galactic center, which are billions of years older than ours. This curious situation is called the Fermi Paradox, after Enrico Fermi, who in 1950 asked the famous question, "Where are they?", in relation to these older, putatively far more technologically advanced civilizations. Contemplating this question in 1972, it struck me that the entire system is apparently structured so that intelligence inexorably transcends the universe, rather than expanding within it, and that black holes, those curious entities that exist both within and without our universe, probably have something central to do with this process. These simple ideas were the seed of the developmental singularity hypothesis, and I've been tinkering with them ever since.

All this brings us to the interesting question of the future of artificial intelligence.
From our perspective, this coming transition to postbiological intelligence may be an entirely natural, incremental, and reversible (at least temporarily) development, and if it occurs, we will very likely all be taken along for the ride as well, in a voluntary process of transformation. This "inclusive" feature of the transition seems reasonable if one makes a chain of presently thinly-researched assumptions, including: 1) that the A.I.s will have significantly increased consciousness at or shortly after their emergence, 2) that once they have modeled us, and all other life forms, to the point of real-time predictability, they will be ethically compelled to ubiquitously share this gift, 3) that all life forms will find such a gift to be irresistible, and 4) that by the simple act of sharing, they will turn us into them. This convergent planetary transition to the postbiological domain would comprise a local "technetic takeover" as complete as the "genetic takeover" that led to the emergence of DNA-guided protein synthesis as the sole carrier of higher local intelligence after biogenesis.

I'll forgive you if you think at this point that I've taken leave of my senses, and I'm not going to try to defend these perspectives further here, as that would be beyond the scope of this interview, and more appropriate to my forthcoming book. But if you are interested in conducting your own research, consider exploring the link above, and reading some helpful books that each explore important pieces of the larger idea. You might start with Lee Smolin's The Life of the Cosmos, 1997, Eric Chaisson's Cosmic Evolution, 2001, and James Gardner's Biocosm, 2003. You could also peruse Sheldon Ross's Simulation, 2001, though that is a technical work. If you have any feedback at that point, send me an email and let me know what you think.

B. I remember I first encountered this idea in a science fiction story that I considered to be entertaining, but closer to fantasy than true science fiction. It did not appear to be grounded in reality. A short time later I was given a copy of Vernor Vinge's essay on the singularity and I began to reconsider whether there might not be something to it. Does the idea of the singularity originate with Vinge or elsewhere?
Readers are referred to our Brief History of Intellectual Discussion of Accelerating Change for more on the fascinating singularity discussion story, which includes a number of careful thinkers who have illuminated different pieces of the elephant in the century since the idea first surfaced. Since 1983, as you mention, the mathematician, computer scientist, and science fiction author Vernor Vinge has given some of the best brief arguments to date for this idea. His eight-page internet essay, "The Coming Technological Singularity," 1993, is an excellent place to start your investigation of the singularity phenomenon. I would also recommend my introductory web site, AccelerationWatch.com, and a few others, such as KurzweilAI.net, which are referenced at my site.

C. Here's a quote from your Acceleration Watch web site: "[Research suggests that] there is something about the construction of the universe itself, something about the nature and universal function of local computation that permits, and may even mandate, continuously accelerating computational development in local environments." This sounds like metaphysics to me. How could a universe with such properties come to exist? Does this imply some kind of intelligent design?

That depends very much on what you consider "intelligence," I think. One initially suspects some kind of intelligence must be involved in the continually accelerating emergences we have observed. In the phase space of all possible universes consistent with physical law, one wouldn't find our kind of accelerating, life-friendly universe in a random toss of the coin, or, as various anthropic cosmologists have pointed out, even in an astronomically large number of random tosses of the coin. Some deep organizing principles are likely to be at work, principles that may themselves exhibit a self-organizing intelligence over time.

Systems theorists look for broad views to get some perspective on this question, so bear with me as we consider an abstract model for the dynamics that may be central to the issue. Everything really interesting in the known universe appears to be a replicating system. Solar systems, complex planets, organic chemistry, cells, multicellular organisms, brains, languages, ideas, and technological systems are all good examples. Each undergoes replication, variation, interaction, selection, and convergence, in what may be called an RVISC (replication, variation, interaction, selection, convergence) developmental cycle. Given this extensive zoology, it is most conservative, most parsimonious to assume that the physical universe we inhabit is just another such system.

Big bang theorists tell us the universe had a very finite beginning. Since 1998, lambda energy theorists have told us that our 13.7 billion year universe is already one billion years into an accelerating senescence, or death. Multiverse cosmologists tell us that ours is just one of many universes, and some, such as Lee Smolin, Alan Guth, and Andrei Linde, have suggested that black holes are the seeds of new universe creation. If so, that would make this universe a very fecund replicator, as relativity theory predicts at least 100 trillion black holes to be in existence at the present time. For each of the above reproducing complex adaptive systems (CASs, in John Holland's use of the term), there are at least two important mechanisms of change we need to consider: evolution and development. Evolution involves the Darwinian mechanisms of variation, interaction, and selection, the VIS in the middle of the RVISC cycle.
Development involves statistically deterministic mechanisms of replication and convergence, the replication and convergence "boundaries" of the RVISC reproduction cycle for any complex system. [Figure: the RVISC cycle, with the evolutionary VIS mechanisms inside and the developmental replication and convergence boundaries outside.]

Now consider human beings. Our intelligence is both evolutionary and developmental. Each of us follows an evolutionary path, the unique memetic (ideational) and technetic (tools and technologies) structures that we choose to use and build. (As individuals we also follow a genetic evolutionary path, but this is so slow and constrained that it has become future-irrelevant in the face of memetic and technetic evolution.) At the same time, we must all conform to the same fixed developmental cycle, a 120-year birth-growth-maturity-reproduction-senescence-death Ferris wheel that none of us can appreciably alter, only destroy. The special developmental parameters, the DNA genes that guide our own cycle, were tuned up over millions of years of recursive evolutionary development to produce brains capable of complex behavioral mimicry memetics, and then linguistic mimicry memetics, astonishing brains that now cradle our own special self-awareness. Now contemplate
our own universe and imagine, as Teilhard de Chardin did with his
elegant "cosmic embryogenesis" metaphor, that it is an evolutionary
developmental entity with a life and death of its own. In fact, heat death
theorists have known the universe has a physical lifespan for almost two
centuries, but we, thinking like immortal youth, still commonly ignore
this. In other words, if encoded intelligence usefully influences the replication that occurs in the next developmental cycle, and we can make the case that it always would, by comparison to otherwise random processes, then universes that encode the emergence of increasingly powerful universe-modeling intelligence will always outcompete those that don't, in the multiversal environment. When I relay these thoughts to patient listeners, a question commonly occurs. Why wouldn't universes emerge which seek to keep cosmic intelligence around forever? This question seems equivalent to asking why it is that our genes "choose" to continue to throw away our adult forms in almost all higher species in competitive environments. The answer likely has to do with the fact that any adult structure, including an entire universe based on discrete physical laws, has a sharply fixed developmental capacity, based on the potential of its genes, and once the capacity has been broadly expressed and accelerating intelligence is no longer occurring in the adult form, the adult structure is just not that smart in relation to the future computational potentialities of the evolutionary developmental system. At that point, recycling becomes a more resource-efficient computing strategy than attempting to continue with the terminally differentiated adult form. Let us propose that the A.I.'s to come, even as they rapidly learn what they can within this universe, remain of sharply fixed complexity, while operating within a much larger, Gödelian-incomplete multiverse. As long as that multiverse continues to represent a combinatorial explosion of possibilities, universal computing systems will likely remain stuck on a developmental cycle, trading off between phases of "genetic" parameter-tuning reproduction and phases of "somatic" intelligence unfolding. We should be careful here of oversimplifying, as each phase of this cycle will likely involve both random evolution and predictable development. Another way that systems theorists have explored the yin-yang of evolutionary developmental cycles is in terms of Francis Heylighen and Donald Campbell's insights on downcausality (including parameter tuning) and upcausality (including hierarchical emergence), useful extensions of the popular concepts of holism and reductionism. If we live in a universe populated by an "ecology of black holes," as I suspect, then we will soon discover that most of them, such as galactic and stellar gravitational black holes, can only reproduce universes of low-grade computational complexity. In one model of self-organization, of iterative evolutionary development, these cycling complex adaptive systems might serve as the stable base, the lineage out of which our much more impressively intelligence-encoding universe has emerged, in the same way that we metazoans have been built on top of a stable base of cycling bacteria. How long our own universe will continue cycling in its current form is anyone's guess, at present. But we may note that in living systems, while developmental cycles can continue for long periods of time, they are never endless in any particular lineage. So it seems very likely that recurrence of the "type" of universe we inhabit would also have a limited lifespan, before it has explored its opportunities extensively enough to become another "type." Fortunately, all of this should become much more tractable to proof by simulation, as well as by limited experiment, in coming decades. 
As you may know, high energy physicists are already expecting that we may soon gain the ability to probe the fabric of the multiverse via the creation of so-called "extreme black holes" of microscopic size in the laboratory (e.g., CERN's Large Hadron Collider), possibly even within the next decade. At the same time, black hole analogs for capturing light, electrons, and other quanta are also in the planning stages. With regard to microcosmic reality, I find that truth is always more interesting than fiction, and often less believable, at first blush.

Using various forms of the above model, James N. Gardner, Bela Balasz, Ed Harrison, myself, and a handful of others have proposed that our human intelligence may play a central role in the universal replication cycle. In the paradigm of evolutionary development, that would make our own emergence—but not our evolutionary complexities—developmentally tuned, via many previous cycles, into our universal genes.

This gene-parameter analogy is quite powerful. You wouldn't say that any reasonable amount of your adult complexity is contained in the paltry 20,000-30,000 genes that created you. In fact, the developmental genes that really created you are a small subset of those, numbering perhaps in the hundreds. These genes don't specify most of the complexity contained in the 100 trillion connections in your brain. They are merely developmental guides. Like the rules of a low-dimensional cellular automaton, they control the envelope boundaries of the evolutionary processes that created you.

So it may be with the 20-60 known or suspected physical parameters and coupling constants underlying the Standard Model of physics, the parameters that guided the Big Bang. They are perhaps best seen as developmental guides, determining a large number of emergent features, but never specifying the evolution that occurs within the unfolding system. As anthropic cosmologists (those who suspect the universe is specifically structured to create life) are discovering, a number of our universal parameters (the gravitational constant, the fine structure constant, the mass of the electron, and so on) appear to be very finely tuned to create a universe that must develop life. As cosmology delves further into M-Theory, anthropic issues are intensifying, not subsiding. Some theorists, such as Leonard Susskind, have estimated that there are an incredibly large number of string theory vacua from which our particular universal parameters were somehow specified to emerge.
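To make the cellular automaton analogy concrete, here is a small sketch of my own (an illustration, not anything from the original argument): an elementary automaton whose entire "developmental genome" is the eight-bit rule 110, yet whose unfolding pattern is spelled out nowhere in those eight bits.

```python
# An elementary cellular automaton: eight bits of "developmental genes"
# bound every local interaction without specifying the global outcome.
RULE = 110  # one byte of parameters, analogous to a tiny genome

def next_row(row):
    n = len(row)
    # Each cell's next state depends only on its 3-cell neighborhood.
    return [(RULE >> (row[(i - 1) % n] * 4 + row[i] * 2 + row[(i + 1) % n])) & 1
            for i in range(n)]

row = [0] * 79
row[39] = 1  # a single "zygote" cell
for _ in range(40):
    print("".join(" #"[c] for c in row))
    row = next_row(row)
```

Run it and a complex, persistent structure unfolds from one live cell and one byte of rules; flip a bit of the rule and a qualitatively different family of patterns develops instead.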
Susskind's estimate implies an amazing level of nonrandom convergence onto such simple initial parameters. Living systems show the same developmental convergence: identical twins, whose lives unfold through a virtually endless series of unpredictable interactions at the molecular scale, both predictably enter puberty some thirteen years after birth. So it apparently is with our own universe's puberty, which occurred about 12.7 billion years after the Big Bang, about 1 billion years ago. Earth's intelligence is apparently one of hundreds of billions of ovulating, self-fertilizing seeds in our universe, one that is about to transcend into inner space very soon in cosmologic time.
Most likely, this transition leads to a subsequent restart of the developmental cycle, which would provide the most parsimonious explanation yet advanced for how the special parameters of our universe came to be. As with living systems, these parameters were apparently self-organized, over many successive cycles, not instantiated by some entity standing outside the cycle, but influenced incrementally by the intelligence arising within it. In this paradigm, developmental failures are always possible. But curiously, they are rarer, in a statistical sense, the longer any developmental process successfully proceeds. Just look at the data for spontaneous abortions in human beings, which are increasingly rare after the first trimester, to see one obvious example.

But even if all this speculation is true, we must realize that this says little about our evolutionary role. Remember, life greatly cherishes variation. There is probably a very deep computational reason why there are six billion discrete human beings on the planet right now, rather than one unitary multimind. Consider that every one of the developmental intelligences in this universe is, right now, taking its own unique path down the rabbit hole, and they are all separated by vast distances, planted very widely in the field, so to speak, to carefully preserve all that useful evolutionary variation. I find that quite interesting and encouraging. Free will, or the protected randomness of evolutionary search at the "unbounded edge" between chaos and control in complex systems, always seems to be central to the cycle at every scale in universal systems.

Now it is appropriate to consider another commonly-asked question with regard to these dynamics. How likely is it, by becoming aware of a cosmic replication cycle and our apparent role in it, that we might alter the cycle to any appreciable degree? To answer this, it may also be helpful to realize that complex adaptive systems are always aware that many elements of their world are constrained to operate in cycles (day/night, wake/sleep, life/death, etc.). So it's only an extension of prior historical insight if we soon discover that our universe is also constrained to function in the same manner. It may help to remember that long before human society had theories of progress (after the 1650's), and of accelerating progress (after the singularity hypothesis, beginning in the 1900's), cyclic cosmologies and theories of social change were the norm. Even a mating salmon is probably very aware of its own impending demise in the cycle of life. It certainly expends its energy in ways that are entirely purposeful in that regard.
As personal development theorist Stephen Covey (The Seven Habits of Highly Effective People, 1990) is fond of saying, you cannot break fundamental principles, or laws of nature. You can only break yourself against them, if you so choose. So it is that I don't have any expectation that our local intelligence could be successful in escaping the cosmic replication cycle, though we might certainly sublimate its traumatic effects over time, the way we learn to sublimate the effects of biological death by various cultural and individual responses.

What about "escaping to the stars?" Almost every scenario that has been written about that idea ignores the accelerating intelligence that would occur onboard the ship. Such shipboard civilizations must lead, in a very short time, to technological singularities and, in the developmental singularity hypothesis, to universal transcension. As Vernor Vinge says, it is very hard to "write past the singularity," and in this regard he has referred both to technological and developmental types. Alternative scenarios of constructing signal beacons, or nonliving, fixed-intelligence robotic probes to spread an Encyclopedia Galactica, as Carl Sagan once proposed, ignore the massive reduction in evolutionary variation that would result. This strategy would effectively turn that corner of the galaxy into an evolutionarily sterile monoculture, condemning all intelligent civilizations in the area to go down the hole in the same way we did, and all developmental singularities in the vicinity to be of the same type. If I am right, our information theory will soon be able to conclusively prove that all such one-way communications can only reduce total universal complexity, and are to be scrupulously avoided.

In conclusion, I don't think we can get around cyclic laws of nature, once we discover them. But they can give us deep insight into how to spend our lives, how to surf the tidal waves of accelerating change toward a more humanizing, individually unique, and empowering future. Much of this sounds quite fantastical, so let me remind you that these are speculative hypotheses. They will stand or fall based on much more careful scientific investigation in coming years. Attracting that investigation is one of the goals of our organization.

D. If, as Ray Kurzweil has suggested, intelligence is developing on its own trajectory, first in a biological substrate and now in computers, is there an inevitability to the singularity that makes speculating about it superfluous? Is there really anything we can do about it one way or the other?

Certainly you can't uninvent math, or electricity, or computers, or the internet, or RFID, once they arrive on the scene. Anyone who looks closely notices a surprising developmental stability and irreversibility to the acceleration. But we must remember that developmental events are only "statistically deterministic." They often occur with high probability, but only when the environment is appropriate. Developmental failure, delay, and less commonly, acceleration can also occur.
This is a controversial topic, so I will mention it only briefly, but suffice it to say that after extensive research I have concluded that no biological or nuclear destructive technologies that we can presently access, either as individuals or as nations, could ever scale up to "species killer" levels. All of them are sharply limited in their destructive effect, either by our far more complex, varied, and overpowering immune systems, in the biological case, or by intrinsic physical limits—combinatorial explosion of complexity in designing multistage fission-fusion devices—in the nuclear weapons case. These destructive limits may exist for reasons of deep universal design. A universe that allowed impulsive hominids like us an intelligence-killing destructive power wouldn't propagate very far along the timeline. Speaking
pessimistically, I'm sure we could do quite a bit to delay the transition,
by fostering a series of poorly immunized catastrophes. If events take
an unfortunate and unforesighted turn, our planet might suffer the death
of a few million human beings at the hands of poorly secured and monitored
destructive technologies, perhaps even tens of millions, in the worst
of the credible terrorist scenarios. But I am of the strong opinion that
we will never again see the 170 million deaths, due to warfare and political
repression, that occurred during the 20th century (top killers:
50 million associated with WW II, 40 million with Chinese Communist repression,
30 million with Soviet Communist repression, and 15 million with WW I).
As I've argued before, I think history will show that our Technological Independence Day, as a species, was July 1, 1948, the debut of the transistor at Bell Labs. Many science historians consider this the single greatest signifier of the advent of the information age. From that day forward the world became increasingly transparent, and we became increasingly sublimated and constrained by the accelerating digital jungle now rising all around us. Today, we live in an era of instant global news, and of violence that is becoming surgically minimized by an emerging global consensus. Even with our primitive, clunky, first generation internet and planetary communications grid, I believe our planet's technological immune systems have become far too strong and pluralistic, or network-like, for the scale of political atrocities of the twentieth century to ever recur.

Yet conflict and exploitation will continue, and we could certainly choose a dirty, self-centered, nonsustainable, environmentally-unsound approach to the singularity. Catastrophes can and will continue to transpire. But I hope for all our sakes that they are progressively minimized by vigilance and foresight, and that we learn from them as rapidly and thoroughly as possible. I also applaud the efforts we are making to create a more ecologically sustainable, carefully regulated world of science and technology. Wherever we can inject values, sensitivity, accountability into our sociotechnological systems, I think that is a wonderful thing. I'd love to see the U.S. take a greener path to technology development, the way several countries in Europe have. I'm also pragmatic in realizing that most social changes we make will be more for our own peace of mind, and would have little effect on the intrinsic speed of our global sci-tech advances, on the rate of the increasingly human-independent learning going on in the ICT architectures all around us. I consider such moves to be more reflections on how we walk the path, choices that will in most cases do very little to delay the transition.

I also do not think it is valuable to hold the perspective that we should get to the singularity as fast as we can, if that path would be anything other than a fully democratic course. There are many fates worse than death, as all those who have freely chosen to die for a cause have realized over the centuries. There are many examples of acceleration that come at unacceptable cost, as we have seen in the worst political excesses of the twentieth century. No one of us has a privileged value set. So perhaps most importantly, we need to remember that the evolutionary path is what we control, not the developmental destination. That's the essence of our daily moral choice, our personal and collective freedom. We could chart a very nasty, dirty, violent, and exploitative path to the singularity. Or with good foresight, accountability, and self-restraint, we could take a much more humanizing course. I am a cautious optimist in that regard.

E. Christine Peterson recently told me that artificial intelligence represents the one future development about which she has the most apprehension. It can come the closest of any scenario to Bill Joy's "the future that doesn't need us." If the coming of the technological singularity means the ascendancy of machine intelligence and the end of the human era, shouldn't we all be doing what we can to prevent it from happening?

Ah yes, the Evil Killer Robots scenario.
Some of my very clever transhumanist colleagues worry quite a bit about "Friendly AI." I'm glad to have friends that are carefully exploring this issue, but from my perspective their worries seem both premature and cautiously overstated. I strongly suspect that A.I.s, by virtue of having far greater learning ability than us, will be, must be, far more ethical than us.
This optimism
isn't enough, of course. We humans had to go through a nasty, violent,
and selfish phase before we became today's semi-civilized simians. How
do we know computers won't have to do the same thing? I think the answer
to this question is that at one level, Peterson's intuitions are probably right.
But with a learning curve that is multi-millionfold faster than ours, I expect that "insect transition" to last weeks or months, not years, for any self-improving electronic evolutionary developmental system. You can be sure these systems will be well watched over by a bevy of A.I. developers, and that those few catastrophes that do occur will be carefully addressed by our cultural and technological immune systems. It's easy to underestimate the extent and effectiveness of immune systems; they aren't obvious or all that sexy, but they underlie every intelligent system you can name. Computer scientist Diana Gordon-Spears and others have already organized conferences on "Safe Learning Agents," for example, and we have only just begun to build world-modeling robotics. We're still several decades away from anything self-organizing at the hardware level, anything that could be "intentionally" dangerous.
In short, I expect human society will coexist with many decades of very partially aware A.I.s, beginning some time between 2020 and 2060, which will give us ample time to select for stable, friendly, and very intimately integrated intelligent partners, for each of us. Hans Moravec (Robot, 1999) has done some of the best writing in this area, but even he sometimes underestimates the importance of the personalization that will be involved. As a species, humanity would not let the singularity occur as rapidly as it will without personally witnessing the accelerating usefulness of A.I. interacting with us in all aspects of our lives, modeling us through our CUI systems, lifecams, and other aspects of the emerging electronic ecology.

By contrast, every scenario of "fast takeoff" A.I. emergence that I've ever seen, the heroic individual toiling away in the lab at night to create HAL-9000, just doesn't seem to appreciate the immense cycles of replication, variation, interaction, selection, and convergence in evolutionary development that are always required to create intelligence in both a bottom-up and top-down fashion. Since the 1950s, almost all the really complex technologies we've created have required teams, and there is presently nothing in technology that is even remotely as complex as a mammalian brain. As I mention on my website, I think we are going to have to see massively parallel hardware systems, directed by some type of DNA-equivalent parametric hardware description language, unfolding very large, hardware-encoded neural nets and testing them against digital and real environments in very rapid evolutionary developmental cycles, before we can tune up a semi-intelligent A.I. The transition will likely require many teams of individuals and institutions, integrating bottom-up and top-down approaches, and be primarily a hardware story, and only secondarily a software story, for a number of reasons.
Note the orders-of-magnitude difference in progress between the two domains. Hardware has always outstripped software because, as I've said earlier, it seems to be following a developmental curve that is more human-discovered than human-created. It is easier to discover latent efficiencies in hardware versus software "phase space," because the search space is much more directed by the physics of the microcosm. Teuvo Kohonen, one of the pioneers of neural networks, tells me that he doesn't expect the neural network field to come into maturity until most of our nets are implemented in hardware, not software, a condition we are still at least a decade or two away from attaining.

The central problem is an economic one. No computer manufacturer can begin to explore how to create biologically-inspired, massively parallel hardware architectures until our chips stop their magic annual shrinking game and have become maximally-miniaturized (within the dominant manufacturing paradigm) commodities. That isn't expected for at least another 15 years, so we've got a lot of time yet to think about how we want to build these things.

If I'm right, the first versions of really interesting A.I.s will likely emerge on redundant, fault tolerant evolvable hardware "Big Iron" machines that take us back to the 1950s in their form factor. Expect some of these computers to be the size of buildings, tended by vast teams of digital gardeners. Dumbed-down versions of the successful hardware nets will be grafted into our commercial appliances and tools, mini-nets built on a partially reconfigurable architecture, systems that will regularly upgrade themselves over the Net. But even in the multi-millionfold faster electronic environment, a bottom-up process of evolutionary development must still require decades, not days, to grow high-end A.I. And primarily top-down A.I. designs are just flat wrong, ignorant of how complexity has always emerged in physical systems. Even all of human science, which some consider the quintessential example of a rationally-guided architecture, has been far more an inductive, serendipitous affair than a top-down, deductive one, as James Burke (Connections, 1995) delights in reminding us.
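As a toy software illustration of that evolutionary developmental loop (my sketch only; real systems of the kind described above would be hardware-encoded and vastly larger), one can treat a handful of weights as a "genome," unfold it into a tiny neural net, test it against a toy environment, and keep whichever variant performs better:

```python
import math, random

# Toy evolutionary development of a 2-2-1 neural net on XOR.
# The "genome" is 9 weights; "development" unfolds it into a network;
# "selection" keeps the fitter variant each cycle. Illustrative only.

CASES = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def unfold(genome, x):
    s = lambda v: 1 / (1 + math.exp(-v))  # sigmoid activation
    h1 = s(genome[0] * x[0] + genome[1] * x[1] + genome[2])
    h2 = s(genome[3] * x[0] + genome[4] * x[1] + genome[5])
    return s(genome[6] * h1 + genome[7] * h2 + genome[8])

def error(genome):
    return sum((unfold(genome, x) - y) ** 2 for x, y in CASES)

genome = [random.uniform(-1, 1) for _ in range(9)]
for cycle in range(20000):
    mutant = [w + random.gauss(0, 0.3) for w in genome]  # variation
    if error(mutant) < error(genome):                    # selection
        genome = mutant

print(f"final error: {error(genome):.4f}")
# Usually falls well below the 1.0 error of constant 0.5 guessing.
```

Replication, variation, and selection do all the work here; nothing in the loop specifies how the net should solve the task, and the cycle count hints at why such searches want massively parallel hardware.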
In a related
point, I also wouldn't worry too much about the loss of our humanity to
the machines. Evolution has shown that good ideas always get rediscovered.
The eye, for example, was discovered at least thirty times by some otherwise
very divergent genetic pathways. This leads us to a somewhat startling realization. Even if, in the most abominably unlikely of scenarios, all of humanity were snuffed out by a rogue A.I., from a developmentalist perspective it seems overwhelmingly likely that good A.I.s would soon emerge to recreate us. Probably not in the "Christian rapture" scenario envisioned by transhumanist Frank Tipler in The Physics of Immortality, 1997, but certainly our informational essence, all that we commonly hold dear about ourselves. How can we even suspect this? Humanity today is doing everything it can to unearth all that came before us. It seems to be in the nature of all intelligence to want to deeply know its lineage, not just from our perspective, but from the perspective of the prior systems. If the world is based on physical causes, then in order to know that one truly understands the world, one must be able to understand, at the deepest level, the systems in which one is embedded, including the simpler systems from which one has emerged, in a continuum of developmental change. The past is always far more computationally tractable than what lies ahead. That curiosity is a beautiful thing, as it holds us all tightly interdependent, one common weave of the spacetime fabric, so to speak. That's why we are already spending tens of millions of dollars a year trying to model the way bacteria work, trying to predict, eventually in real time, everything they do before they even do it, so that we know we truly understand them. That's why emergent A.I. will do the same thing to us, permeating our bodies and brains with its nanosensor grids, to be sure it fully understands its heritage. Only then will we be ready to make the final transition from the flesh.

F. Also on your website, I read that the singularity will occur within the next 40 to 120 years. Isn't that kind of a broad range? What's your best guess on when it will occur?

I find that those making singularity predictions can be usefully divided into three camps: those predicting near term (now to 2029), mid-term (2030-2080), and longer term (2081-2150+) emergence of a generalized greater-than-human intelligence. Each group has somewhat different demographics, which may be interesting from an anthropological perspective. I think the range is so broad because the future is inherently unpredictable and under our influence. It is also true that none of us has yet developed a popular set of quantitative methodologies for thinking rigorously about these things. Very little money or attention has been given to them. If you'd like to send a donation to our organization to help in that regard, let us know.

From my website: "Most estimates in the singularity discussion community, intuitive as they all are at this early stage, project a generalized human-surpassing machine intelligence emerging circa 2040, give or take approximately 20 years. This puts many singularitarians on the 2020 end, and some of the older, more conservative prognosticators (like Marvin Minsky and myself) on the 2060 end. When I started considering the time of arrival back in 2001, I considered 2020-2060 a broadly reasonable range. But in subsequent inquiry, I have gained a better understanding that a mature, distributed, planetwide network of semiautonomous and largely bottom-up hardware and software development processes is likely to be required.
The singularity is not likely to be precipitated by today's evolutionary computing but by tomorrow's massively parallel, cyclical, evolutionary developmental processes, which are going to require a paradigm shift in computing to attain. Metal oxide semiconductor dedicated ASIC systems are likely to be our dominant computing paradigm until at least 2020. A Symbiotic Era of complex, CUI-equipped interfaces, built on top of an increasingly parallel architecture, is going to take a lot of money, practice, and time to develop high-level intelligence and autonomy. I'm presently assuming 30 years, from 2020 to 2050. That would move the Autonomy Era to 2050, and my estimated time of generalized human-surpassing A.I. to 2060, the upper end of this range. Nevertheless, my confidence interval remains wide (20 years per standard deviation) as I realize these are average expectations of progress in a fault-prone world, and I believe the arrival likely depends, within a human generation or two either way, on the choices we make. To significantly accelerate its arrival, most important may be our political, economic, social, and personal choices in regard to science and technology education, innovation, research, and development. To significantly delay its arrival, we have many more possibilities, none of which I need reiterate here."
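For what it's worth, if you read that estimate as a normal distribution (my simplifying assumption here, not a claim from the site) with a mean arrival of 2060 and a standard deviation of 20 years, the implied odds are easy to make explicit:

```python
from statistics import NormalDist

# Reading the stated estimate as Gaussian: mean 2060, sigma 20 years.
arrival = NormalDist(mu=2060, sigma=20)

print(f"P(before 2040):  {arrival.cdf(2040):.0%}")                      # ~16%
print(f"P(2040 to 2080): {arrival.cdf(2080) - arrival.cdf(2040):.0%}")  # ~68%
print(f"P(after 2100):   {1 - arrival.cdf(2100):.0%}")                  # ~2%
```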
G. Do you take the position that we can make no meaningful statements about what may happen after the singularity occurs? Or, if we can at least speculate about it, what is your best guess as to what life will be like in a post-singularity world?

As I've described above, I think that there are a number of simple, global statements we can make about the developmental course that the universe must take after the singularity emerges. It seems a very good bet, for example, that tomorrow's technological intelligences will be fully constrained by the laws of physics in this universe, both the majority that I feel are known and the much smaller set that remains undiscovered. That constraint already tells us volumes about what they'll be doing in their exploration of our increasingly informationally and energetically barren universe. I think
Steven Weinberg (Dreams of a Final
Theory, 1993) is right, that we are within just a few decades
(or perhaps generations) of understanding all the functional elements
at the bottom end of this finite universe. As I've mentioned before, I think they'll be constrained to be ethical, to be information seekers, and to rapidly enter a black hole transition (the developmental singularity hypothesis). But this tells us little about the evolutionary uniqueness of their path, other than that it will have intricacies within it that we cannot comprehend. We'll also have plenty of decades to see whether persuasive computing, personality capture, and the humanizing A.I. scenario emerge, as described earlier, long before the singularity occurs. If machine intelligence does develop along the lines predicted, I think it's pretty clear that when the A.I.s arrive, they will be we, just natural extensions of ourselves. In that world, as Hans Moravec was perhaps the first to remind us (Mind Children, 1988), it seems very likely that all local intelligence will jump to a postbiological domain. Soon after that, I suspect, we may transition to a postuniversal domain. That seems a very natural transition, to me.

H. You've placed a good deal of emphasis on academia, specifically on degree programs related to the study of accelerating change. Why is this so important?

To develop any kind of foresight, we need to study. If the biological sciences have taught us anything in the last century, it's that the difference between evolution and development in living systems is one of the last great mysteries. With careful effort, we will tease out that special, simple, developmental component, and understand how development uses evolution in all complex systems.
Academia is only one of the players that can help chart a safe transition to general artificial intelligence, but it has a unique ability to provide big picture perspectives. In partnership with government, business, and dedicated individuals it is one of the important pieces of the puzzle.

I. When I heard you speak recently, I was surprised by what you had to say on the question of whether we're alone in the universe. In the end, do you think that our universe will be occupied by any intelligence other than human intelligence or its descendants?

As I've mentioned earlier in this interview, I think all universal intelligence follows a path of transcension, not expansion. This has to do with such issues as the nature of communication in complexity construction (two-way, with feedback, is relentlessly preferred), the large-scale structure of the universe (which puts huge space buffers between intelligences) and the small-scale structure of the universe (which rewards rapid compression of the matter, energy, space, and time necessary to do any computation).
Consider unintentional radio emissions. Our own planet began sending such signals out to space with the emergence of powerful radio technology in the 1920's. Like the wheel, electricity, and a broad range of other technological inventions, manipulation of the electromagnetic spectrum to produce radio information seems very unlikely to be an evolutionary experiment unique to Earth, but rather an inevitable developmental discovery. I would expect it to be a technology common to all self-aware chemical assemblages that begin to manipulate broader segments of matter, energy, space, and time. If we guesstimate that our own civilization enters a developmental singularity circa 2150, after which transmissions cease, this would allow us an average of 200 years of transmission time, out of a stellar lifetime of roughly 10 billion years.

How many stars in our Milky Way galaxy alone might be incubating intelligent planets? Charles Lineweaver, Yeshe Fenner and Brad Gibson have recently estimated (Science, to be published Jan 2, 2004) that 10% of the galaxy's stars are in a Galactic Habitable Zone (GHZ) ringing the Milky Way. That is, about 10% of our galaxy's stars have the appropriate metallicity, lifespan, circular synchronization with galactic rotation, and adequate protection from supernovas to sprout life. Planet hunters Geoff Marcy and Paul Butler have also used the 10% estimate. While some astronomers have proposed higher figures (Seth Shostak, Cosmic Company, 2003), the current conservative estimate is that there are between 20 and 30 billion habitable planetary systems in the Milky Way. Lineweaver et al.'s estimates describe the GHZ as a thin band ringing our galaxy between 23,000 and 30,000 light years from the galactic center. We are 28,000 light years out, near the outside of the habitable band. In the same way, a previously proposed stellar habitable zone (SHZ, see David Darling's Life Everywhere, 2002) forms another narrow band, between 0.95 and 1.3 AU (the AU being the Earth-Sun mean distance) from our sun, a zone too narrow to harbor more than one intelligent planet per solar system, statistically speaking. The new study estimates that GHZ stars formed between 8 and 4 billion years ago, and that 75% of the stars in this band are older than our sun. This would make the earliest GHZ stars in our galaxy about 3 billion years older than ours.

This would provide plenty of time for civilizations to develop and colonize the galaxy, if physics would allow. But if civilizations transcend, rather than colonize, and if we are assuming 20-30 billion Earth-like planets, 75% in solar systems older than ours, closer to the galactic core, this argues that (200/10 billion) * 30 billion * 3/4 = 450 radio fossils are patiently waiting to be discovered in the night sky in our galaxy, and perhaps many more in other galaxies, when we can gain the technology to detect them. I've described this falsifiable prediction further in a 2002 Journal of Evolution and Technology article on the Fermi Paradox, so I refer you to that if you'd like to further explore these interesting ideas. Fortunately, according to a conversation with Frank Drake, our antennas are arguably sensitive enough now to detect unintentional EM emissions from the closest of our neighboring stars. When these sensitivities scale up to allow us to scan millions of local stars for unintentional radio emissions, I think we'll begin to discover these unmistakable signatures of nonrandom intelligence.
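The back-of-the-envelope arithmetic above is a Drake-style product, and it is worth spelling out, using only the figures quoted in the text:

```python
# Radio-fossil estimate, using only the figures quoted above.
transmission_years = 200      # average window of detectable leakage
stellar_lifetime = 10e9       # years
habitable_systems = 30e9      # upper estimate for the Milky Way
older_fraction = 0.75         # systems older than our sun

fraction_transmitting = transmission_years / stellar_lifetime
fossils = fraction_transmitting * habitable_systems * older_fraction
print(f"expected radio fossils in the galaxy: {fossils:.0f}")  # 450

# With a 200-year window, roughly 1/200th fall silent in any given year:
print(f"falling silent per year: {fossils / transmission_years:.2f}")  # 2.25
```

The last line anticipates the yearly "extinction" rate of these fossils discussed next.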
We should also notice that every year, a small fraction (roughly 1/200th) of these radio fossils will suddenly stop sending signals. Like us in coming years, these will be civilizations whose science invariably discovers that the developmental future of universal intelligence is not outer space, but inner space. That's our destiny as a species.

Does this all seem too far-fetched? Before rejecting the developmental perspective for more comfortable orthodoxies, let me leave you to contemplate one final curiosity, what I call the Midpoint Principle in universal evolutionary development. This is the observation that the path to emergence of Earth-like intelligence seems to be structured to occur at or near the midpoint, the half-way zone between a number of key developmental boundaries or symmetry breaks in universal systems.

Consider that current estimates suggest that about half of the stars in a galaxy will form binary star systems without planets, while the other half will form planetary systems. Notice that our galactic habitable zone is about halfway out along the galactic radius, in an optimal location of appropriate metallicity, between too much radiation near the core and a barren region farther out. Notice that our sun is in the "dark half" of the galactic disk, between two great spiral arms of the galaxy, Sagittarius and Perseus. Notice that our G-type sun (a yellow dwarf) is located right in the middle of the luminosity distribution in the Hertzsprung-Russell diagram for main sequence stars. Notice that life intelligent enough to model the universe, us, has emerged five billion years after our Sun's birth. Where is this in our Sun's lifespan? Almost exactly half-way, as if the system was developmentally encoded for our emergence to occur at the midpoint of the stellar lifecycle. Consider Martin Rees's observation that a 58 kg human being is midway between the weight of an atom and that of a star. Notice that the human scale is midway between that of a 10^-20 cm atom and a 10^25 cm universe.

I've put together a growing list of these curiosities, and collectively, they argue that human emergence in form, space, and time is not random, but subject to a set of guiding principles from a systems perspective. I suggest that in the same way that Darwin's finches were evidence of incremental evolution, midpoint observations are among many circumstantial evidences of developmental design. The hypothesis of the Midpoint Principle, if true, invalidates what is known in cosmology as the Generalized Copernican Principle, the idea that the Earth and our sun do not occupy a privileged position in the cosmos. But I predict that the better we come to understand the mysteries of galactic formation and structure in coming decades, and the more we unravel the mysteries of biological development, the closer we will come to realizing that de Chardin's idea of cosmic embryogenesis was exactly right. We appear to be here due to an encoded developmental physics. Our accelerating local understanding of our place in the universe is not a matter of chance, but of deep, natural, self-organized evolutionary developmental design.
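In these comparisons, "midway" means the geometric mean, the midpoint on a logarithmic scale, and the arithmetic is easy to check. The atomic and solar masses below are standard textbook values of my choosing; the two length scales are simply the figures quoted above, taken at face value:

```python
from math import sqrt

# Rees's mass midpoint: a human sits near the geometric mean (the
# midpoint on a log scale) of an atom's mass and a star's mass.
m_hydrogen = 1.67e-27  # kg, hydrogen atom
m_sun = 1.99e30        # kg, our sun
print(f"geometric mean mass:  {sqrt(m_hydrogen * m_sun):.1f} kg")  # ~57.7 kg

# The length midpoint, using the scales quoted in the text:
small, large = 1e-20, 1e25  # cm
print(f"geometric mean scale: {sqrt(small * large):.0f} cm")       # ~316 cm
```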
Thanks to Phil Bowermaster, Lynn Elen Burton, Jose Cordiero, Ryan Elisei, Michael Hartl, Neil Jacobstein, John Peterson, Chris Phoenix, Wayne Radinsky, and Wendy Schultz for valuable comments and ideas. Feedback: feedback{at}accelerating{dot}org. [Revised 1.2004, 3.2005. Major revisions noted.]