Singularity Watch
Background Readings on the Developmental Singularity Hypothesis (DSH)

A Speculative Evolutionary Developmental Model for Our Universe's History of Hierarchical Emergence Under Conditions of Continuously Accelerating Change


DSH Studies: Resources Overview

1. Pervasive Trends in Accelerating Change
2. Technological-Evolutionary and Human-Evolutionary Paths to AI
3. Meso/Nano/Femtotech: Accelerating and Asymptotic Trends in Computation and Physics (Including STEM Compression)
4. Black Holes and Smolin's Cyclic Recursion
5. Simulation and Computational Closure: Are We Headed for Inner or Outer Space?
6. Emergent AI: Stable, Moral, and Interdependent vs. Unpredictable, Post-Moral, or Isolationist?
7. Responsible Advocacy and Dangers of the Transition


Developmental Singularity Hypothesis — Resources Overview

This page contains resources helpful to the study of the Developmental Singularity Hypothesis, one possible "big picture" of locally accelerating change.

For a more recent set of readings relevant to the DSH, please see the following article:
Evo Devo Universe? A Framework for Speculations on Cosmic Culture (PDF), 2008.

The following seven topics in accelerating change seem particularly relevant to interpreting the developmental singularity hypothesis, and may be explored more thoroughly here. Selected insightful books (enjoy the Amazon reviews), articles, and web resources are listed in roughly reverse chronological or alphabetical order.



1. Pervasive Trends in Accelerating Change

Should we expect a local technological singularity (self-aware A.I.) circa 2060? Can this apparently inevitable event be best understood as the latest phase in a universal trend of accelerating change through a succession of emergent computational substrates?

The Spike, Damien Broderick, 2001
Faster: The Acceleration of Just About Everything, James Gleick, 2000
The Evolution Explosion: How Humans Cause Rapid Evolutionary Change, Stephen Palumbi, 2001
Age of Spiritual Machines: When Computers Exceed Human Intelligence, Ray Kurzweil, 1999
The End of The World: A Handbook for the Practical Idealist, Hugh Jeffries and Leslie Fieger, 1999 (New-agey and dramatic, but a useful contemplation of the continual acceleration.)
The Evolutionary Trajectory, Richard Coren, 1998 Review (w/ reg.)
Children of Prometheus: The Accelerating Pace of Human Evolution, Christopher Wills, 1998
Waking Up in Time: Finding Inner Peace in Times of Accelerating Change, Peter Russell, 1998
Blur: The Speed of Change in the Connected Economy, Christopher Meyer and Stan Davis, 1998 (Fluffy and jargonized business writing, but another valuable recognition of the acceleration.)
Is Progress Speeding Up?, John Templeton, 1997

Considering the Singularity, John Smart, 2002
The Singularity is Near, Ray Kurzweil, 2001
Tearing Toward the Spike, Damien Broderick, 2000
The Coming Technological Singularity, Vernor Vinge, 1993

Sites: Edge, Extropy, KurzweilAI, Acceleration Watch, SIAI, WTA


2. Technological-Evolutionary and Human-Evolutionary Paths to AI

Has the field of evolutionary computation already demonstrated the capability to increase adaptive hardware and software complexity independent of human aid? Are all evolutionary developmental goal-control systems (whether in molecular, genetic, neural, memetic, or technologic substrates) self-organizing, context dependent, and only partially amenable to conscious rational human analysis? Are human logic and rational A.I. strategies themselves therefore also evolutionary (e.g., emergent substrates for universal evolutionary development)? Is every A.I. approach simply a different type of evolutionary and developmental search in phase space for ever more effective algorithms? In other words, is our own semi-rational, serendipitous search for better A.I. designs best seen as a set of tools used by the universe, through the human substrate, to semi-randomly explore the human-evolutionary computational phase space? If true, what does this imply about the nature and trajectory of emergent A.I.?
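The "evolutionary and developmental search in phase space" described above has a concrete minimal form in the field's workhorse technique. A toy genetic algorithm sketch is below; the OneMax task and all parameter choices (population size, mutation rate, tournament size) are illustrative assumptions, not drawn from any of the works listed here:

```python
import random

def onemax(bits):
    """Fitness: the count of 1-bits; selection should drive this toward the maximum."""
    return sum(bits)

def evolve(length=20, pop_size=30, generations=60, mut_rate=0.05, seed=1):
    """Evolve bit-strings by tournament selection, one-point crossover, and mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]

    def select():
        # Tournament selection: the fitter of two random individuals survives.
        a, b = rng.sample(pop, 2)
        return a if onemax(a) >= onemax(b) else b

    for _ in range(generations):
        next_pop = []
        while len(next_pop) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, length)            # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ 1 if rng.random() < mut_rate else b
                     for b in child]                  # per-bit mutation
            next_pop.append(child)
        pop = next_pop
    return max(onemax(ind) for ind in pop)
```

No explicit design of a solution appears anywhere in the loop; better bit-strings emerge from semi-random variation plus selection, which is the sense in which A.I. search itself can be read as evolutionary.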

Flesh and Machines: How Robots Will Change Us, Rodney Brooks, 2002
Self-Organization in Biological Systems, Scott Camazine (Ed.), 2001
Self-Organizing Maps (in Neural Networks), Teuvo Kohonen, 2001
Swarm Intelligence, Kennedy, Eberhart, Shi, 2001
The Algebraic Mind: Integrating Connectionism and Cognitive Science, Gary Marcus, 2001
The Advent of the Algorithm: The Idea that Rules the World, David Berlinski, 2000
Neural Networks and Intellect: Using Model-Based Concepts, Leonid Perlovsky, 2000
Evolutionary Robotics: The Biology, Intelligence, and Technology of Self-Organizing Machines, Stefano Nolfi and Dario Floreano, 2000
Conceptual Spaces: The Geometry of Thought, Peter Gardenfors, 2000
Computational Explorations in Cognitive Neuroscience, Randall O’Reilly, 2000
Field-Programmable Logic & Apps: A Roadmap to Reconfigurable Computing,  R. Hartenstein, 2000
Neural and Adaptive Systems: Fundamentals through Simulations, Jose Principe, 1999
Cambrian Intelligence: The Early History of the New AI, Rodney Brooks, 1999
Understanding Intelligence, Rolf Pfeifer and Christian Scheier, 1999
A good look at designing autonomous agents and embodied cognitive science. The authors discuss the "chaotic nature" of complex emergence, and suggest that the bottom-up approach, working with neural nets interfaced to embodied robotics, is the best strategy: "To reinforce the connections between a particular sensory-motor coordinate and the networks for different processes, the readings from the IR sensors and the wheel encoders are projected onto a Kohonen map with leaky integrators for all the sensor variables." Sounds like chaotic emergence to me. It would be nice to have better standards for and measures of growth in adaptation, but in their absence it is good to know that these evolutionary approaches are yielding real fruit, though the size of the berries (incrementally better robotic soccer, at present) is generally small. What critics of this approach sometimes forget is that even if this is the way we will be constrained to get emergent A.I., this "hacker's approach" is producing regular emergences millions, if not tens of millions, of times faster than they originally occurred in the biological substrate. I'd bet on this "hacker's evolution" over either the glacial pace of biological evolution or the limitations of top-down design.
Spikes, Decisions, and Actions: The Dynamical Foundations of Neurosciences, Hugh Wilson, 1999
Dynamics, Synergetics, Autonomous Agents: Nonlinear Systems Approaches to Cognitive Science, Tschacher and Dauwalder (Eds.), 1999
An Anatomy of Thought: The Origin and Machinery of the Mind, Ian Glynn, 1999
The Essence of Artificial Intelligence, Alison Cawsey, 1998
Darwin Among the Machines: The Evolution of Global Intelligence, George Dyson, 1998
Enchanted Looms : Conscious Networks in Brains and Computers, Rodney Cotterill, 1998
Talking Nets: An Oral History of Neural Networks, James Anderson, 1998
Dynamic Patterns: The Self-Organization of Brain and Behavior, Scott Kelso, 1998
Evolutionary Computation: The Fossil Record, David Fogel, 1998
Robot Shaping: An Experiment in Autonomous Engineering, Marco Dorigo, 1997
Exercises in Rethinking Innateness: A Handbook for Connectionist Sims, K. Plunkett, 1997
An Introduction to Genetic Algorithms, Melanie Mitchell, 1996
Rethinking Innateness: A Connectionist Perspective on Development, Jeffrey Elman et al., 1996
Artificial Minds: An Exploration of the Mechanisms of Mind, Stan Franklin, 1995
Turtles, Termites, & Traffic Jams : Explor. in Massively Parallel Microworlds, Resnick, 1995
Hidden Order: How Adaptation Builds Complexity, John Holland, 1995 Review
Artificial Life: An Overview, Christopher Langton, 1995
Genetic Programming II: Automatic Discovery of Reusable Programs, John Koza, 1994
Analogy-Making as Perception: A Connectionist Model, Melanie Mitchell, 1993
Genetic Programming: On the Programming of Computers by Natural Selection, John Koza, 1992
Emergent Computation: Self-Organization, Collective, & Cooperative Phenomena in Natural and Artificial Computing Networks, Stephanie Forrest, 1991
The Age of Intelligent Machines, Ray Kurzweil, 1990
Apprentices of Wonder: Inside the Neural Network Revolution, William Allman, 1989
Vehicles: Experiments in Synthetic Psychology, Valentino Braitenberg, 1984
Adaptation in Natural and Artificial Systems, John Holland, 1975/92
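The Kohonen self-organizing map mentioned in the Pfeifer and Scheier commentary above is itself a compact example of bottom-up emergence. A minimal 1-D SOM over scalar inputs is sketched below; the unit count, learning rate, and neighborhood width are illustrative assumptions, not values from any cited work:

```python
import math
import random

def train_som(data, n_units=10, epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 1-D Kohonen self-organizing map on scalar inputs in [0, 1]."""
    rng = random.Random(seed)
    weights = [rng.random() for _ in range(n_units)]
    for epoch in range(epochs):
        # Learning rate and neighborhood width decay over training.
        frac = 1 - epoch / epochs
        lr = lr0 * frac
        sigma = max(sigma0 * frac, 0.5)
        for x in data:
            # Best-matching unit: the unit whose weight is closest to the input.
            bmu = min(range(n_units), key=lambda i: abs(weights[i] - x))
            for i in range(n_units):
                # Gaussian neighborhood: units near the BMU move the most.
                h = math.exp(-((i - bmu) ** 2) / (2 * sigma ** 2))
                weights[i] += lr * h * (x - weights[i])
    return weights

# Map 100 evenly spaced inputs onto 10 units; the trained weights
# spread out to cover the input range without any top-down design.
codebook = train_som([i / 99 for i in range(100)])
```

No supervisor tells any unit where to sit; the global ordering self-organizes from purely local competitive updates, which is why such maps recur throughout the embodied-robotics literature above.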

What is Artificial Intelligence?, John McCarthy, 2000
Ripples and Puddles, Hans Moravec, 2000
Thoughts about Artificial Intelligence, Marvin Minsky, from The Age of Intelligent Machines, Ray Kurzweil, 1990

Sites: About/AI, AAAI, ACM, CNS/CalTech, Ev.Hardware, GeneticProg, INC/UCSD, ISGEC CCSBS/FAU


3. Meso / Nano / Femtotech: Accelerating and Asymptotic Trends in Computation and Physics (Including STEM Compression)

Does the universe facilitate both ever faster and more spatially compressed computational substrates? Do computationally denser substrates always figure out clever ways to use less space, time, energy, and matter (STEM compression) to encode their learned environmental information, and thus continually avoid limits to hyperexponential growth? Do several of the special laws of the universe (such as c, the information speed limit) require STEM compression as the only viable pathway for creating a continually accelerating local complexity? Is the apparent tuning of the newly discovered dark energy (cosmological constant) evidence that the universe is now entering a seed recreation/developmental singularity production stage (i.e., a reproductive maturity), to be followed by a universal decomposition stage, involving an accelerating decrease in computational and physical density, while all the remaining computationally complex systems transcend via a developmental singularity (ubiquitous black hole involution) into the multiverse?

Cosmic Evolution: The Rise of Complexity in Nature, Eric Chaisson, 2001. Chaisson reintroduces and significantly updates the idea of free energy rate density (Phi, the flow of free energy per unit mass) as a useful index for complexity in this important work. Note his estimates for the following important semi-discrete substrates (units are erg/s/g):

    Galaxies (Milky Way), 0.5;
    Stars (Sun), 2;
    Planets (Cooling Earth, Climasphere), 75;
    Ecosystems (Biosphere), 900;
    Animals (Human body), 2x10^4;
    Brains (Human cranium), 1.5x10^5;
    Society (Modern culture), 5x10^5.

What is most interesting in this analysis is that our technologies, when expressed in this index, have complexities exceeding biological and cultural substrates.

    Modern engines range from 10^5 to 10^8.

But most tellingly, modern computer chips exceed all these measures by orders of magnitude, due to their extreme miniaturization (STEM compression).

    The Intel 8080 of the 1970's comes in at 10^10;
    The Pentium II of the 1990's at 10^11.

That makes both of these very local, very special computational domains already much more impressively "complex" (or, in alternative language, more dynamically "self-organizing" per unit time) if not yet more sentient—or more structurally complex, which is only distantly related to dynamic complexity—than the individual and social organisms they are coevolving with. If you are searching for a universal perspective, and a coarse quantitative proof, that silicon systems (more generally, the "electronic systems" substrate) are the current leading contender for the next autonomous substrate, Chaisson's analyses are well worth investigating.
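Chaisson's index is easy to compute for yourself: Phi is simply energy flow divided by mass. A quick sanity check is sketched below; the luminosity, metabolic, and mass figures are rough textbook values I have supplied for illustration, not Chaisson's exact inputs:

```python
def phi(energy_flow_erg_per_s, mass_g):
    """Chaisson's free energy rate density: energy flow per unit mass (erg/s/g)."""
    return energy_flow_erg_per_s / mass_g

# The Sun: luminosity ~3.8e33 erg/s flowing through ~2e33 g of mass
# gives Phi ~ 2, matching Chaisson's figure for stars.
phi_sun = phi(3.8e33, 2e33)

# A human body: ~100 W (1e9 erg/s) of metabolic flow through ~65 kg
# (6.5e4 g) gives Phi ~ 1.5e4, the same order as Chaisson's 2e4 for animals.
phi_human = phi(1e9, 6.5e4)
```

The striking chip figures in the list above follow the same arithmetic: a few tens of watts dissipated in a die massing well under a gram drives Phi up by many orders of magnitude, which is the STEM compression point in quantitative form.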
The Runaway Universe, Donald Goldsmith, 2000
Supersymmetry, Gordon Kane, 2000
The Elegant Universe, Brian Greene, 1999
Ultimate Zero and One: Computing at the Quantum Frontier, Colin Williams, 1999
The Cerebral Code: Thinking a Thought in the Mosaics of the Mind, William Calvin, 1996
About Time: Einstein's Unfinished Revolution, Paul Davies, 1995
A History of Mind: Evolution and the Birth of Consciousness, Nicholas Humphrey, 1992
Wrinkles in Time: The Imprint of Creation, George Smoot, Keay Davidson, 1993
Nanosystems: Molecular Machinery, Manufacturing, and Computation, Erik Drexler, 1992
The Unfinished Universe, Louise Young, 1986
Engines of Creation: The Coming Era of Nanotechnology, Erik Drexler, 1986
Philosophy of Space and Time, Hans and Maria Reichenbach, 1982
Concepts of Space: The History of Theories of Space in Physics, Max Jammer, 1954/93
Space, Time, Matter, Hermann Weyl, 1918/85
Methods of Theoretical Physics, Philip Morse and Herman Feshbach, 1953
Asymptotic Realms of Physics, Alan Guth (Ed.) et al., 1983

Ultimate Physical Limits to Computation, Seth Lloyd, Nature 2000. Lloyd observes that in a (theoretical) black hole computer, the fastest computational substrate we can envision, the time it takes to communicate information anywhere within the system is no longer than the time it takes to process (operate on) that information at any location. Communication is a fundamental problem in existing computational architectures, known to system architects as the "memory wall" or memory bandwidth problem. Lloyd suggests that this hidden equivalency, the resolution of communication issues at the black hole limit, has some as-yet-unknown universal significance. To me, that significance is what we might call the "black hole computational attractor" of the universal space-time manifold. Due to careful initial design of universal parameters (most likely, a self-organized design), this state appears to be the preferred universal developmental destiny for all complex computational systems.
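Lloyd's speed bound is concrete enough to compute: the Margolus-Levitov theorem limits a system of average energy E to at most 2E/(pi*hbar) logical operations per second. A back-of-envelope sketch for Lloyd's one-kilogram "ultimate laptop," taking E as the full mass-energy:

```python
import math

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
C = 2.99792458e8       # speed of light, m/s

def max_ops_per_sec(mass_kg):
    """Margolus-Levitov bound: at most 2E / (pi * hbar) operations per second,
    here with E taken as the full mass-energy m*c^2."""
    energy_j = mass_kg * C ** 2
    return 2 * energy_j / (math.pi * HBAR)

ops = max_ops_per_sec(1.0)   # roughly 5.4e50 ops/s for one kilogram
```

The result, about 5 x 10^50 operations per second for a single kilogram, is the figure Lloyd derives, and the gulf between it and any present machine is one way to see how much headroom the "asymptotic trends" of this section still have.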

Thermodynamic Reasons for Perception-Action Cycles, Rod Swenson and Michael Turvey, Ecological Psychology 3, 1991. This paper sets forth a speculative yet interesting interpretation of thermodynamics as a driver for accelerating and ever more non-local (universal) modelling complexity, suggesting the existence of a "Third Law" of Thermodynamics: The Law of Maximum (Local) Entropy Production. Chaisson (Cosmic Evolution, 2001) argues that complex systems create ever higher local density of free energy flux, and it seems intuitive that these mechanisms may be linked: higher local free energy flow densities drive (or result from, or are another way of understanding) the ever more rapid degradation of local energy potentials in self-organizing systems. Swenson's (still controversial) discovery that ordered states produce local entropy faster than disordered states, and are thus thermodynamically preferred in an energy gradient, seems equivalent to Chaisson's idea that free energy flow rates are a very useful index for the complexity of emergent systems. Both of these approaches are potential ways of understanding the STEM compression of locally accelerating change, though in my hypothesis the latter concept goes a bit beyond these to observe that emergent computational systems encode environmental information not only more rapidly (time), densely (space, matter-energy), and entropically, but also using less absolute space, time, energy, and matter—per any standardized encoding—than previous systems. Such mechanisms allow computational systems to continually evade conventional limits to growth, and appear to constrain our trajectory toward a local black/white hole very soon in cosmologic time.

Bush Robots, Hans Moravec, 1999. A concise introduction to the idea of miniaturization as a recursive process.


4. Black Holes and Smolin’s Cyclic Recursion

Is intelligent life in the process of creating a local black hole, which will "bounce" to create a new universe? Does universal life cycle through the multiverse from (big bang) singularity to (black hole) singularity, in the same manner that a seed creates an organism which in turn creates a new seed? Of the trillions of black holes in our universe—which may each go on to create new universes—is there a continuum of complexity in their offspring, i.e., an ecology of replicating primordial, quasar, galactic, stellar and "intelligent" black holes, each going on to create universes which engender various fixed degrees of developmental complexity, and most of which represent the "stable base" of amoeba-like universes, but also including a smaller population of intelligence-engendering universal systems (of which ours is arguably a case) at the top of the pyramid? Are such models only comfortable infopomorphisms, or are they eventually provable by simulation, and can such "cosmological selection", when generally applied, explain the widely observed evidence for anthropic design in our universe?

Biocosm , James N. Gardner, 2003
Our Cosmic Habitat, Martin Rees, 2002
Just Six Numbers, Martin Rees, 2000
Before the Beginning: Our Universe and Others, Martin Rees, 1997
The Life of the Cosmos, Lee Smolin, 1997 Booksite
In the Beginning: The Birth of the Living Universe, John Gribbin, 1993
Black Holes and Baby Universes, Stephen Hawking, 1993
Universes, John Leslie, 1989
The Symbiotic Universe, George Greenstein, 1988 Review
The Quickening Universe, Eugene Mallove, 1987

The Role of Life in the Cosmological Replication Cycle, Bela Balazs, 2001
The Natural Selection of Universes Containing Intelligent Life, Ed Harrison, QJRAS 36:193, 1995
Did the Universe Evolve?, Lee Smolin, 1992


5. Simulation and Computational Closure: Are We Headed for Inner or Outer Space?

Have we discovered most of the simplest laws of the universe, in our mental recreation of its structure? Is the universe itself a simulation of sorts, if we can model it so effectively with our own simple simulations? From our position within the universe, are we close to a gross understanding of the beginning, end, and recurrence of the universe's developmental cycle, in a manner that cannot exceed inherent universal constraints? Are we close to extending the standard model of physics all the way to the Planck scale, and developing a fundamental "theory of everything", and would this define a natural lower limit in spacetime (universal "computational closure") to the intrinsic complexity of the universe as a self-organized computational substrate? Are we simultaneously close to discovering a universal replicating cycle via black hole transcension that might define a natural upper limit in spacetime to computational cycles within this particular universe? Will our exponentiating simulation capacity allow us to rapidly discover remaining hidden universal structure, and inform us in the production of a more computationally complex universe in a subsequent cycle? Will we gain adequate computational closure on this developmental cycle simply by looking at and simulating outer space rather than by physically traveling there? Is the direction of change (time, the arrow of complexity) leading us irreversibly to inner space (black holes, new universes) to create our future, and is outer space therefore essentially an informational record of our past, less complex universal history—a computational rather than physical frontier? Does this closure and journey to inner space suggest that we are now in the end stages (i.e., a type of universal maturity/"ovulation" stage) of locally recreating a new universe seed?

Where is Everybody? Fifty Solutions to Fermi's Paradox, Stephen Webb, 2002
A New Kind of Science, Stephen Wolfram, 2002 Booksite
Three Roads to Quantum Gravity, Lee Smolin, 2001
The Bit and the Pendulum: The New Physics of Information, Tom Siegfried, 2000
The Computational Beauty of Nature, Gary Flake, 1998
Would-Be Worlds: How Simulation is Changing the Frontiers of Science, John Casti, 1997
Figments of Reality, Stewart and Cohen, 1997
The End of Science, John Horgan, 1996
Extraterrestrials: Where Are They? (Fermi Paradox), Ben Zuckerman and Michael Hart, 1995
The Collapse of Chaos, Cohen and Stewart, 1994
Cellular Automata and Complexity: Collected Papers, Stephen Wolfram, 1994
Supercomputing and the Transformation of Science, Kaufmann and Smarr, 1993
Mirror Worlds: The Day Software Puts the Universe in a Shoebox…, David Gelernter, 1991

Thermodynamics, Evolution, and Behavior, Rod Swenson, 1997. Swenson's explanation of the steady increase in the space-time dimensions of "meaningful component relations" (i.e., self-referential relations, self-"generated" simulations) within complex systems during evolutionary development is a simple and elegant way to understand how, as surviving systems increase their local complexity, their inner simulation activity creates computational closures (recursive perception-action models) with ever greater universal dimensions. Bacteria don't react to the stars, but humans do, as well as to the structure of implicit and discovered physical law. From my perspective, once the hyperexponentiating dimensions of local simulations approximately equal universal dimensions, and once such simulations have effectively mapped the universe's natural developmental computational cycle (if such in fact exists), local seed recreation (new universe production) may be expected to occur. Swenson also reprises his "Law of Maximum Entropy Production" in this useful article.

Sites: CESPA, Swenson Pubs Page


6. Emergent AI: Stable, Moral, and Interdependent vs. Unpredictable, Post-Moral, or Isolationist?

Are complex systems naturally convergent, self-stabilizing and symbiotic as a function of their computational depth? Is the self-organizing emergence of "friendliness" or "robustness to catastrophe" as inevitable as "intelligence," when considered on a universal scale? Are deception and violence useful strategies only for systems (like biological humans) with very computationally limited (e.g., largely non-plastic) learning capacity and social information flow? Are such strategies relentlessly eliminated as computational capacity, flexibility and interconnectedness (global brain, swarm computation) increase, as some have argued? If so, can we better characterize and reinforce this intrinsic trajectory as we create our pre-emergent A.I.? Are all catastrophes in complex systems, independent of substrate, primarily catalysts for both increased balance and complex immunity in the surviving substrate? Are there any examples of catastrophes, from any timescale or substrate, which have eliminated more than a small fraction (usually less than 5%) of the extant systems of similar complexity in the local environment? (So far I can think of none, after long deliberation on this issue). Will our emerging technological substrate (the internet and its computational intelligence) become ever more seamlessly integrated and symbiotic with human minds, even long prior to any potential "uploading?" In other words, as our interfaces increase in sophistication and utility, will we "upload by degrees" into the coming electronic systems substrate? What insights can such tools as evolutionary game theory, the evolutionary psychology of metazoan and primate morality, and a universal, substrate-centric perspective provide about the preconditions, friendliness, and implicit safety and security of our currently developing computational technology?

Nonzero: The Logic of Human Destiny, Robert Wright, 2000 Booksite
Information and Self-Organization: A Macroscopic Approach to Complex Systems (Synergetics), Hermann Haken, 2000
Haken founded the field of synergetics, a nonlinear dynamics approach to understanding the self-organization of coordination in all complex systems, independent of substrate. A number of researchers have extended his approach, but as yet these models remain only crudely predictive of mental or other complex systems behavior. Nonlinear models may be far more accurate descriptors of reality than our less complex mathematics, but navigating nonlinear conceptual space is very difficult for human minds, as we reach the limits of our innate biological processing ability. (It is very hard, for example, even to find good courses in nonlinear mathematics at most universities.) Such areas are leading contenders for new insights, but it is likely that our computers (computational mathematics), and even more so our emergent A.I., will discover them.
Evolutionary Origins of Morality: Cross-Disciplinary Perspectives, Leonard Katz, 2000
The Transparent Society, David Brin, 1998
Individual Strategy and Social Structure: An Evolutionary Theory of Institutions, Peyton Young, 1998
Symbiotic Planet: A New Look at Evolution, Lynn Margulis, 1998
Unto Others: The Evolution and Psychology of Unselfish Behavior, Sober and Wilson, 1998
The Complexity of Cooperation: Agent-Based Models of Competition & Collaboration, R. Axelrod, 1997 Review
Origins of Virtue: Human Instincts and the Evolution of Cooperation, Matt Ridley, 1996
Evolution of the Social Contract, Brian Skyrms, 1996
Beyond Humanity: Cyberevolution and Future Minds, Paul and Cox, 1996
The Moral Animal, Robert Wright, 1994
The Adapted Mind, Cosmides, Tooby and Barkow, 1992
Mind Children: The Future of Robot and Human Intelligence, Hans Moravec, 1988
The Society of Mind, Marvin Minsky, 1985
Life in Darwin's Universe: Evolution and the Cosmos, Gene Bylinsky, 1981 Review

Friendly AI, Eliezer Yudkowsky, 2001
I tend to disagree with many of Yudkowsky's assumptions, but his is a good example of a top-down model that expresses a "conditional confidence" in future friendliness. I share his conclusion, but without invoking a "consciousness centralizing" world view, which assumes that human-imposed conditions will continue to play a central role in the self-balancing, integrative, and information-protecting processes that are emerging within complex adaptive technological systems. While it is true that consciousness and human rationality play central roles in the self-organization of the collective human complex adaptive system (human civilization, species consciousness), and that these processes often control the perceptions and models we build of the universe (i.e., the quality of our individual and collective simulations), such systems do not appear to control the evolutionary development of the universe itself, and are thus peripheral to the self-organization of all other substrates, be they molecular, genetic, neural, or, most importantly in this case, technologic.

It is deceptively easy to assume that because humans are catalysts in the production of technology to increase our local understanding of the universe, we ultimately "control" that technology, and that it develops at a rate and in a manner dependent on our conscious understanding of it. That may approximate the actual case in the initial stages, but all complex adaptive systems rapidly develop local centers of control, and technology is proving to be millions of times better at such "environmental learning" than the biology it is co-evolving with. All evolutionary developmental substrates appear to take care of these issues on their own, from within. Technological evolutionary development is rapidly encoding, learning, and self-organizing environmental simulations in its own contingent fashion, and with a degree of STEM compression at least ten million times faster than human memetic evolutionary development. Humans are thus both partially-cognizant spectators and willing catalysts in this process. This appears to be the hidden story of emergent A.I.

Ethics for Machines, Josh Hall, 2000 Critique (Peter Voss)
The Coming Merger of Mind and Machine, Ray Kurzweil, 1999
The Web Within Us: Minds and Machines Become One, Ray Kurzweil, 1999

Sites: AAAI/Ethics of AI, Ethics of AI


7. Responsible Advocacy and Dangers of the Transition

What are our greatest levers for increasing the technological effectiveness/computational complexity of our existing economic, social, and political systems and institutions? What classes of catastrophes can occur in the transition to a technological singularity? How can we use our best models to minimize their frequency and severity? Do catastrophes naturally limit their scope and severity as a function of substrate complexity? Is a moderate and omnipresent level of catastrophe a necessary catalyst for accelerating change? If so, how do we, as purposeful agents for catastrophe reduction (creating self-organizing immune systems on a cultural level), find the balance between inadequate selection pressure and destructive stresses?

State of the World 2004, Worldwatch staff, 2004
The Long Boom: A Future History of the World 1980-2020, Schwartz, Leyden, Hyatt, 2000 Booksite
Artificial Immune Systems and their Applications, Dasgupta, 1998
Reason Enough to Hope, Morrison and Tsipis, 1998
Beyond Calculation: The Next Fifty Years of Computing, Denning and Metcalfe, 1997
The Self-Organizing Economy, Paul Krugman, 1996
The End of the World: The Science and Ethics of Human Extinction, John Leslie, 1996
Natural Capitalism: Creating the Next Industrial Revolution, Hawken, Lovins, Lovins, 1996
Unbounding the Future: The Nanotechnology Revolution, Drexler, Peterson, Pergamit, 1991
The Collapse of Complex Societies, Joseph Tainter, 1990

Existential Risks: Human Extinction Scenarios, Nick Bostrom, 2001
While I find Nick's thesis an important conceptual exercise, I do not share his conclusions. For me, the record indicates that as any system's computational capacity increases, its average distributed complexity becomes exponentially less subject to the possibility of informational destruction over time. While the absolute degree of potential destruction in any catastrophe clearly increases, the relative informational destruction exponentially decreases. Indeed, the unbroken record of local exponential progression of computational complexity (the historical metrics predicting the singularity, the smooth curve of Moore's law progression through 110 years, including depressions, recessions, and world wars, the hyperexponential "evolutionary trajectory" to date) is perhaps the single best evidence for the "information-protecting" nature of evolutionary development, and can be demonstrated at all substrate scales. For more evidence of the friendliness of this process (i.e., the apparent decreasing perception of violence, from the perspective of the organisms involved), see my essay "Evolutionary Computation in the Universe," 2001.
Cosmic Impact Encourages Life…, Robert Britt, 2001
Foresight Guidelines on MNT, Foresight Org, 2000
Some Limits to Global Ecophagy…, Robert Freitas, 2000
Why the Future Doesn’t Need Us, Bill Joy, 2000
Promise and Peril, Ray Kurzweil, 2000
Embrace, Don’t Relinquish the Future, Max More, 2000
Strategies and Survival, Ch 12, Engines of Creation, Erik Drexler 1986
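The "110 years of Moore's law progression" cited in the Bostrom commentary above is easy to put in quantitative terms: sustained exponential growth compounds to an enormous fold-increase. A quick calculation, noting that the two-year doubling time used here is an illustrative assumption, not a figure from the text:

```python
import math

def fold_increase(years, doubling_time_years):
    """Total multiplicative growth after sustained exponential doubling."""
    return 2.0 ** (years / doubling_time_years)

# 110 years at a two-year doubling time is 55 doublings,
# a fold-increase of roughly 3.6e16.
fold = fold_increase(110, 2)
doublings = math.log2(fold)
```

A sixteen-orders-of-magnitude climb that runs smoothly through depressions and world wars is the sense in which the curve's robustness, rather than its slope alone, is offered as evidence here.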

Sites: (see Acceleration-Relevant Conferences page).

Omissions? Oversights? Please let us know. I hope these resources broaden and sharpen your perspective on the fascinating topic of universal accelerating change. As time allows, we will add more mini-commentaries under selected entries to highlight their specific contributions to the issues surrounding the developmental singularity hypothesis. A more extensive bibliography will also follow at a later date.