Singularity Watch
 
Some Potential 'Laws' of Complex Systems

© 2002-2015, John M. Smart. Reproduction, review and quotation encouraged with attribution.

Outline

'Laws' of Development [1][2][3]

'Laws' of Technology [1][2][3][4][5]

'Laws' of Prediction [1][2][3]

'Laws' of Information Theory [1][2][3][4]

These musings aren't anything like scientific laws at present. Perhaps "dictums" would be a better word, but it isn't as well known as "laws," so I use the latter, in quotes, as a reminder of their tentative status. Some of these 'laws' may turn out to be valuable as statistically probable constraints or universal developmental processes affecting Earth's complex systems. Some are at least useful rules of thumb in many adaptive contexts. There are many other dictums we might propose, but the following seem particularly important to keep in mind as we begin to construct a better 21st-century theory of foresight.

Other authors have also championed several of these. I've made some attributions where known, and will make additional ones as memory serves and as readers point me to prior citations. I hope you find them useful. Let me know of any others you'd highly recommend.

 


'Laws' of Development

1. The universe is an evolutionary developmental system, with both a small set of statistically predictable long-term developmental outcomes and a much larger set of unknowable, unpredictable short-term evolutionary paths. (Championed to varying degrees by Lee Smolin, Ed Harrison, Steve Wolfram, Ed Fredkin, John A. Wheeler, Simon Conway Morris, Jack Cohen, Ian Stewart, Robert Wright, myself, and several others). Evolutionary development, through a statistically predictable succession of universal archetypes (cosmological, chemical, biological, cultural, technological, and beyond), is a central paradigm for understanding accelerating change. This paradigm appears to operate on all known physical-computational levels, including our universe itself as a complex system, which appears to be following both unpredictable local evolutionary pathways and a predictable global developmental lifecycle within the multiverse. Read more about this in my book precis, Evo Devo Universe?, 2008.

2. Inner space, not outer space, is the apparent constrained developmental destiny of increasingly complex systems in the universe (also known as "STEM compression," or increasing STEM efficiency and STEM density of computation and physical transformation).
(Partially championed by Eric Chaisson, Lee Smolin, Seth Lloyd, and Buckminster Fuller; see his "ephemeralization"). In what I call the developmental singularity hypothesis, a black-hole-equivalent transcension, not lightspeed expansion, may be the developmental destiny of Earth's higher intelligence, crazy as that seems on first consideration. Most essentially, life's amazing history has been about doing more and more (universal computation and physical transformation) with less and less (physical space, time, energy, and matter, or STEM, resources per standard computation or transformation). Due to STEM compression, local intelligences will soon be doing almost everything possible in this universe in mental space, using virtually nothing in physical space. STEM compression appears to be an unrealized developmental attractor for all complex systems. Read more about it at the links above.
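
As a toy quantification of "more with less": if computations per joule keep doubling on a Koomey-style historical trend, the STEM cost of a fixed workload collapses within decades. The 1.6-year doubling time below is an illustrative assumption, not a figure from this essay; a minimal sketch in Python:

```python
# "More with less": if computations per joule double roughly every 1.6
# years (an assumed, Koomey-style doubling time), the energy cost of a
# fixed computational workload collapses within a few decades.

def relative_energy_per_computation(years_from_now: float,
                                    doubling_time: float = 1.6) -> float:
    """Energy per computation, relative to today's cost (assumed trend)."""
    return 0.5 ** (years_from_now / doubling_time)

for years in (0, 8, 16, 32):
    cost = relative_energy_per_computation(years)
    print(f"+{years:2d} years: ~{cost:.0e} of today's energy per computation")
```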

3. We are engaged in an asymptotic approach to universal computational limits, and an apparent, effective computational closure (Locally Asymptotic Computation), a form of path-dependent developmental optimization at the universal scale.
(myself, unknown others). This is an abstract and controversial concept, involving the way initial constraints and emergent limits to computational complexity create a form of path-dependent optimization and intelligence maximization in all developmental processes, leading to the need for their regeneration. At the universal scale, our rapid approach to an apparent fundamental universal granularity of physical-computational dynamics (the Planck scale), combined with the constraint of physical law, may soon lead to a utility maximum for local intelligence. Closure is a way of understanding developmental recycling, and our apparently "U"-shaped curve of universal change. The concept has been called by many names by investigators at different systems levels: ergodic theory in statistical thermodynamics, canalization in genetics, category saturation in management theory, etc. Read more about it in my book precis, Evo Devo Universe?, 2008.

 


'Laws' of Technology

1. Technology learns about ten million times faster than we do.
(Eric Chaisson, Ray Kurzweil, several others). It seems we all need to get used to this fact of modern life. Whether you measure it by communication (input, output) rates, computation (memory, processing) or replication rates, technological evolutionary development (and, I assume, evolutionary developmental learning, such as that which led to human intelligence) generally runs at least seven orders of magnitude (10 million times) faster than genetic systems with regard to important rates of computational change. Chaisson's measure Phi (Cosmic Evolution, 2001), a metric of dynamic complexity, provides a valuable cosmological perspective on this process. Phi is free energy rate density, the free energy available for physical-computational dynamics in local spacetime, and Chaisson shows that the Phi of a Pentium chip (its relevant rate of potential marginal learning from its environment, not its structural complexity) is, to a first approximation, six orders of magnitude greater than that of the human brain. If we can take such measures as a rough proxy for the dynamic learning potential of information and nanotechnology as a whole, as I believe we can, we should expect great things from our technological progeny relatively soon in the human future.
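
Chaisson's comparison is easy to reproduce at back-of-envelope precision, since Phi is simply power over mass. In the sketch below, the brain's ~20 W and ~1350 g are standard rough values; the chip's 10 W, and the one-milligram mass assigned to its actively switching silicon, are illustrative assumptions meant to echo Chaisson's restriction of the mass term to the working silicon, not his published numbers:

```python
# Back-of-envelope version of Chaisson's Phi (free energy rate density,
# in erg per second per gram). All inputs are rough illustrative
# assumptions, not Chaisson's published figures.

ERG_PER_SEC_PER_WATT = 1e7  # 1 watt = 1e7 erg/s

def phi(power_watts: float, mass_grams: float) -> float:
    """Free energy rate density in erg s^-1 g^-1."""
    return power_watts * ERG_PER_SEC_PER_WATT / mass_grams

brain_phi = phi(power_watts=20.0, mass_grams=1350.0)  # ~1.5e5 erg/s/g

# Following Chaisson, count only the chip's actively switching silicon,
# assumed here to mass about a milligram:
chip_phi = phi(power_watts=10.0, mass_grams=0.001)    # ~1e11 erg/s/g

print(f"brain Phi ~ {brain_phi:.1e} erg/s/g")
print(f"chip  Phi ~ {chip_phi:.1e} erg/s/g")
print(f"ratio ~ {chip_phi / brain_phi:.0e}")  # ~7e5: about six orders
```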

2. Humans are selective catalysts, not ultimate controllers, of technological evolutionary development on Earth.
(Kevin Kelly, several others). Technology's evolutionary development appears directed both by a latent universal developmental trajectory and by the finite evolutionary learning capacity available within each substrate. Humans are technology's primary stewards and developers today, but it is becoming clear that as technology develops its own intelligence and autonomy at accelerating rates, humanity's role has already moved from self-appointed controller to selective catalyst of the kinds of technological futures we prefer. It is most commonly in the evolutionary path we take, and generally not in the developmental outcome, that we find the essence of our individual and social democratic choice.

It would be futile, for example, to try to stop the global adoption of the wheel, or electricity, or computing, or human-competitive autonomous intelligence (A.I.), or even the ability of a handful of motivated individuals to engineer superpathogens in their basement in 2100, as all of these appear to be statistically inevitable technological developments within the network of human civilizations. Nevertheless, we have the power to locally delay (with regulatory or social adoption "speedbumps") and even temporarily regress any particular developmental outcome (as Japan and China did with handguns for several centuries, for example), to create our own local evolutionary pathways to these eventually inevitable capacities, and to reward the emergence of environmental conditions that make these capacities nonthreatening when they finally do emerge. Thus we might ensure the emergence of A.I.'s that have been incrementally proven safe via stepwise development, we might spur the global ability to manufacture and deliver effective antidotes to any biological pathogen by 2050, and we might catalyze enough global development and transparency to prevent most individual terrorism attempts, while simultaneously providing fine-grained assurances of individual liberties and meaningful employment for those who seek it.

In the same manner, as we come to realize that even humanity as a whole does not control the technological world system, we can nevertheless strongly influence the evolutionary path of a range of harmful technological applications (e.g., nuclear weapons proliferation, CBW research, pesticides and pollutants, first-generation nuclear power technology), while accelerating the development of balancing and beneficial technologies (e.g., communications, computation, automation, transparency, immune systems R&D), and phasing them in, in ways that improve, rather than disrupt, human political, economic, and cultural systems.

3. Technology should self-actualize people and their cultures, not degrade, addict, or enslave them in 'structural violence.'
(Johan Galtung, Richard Rhodes, Paul Farmer, Jacques Ellul, several unknown others). In the thoughtful and accessible Visions of Technology, 2000, Richard Rhodes introduces the concept of structural violence with respect to technology innovation, which he defines as the ways that the infrastructure and most common features of our technological environment often do great violence to us as individuals and as a culture.

An obvious example would be the automobile, a tool most of us must use to compete in the modern world (we have little choice in the matter), and yet one that claims 40,000+ lives in the U.S. and 1.2 million+ lives worldwide every single year. Leaving aside for the moment fossil fuels, which have their own health, environmental, and political costs, and focusing solely on safety, just a little thought applied to the issue makes it clear that we could make many low-cost improvements to our automobiles and the political-legal structure around them that might cut this terrible toll in human lives to half its present level, or less. See for example the SafeCar, described a little way down this wiki page.

Clearly, intelligent machines will be driving us, with vastly lower fatalities, just a few decades (or generations) hence. But what can we do in the meantime? The list of presently unutilized technological aids to this problem, as for so many other social problems, is quite long. Consider modifications to the car, such as four-point harnesses, internal occupant sleds, crash webbing, internal airbags, bumper airbags, telemetry-assisted braking (where sharp braking in one car induces braking in all cars in the vicinity), and even helmets (which some of us would wear if they were retractable, and if there were an insurance break for wearing them). There are many potential modifications to the environment (rumble strips, lower speed limits, etc.) and to legal requirements (driver's education, driver training, license renewal). Some of these should be required, some should receive R&D and prize money to stimulate innovation, some should be subsidized with insurance incentives for their use, some should be promoted in driver's education, and some left to the free market.

The public apathy that exists today with respect to the safety of automobile technology is itself a clear form of structural violence, as the true social costs of the technology are hidden from the citizen, the putative ultimate decider in our democracy. Such apathy will only change when voting citizens are allowed, and incentivized, to realize the real ongoing cost of such technologies to our culture.

In the longer run, we appear to be inevitably and progressively handing off the mantle of highest intelligence to our technological successors, but today we remain solely responsible for our own continued improvement, as individuals and as a species. When we ignore that responsibility, when we succumb to technology's many distractions, addictions, and outright enslavements, we deny our future and remain impoverished. We can do better by remembering this principle, which operates to select against those cultures that flout it the most.

4. The first generation of any technology is often dehumanizing. The second generation is generally ambivalent to humanity. The third generation, with luck, becomes net humanizing.
(myself, unknown others). With reflection, the consequences of this law are self-evident in technological systems at every scale. We can observe it in the effects of civilization on the human being (our first generation was the age of monarchy, slavery, and perpetual state warfare), with industrialization (our first generation was the polluted, dehumanizing, child-labor-dependent factory), with automobiles (our first generation used dirty fossil fuels and originally had few safety features), with televisions (our first generation is noninteractive, and separates and de-educates us almost as much as it socializes us), with calculators (our first generation causes us to lose mental calculation skills even when we wish to retain them), with computers (our first generation was expensive, had terrible interfaces, and was restricted to an educated technological elite), with the internet (our first generation is buggy, primitive, hacker-infested, and far too anonymous), and with cell phones (our first generation increases motor vehicle accidents by demanding too much human attention).

It is a constant challenge to the designers and users of any technology to seek ways to minimize the duration and extent of the negative externalities we so often see with any new technological deployment. Yet even with our best intentions, we seem to take three steps forward, two steps back, six steps forward, two steps back: the eternal dance of accelerating change. Those who would criticize a technology as dehumanizing and unacceptable would do well to realize that developmental advances have always been associated with disruption and some degree of dehumanization, as we learn to adapt to the new order of things.

Fortunately, the faster and more intelligent our technology becomes, the greater the social standard we can hold it to, and the sooner we can move it from dehumanization and disruption to enhancement in its net effect. A recent example is takeback legislation (cradle-to-cradle recycling of manufactured goods), a third generation of manufacturing that has increased the sustainability of European manufacturers without significantly impacting their competitiveness. There are good arguments that sustainable takeback programs would have been impossible in a world without supply chain automation, recycling automation, and other technological advances, but there comes a time when such advances become affordable, and it is incumbent upon us to recognize when that time has arrived, and to advocate for the next generation to emerge.

5. Technology innovation is progressively less disruptive to humanity as societies develop.
(William Bernstein, Robert Fogel, Peter Wilson, Jacob Bronowski, Marc Stiegler, several others). It’s easy to forget just how disruptive our first technologies were. Fire totally remade us as a species 1.5Mya, growing our brain size 40% from H. habilis to H. erectus (See Wrangham, Catching Fire, 2010). We radically self-domesticated when we settled into villages with Neolithic agriculture 12Kya, shrinking our brains 10% (just as do all domesticated animals) and becoming far less individualistic and violent (See Wilson, The Domestication of the Human Species, 1991).

Social life was again massively disrupted by the rise of the first Empires, with their emperors, armies, class systems, and tremendous numbers of slaves, first arising in Mesopotamia and Egypt 5.5Kya. Empire-driven warfare and state violence have killed, oppressed, and enslaved progressively lower percentages of populations as our technology has advanced (see Pinker, Better Angels of Our Nature, 2012) though it's true this violence has often erupted in larger absolute numbers, and with greater intensity (over less time).

The emergence of the clock in the Middle Ages (10th-16th centuries) drove a coercive regularization of human routines, as Mumford describes in Technics and Civilization, 1934. But it was the emergence of money, and the trading rituals and technologies of early capitalism in 12th-16th century Europe that was the most disruptive and also liberating change of this era, as these technologies broke the bonds of feudalism, a brutally restrictive social order that existed for half a millennium (9th-15th centuries). William Bernstein describes this well in his epic book, The Birth of Plenty, 2004. Modern rights, individualism, and competitiveness soon followed, and the world hasn't been the same since.

The Industrial Revolution of 1730-1850 was perhaps the last wholesale disruptive transformation of human society, with its steam engine, telegraph, canals, marine clock, and the rise of the industrial labor force. Early twentieth century innovations like the internal combustion engine, electrification, the aircraft and automobile, mass production, mass media, and the consumer society caused their own social disruptions, but of a milder order, extending industrialization, not replacing it. The Haber-Bosch process to produce ammonia fertilizer, commercialized in 1910, is arguably the single most important technology of the twentieth century, as it fueled our global agricultural and population boom. So too with computers and the digital revolution of the latter half of the 20th century. Yet the bounty these twentieth century technologies delivered, and the people power they ultimately fostered around the world (see the BBC's amazing The People's Century, 1900-1999) was arguably significantly less socially disruptive than earlier technology revolutions. The growing personal rights and social safety nets won in developing societies since mid-20th century have increasingly insulated the average citizen from disruption.

Both World Wars and the Cold War temporarily accelerated technological advances, but the growth of digital information, scientific knowledge, and technological capacity is driving sustained accelerating change today like never before. Yet at the same time, there's less social disruption than ever. These accelerations are increasingly going on "under the hood," inside machine systems and away from most human awareness, as I described in "Measuring Innovation in an Accelerating World," 2005. And as computers progressively wake up and start talking to, and later understanding, us in the next few decades, as I and many other futurists predict, that technological singularity is going to feel as natural and non-disruptive to humanity as our own children growing up and learning to talk.

Technical innovation continues to race ever faster all around us, but it is increasingly happening in "inner space," the domains of the very small and of computation itself, as I outline in "The Race to Inner Space", 2011. As a result, we biological humans don't see or feel most of this acceleration. Some even deny we are in an acceleration phase today, as these nano and digital accelerations have become so unobtrusive from human perspectives.

Nevertheless our global environment will continue to get smarter, more moral, and more comfortable at fast exponential rates. Eventually, I expect we'll be offered reversible procedures by our electronic progeny, procedures that will turn us into them. We’ll be able to go back to our slow and simple biology anytime, if we want. But I bet we won’t. We’ll instead move on, into far faster and more innovative domains. See Stiegler, The Gentle Seduction, 1990, for one nice science-fictional view of how that coming transition might play out.

 


'Laws' of Prediction

1. The more things change, the more some things stay the same.
(Paul Saffo, several others). Futurists who say "the only constant is change" haven't done their homework. Human systems (psychology, sociology, politics, economics) are amazingly stable in their evolutionary psychology, even in a world of accelerating technological change. Many social events that appear novel are best understood as recycled versions of yesterday's news. Cars were initially banned in San Francisco and Manhattan. So were Segways. The anonymous Wild West gave way to law and order, and the anonymous Internet is doing the same. The ratio of men to women in frontier America (e.g., San Francisco, 1849) was originally 50:1. The same held for frontier Transhumanism, the community of those who expect technology to increasingly surpass human ability in coming decades (U.S., 1980s). Cable TV was initially commercial-free. So were movies. So was TiVo. More generally, any apparently universal development-dependent process, such as exponential growth in technological complexity, is something we can count on continuing in future environments. We know in our bones that computers will be twice as powerful every 12-18 months (or less) for the rest of our natural lives. Or we should.
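
As a quick check on that intuition, here is what a constant 18-month doubling time compounds to over various spans; the doubling time is the paragraph's own figure, and the rest is arithmetic:

```python
# What a constant 18-month doubling time compounds to over a lifetime.
# The doubling time is the essay's figure; the rest is arithmetic.

def fold_increase(years: float, doubling_time_years: float = 1.5) -> float:
    """How many times more capable a system becomes after `years`."""
    return 2.0 ** (years / doubling_time_years)

for years in (10, 30, 60):
    print(f"{years:2d} years -> roughly {fold_increase(years):.0e}x")
# 10 years -> ~1e2x; 30 years -> ~1e6x; 60 years -> ~1e12x
```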

2. Most prediction is a predictable failure.
(Ed Cornish, Adam Gordon, unknown others). We should not find it surprising that the average futurist, missing the subset of developmental events, has a poor record of prediction. The paradigm of evolutionary development tells us that the vast majority of any average sample of the local events we observe will be evolutionary, and evolutionary events are intrinsically unpredictable. Only careful developmental systems thinking allows us to tease out that special class of events, constraints, and trajectories that are tied to the hierarchically emergent developmental structure of the universe and its complex adaptive subsystems, not to these systems' random, chaotic searches of their local environmental phase space.

For example, at the molecular scale, human development is intrinsically unpredictable. But step back to see the big picture, and after you've seen one human life cycle you've got a good idea how developmental (not evolutionary) events will proceed in the next. And after you've seen a multiplicity of developmental cycles, at a range of matter, energy, space, and time scales, you've got a good idea what kinds of developmental events are occurring in your local environment.

I can't predict which software company will be dominant in 2030, but it is a good bet that they all will be running the most sophisticated Conversational Interface network in existence. We may not know yet what computer architecture will come after MOS, but we can predict it will be vastly more STEM compressed and efficient. And in a controversial astrobiological example, while you would go broke quickly trying to predict the exact shape of humanoid life forms on other Earth-like planets, or the styles of cars that will sell best in those worlds, you can make an excellent developmentalist bet that those planets must all produce computationally dominant humanoids, that the humanoids will all be highly likely to have two eyes, bilateral symmetry, jointed limbs (possibly with an average of five fingers on each limb), and a large number of other predictably convergent developmental features. Furthermore, there are great developmentalist arguments that all such planets will be very likely to invent internal combustion-based, automobile-like machines as swarm computing time-compression devices, that the dominant car body plans will involve four wheels, and that the environment must include a vast number of other universal technological archetypes, or developmental optima, such as electronic computers. And if you find any of that hard to believe, you're in good company. I'll do my best to address these issues in my book.

3. Long-term predictions of computationally-dependent processes need to be socially unreasonable.
(Ray Kurzweil, Jim Dator, several others). This is a variant of Ray's observation that we live in a world of historically exponential (or superexponential) growth in computational processes, and yet use intuitive linear models to approximate change. If we don't incorporate at least a few socially unreasonable forecasts in our extrapolation of accelerating technologies, we are blinding ourselves to the real future, and aren't appropriately preparing or prioritizing our efforts today. Most likely, we won't put the appropriate time, energy, or resources into the places where they would have the greatest "unreasonable" positive effect, simply because we don't believe such amazing change is possible.

That's our own loss, and it impoverishes us in the present wherever we lack sufficient social foresight into the inevitable mechanisms of accelerating change. We at ASF will do our part in coming years to attempt to rectify our species' cultural proclivity for ignoring the historically unreasonable growth of computation.
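
To make the linear-vs-exponential gap concrete, the sketch below has an intuitive "linear" forecaster extrapolate last year's absolute gain, while the underlying process keeps compounding at an assumed 18-month doubling time; all numbers are illustrative:

```python
# Linear intuition vs. exponential reality: a forecaster extends last
# year's absolute gain in a straight line, while the underlying process
# keeps compounding at an assumed 18-month doubling time.

ANNUAL_GROWTH = 2.0 ** (1 / 1.5)  # ~1.59x per year

history = [1.0]
for _ in range(5):                 # five years of observed history
    history.append(history[-1] * ANNUAL_GROWTH)

linear_slope = history[-1] - history[-2]  # the linear model's annual gain

for horizon in (5, 10, 20):
    linear = history[-1] + linear_slope * horizon
    exponential = history[-1] * ANNUAL_GROWTH ** horizon
    print(f"+{horizon:2d}y: linear ~{linear:8.1f}  exponential ~{exponential:10.1f}")
# At +20 years the linear forecast is ~85; the compounding reality ~104,000.
```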

 

 


'Laws' of Information Theory

1. Informational Intelligence (average distributed complexity, or ADC), a product of two-way communication in a collective of evolutionary systems, grows superexponentially at the leading edge of local development.
(Bela Nagy, several others). An increasing number of careful thinkers in anthropic cosmology suggest that the most interesting and unexpected feature of the universe is that so much of its fundamental architecture and process conforms to a range of simple, easily discovered rules and laws. The universe is unreasonably tractable to simple analysis. Applying intelligence is a curiously rigged, positive-sum game, which in turn rewards the accelerating emergence of intelligence. For more on this curious and deservedly controversial concept, see a brief overview here.
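
To see what "superexponential" means operationally, compare a process whose growth rate is fixed with one whose growth rate itself compounds. This is only a toy numerical contrast, not Nagy's actual model or data:

```python
# Exponential growth has a constant doubling time; superexponential
# growth has a shrinking one. A toy contrast, not Nagy's model or data.

def grow(years: int, rate: float = 0.10, rate_growth: float = 0.0) -> float:
    """Compound `rate` annually; `rate_growth` compounds the rate itself."""
    x = 1.0
    for _ in range(years):
        x *= 1.0 + rate
        rate *= 1.0 + rate_growth
    return x

print(f"exponential, 50y:      {grow(50):.1e}")                    # ~1.2e2
print(f"superexponential, 50y: {grow(50, rate_growth=0.05):.1e}")  # far larger
```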

2. Informational Interdependence (breadth and depth of symbiosis, or non-zero-sum interactions), a product of two-way communication in a collective of evolutionary systems, grows superexponentially at the leading edge of local development.
(E.O. Wilson, Robert Wright, Matt Ridley, many others). All intelligent systems grow nicer, on average, as a function of their complexity. Collectives of intelligent systems still have individual moral deviants, capable of increasing individual acts of evil as a function of their complexity, but on average, their collective acts are vastly more moral and ethical as their individual and environmental intelligence advances. For more on this, read E.O. Wilson's Sociobiology, 2000, Robert Wright's Nonzero, 2001, Matt Ridley's The Origins of Virtue, 1998, Herbert Gintis et al.'s Moral Sentiments and Material Interests, 2006, and perhaps most especially Norbert Elias's The Civilizing Process, 1972. All of these will give you clues to a statistically inevitable calculus of civilization, a mathematics of morality, a "game theory of getting along" in increasingly complex swarm-computational systems, societies from bees to bohemians to tomorrow's Big Dogs and other robots.
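
The flavor of this "game theory of getting along" can be captured in a few lines. The sketch below uses the textbook iterated prisoner's dilemma payoffs (T=5, R=3, P=1, S=0), values not drawn from the works cited above: defection dominates a single encounter, but reciprocating cooperators far outscore defectors across repeated play:

```python
# A tiny iterated prisoner's dilemma, using the textbook payoffs
# (T=5, R=3, P=1, S=0). In one-shot play defection dominates; across
# repeated encounters, reciprocators ("tit for tat") do far better.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opponent_moves):
    return opponent_moves[-1] if opponent_moves else 'C'

def always_defect(opponent_moves):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(moves_b), strat_b(moves_a)
        pay_a, pay_b = PAYOFF[(a, b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        moves_a.append(a)
        moves_b.append(b)
    return score_a, score_b

print(play(tit_for_tat, tit_for_tat))    # (300, 300): mutual cooperation
print(play(tit_for_tat, always_defect))  # (99, 104): defection gains little
```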

3. Informational Immunity (ADC resilience to catastrophe), a product of two-way communication in a collective of evolutionary systems, grows superexponentially at the leading edge of local development.
(myself, unknown others). Do we live in a universe where the really dangerous aspects of technology are simply inaccessible to dangerously simple minds? Is it highly improbable that humans could destroy ourselves, even though so many of us seem hell-bent on trying? To understand this interesting proposal, let me first recommend any good introductory book on immune systems, such as How the Immune System Works, Lauren Sompayrac, 1999. Immune systems are foundational elements in complex systems. In human beings, for example, they are in many respects more fundamentally important than the brain, as stable intelligence never develops without concomitant immunity. And yet immune systems were the last major system discovered in human physiology, and are still one of the most overlooked and poorly understood topics in modern education. Immune systems are seen, to varying degrees, at all substrate levels in universal evolutionary development, from galactic, stellar, and planetary systems to plant, animal, neurological, social, and technological ones. They are apparently a key part of the deep structure of any cosmic system that allows the local development of computational complexity.

Immune systems work very well, in general, and even in those instances where they fail, they are generally quite benign in their damage to the network, though their failure can be devastating to the individual. In example after example, the immune learning which occurs with any catastrophe always seems to statistically increase the average distributed complexity (ADC) of the local network, if not the individual. This hypothesis has valuable implications for ways we can use our growing understanding of the lever of immunity to aid the stable development of our increasingly human-surpassing technological intelligence.

4. Informational Incompleteness (a zone of intractability) is a permanent feature of local computation.
(Kurt Gödel, Alonzo Church, Alan Turing, John Barrow, Patrick Grim, Gregory Chaitin, several others). No complex adaptive system is ever informationally complete. Questions can be asked from within any system that can be neither proven nor disproven using the computational resources of the system, no matter how complex. Information theory is difficult, and a bit abstract. For now, read John Barrow's Impossibility, 1999, or The Universe That Discovered Itself, 2000, Patrick Grim's The Incomplete Universe, 1991, and Gregory Chaitin's The Limits of Mathematics, 1997. More on these topics at a later time.
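
Turing's halting argument, one root of this permanent incompleteness, can be sketched in code. The `halts` function below is purely hypothetical; the sketch exists to show why no such total decider can be built:

```python
# Turing's diagonal argument, sketched in Python. `halts` is a purely
# hypothetical perfect halting decider; the sketch shows why no such
# total decider can exist.

def halts(program, argument) -> bool:
    """Hypothetical: returns True iff program(argument) halts."""
    raise NotImplementedError("no such total decider can exist")

def adversary(program):
    # Do the opposite of whatever `halts` predicts for (program, program).
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    return            # predicted to loop, so halt immediately

# Consider adversary(adversary): if halts(adversary, adversary) returns
# True, adversary loops forever; if it returns False, adversary halts.
# Either answer is wrong, so `halts` cannot be both total and correct.
```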

 


Additions? Missed Attributions? Disagreements? I look forward to your comments.
Thanks to Dale Carrico and Jeff Thompson for helpful feedback.

Sincerely,


John M. Smart
President, Acceleration Studies Foundation