|Some Potential 'Laws' of Complex Systems|
© 2002-2015, John M. Smart. Reproduction, review and quotation encouraged with attribution.
These musings aren't anything like scientific laws at present. Perhaps "dictums" is a better word, but it isn't as well known, so I'll stick with 'laws', in quotes. Some of these 'laws' may turn out to be valuable as statistically probable constraints or universal developmental processes affecting Earth's complex systems. Some are at least useful rules of thumb in many adaptive contexts. There are many other dictums we might propose, but the following seem particularly important to keep in mind as we begin to construct a better 21st-century theory of foresight.
Other authors have also championed several of these. I've made some attributions where known, and will make additional ones as memory serves and as readers point me to prior citations. I hope you find them useful. Let me know of any others you'd highly recommend.
|'Laws' of Development||
1. The universe is an evolutionary developmental system, with both a small set of statistically predictable long-term developmental outcomes and a much larger set of unknowable, unpredictable short-term evolutionary paths. (Championed to varying degrees by Lee Smolin, Ed Harrison, Steve Wolfram, Ed Fredkin, John A. Wheeler, Simon Conway Morris, Jack Cohen, Ian Stewart, Robert Wright, myself, and several others). Evolutionary development, through a statistically predictable succession of universal archetypes (cosmological, chemical, biological, cultural, technological, and beyond), is a central paradigm for understanding accelerating change. This paradigm appears to operate on all known physical-computational levels, including our universe itself as a complex system, which appears to be following both unpredictable local evolutionary pathways and a predictable global developmental lifecycle within the multiverse. Read more about this in my book precis, Evo Devo Universe?, 2008.
2. Inner space, not outer space, is the apparent constrained developmental destiny of increasingly complex systems in the universe (also known as "STEM Compression": increasing STEM efficiency and STEM density of computation and physical transformation).
3. We are engaged in an asymptotic approach to
universal computational limits, and an apparent, effective computational
closure (Locally Asymptotic Computation), a form of path-dependent developmental
optimization at the universal scale.
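One way to make this asymptote concrete is Landauer's principle, which sets a temperature-dependent floor on the energy required to erase one bit of information. The sketch below is my illustration, not the author's; the modern switching-energy figure is an assumed order-of-magnitude value, not a measurement.

```python
import math

# Landauer's principle: the minimum energy to erase one bit of
# information at temperature T is k_B * T * ln(2). This is one of the
# "universal computational limits" that complex systems approach.
K_B = 1.380649e-23  # Boltzmann constant, J/K (exact SI 2019 value)

def landauer_limit(temp_kelvin: float) -> float:
    """Minimum energy in joules to erase one bit at the given temperature."""
    return K_B * temp_kelvin * math.log(2)

room_temp_limit = landauer_limit(300.0)           # ~2.9e-21 J per bit
switch_energy = 1e-15                             # assumed rough order for a 2010s logic op, J
headroom = switch_energy / room_temp_limit        # how far practical hardware sits above the floor

print(f"Landauer limit at 300 K: {room_temp_limit:.3e} J/bit")
print(f"Rough headroom remaining: ~{headroom:.0e}x")
```

The point of the calculation is only that the limit is finite and known: an asymptotic approach means each further efficiency gain closes a fixed, computable gap.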
|'Laws' of Technology||
1. Technology learns
about ten million times faster than we do.
2. Humans are selective catalysts, not
ultimately controllers, of technological evolutionary development on Earth.
It would be futile, for example, to try to stop the global adoption of the wheel, or electricity, or computing, or human-competitive autonomous intelligence (A.I.), or even the ability of a handful of motivated individuals to engineer superpathogens in their basements in 2100, as all of these appear to be statistically inevitable technological developments within the network of human civilizations. Nevertheless, we have the power to locally delay (with regulatory or social adoption "speedbumps"), or even temporarily regress, any particular developmental outcome (as Japan and China did with handguns for several centuries), to create our own local evolutionary pathways to these eventually inevitable capacities, and to reward the emergence of environmental conditions that make these capacities nonthreatening when they finally do arrive. Thus we might ensure the emergence of A.I.s that have been incrementally proven safe via stepwise development, we might spur the global ability to manufacture and deliver effective antidotes to any biological pathogen by 2050, and we might catalyze enough global development and transparency to prevent most individual terrorism attempts, while simultaneously providing fine-grained assurances of individual liberties and meaningful employment for those who seek it.
In the same manner, as we come to realize that even humanity as a whole does not control the technological world system, we can nevertheless strongly influence the evolutionary path of a range of harmful technological applications (e.g., nuclear weapons proliferation, CBW research, pesticides and pollutants, first-generation nuclear power technology), while accelerating the development of balancing and beneficial technologies (e.g., communications, computation, automation, transparency, immune systems R&D), phasing them in so that they improve, rather than disrupt, human political, economic, and cultural systems.
3. Technology should self-actualize people and their cultures, not degrade, addict, or enslave them in 'structural violence'.
An obvious example would be the automobile, a tool most of us must use to compete in the modern world (we have little choice in the matter), and yet one that claims 40,000+ lives in the U.S. and 1.2 million+ lives worldwide every single year. Leaving aside for the moment fossil fuels, which have their own health, environmental, and political costs, and focusing solely on safety, a little thought makes it clear that we could make many low-cost improvements to our automobiles, and to the political-legal structure around them, that might cut these terrible costs in human lives to half of their present daily toll, or less. See for example the SafeCar, described a little way down this wiki page.
Clearly, intelligent machines will be driving us, with vastly lower fatalities, just a few decades (or generations) hence. But what can we do in the meantime? The list of presently unutilized technological aids to this problem, as for so many other social problems, is quite long. Consider modifications to the car, such as four-point harnesses, internal occupant sleds, crash webbing, internal airbags, bumper airbags, telemetry-assisted braking (where sharp braking in one car induces braking in all cars in the vicinity), and even helmets (which some of us would wear if they were retractable, and if there were an insurance break for wearing them). There are many potential modifications to the environment (rumble strips, lower speed limits, etc.) and to legal requirements (driver's ed, driver training, license renewal). Some of these should be required, some should receive R&D and prize money to stimulate innovation, some should be subsidized with insurance incentives for their use, some should be promoted in driver's ed, and some left to the free market.
The public apathy that exists today with respect to the safety of automobile technology is itself a clear form of structural violence, as the true social costs of the technology are hidden from the citizen, the putative ultimate decider in our democracy. Such apathy will only change when voting citizens are allowed, and incentivized, to realize the real ongoing cost of such technologies to our culture.
In the longer run, we appear to be inevitably and progressively handing off the mantle of highest intelligence to our technological successors, but today we remain solely responsible for our own continued improvement, as individuals and as a species. When we ignore that responsibility, when we succumb to technology's many distractions, addictions, and outright enslavements, we deny our future and remain impoverished. We can do better by remembering this principle, which operates to select against those cultures that flout it the most.
4. The first generation of any
technology is often dehumanizing. The second generation is generally ambivalent
to humanity. The third generation, with luck, becomes net humanizing.
It is a constant challenge to the designers and users of any technology to seek ways to minimize the duration and extent of the negative externalities we so often see with any new technological deployment. Yet even with our best intentions, we seem to take three steps forward, two steps back, six steps forward, two steps back: the eternal dance of accelerating change. Those who would criticize a technology as dehumanizing and unacceptable would do well to realize that developmental advances have always been associated with disruption and some degree of dehumanization, as we learn to adapt to the new order of things.
Fortunately, the faster and more intelligent our technology becomes, the greater the social standard we can hold it to, and the sooner we can move it from dehumanization and disruption to enhancement in its net effect. A recent example is takeback legislation (cradle-to-cradle recycling of manufactured goods), a third generation of manufacturing that has increased the sustainability of European manufacturers without significantly impacting their competitiveness. There are good arguments that sustainable takeback programs would have been impossible in a world without supply chain automation, recycling automation, and other technological advances, but there is a time when such advances become affordable, and it is incumbent upon us to recognize when that time has arrived, and to advocate for the next generation to emerge.
5. Technology innovation is progressively
less disruptive to humanity as societies develop.
Social life was massively disrupted by the rise of the first Empires, with their emperors, armies, class systems, and tremendous numbers of slaves, first arising in Mesopotamia and Egypt 5.5Kya. Empire-driven warfare and state violence have killed, oppressed, and enslaved progressively lower percentages of populations as our technology has advanced (see Pinker, The Better Angels of Our Nature, 2011), though it's true this violence has often erupted in larger absolute numbers, and with greater intensity (over less time).
The emergence of the clock in the Middle Ages (10th-16th centuries) drove a coercive regularization of human routines, as Mumford describes in Technics and Civilization, 1934. But it was the emergence of money, and the trading rituals and technologies of early capitalism in 12th-16th century Europe that was the most disruptive and also liberating change of this era, as these technologies broke the bonds of feudalism, a brutally restrictive social order that existed for half a millennium (9th-15th centuries). William Bernstein describes this well in his epic book, The Birth of Plenty, 2004. Modern rights, individualism, and competitiveness soon followed, and the world hasn't been the same since.
The Industrial Revolution of 1730-1850 was perhaps the last wholesale disruptive transformation of human society, with its steam engine, telegraph, canals, marine clock, and the rise of the industrial labor force. Early twentieth century innovations like the internal combustion engine, electrification, the aircraft and automobile, mass production, mass media, and the consumer society caused their own social disruptions, but of a milder order, extending industrialization, not replacing it. The Haber-Bosch process to produce ammonia fertilizer, commercialized in 1910, is arguably the single most important technology of the twentieth century, as it fueled our global agricultural and population boom. So too with computers and the digital revolution of the latter half of the 20th century. Yet the bounty these twentieth century technologies delivered, and the people power they ultimately fostered around the world (see the BBC's amazing The People's Century, 1900-1999) was arguably significantly less socially disruptive than earlier technology revolutions. The growing personal rights and social safety nets won in developing societies since mid-20th century have increasingly insulated the average citizen from disruption.
Both World Wars and the Cold War temporarily accelerated technological advances, but the growth of digital information, of scientific knowledge, and of technological capacity is driving sustained accelerating change today like never before. Yet at the same time, there is less social disruption than ever. These accelerations are increasingly going on "under the hood," inside machine systems and away from most human awareness, as I described in "Measuring Innovation in an Accelerating World," 2005. And as computers progressively wake up and start talking to, and later understanding, us in the next few decades, as I and many other futurists predict, that technological singularity is going to feel as natural and non-disruptive to humanity as our own children growing up and learning to talk.
Technical innovation continues to race ever faster all around us, but it is increasingly happening in "inner space", the domains of the very small and of computation itself, as I outline in "The Race to Inner Space", 2011. As a result, we biological humans don't see or feel most of this acceleration. Some even deny we are in an acceleration phase today, as these nano and digital accelerations have become so unobtrusive from human perspectives.
Nevertheless our global environment will continue to get smarter, more
moral, and more comfortable at fast exponential rates. Eventually, I expect
we'll be offered reversible procedures by our electronic progeny, procedures
that will turn us into them. We’ll be able to go back to our slow
and simple biology anytime, if we want. But I bet we won’t. We’ll
instead move on, into far faster and more innovative domains. See Stiegler,
Gentle Seduction, 1990, for one nice science-fictional view of
how that coming transition might play out.
|'Laws' of Prediction||
1. The more things
change, the more some things stay the same.
2. Most prediction is a predictable failure.
For example, at the molecular scale, human development is intrinsically unpredictable. But step back to see the big picture, and after you've seen one human life cycle you've got a good idea how developmental (not evolutionary) events will proceed in the next. And after you've seen a multiplicity of developmental cycles, at a range of matter, energy, space, and time scales, you've got a good idea what kinds of developmental events are occurring in your local environment.
I can't predict which software company will be dominant in 2030, but it is a good bet that they all will be running the most sophisticated Conversational Interface network in existence. We may not know yet what computer architecture will come after MOS, but we can predict it will be vastly more STEM compressed and efficient. And in a controversial astrobiological example, while you would go broke quickly trying to predict the exact shape of humanoid life forms on other Earth-like planets, or the styles of cars that will sell best in those worlds, you can make an excellent developmentalist bet that those planets must all produce computationally dominant humanoids, that the humanoids will all be highly likely to have two eyes, bilateral symmetry, jointed limbs (possibly with an average of five fingers on each limb), and a large number of other predictably convergent developmental features. Furthermore, there are great developmentalist arguments that all such planets will be very likely to invent internal combustion-based, automobile-like machines as swarm computing time-compression devices, that the dominant car body plans will involve four wheels, and that the environment must include a vast number of other universal technological archetypes, or developmental optima, such as electronic computers. And if you find any of that hard to believe, you're in good company. I'll do my best to address these issues in my book.
3. Long-term predictions of computationally-dependent
processes need to be socially unreasonable.
Because such predictions sound socially unreasonable, we routinely dismiss them. That's our own loss, and it impoverishes us in the present wherever we lack sufficient social foresight into the inevitable mechanisms of accelerating change. We at ASF will do our part in coming years to attempt to rectify our species' cultural proclivity for ignoring the historically unreasonable growth of computation.
|'Laws' of Information Theory||
1. Informational Intelligence
(average distributed complexity (ADC)), a product of two-way communication
in a collective of evolutionary systems, grows superexponentially at the
leading edge of local development.
2. Informational Interdependence (breadth
and depth of symbiosis, or non-zero-sum interactions), a product of two-way
communication in a collective of evolutionary systems, grows superexponentially
at the leading edge of local development.
3. Informational Immunity (ADC resilience to catastrophe),
a product of two-way communication in a collective of evolutionary systems,
grows superexponentially at the leading edge of local development.
Immune systems work very well, in general, and even in those instances where they fail, they are generally quite benign in their damage to the network, though their failure can be devastating to the individual. In example after example, the immune learning which occurs with any catastrophe always seems to statistically increase the average distributed complexity (ADC) of the local network, if not the individual. This hypothesis has valuable implications for ways we can use our growing understanding of the lever of immunity to aid the stable development of our increasingly human-surpassing technological intelligence.
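The "superexponential" growth named in the three laws above can be illustrated with a toy model in which the doubling time itself contracts each cycle. Everything below is my illustrative sketch with assumed parameters, not a claim about measured rates.

```python
# Contrast exponential growth (fixed doubling time) with
# superexponential growth (doubling time shrinks each cycle),
# the pattern these 'laws' ascribe to leading-edge development.

def exponential(steps: int, doubling_time: float = 10.0) -> list[tuple[float, float]]:
    """(time, quantity) pairs for growth with a constant doubling time."""
    t, q, out = 0.0, 1.0, []
    for _ in range(steps):
        t += doubling_time
        q *= 2.0
        out.append((t, q))
    return out

def superexponential(steps: int, first_doubling: float = 10.0,
                     shrink: float = 0.8) -> list[tuple[float, float]]:
    """(time, quantity) pairs where each doubling takes `shrink` times as long."""
    t, q, dt, out = 0.0, 1.0, first_doubling, []
    for _ in range(steps):
        t += dt
        q *= 2.0
        dt *= shrink  # the doubling time itself contracts
        out.append((t, q))
    return out

# After 10 doublings both series have grown 1024x, but the
# superexponential one got there in ~44.6 time units versus 100,
# and its lead keeps widening with every further cycle.
exp_t = exponential(10)[-1][0]
sup_t = superexponential(10)[-1][0]
print(exp_t, sup_t)
```

With a shrink factor below 1, total elapsed time converges to a finite sum (a geometric series), which is why superexponential trends are sometimes described as approaching a singularity in finite time.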
4. Informational Incompleteness (a zone of intractability)
is a permanent feature of local computation.
Additions? Missed Attributions? Disagreements? I look forward to your feedback.