
 

Limits to Biology
Performance Limitations on Natural and Engineered Biological Systems

Terminal Differentiation and Path Dependency in Human Biological Evolutionary Development, Limitations on Bioengineering in Humans (the Biointerventionist Strategy), and Speculations on the Increasingly Non-Biological Future of Local Intelligence

© 2001-2015, John M. Smart. Reproduction, review and quotation encouraged with attribution.

This page explores a few propositions regarding natural performance limitations on biological systems, and on human-initiated biotech and pharmacological enhancements to those systems in coming decades. It is best understood with a little background in molecular, cellular, organismic, evolutionary, and developmental biology.

Outline

Terminal Differentiation in Developmentally Complex Species Like Human Beings

On the Limitations of Bioengineering of Human Beings (Longevity, Intelligence, etc.)

HAR and Path Dependence: The Phenomenon of Human Genetic Similarity

Neurotechnologies and Nootropics: Beware of Hype

The Importance of Research and Development in the Biological Sciences

On Phase Transitions and the Coming Technological Singularity

Other Writings on This Topic


Terminal Differentiation in Developmentally Complex Species Like Human Beings

As Jared Diamond reminds us in The Third Chimpanzee, 1992, humans are much more closely related to chimpanzees than to other ape species (gorilla, orangutan, and gibbon). We apparently diverged from the other two chimpanzee species (common and pygmy chimps) only six million years ago. Roughly speaking, we have only a 1.6 percent "genetic distance" from them, as estimated by Charles Sibley and Jon Ahlquist's simple hybridization melting point DNA comparison method.
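For readers who want to see the arithmetic behind that estimate, here is a minimal sketch (in Python) of how hybridization melting-point data get converted into a "genetic distance," assuming the common rule of thumb of roughly 1% sequence divergence per degree Celsius of melting-point depression. The rule of thumb and the delta values below are my illustrative assumptions, not figures from Sibley and Ahlquist's paper.

    # Back-of-envelope sketch of the DNA-DNA hybridization logic described above.
    # Assumption (not from the article): the common rule of thumb that each 1 degree C
    # of melting-point depression in a hybrid duplex corresponds to roughly 1% sequence
    # mismatch. The delta_t values below are illustrative placeholders, not Sibley &
    # Ahlquist's measurements.

    def percent_divergence(delta_t_celsius, pct_per_degree=1.0):
        """Estimate percent sequence divergence from hybrid melting-point depression."""
        return delta_t_celsius * pct_per_degree

    if __name__ == "__main__":
        illustrative_pairs = {
            "human vs. chimpanzee": 1.6,   # depression (C) chosen to match the ~1.6% figure in the text
            "human vs. gorilla": 2.3,      # hypothetical value for comparison
        }
        for pair, dt in illustrative_pairs.items():
            print(f"{pair}: ~{percent_divergence(dt):.1f}% estimated genetic distance")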

Diamond proposed that humans and chimps are so closely related that it would be both clarifying and humbling to change our taxonomy to include all three species in the same genus: Homo sapiens (us), Homo paniscus (pygmy chimps, also known as bonobos, with matriarchal and highly social and sexual cultures, occupying environments of plenty in the Congo), and Homo troglodytes (the larger common chimps, with patriarchal and much more brutish societies that have flourished in the more common environments of scarcity). Read Frans de Waal's Our Inner Ape, 2006 if you'd like an insightful explanation of the degree to which H. sapiens is a blend of these two species.

One of the most interesting insights into what these 1.6% of genetic differences might be is the neotenic hypothesis, originally championed by Stephen Jay Gould, in Ontogeny and Phylogeny, 1977. It has become increasingly well supported with time, as Xavier Penin et al. note in this 2002 article. Relative to chimpanzees, the growth of the human skull is "clearly retarded in terms of both the magnitude of changes (size-shape covariation) and shape alone (size-shape dissociation) with respect to the chimpanzees. At the end of growth, the adult skull in humans reaches an allometric shape (size-related shape) which is equivalent to that of juvenile chimpanzees with no permanent teeth, and a size which is equivalent to that of adult chimpanzees."

See Shapes of Time, Ken McNamara, 1997 for a good intro to heterochrony (the more general process of which neoteny is a part), and Beyond Heterochrony: The Evolution of Development, Miriam Zelditch (Ed.), 2001 (excerpt here) for some of the latest thinking on heterochrony, heterotopy, and other topics in the evolution of development, a fascinating research frontier. While you're at it, you should read the pioneering work, Niche Construction: The Neglected Process in Evolution, John Odling-Smee et al., 2003, to understand how the kind of environmental niche we construct (such as an information-rich child rearing environment for juvenile human beings) strongly affects the kind of evolutionary development that subsequently occurs.

In other words, in its divergence from our chimpanzee relatives, our species made (or was forced into?) the evolutionary choice to become less developmentally differentiated at birth, to backtrack on a previous differentiation pathway, most likely as our only easily accessible way of gaining greater behavioral capacity and brain plasticity in later life. The other alternative, changing our brain plan to achieve greater intelligence and learning capacity, was likely simply not available.

Why? The more complex any life form becomes, the more it becomes a legacy/path dependent system, with many antagonistic pleiotropies (negative effects in other places and functions in the organism) whenever any further change is contemplated. It seems that evolutionary development, just like differentiation from a zygote or stem cell to a mature tissue, becomes increasingly terminally differentiated the more complex and specialized the organism. One extreme case of this kind of terminal differentiation, at the cellular level, is nerve cells in the human brain, which are so specialized, and the connections they support so complex, that they cannot even replace themselves, in general. Could they eventually learn to do so without disrupting the connectionist complexity that they create in the brain, after their development has stopped? Perhaps not. The more complex the system becomes, the less flexible it is. It gets progressively harder to make small changes in the genes that would improve the system, and given how finely tuned so many system elements are, large changes are out of the question.

Increasing antagonistic pleiotropies as a function of complexity tell us why you can clip a growth gene into a frog and get a bigger frog, but put the same gene in a mouse and you get a bigger mouse with growth dysregulation problems, including cancer. Put this gene into a pig and you simply get a pig with arthritis. I assume the same thing would happen with a human, but fortunately this experiment hasn't yet been done, to my knowledge. Antagonistic pleiotropies also explain why genetically aided lifespan and intelligence amplification efforts will be increasingly less valuable the more complex the organism we try them in (more on that shortly).
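The intuition that random change gets less and less likely to help as traits become more interdependent can be made concrete with a toy Monte Carlo sketch in the spirit of Fisher's geometric model. Everything here (trait counts, effect sizes, trial counts) is an illustrative assumption, not data:

    # Toy Monte Carlo illustration of the pleiotropy argument above: a random
    # "mutation" perturbs every trait of an organism at once, and we ask how often
    # the net effect is an improvement (moves the trait vector closer to a fitness
    # optimum). This is a sketch in the spirit of Fisher's geometric model; the
    # numbers are illustrative, not empirical.
    import random
    import math

    def fraction_beneficial(n_traits, effect_size=0.1, trials=20000):
        beneficial = 0
        for _ in range(trials):
            # start at distance 1 from the optimum (the origin), along the first axis
            position = [1.0] + [0.0] * (n_traits - 1)
            # a random mutation perturbs all traits at once (pleiotropy)
            mutated = [x + random.gauss(0.0, effect_size) for x in position]
            if math.dist(mutated, [0.0] * n_traits) < 1.0:
                beneficial += 1
        return beneficial / trials

    if __name__ == "__main__":
        for n in (1, 3, 10, 30, 100):
            print(f"{n:>3} coupled traits: ~{fraction_beneficial(n):.1%} of random changes are net improvements")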

Applying terminal differentiation at the level of all biological systems means that the further any lineage gets from that primordial prokaryotic cell of 3.5 billion years ago, and the more complexity we try to build on top of the tree of life, the more difficult it becomes to add further complexity. Our tree of life keeps branching, to be sure. Evolutionary variability and diversity always grow, as Gould reminded us.

But if our terminal differentiation hypothesis is correct, the feebleness of evolutionary innovation, in all the most complex branches, grows greater and greater as a function of the complexity attained. Eventually, very little complexity advance can come on top of the old frame, only increasingly ergodic combination and recombination in the lower levels. Such is the nature of the evolutionary tree in all substrates we would consider, whether physics, chemistry, biology, culture, or technology. Eventually they all reach their "maximum height."

We are not accustomed to thinking of the generative evolutionary capacity of biology as being on an S-curve, and thus eventually saturating this capacity, yet I believe there are data for this at every level we investigate. Consider this figure, from Geerat Vermeij (1987), which shows the rapidly declining rate of origination of marine animal families since the Cambrian. Yes, some new families are created after major extinctions, and evolution does maintain some creative capacity “in reserve,” but by and large, there is steadily declining innovation capacity the further evolutionary 'differentiation' of marine animal families proceeds, for a number of reasons we will only touch on here.
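To make the S-curve claim concrete, here is a toy sketch in which cumulative family originations follow a logistic curve, so the per-interval origination rate surges early and then declines steadily toward saturation. The ceiling, midpoint, and steepness values are hypothetical placeholders, not Vermeij's numbers:

    # Sketch of the S-curve (saturation) claim: if cumulative originations follow a
    # logistic curve, the per-interval origination rate peaks early and then declines
    # steadily. The parameters below are hypothetical placeholders, not Vermeij's data.
    import math

    def cumulative_families(t_myr, ceiling=800.0, midpoint=50.0, steepness=0.02):
        """Logistic cumulative count of marine animal families, t in Myr since the Cambrian."""
        return ceiling / (1.0 + math.exp(-steepness * (t_myr - midpoint)))

    if __name__ == "__main__":
        step = 50  # Myr per interval
        prev = cumulative_families(0)
        for t in range(step, 551, step):
            cur = cumulative_families(t)
            print(f"{t - step:>3}-{t:>3} Myr after the Cambrian: ~{cur - prev:5.1f} new families (illustrative)")
            prev = cur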

Read this revealing comment from the great evolutionist Julian Huxley (1942), which I believe aptly describes not only the continually increasing extant variety (continuous branching) of biological evolution, but also its increasing feebleness the farther out the ‘branches and leaves’ of our biological evolutionary tree extend from the original cell into physical and computational morphospace:

“Evolution is thus seen as a series of blind alleys. Some are extremely short - those leading to new genera and species that either remain stable or become extinct. Others are longer - the lines of adaptive radiation within a group such as a class or subclass, which run for tens of millions of years before coming up against their terminal blank wall. Others are still longer - the lines that have in the past led to the development of the major phyla and their highest representatives; their course is to be reckoned not in tens but in hundreds of millions of years. But all in the long run have terminated blindly. That of the echinoderms, for instance, reached its climax before the end of the Mesozoic. For the arthropods, represented by their highest group, the insects, the full stop seems to have come in the early Cenozoic: even the ants and bees have made no advance since the Oligocene. For the birds, the Miocene marked the end; for the mammals, the Pliocene.”

Or the following from the zoologist and theorist, Pierre Grassé (1973):

"The period of great [morphological] fecundity is over; present [biological] evolution appears as a weakened process, declining or near its end. Aren't we witnessing the remains of an immense phenomenon close to extinction? Aren't the small variations that are being recorded everywhere the tail end, the last oscillations of the [biological, vs. social and technological] evolutionary movement?"

Heterochrony, the heritable variation in the rate and timing of events in the developmental cycle (birth, growth, maturity/reproduction, death), is one of the most important processes behind the minor human-chimp genetic difference that has allowed our unique intellect to emerge. We have had to "go back to go forward," in order to get out of one of those blind alleys Huxley mentions above. But simply having a more plastic brain appears to have been only half the story. The other half appears to be the intelligence that has been incrementally encoded not into genes, but into our language, our customs, and our tools. These "memetic" and "technologic" substrates are doing their own entirely different set of computations, and imprinting malleable human brains much faster than genetic processes, which no longer comprise the leading edge of planetary computational change.

Thus our species' intelligence/learning capacity appears both less instinctually encoded and more culturally encoded, in a human-unique ecological niche that is comprised of 1) a much more extended period of network and neural plasticity in youth and 2) far more elaborate culturally communicated behavioral mimicry customs ("memes") for the growing members of our species. Some of these customs, like language and ethics, have been environmentally stable for so long that we've been backtuning our genes to them in a coevolutionary process. For more on that I suggest you skim Terrence Deacon's insightful work, The Symbolic Species, 1998. Once we were talking and thinking, and aside from the increasingly path-dependent coevolution between our language and neural structures, the human substrate has moved progressively farther away from genetic change over the last several million years. The more useful our language becomes, and the more differentiated we become in our genetic facilitation of its use (e.g., Wernicke's area, Broca's area), the less valuable any new mutation becomes. And particularly after the emergence of talking apes, I suspect our genetic substrate became path dependent, stabilized on a new, much narrower range of genetic possibilities.

So humanity's first great leap forward, roughly 2 million years ago with rapid brain expansion in Homo habilis and Homo erectus due to complex social mimicry, tool use, and probably first complex oral language, happened by taking us backward developmentally, via neoteny/heterochrony. Yes, there were also a few other genetic changes (in 49 HAR regions, to be discussed shortly), but vastly less genetic input to the transition than one would reasonably expect. Only a few feeble "twigs" of genetic change have occurred in humans in the last six million years: 1.6% of DNA difference, and a very small number of very feebly "accelerated" changes in only 49 small regions, stand between us and our chimpanzee cousins. What's more, the pace of evolutionary change in human brain tissue has steadily slowed down since our split from chimpanzees, six million years ago, as reported in the fascinating work of Chung-I Wu et al. in 2006. The title of this Science Daily article, "Complexity Constrains Evolution of Human Brain Genes," says succinctly what I tried to say quite poorly in 2001, in my first public draft of this article, trying to describe how very little human genetic change we can expect in our foreseeable future, including rationally engineered change, for a raft of quite interesting reasons.

Again, to summarize, almost all the credit for our great leap forward goes to our biological system backtracking, providing us a simpler and more plastic vessel for building computational complexity outside of our bodies, in niche construction space, or what computer scientist and biologist Pierre Baldi, in The Shattered Self, 2002, so elegantly calls our external selves. That's where all the evolutionary action has been for the last several millennia, and that's where it will likely stay.

On the Limitations of Bioengineering of Human Beings (Longevity, Intelligence, etc.)

Biological systems are bottom-up evolved, massively multifactorial, highly and nonlinearly interdependent complex adaptive systems. Those futurists who propose a coming world of genetically engineered humans (beyond the elimination of single- and few-gene disorders in genetic diseases), or of accelerating changes in our biology through pharmaceuticals or "neurotechnology," simply do not appreciate the incredible interdependence, inertia, antagonistic pleiotropies, and terminal differentiation of our genetic evolutionary developmental lineage. "Genetically engineered humans," redesigned for increased performance, seem to me like the "atomic vacuum cleaners" of the 1950's—futurist fantasies that will never come to pass, for a host of complex reasons.

The more one studies the history and nature of genetic systems, and our thirty year history of attempted interventions in them (transgenic organism experiments and the like), the clearer this insight becomes. The more terminally differentiated and developmentally path-dependent a system becomes, the less possible it is to positively affect the course of its development with external interventions. We are stuck with our developmental legacy code, and we won't be able to significantly reengineer it until we move to an entirely new computational substrate.

Furthermore, the more rapidly our technological systems advance, the higher the social barrier to any attempt at genetic interventionism. Humans can reliably and affordably increase their capacity and ability by augmenting their electronic extensions, their external selves, doubling them in power every two years. In that kind of increasingly abundant world, human biological life becomes more precious, equity more important, and societies are progressively less willing to accept the high cost of trial and error strategies in genospace (and humans will never be able to do better than trial and error when working in such highly complex and slow systems as biological organisms).

This perspective runs counter to the claims of a number of biotech-oriented futurists (e.g., Gregory Stock, Redesigning Humans, 2002; or Lee Silver, Remaking Eden, 1998) who predict that humans will soon enter an era of genetic interventionism. But I believe such perspectives greatly oversell the biology interventionism case. Biotechnology as an industry, while it is dearest to our hearts, is also a charity case, an industry fueled primarily by human hope, and only secondarily by profit. As the president of Genentech famously said in the 1990's, as an industry class it has lost money (it has had negative total ROI for investors), on average, since its inception in the 1970's. While having hope is commendable, and while our priorities as an enlightened society must be to aggressively try all things that might help minimize suffering and ameliorate disease processes, no matter how poor their odds, terminal differentiation in biology must bring our hopes back to ground, especially when compared to the consistent returns that have been delivered by infotech and nanotech.

The way forward for human intelligence now seems overwhelmingly likely to be a primarily bottom-up development within the electronic systems substrate, guided by a secondary, top-down artificial selection of these electronic ecologies by human beings. Even Ray Kurzweil (Age of Spiritual Machines, 1999, The Fantastic Voyage, 2004, The Singularity is Near, 2005), who has good insight into the accelerating nature of information technology, seems to have overinterpreted what is possible in the biotechnology space. The limitations on the "wetware" substrate are legion, and are still far too rarely appreciated.

Such perspectives lead us to expect much less from the biolongevity researchers than they often like to tell us is "possible." We know of course that lifespan can be extended radically in worms and flies with simple gene modifications (SOD, for example), or rapidly bred into a simple organism (as in Michael Rose's experiments with Drosophila). But such yields must decrease substantially the more differentiated the organism, as there are more and more legacy systems to go wrong, each with its own housekeeping complexities.

We'll be able to validate this with Aubrey de Grey's Methuselah Mouse Prize, to be given in coming years for efforts to substantially increase the lifespan of the laboratory mouse. I'm all for this prize, and for us spending a whole lot more on real anti-aging research than the ridiculous pittance we currently spend. But if I'm right, the gains we'll see will be very modest relative to what is possible in C. elegans nematodes, for example. What we could do in humans, even if we could settle the ethical questions (which we presently can't), would again be substantially less significant than what we could get in mice.

A good way to begin to understand the complexity of genetic and entropic constraints on longevity in complex organisms is via Tom Kirkwood's excellent Disposable Soma Theory (DST) of ageing (1977). You can find a nice outline of the DST in Kirkwood's book, Time of Our Lives: The Science of Human Aging, 1999, but I haven't yet found a good online overview (please let me know if you do). Wikipedia's entry on the DST is unfortunately abbreviated and factually incorrect in many places.

In the Disposable Soma Theory, all complex, differentiated somas (bodies) are assumed to be self-organized to be disposable, from the ('selfish') genes perspective, because they have limited energy and error-correction budgets, and it is much more useful and adaptive to put that energy into escalation and control mechanisms (like the sympathetic nervous system), even when that escalation ('fight or flight') has a fitness cost to the individual organism (cortisol, a stress hormone, will age you rapidly at the cellular level). They also put this limited energy and error-correction budget into high-level repair and immortality of the germline tissues. Note that this last topic (energy and error-correction needed to ensure an immortal germline) is just part of the evolutionary picture. Energy and error-correcting computations are costly, and smart organisms use them only in places where they are most adaptive, not only for the individual in competition with other individuals, but for the species, in an environment of accelerating adaptive complexity.

In other words, there is clearly some level of mortality selection that works at the species level, not just at the individual level. Current selection theory simply isn't up to the task of explaining this, and it also ignores accelerating environmental complexity. But common sense tells us that while individuals would love to be immortal, species simply would not be adaptive with a bunch of long-lived and sharply growth-limited individuals hanging around. Kill them off fast and you continue to get strong selection pressure for genetic innovation within the species.

According to the DST then, ageing is simply a matter of cumulative damage done to the organism's far more vulnerable somatic cells over the lifecycle, while the germline cells stay 'immortal' (technically, experience very low rates of change). From the species and genetic perspective, this damage has no adaptive cost, because every organism has the drive to reproduction built into its lifecycle. Aging does carry an adaptive cost from a memetic perspective (useful ideas and behaviors, replicating in brains, die with their hosts), but we humans get around that problem by storing our best ideas and behaviors in technemes, or information and algorithms that replicate in our technologies, and like genes, far outlive mortal somatic individuals.

If organisms didn't partition their limited energy and error-correction budgets between immortal germline cells and mortal somatic cells, they would be instantly outcompeted by organisms that did make this very smart choice. That's why we find 'immortal' organisms like Hydra so rarely in the world. There are very few niches such organisms can occupy, and none of them allow higher somatic complexity—threats are too complex, and environmental conditions change far too fast in those environments for this 'undifferentiated' strategy to work. Somatic complexity exclusively goes to those organisms willing to have disposable somas. This principle is known as competitive exclusion and it is basic to evolutionary theory.
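A toy allocation model can make Kirkwood's logic concrete: split a fixed energy budget between somatic repair and reproduction, let lifespan rise with repair (with diminishing returns) and reproductive output rise with the remainder, and ask which split maximizes lifetime fitness. The functional forms and constants below are my illustrative assumptions, not Kirkwood's equations, but the qualitative result is the DST result: the fitness-maximizing repair investment falls well short of maximal somatic maintenance.

    # Toy allocation model in the spirit of the Disposable Soma Theory described above.
    # An organism splits a fixed energy budget between somatic repair and reproduction.
    # Lifespan rises with repair investment, reproductive output rises with the rest,
    # and lifetime fitness is (reproductive rate) x (expected lifespan). The functional
    # forms and constants are assumptions for illustration, not Kirkwood's equations.

    def expected_lifespan(repair_fraction, max_span=100.0):
        # Diminishing returns: even heavy repair investment cannot push lifespan past max_span.
        return max_span * repair_fraction / (repair_fraction + 0.25)

    def reproductive_rate(repair_fraction):
        # Whatever is not spent on repair goes to reproduction.
        return 1.0 - repair_fraction

    def lifetime_fitness(repair_fraction):
        return reproductive_rate(repair_fraction) * expected_lifespan(repair_fraction)

    if __name__ == "__main__":
        fractions = [i / 100 for i in range(0, 101)]
        best = max(fractions, key=lifetime_fitness)
        print(f"Fitness-maximizing repair investment: ~{best:.0%} of the energy budget")
        print(f"Implied lifespan at that optimum: ~{expected_lifespan(best):.0f} (arbitrary units)")
        print(f"Lifespan if all energy went to repair: ~{expected_lifespan(1.0):.0f} (still finite)")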

The basic claims of DST continue to be supported by research, as in this 2004 study. It fits well with the two life extension strategies known to work across a wide variety of complex organisms: caloric restriction, and castration or sexual maturity delay. Caloric restriction significantly reduces the total entropy load (oxidative stress, malformed and out-of-place molecules, etc.) on an organism's limited somatic repair mechanisms, thus greatly improving cellular efficiency and efficacy. Castration in complex organisms eliminates the major physiological energy investment in sexual reproduction, and its associated stress/cortisol load, also significantly improving the organism's cellular efficiency and efficacy. (For example, a study from the 1960's found that castrated human male prisoners lived an average of 7-9 years longer than their cohorts.)

Thus, over countless evolutionary developmental cyclings over the millennia, complex organisms have self-organized a series of energy and information tradeoffs between their 'immortal' germline cells and their mortal bodies. With respect to biological longevity, we can't hope to get around those tradeoffs without understanding and extensively reengineering the system, from the genes up. My claim, my expectation, would be that this will prove too difficult for human minds to do. It may even be too difficult for the AIs of tomorrow to do. We shall see. It is important for us to put real resources behind such efforts, far more than we do today. But it is also important to be realistic about their potential for success. If there were any "easy" genetic routes to gaining somatic immortality without losing the soma's great adaptive complexity, we can expect they would have been discovered by blind evolution, long ago. Evolution is a very powerful force, and it's had billions of years to play.

Of course we do have evidence of one mechanism that might allow endless rejuvenation in an adult organism: neoteny. If we genetically revert the entire adult organism to a more juvenile, far less differentiated state, in effect doing a reset, we have what we want, on the surface at least. A caterpillar turning into a butterfly is one great example. We can imagine a mature 21st century regenerative medicine that allows us to grow new organs, from scratch, when our old ones fail. Advances in regenerative medicine will certainly greatly improve our recovery from disease, and may get us a lot closer to our 'cellular' limit of 100-120 years, which would be a great thing. But it seems an unavoidable truth that such approaches wouldn't, couldn't, improve human longevity to any significant degree beyond this natural limit. Why?

Neoteny as a regenerative mechanism, unfortunately, would not work in the brain, that mostly postmitotic organ of highest importance to human self-identity, because the information we want to protect is contained in the unique ultrastructure in each mature neuron, and in the unique synaptic connectivity each neuron has established with 1,000 to 10,000 of its neighbors in the adult brain. When we use neoteny on this marvelous machine, all we will do is erase all that amazing adapted complexity wherever we apply it. If future medical science were able to come up with some kind of localized, serial neoteny, which involved regenerating (and erasing) small numbers of cortical areas at a time, giving the human brain time to adapt and relearn those areas that were destroyed, our perception of death should become significantly less violent, as we'd be adapting to lots of petites morts (little deaths) all the time. But the actual informational death would still occur, as this would still involve lots of degradation of your original complexity over time. It would seem less subjectively violent to you, but information loss would still occur. We see this in the way a stroke victim can relearn function after a local brain region is destroyed. But allow too much of this process too fast, and the patient suffers multi-infarct dementia. There is only so much redundancy in the human brain, and it must have a limited rate and level of relearning capacity, in its current design.
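A back-of-envelope way to see the 'lots of little deaths' point: if each regeneration cycle erases and relearns a fraction f of cortical regions, the share of the original adult connectivity that has never been erased after n cycles is (1 - f)^n, which heads toward zero no matter how gentle each cycle feels. The fraction and cycle length below are purely hypothetical:

    # Back-of-envelope sketch of the serial-neoteny point above: if each regeneration
    # cycle erases and relearns a fraction f of cortical regions, the share of the
    # original adult connectivity that has never been erased after n cycles is (1-f)^n.
    # The fraction and cycle length are hypothetical illustration values.

    def surviving_original_fraction(erased_per_cycle, n_cycles):
        return (1.0 - erased_per_cycle) ** n_cycles

    if __name__ == "__main__":
        f = 0.02   # hypothetical: 2% of regions regenerated per cycle, one cycle per year
        for years in (10, 25, 50, 100):
            remaining = surviving_original_fraction(f, years)
            print(f"After {years:>3} yearly cycles: ~{remaining:.0%} of the original connectivity never erased")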

By contrast to these highly limited strategies for bioimmortality, each of us already gets increasing informational immortality when we migrate our biological intelligence into our cybertwins (today's first generation digital assistants), as we do increasingly every day in the modern world. As our informational and nanotechnologies accelerate in complexity, and as the intimacy of the human-machine and physical-virtual interface advances every year, we all are rapidly gaining this 'effective' informational immortality. I predict that within this century, almost every human being on Earth will recognize this, and all our most intimate intellectual and emotional connections with others and with ourselves will be mediated through not just our biological but also the cybernetic components of our identities.

Of course, your intelligence will rapidly evolve into something else once it is in technological form, much more rapidly than your biological self evolves, but at least your old thoughts and mental architectures can be safely archived, for later access, in a way that will never be possible in the biological realm. In other words, you will still be mortal as a postbiological organism, but death loses not only its subjective violence in such a state, it loses its informational violence as well. That will truly be something new in (our corner of) the universe, and this 'cyber immortality' is very much something worth working toward, measuring, and advancing on a daily basis.

Like longevity bioengineering, genetic intelligence bioengineering apparently works the same way: expect it to be progressively less effective with organismic complexity. We could probably breed or gene splice some significantly smarter flies and worms, but don't expect this to happen with mice. Yes, you will continue to hear studies of gene spliced mice that run their mazes better (recall the "Doogie Howser" NMDA mouse and its successors), but it will be very unlikely that you'll see a strain of significantly smarter gene-engineered mice make it into the pet population, other than as a marketing stunt. What we've seen of these modifications so far is that they always introduce a range of unanticipated side effects (mice with better memories have pain intolerance and social problems, for example). We may see a few modest gains, but again, don't hold your breath.

Breeding or gene intervention for increased intelligence in dogs would be even less effective than the gains we might expect in mice. We've had at least 10,000 years of dog and horse domestication to date, which should be ample time for breeders to select for increasingly intelligent dogs. I'm sure there's also been a lot of pressure for that intelligence to emerge in hunting dogs or horses, yet the difference between a labrador and an average domestic dog is quite modest, and all domestic dogs are less environmentally smart than any wild dingo (and the brains of domestic dogs, like those of all domesticated animals, have shrunk some 15-20% in size as well).

A number of labs, perhaps starting with Fred Gage in the 1980's at UC San Diego, have tried quite hard and failed to engineer smartness in lab mice at the genetic level. Again, don't hold your breath. This is important research and may lead to valuable discoveries, but perhaps the most valuable discovery will be how incredibly difficult this whole process is, due to the massively multifactorial nature of differentiated complexity in multicellular organisms. I would also predict any gains for dog intelligence would have to be significantly more modest still, and for human intelligence more modest yet again. Furthermore, these latter experiments may never be done, given their presently terrible cost/benefit ratio by comparison to ever-accelerating nonbiological intelligence amplification alternatives.

Going back to that old, slow technology (neurogenetics) looking for a similar advance seems an exercise in futility. Perhaps the most you could expect from such efforts would be smarter mice, monkeys, and other pets. But animal rights advocates would justifiably quash even this, if you think about it. We can envision the 2030 court cases: Transhumanists argue eloquently that mice should be given the capacity for humanlike consciousness (and allow us to learn how to create more biologically conscious humans), and Conservatives and Animal Rights Advocates argue that this is an abomination. Meanwhile the even more networked and transparent IRBs of that more developed era make sure every mistake is clearly documented, and the cost-benefit ratios carefully assessed. Don't expect that line of research to proceed very far.

If you have friends with infants and would like to help them understand the unique opportunities of human mental development, What's Going On In There?, by Lise Eliot, 2000 is a great place to start. The book explains all the open developmental windows in the early years of child development, and gives lots of practical childraising advice. I expect tomorrow's parents will be further optimizing these imprinting periods in coming years as the psychological data start flooding in and as our robotic nannies and Conversational Interface-equipped cribs start getting particularly helpful post 2020.

HAR and Path Dependence: The Phenomenon of Human Genetic Similarity

Recall that genetic mutations have driven the divergence of humans from our closest living relatives, the chimpanzees, over the last six million years, but this divergence has apparently involved only a very small set of regions (roughly 2% of our DNA, by hybridization), and has occurred increasingly quickly in apparently only 49 “human accelerated regions” (HARs), not globally throughout the genome. By far the most extreme of these, HAR1, changed 18 of its 118 nucleotides in the course of the last few million years. By comparison, only two of these nucleotides had changed in the prior 310 million years that separate chickens from apes (Pollard et al. 2006). But what is astonishing to me from these data is how few such accelerating regions there are. These are the dynamics we would expect in an increasingly terminally differentiated system: just a little rapid branching at the end of a few of the ‘twigs’ of genospace, not an increasingly rapid branching of all or even most of the twigs.
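A back-of-envelope comparison, using only the figures quoted above, shows how 'accelerated' HAR1 is relative to its deep-time background rate. Published estimates of the acceleration differ, so treat the ratio below as illustrative arithmetic on the quoted numbers, nothing more:

    # Back-of-envelope comparison of substitution rates in HAR1, using only the figures
    # quoted in the paragraph above (18 of 118 sites in ~6 Myr on the human lineage vs.
    # 2 of 118 sites over the ~310 Myr separating chickens and apes). Published estimates
    # of the acceleration differ, so treat the ratio as purely illustrative.

    def subs_per_site_per_myr(substitutions, sites, myr):
        return substitutions / sites / myr

    human_rate = subs_per_site_per_myr(18, 118, 6.0)
    background_rate = subs_per_site_per_myr(2, 118, 310.0)

    print(f"Human-lineage HAR1 rate:  {human_rate:.5f} substitutions/site/Myr")
    print(f"Chicken-to-ape HAR1 rate: {background_rate:.5f} substitutions/site/Myr")
    print(f"Apparent acceleration:    ~{human_rate / background_rate:.0f}x (from the quoted figures alone)")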

Microsatellite markers have recently helped us discover that chimpanzees and baboons have more genetic diversity within one troop than we have between all human races (Jorde et al. 2000). Geneticists are presently seeking some reasonable explanation for this astonishing phenomenon.

An interesting article by a Stanford research group ["Features of Evolution and Expansion of Modern Humans, Inferred from Genomewide Microsatellite Markers," Zhivotovsky et al., 2003, American Journal of Human Genetics] suggests that humans came close to extinction roughly 70,000 years ago, and that the very high genetic similarity of modern humans is explainable by this bottleneck effect. The researchers proposed that we once dwindled to a population of 1,000 or so, due to some catastrophe (environmental disaster, conflict), before growing to our current nearly genetically identical population of 6.7 billion. A massive extinction event is an interesting hypothesis to explain our astonishing genetic similarity, but I suspect the real story is due to our friend, terminal differentiation and path dependence in complex human brains.

What percentage of human genes are expressed in the brain? I'm sure the experts have a guess, but I haven't been able to quickly find it, and today it would only be a guess. In the Paul Allen Mouse Brain Atlas, completed in 2006, some 80% of all mouse genes were reported to be expressed somewhere in the brain—even higher than the 60-70% previously suspected. And mice share some 90% of their genes with humans. So a canalized human brain may mean a canalized human genome.

Once humans started using complex mimicry memetics, language, and technology some two million years ago, rapid change, at least in brain genes, would be increasingly difficult without antagonistic pleiotropies messing up brain regions that are globally tuned for linguistic, gestural, and other social grammars. Chimps don't have a lot of upstream culturally imprinted memetic-linguistic complexity to protect in their evolutionary development, and we do. That places limits on the variation in our neural architecture that they don't have. Vary this architecture too much and you would get autistic humans who can't do all the complex things we do. Fortunately, this hypothesis should be increasingly testable as our science of genetics matures in coming years, and I propose it to any scholar interested in the challenge.

Neurotechnologies and Nootropics: Beware of Hype

What about the impact on human complexity of top-down applications of neuropharmacology or neurotechnology? Again the prospects appear far more overpromoted by some futurists than reality and the historical record can support. We should expect, for example, to see marginally better pharmacological approaches to managing our "average" complexity emerge from our 21st century biotechnology. In fact, our pharmaceutical companies may be among the most commercially successful of our applied nanotechnology activities in coming decades. But it wouldn't fit with developmental systems theory to expect anything profound, anything that would increase our intellectual performance more than incrementally.

In particular, it would be extremely improbable to discover a biotechnological intervention, socially accepted or otherwise, that increased human memory, attention, or energy in anything other than a very limited, temporary, and dose-dependent manner. As history has heretofore indicated, such a crude, top-down intervention would be fought vigorously by our neural homeostatic mechanisms—the most complex systems in the known universe—and be replete with ugly side-effects that must escalate with dose. Consider the very limited success we've seen with Prozac, Ritalin, and other psychiatric aids. These can be quite helpful as transitional aids, but many studies show they are most effective in their first few months (Prozac takes an additional month or two to become effective), and decline in utility with longer courses. Prozac is in a class of drugs that works by dulling receptors and associated circuits (serotonin mediated pathways in this case), and this diffuse dulling of presumably overactive neural pathways can be useful in temporary circumstances. When used as a way of life, however, side effects invariably overtake its value. The same goes for Ritalin, which excites separate pathways (dopamine in this case, if I recall correctly) and with overuse also reduces the general value of dopaminergic pathways to the brain (excitotoxicity, compensatory downregulation, etc.). Such drugs are often overused or abused, and their general efficacy is far less than Big Pharma would have you believe (except for that first dose, which often has very powerful effects in a "virgin" brain).
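The homeostatic-pushback argument can be sketched with a toy model: let a drug's effect be dose times receptor sensitivity, and let chronic exposure drive compensatory downregulation of that sensitivity toward a lower set point, so the same dose yields a shrinking effect over time. The constants below are illustrative assumptions, not pharmacological data:

    # Toy model of the homeostatic pushback described above: a drug's acute effect is
    # dose x receptor sensitivity, but chronic exposure drives compensatory
    # downregulation of sensitivity, so the same dose yields a shrinking effect over
    # time. All constants are illustrative assumptions, not pharmacological data.

    def simulate_chronic_dosing(dose=1.0, days=120, downregulation_rate=0.03, recovery_rate=0.01):
        sensitivity = 1.0   # fully "virgin" receptors on day 0
        effects = []
        for _ in range(days):
            effects.append(dose * sensitivity)
            # Homeostasis: sensitivity falls in proportion to drug pressure, recovers slowly toward 1.0
            sensitivity += recovery_rate * (1.0 - sensitivity) - downregulation_rate * dose * sensitivity
            sensitivity = max(sensitivity, 0.0)
        return effects

    if __name__ == "__main__":
        effects = simulate_chronic_dosing()
        for day in (1, 30, 60, 120):
            print(f"Day {day:>3}: effect ~{effects[day - 1]:.2f} (relative to first-dose effect of {effects[0]:.2f})")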

In coming years I expect we'll be able to develop predictive models that estimate the infinitesimal odds for substantially increasing the complexity of biological systems via top down genetic, pharmacologic, or neurotechnological interventions.

Furthermore, even were we to luck into some significant pharmacological or neurotechnological lever on human mental complexity, we must consider the strong political pressures against implementing that type of intervention, due to its unavoidably divisive social effect during the rollout. Humans would actively resist this on a variety of ingroup ethical grounds. Whether or not you happen to agree with ingroup ethics, they are fundamental to our evolutionary biology. As a result, genetic interventionist prohibitions in higher and even some lower organisms are growing daily, as we become ever more concerned with global dialogs on environmental responsibility, fairness, and eliminating social divides.

The Importance of Research and Development in the Biological Sciences

None of this minimizes the fundamental importance of the biological sciences to improving the human condition. Of all our scientific and technological advances in recent centuries, public health breakthroughs arguably top the list. Medical care has made stunning improvements in the last few decades, health consciousness is expanding globally, and preventive medicine has great unmet promise. The modern theory and empirical tests of evolution have reorganized our worldview and become the dominant intellectual paradigm, for now. The advent of cellular and molecular biology has given us tremendous benefits, as well as new strategies and tools for solving great unmet problems. As our biological knowledge base (genomics, proteomics, metabolomics) accelerates in coming years, driven primarily by Moore's law advances in our technologies (instrumentation, computation, simulation), we will gain tremendous insights on the nature of human and biological systems. We can see a future of ever more computationally-catalyzed biological inquiry, and new models, such as evolutionary development, which give us the inklings of a new, post-Darwinian paradigm, and even deeper insights into the nature of both ourselves and the universe at large.

Perhaps most importantly for the coming transition, we will use our accelerating knowledge of the biological sciences to incorporate increasingly more biologically-inspired designs into our technological systems, thereby migrating the essence of our humanity to the new substrate. As neuroscience and mammalian brain scanning play their parts in AI design in coming decades, we can see this as no less than an extension and expansion of humanoid consciousness into the technological domain, as well as the emergence of a new, posthuman hyperconsciousness, with capabilities we can only speculate on.

Biological instantiation has a long and fruitful history. We see it in the personalization of all our technology, in the centuries of mechanical automata, modeled after living forms, and it underlies so many of our greatest inventions. It is easily forgotten, for example, that the mechanical membrane that transduces sound waves to electrical energy in the telephone emerged from Alexander Graham Bell's experiments with the phonautograph, a device that incorporated a human ear harvested from a cadaver, designed to make a lever jump in response to sound. Bell's study of the human eardrum in this intimate manner helped him to instantiate its function into a technological analog. This new mechanical substrate, while initially crude, was nevertheless no less "natural," adaptive, or dynamic. It contained many new advantages its parent lacked, not the least of which was its amazingly time-compressed cycle of change. Witnessing the explosion of innovation that subsequently occurred, Bell said "The telephone reminds me of a child, only it grows much more rapidly." He wasn't kidding!

In summary, biology, while it will still yield a host of socially valuable benefits in coming decades, appears to be, in many respects, an essentially saturated substrate. We will gain a host of new knowledge from the biological sciences, but we won't use that information to redesign humans, in any significant biological sense. There won't be time or reason to do so.

Infotech, not biotech, now appears to be the constrained developmental future for local intelligence. Thanks for carefully considering these fundamental and strategically important ideas.

On Phase Transitions and the Coming Technological Singularity

Here's a valuable insight from general systems theory: All important substrate emergences, or phase transitions, appear to require primarily bottom up, and secondarily top down control processes. I suggest that the technological singularity, the coming of greater than human machine intelligence on Earth some decades hence, is simply another such phase transition. Let's see how we can apply this insight to understand how machine intelligence may be constrained to emerge.

First, a few examples of phase changes, to understand their nature: consider the phase transition from liquid to solid water. While a top down process of orderly hydrogen bonding is forming at the "skin" of cooling liquid, this is mostly driven by a bottom up, unpredictable loss of thermal energy in random molecular collisions with the growing solid.

Consider next life's dramatic phase transition to multicellularity, half a billion years ago in biological evolutionary development. While a top down, newly emergent transcription regulation mechanism in eukaryotic cells is driving the way individual cells can differentiate, the emergence of successful body plans is mostly driven by a bottom up experimentation, the "Cambrian Explosion," within a massive and highly interactionist selection environment.

Consider the phase transition of human language. While a top down, newly emergent memetic architecture in hominids is driving the way they can choose to communicate, again a mostly bottom up collective experimentation process leads to successful linguistic structures.

Consider the emergence of cities. While a top down process of conscious individual choice to build architectures and technological artifacts is necessary, cities mostly emerge in a bottom up process of cultural experimentation and resource competition.

So it will apparently be with our coming phase transitions to the conversational interface and functional robotics, and to our next great phase transition to the technological singularity, autonomous "posthuman intelligence."

In other words, it is becoming clear that the next autonomous substrate emergence will be a natural but mostly nonbiological extension to our biological selves.

Ever since the Asilomar conferences in the 1970's, it has been obvious that bottom-up evolutionary experiments in biotechnology simply cannot be run in any open environments on our planet today, where they might interact with existing ecologies in unforeseen ways. Those few bottom up "evolutions" that are run in the labs (e.g., Michael Rose's work with Drosophila) are painfully slow by comparison to infotech. That leaves only top-down efforts to redesign or tweak our complex and very slow genetic code as a robust strategy. Biotech therefore has only half the necessary tools to create emergent complexity, and it must use those tools within a substrate that is multi-million-fold slower than the electronic one.

Today, an unaware and pre-infantile infotech, in coevolution with human culture, is running an uncountable number of bottom up evolutionary searches across and throughout our planet's human environment, and will continue to do so until it achieves a natural developmental phase transition (the technological singularity) of full replicative and computational autonomy. At the same time, today's technology designers will continue to use the best of our top down theories about how to create emergent artificial intelligences to "construct" (read: select) the increasingly more self-directing evolutionary developmental intelligences of coming decades.

We are just beginning to understand the critical necessity of both bottom up and top down processes to any "substrate shift." Furthermore, systems theory suggests that each shift becomes possible only when the developmental environment has matured to the point where primarily bottom up processes, and secondarily top down processes, drive the transition. Arguably, we haven't reached that point with technology yet.

Fifty years ago, most technology scholars would have argued that humans were top down controllers of technology development. Today, we find a mix of both perspectives, a state of undifferentiated indeterminacy. On one hand it is clear that we function less like controllers and more like selective catalysts and artificial selectors of our increasingly self-reparative, self-directing sets of socially useful technologies. Yet we would be hard pressed to look at our infotech systems as having an embodied intentionality (they are more like a pre-mind), and we can still find many serious (and I would say, misguided) proponents of primarily top down approaches to the development of greater than human intelligence.

But in just a few more decades, if this falsifiable hypothesis is correct, the primacy of bottom up approaches will become blatantly obvious, when most human designers are acting like "digital gardeners," selecting preferential development of a range of bottom up artificial intelligences.

When today's early biologically inspired computing technologies (neural nets, genetic algorithms, belief networks, support vector machines, evolvable hardware, and their ilk) become both dominant and mature tools for development of much of our complex, high level machine functionality in coming decades, we will know that the singularity is near.
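As a minimal sketch of that bottom-up/top-down split, here is a classic toy genetic algorithm: blind variation and recombination do the bottom-up work, while a human-supplied fitness function plays the secondary, top-down role of the 'digital gardener.' The target string and parameters are arbitrary illustrations, not anyone's production system:

    # Minimal genetic-algorithm sketch of the bottom-up / top-down split described above:
    # random variation and recombination do the bottom-up work, while a human-chosen
    # fitness function acts as the secondary, top-down "artificial selector." The target
    # string and parameters are arbitrary illustrations.
    import random

    TARGET = "bottom up variation, top down selection"
    ALPHABET = "abcdefghijklmnopqrstuvwxyz ,"

    def fitness(candidate):
        # Top-down component: the human-supplied goal the gardener selects toward.
        return sum(a == b for a, b in zip(candidate, TARGET))

    def mutate(candidate, rate=0.02):
        # Bottom-up component: blind, local variation.
        return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in candidate)

    def crossover(a, b):
        cut = random.randrange(len(TARGET))
        return a[:cut] + b[cut:]

    if __name__ == "__main__":
        population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]
        for generation in range(1000):
            population.sort(key=fitness, reverse=True)
            if population[0] == TARGET:
                break
            parents = population[:50]                      # top-down culling
            population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                    for _ in range(150)]   # bottom-up variation
        print(f"Generation {generation}: {population[0]}")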

Other Writings I've Found on This Topic

Al Fin, "The Limits to Biology," 21 Aug 2005. Insightful opinion piece (1 page).

Fortunately, there was a fascinating conference, Are There Limits to Evolution?, 25-26 Sept 2014, Cambridge University, that finally addressed this long-neglected topic. Hopefully now, Limits to Evolutionary Development (a much better name) will become its own recognized field:

Example presentations included:
Tom McLeish, Are there ergodic limits to evolution?
Lakshmi Mahadevan, Newtonian Limits on Evolution.
George McGhee, Limits in the evolution of biological form: A Theoretical perspective.
Virginie Orgogozo, Is there a limited set of genetic paths for evolution?
Matthew Wills, Are there limits to the evolution of novel morphology?

Writings on the unlikeliness of life extension:
Marios Kyriazis et al., The Fallacy of the Longevity Elixir (Abs), Current Aging Science, 2015;8(3):227-34.

Feedback? johnsmart{at}accelerating{dot}org