= ASTRONAUTICAL EVOLUTION =

Issue 90, 1 March 2013 – 44th Apollo Anniversary Year

=============== AE ===============


Technological Singularity, or Plateau?
The case for antisingularitarianism

Stephen Ashworth, Oxford, UK

Alien robots?

In his book The Eerie Silence, Paul Davies writes:

“All in all, machines offer a far safer and more durable repository for intelligence than brains. [...] I think it is very likely – in fact inevitable – that biological intelligence is only a transitory phenomenon, a fleeting phase in the evolution of intelligence in the universe. If we ever encounter extraterrestrial intelligence, I believe it is overwhelmingly likely to be post-biological in nature [...] In a million years, if humanity isn’t wiped out before that, biological intelligence will be viewed as merely the midwife of ‘real’ intelligence – the powerful, scalable, adaptable, immortal sort that is characteristic of the machine realm.”

Singularity theorists such as Ray Kurzweil see machine super-intelligence coming a lot sooner than a million years – around the middle of the present century, in fact. And Nick Bostrom and his fellow professional worriers at the Future of Humanity Institute in Oxford see that intelligence as an “existential threat” to humanity. (See the recent articles on Professor Bostrom and his institute of anxiety in Centauri Dreams and Aeon Magazine.)

Anyone interested in the human expansion into space, particularly on an interstellar scale, must therefore take a view on the question: are humans about to be replaced by self-replicating, exponentially self-improving machines – computer-controlled robots? And by the same token, are any extraterrestrial civilisations we might encounter more likely to consist of little green men or alien robots?

The challenge is nicely posed in a good summary article on the Daily Galaxy website.

A good book-length discussion is J. Storrs Hall, Beyond AI: Creating the Conscience of the Machine, described as a “must read” by Ray Kurzweil.

Obviously, nobody really knows the answer. But let’s consider a few key points.

Of brains and machines

The brain is a material object, and its functions can in principle be replicated by artificially constructed material objects such as digital computers. But the brain is different in important ways from any machine which we can build: it has evolved over half a billion years with no deliberate design involved; it somehow wires itself up from an essentially randomised starting-point; it is partly analogue, partly digital; and it is thoroughly integrated into the body.

We do not yet know how to build an artificial brain comparable with our own. We expect to have machines with brain-equivalent memory and processing speed by the 2030s, but this does not guarantee human-equivalent intelligence, any more than does putting brain-equivalent quantities of carbon, hydrogen, oxygen and other necessary atoms into a test-tube and shaking them up together.
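
What “brain-equivalent” means in round numbers can be sketched in a few lines of Python. Every figure below is a commonly quoted order-of-magnitude estimate or an outright illustrative assumption, not a measurement; the sketch only makes the point that matching the raw numbers is easy to imagine, while matching the organisation is not:

```python
# Back-of-envelope comparison of commonly quoted round numbers for the
# human brain with a hypothetical 2030s machine. All figures are rough
# order-of-magnitude estimates or illustrative assumptions.

NEURONS = 1e11       # ~100 billion neurons (commonly cited estimate)
SYNAPSES = 1e14      # ~100 trillion synapses (commonly cited estimate)
MAX_RATE = 100       # upper-bound firing rate, spikes per second

brain_events_per_s = SYNAPSES * MAX_RATE    # ~1e16 synaptic events/s
brain_memory_bytes = SYNAPSES * 1           # assume ~1 byte per synapse

machine_ops_per_s = 1e16                    # assumed 2030s machine
machine_memory_bytes = 1e14                 # assumed 2030s machine

print(f"brain:   ~{brain_events_per_s:.0e} events/s, ~{brain_memory_bytes:.0e} bytes")
print(f"machine: ~{machine_ops_per_s:.0e} ops/s,    ~{machine_memory_bytes:.0e} bytes")
# The raw numbers can be matched; the test-tube analogy above says why
# that alone guarantees nothing about intelligence.
```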

Gary Marcus, professor of cognitive psychology at New York University, writes on science and technology at The New Yorker. He recently stated in a blog article “The Brain in the Machine”:

“For more than twenty-five years, scientists have known the exact wiring diagram of the three hundred and two neurons in the C. Elegans roundworm, but in at least half a dozen attempts nobody has yet succeeded in building a computer simulation that can accurately capture the complexities of the simple worm’s nervous system.”

(Read about the OpenWorm project here.)
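
A toy sketch (nothing to do with the real OpenWorm code) may show why the wiring diagram alone underdetermines behaviour: the same invented 302-neuron “connectome”, simulated as simple leaky integrators, produces entirely different activity depending on synaptic weights that the anatomical diagram does not supply:

```python
# Toy illustration (not OpenWorm): a fixed, invented "connectome" of 302
# neurons simulated as leaky integrators. The same wiring with different
# (unknown) synaptic weights yields entirely different activity.

import random

N = 302                                     # neuron count, as in C. elegans
random.seed(1)
wiring = [(random.randrange(N), random.randrange(N)) for _ in range(2000)]

def simulate(weight, steps=100):
    v = [0.0] * N
    v[0] = 1.0                              # stimulate one neuron
    for _ in range(steps):
        nxt = [0.9 * x for x in v]          # passive leak
        for pre, post in wiring:
            if v[pre] > 0.5:                # presynaptic neuron "fires"
                nxt[post] += weight
        v = [min(x, 1.0) for x in nxt]
    return sum(1 for x in v if x > 0.5)     # neurons active at the end

for w in (0.05, 0.2, 0.6):                  # same wiring, different weights
    print(f"weight {w}: {simulate(w)} of {N} neurons active")
```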

Computers have proved useful for well-defined tasks such as making mathematical calculations and playing chess. They are proving difficult to adapt to poorly defined tasks involving general learning, developing common sense and general intelligent behaviour. Clearly, a computer is a machine for automating logical instructions. A brain is something different. But what?

Intelligence is not disembodied logical thought, but is intimately linked with feelings, emotions, moral sensibilities and humour. These appear to be emergent properties of the brain-body system rather than qualities which can be programmed in terms of specific lines of code.

Intelligence (like life itself) is difficult even to define exactly, but I suggest that one key feature is that, whatever logical structure of activity or algorithm is in effect, an intelligent being can always mentally step back from that and make a judgement as to whether that algorithm is in fact sensible or not. A stupid being, if doing something stupid, will continue doing so forever; an intelligent one will realise that it is behaving stupidly, stop doing so and work out a better approach.
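
A caricature in code may make the distinction concrete. In the sketch below, which is entirely invented for illustration, an agent running a bad fixed rule never reaches its goal, while a crude meta-level check notices the lack of progress and revises the rule. Note that the “judgement” here is itself just more code; the genuinely open-ended version of stepping back is exactly what resists being written down:

```python
# Caricature of "stepping back": an agent walks towards a target with a
# deliberately wrong move rule. A crude meta-level check notices the lack
# of progress and reverses the rule. Everything here is invented for
# illustration, and the "judgement" is itself only more code.

TARGET = 100

def run(with_metacheck):
    pos, step = 0, -1                       # stupid initial rule: walk away
    history = []
    for tick in range(500):
        pos += step
        history.append(abs(TARGET - pos))   # distance still to go
        if with_metacheck and len(history) >= 10:
            if history[-1] >= history[-10]: # no progress in ten ticks?
                step = -step                # step back and revise the rule
                history.clear()             # start judging afresh
        if pos == TARGET:
            return f"reached target at tick {tick}"
    return f"never arrived (finished at {pos})"

print("fixed algorithm:", run(False))
print("with meta-check:", run(True))
```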

If this capability is an emergent property of complex systems which cannot itself be codified, then I question whether it is possible for any being to deliberately design and construct a being of greater general intelligence than itself. We will get capable machines for specific tasks, but not super-intelligent ones. Our own intelligence has been enormously enhanced by technological aids (starting with simple things like pencil and paper), and this will continue, but the wholesale replacement of the human brain as the ultimate organ of thought and feeling is not a plausible prospect.

Daniel Dewey, research fellow at Bostrom’s Institute, is quoted in the Aeon article as worrying that a superintelligence might mechanically pursue some goal with superhuman efficiency and accidentally destroy humanity along the way. Thus his speculation attributes to it superhuman intelligence and subhuman stupidity in the same sentence. Good for science fiction. Totally unworthy of an institute that is part of Oxford University.

We know that superior general intelligence does arise, because we ourselves exist! But so far it has done so in an unplanned evolutionary fashion. This suggests a scenario in which we see the overall intelligence of the human race continue to gradually increase as machines for specific tasks continue to be integrated into the overall economy, without a super-intelligence arising at any one specific point.

The limits to progress

In contrast to the singularitarians, I am more attracted to the plateaunian point of view: technological progress has reached an inflection point and is starting to decelerate towards a plateau. This is already apparent in such areas as nuclear fusion research, space travel, medicine and fundamental physics and cosmology.

Predictions that we will soon have artificial intelligence and other magical things (in Clarke’s sense) like faster-than-light travel, universal material abundance or personal immortality are hype, not well-founded expectations. They are motivated by the belief that the accelerating progress of the past 200 years will continue, but in reality exponential growth must come to an end when it reaches its natural limits, and even technology has its limits.
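
The standard textbook way to formalise this picture is the logistic model, given here purely as an illustration of the plateau claim: growth that is indistinguishable from exponential at the start passes an inflection point at half the ceiling and then decelerates towards it:

```latex
% Logistic growth towards a ceiling K (textbook form): exponential at
% first, inflection at P = K/2, deceleration towards the plateau.
\[
  \frac{dP}{dt} = rP\left(1 - \frac{P}{K}\right),
  \qquad
  P(t) = \frac{K}{1 + \dfrac{K - P_0}{P_0}\, e^{-rt}}
\]
% For P << K the equation reduces to dP/dt = rP, i.e. pure exponential
% growth; the limits only make themselves felt as P approaches K.
```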

Yes, there is a lot of progress still to be made. We will eventually get fusion power, interplanetary colonisation, a decent standard of living for the majority of people, semi-intelligent robots, but more slowly and at greater cost than the most optimistic among us imagine. We will get to the stars, but again, not soon and not quickly. Warp drives and so-called “faster than light” travel (a physical impossibility) will remain theoretical concepts, and the multi-generational worldship scenario will win out, after several centuries of gradual preparation within the Solar System.

In the Daily Galaxy article, Seth Shostak is quoted as saying: “If we build a machine with the intellectual capability of one human, then within 5 years, its successor is more intelligent than all humanity combined”. This has to be hype: it depends upon the idea that a being of intelligence X can deliberately design and build a being of intelligence X+1, which, I have suggested, is highly dubious.
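
The difference between the two assumptions can be made explicit with a toy recursion (all numbers invented). If each generation of machine multiplies capability by a fixed factor, intelligence explodes as Shostak assumes; if each design step yields diminishing returns, the same recursion converges to a finite plateau:

```python
# Toy recursion, invented numbers: generation n designs generation n+1.
# Constant-gain assumption (Shostak): capability multiplies by a fixed
# factor each step -> explosion. Diminishing-returns assumption: each
# step's gain shrinks -> capability converges to a finite plateau.

def constant_gain(x0=1.0, factor=2.0, gens=20):
    x = x0
    for _ in range(gens):
        x *= factor                  # each design step doubles capability
    return x

def diminishing(x0=1.0, gain=0.5, decay=0.5, gens=20):
    x = x0
    for n in range(gens):
        x += gain * decay**n         # gains form a convergent series
    return x                         # tends to x0 + gain/(1 - decay) = 2.0

print(f"constant gain after 20 generations:       {constant_gain():,.0f}")
print(f"diminishing returns after 20 generations: {diminishing():.4f}")
```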

For a counterblast to the enthusiasts, see Athená Andreádis’s reaction to the news of a planet orbiting Alpha Centauri B in her blog: Why We May Never Get to Alpha Centauri (I have added stress marks to her name so that readers pronounce it correctly). She writes, in her own inimitable style:

“There’s exactly one domain that’s moving fast: technology that depends on computing speed, although it, too, is approaching a plateau due to intrinsics. To give you an example from my own field, I’ve worked on dementia for more than twenty years. During this time, although we have learned a good deal [...] we have not made any progress towards reliable non-invasive early diagnosis of dementia, let alone preventing or curing it. The point here is not that we never will, but that doing so will require a lot more than the mouth farts of stage wizards, snake-oil salesmen or pseudo-mavens.”

In other words, we have so far not the slightest idea what the difference is between a “superintelligent machine” that suffers from dementia and one that does not!

Creating artificial intelligence is very comparable to interstellar space travel: we can imagine doing it, it seems achievable, it has become a commonplace of science fiction, so we assume that it’s a foregone conclusion given only a few more decades of research. But in both cases the magnitude of the undertaking is immense, whether we’re talking about sending astronauts or even robots safely over tens of trillions of kilometres, or about replicating the functionality of a biological organ with around a trillion components, each of which itself contains millions of moving parts.
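
For scale, the distance half of that comparison is easily checked against the standard value of the light-year:

```python
# Scale check using the standard value of the light-year.
LY_KM = 9.4607e12                    # kilometres per light-year
print(f"Alpha Centauri, 4.37 ly: {4.37 * LY_KM:.2e} km")
# -> ~4.13e+13 km, i.e. roughly forty trillion kilometres
```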

In the imagination we can easily leap ahead and solve these problems with a wave of the hand. But what if the limitations on our powers built into the structure of the universe through such things as the laws of thermodynamics, of relativity, of quantum mechanics and Gödel’s incompleteness theorem are really forcing a technological deceleration?

During the 20th century, it is true, dramatically accelerating technological change outstripped almost everyone’s imagination. But times have changed, and we are now at a stage when the popular imagination, charged up on the transformations in society over the past century and expecting more of the same to come, is running far beyond the new capabilities remaining to us in this, the final phase of the industrial revolution.

The idea of a superhuman intelligence is a seductive one. We imagine it would be free from human imperfections and untroubled by what we regard as our frivolous emotions. But in reality any intelligent being, comparable with human general intelligence or superior to it, would of necessity find itself in the same world of incomplete information, inability to predict the future with 100% accuracy, unintended consequences to its actions, competing priorities, limited resources, the necessity to make arbitrary emotionally driven decisions about what to do, and vulnerability to regrets about the past. In fact any real artificial intelligence would be as aware of its own imperfections as we are of ours – perhaps more so.

As J. Storrs Hall points out, there is a danger that people will become convinced that a sufficiently advanced computer understands what it is doing, in the same way that a competent human does, when in fact it does not. He describes the ELIZA effect: people are easily fooled into believing that a simple computer program (ELIZA, an early version of a chatbot) is interacting with them intelligently when an inspection of the program reveals that it is merely producing arbitrary responses in a mechanical fashion, while repeating back key words and phrases to its interlocutor.
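
How mechanical the trick is can be shown in a few lines. The sketch below is a minimal keyword-reflection bot written in the spirit of ELIZA (it is not Weizenbaum’s 1966 program): it matches a pattern, swaps pronouns, and parrots the remainder back, with no understanding anywhere in the loop:

```python
# A minimal keyword-reflection chatbot in the spirit of ELIZA (a sketch,
# not Weizenbaum's 1966 program). There is no understanding here: match a
# keyword pattern, swap pronouns, parrot the rest back.

import re

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
    (r"(.*)",        "Please go on."),              # catch-all fallback
]

def reflect(text):
    return " ".join(REFLECT.get(word, word) for word in text.split())

def respond(line):
    for pattern, template in RULES:
        match = re.match(pattern, line.lower())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))

print(respond("I feel that nobody listens to me"))
# -> Why do you feel that nobody listens to you?
print(respond("My computer seems intelligent"))
# -> Tell me more about your computer seems intelligent.
```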

Plateaunian reality

I propose, firstly, that conscious general intelligence cannot be defined with sufficient rigour to provide a usable engineering template. Secondly, that general intelligence is an emergent property of sufficiently complex information processing systems, and not a property that can be deliberately programmed into such a system by an outside programmer. Thirdly, that we cannot therefore deliberately create a human-equivalent or super-human artificial general intelligence, but that such an intelligence will naturally emerge from our information-processing systems in an unplanned way if we continue to develop and network with increasingly sophisticated machines.

The consequence will therefore be that we will have interactions with machines of gradually increasing resemblance to those which we have with other people, but there will be no particular eureka moment when the first conscious computer is switched on, and no sudden takeoff of self-improving superhuman intelligence.

There will be a long period when the status of machine intelligence is completely unclear, with some hotheads proclaiming that the Singularity has arrived, and other, more sceptical, observers claiming that the machines are still nothing more than high-speed idiots. This will be exacerbated by the fact that computers are already superior to humans at specific well-defined tasks that can be accomplished with programmable algorithms, and will continue to display a mixture of superhuman ability in some areas and subhuman idiocy in others.

It will also be exacerbated by the development of anthropoid robots (male-themed androids and female-themed gynoids) for care of the elderly, then care of the young, then as sexbots (all three representing enormous markets, and therefore attractive for investment). The anthropomorphism will convince many that the bots are already human-equivalent when in reality they are not.

Meanwhile, as computers become more powerful, the parallel trends towards increasing networking among computers and increasingly direct interaction (tending towards direct thought control) between humans and machines will continue. The consequence will be that the specific location of intelligence will be of progressively less importance: we will have, not a community of individual humans struggling to keep up with a community of individual computer intelligences, but a global network intelligence encompassing all the people and machines which are linked in. A global brain, in fact.

But it will not be “post-biological” (Davies’s term) any more than we multicellular animals are post-bacterial.

At some point we will probably be able to network brains directly to one another, and later directly to computers. Thus consciousness may be shared among multiple brains, and among brains and sufficiently advanced computers. This will in effect amount to immortality – not that the individual person will live for ever, but that consciousness will no longer appear to be confined to that individual. The illusion of our separate identities will be lost. Conscious awareness will be able to migrate from one individual to another at will, and ultimately into computers, once those achieve human-equivalent functionality, which as noted above may not happen very quickly.

In the long run, therefore, I think that Davies is partly right: our post-human descendants, assuming continued technological development, will be progressively more designed and less dependent upon their genetic heritage, though where the ultimate balance between grown components (e.g. brains with neurons) and manufactured components will lie is impossible to say. Maybe no ultimate stable balance will be found, and/or different branches of post-humanity will find different solutions.

But my point here is that our intelligence is not a disembodied logic, but is intimately tied in to our experience of the physical world and our abilities to manipulate it. By virtue of our intelligence we can do sports and gymnastics, games and arts of all sorts, and these in turn augment our intelligence. The physics genius also plays the violin or teaches himself to read Sanskrit. Any human-equivalent intelligence worthy of the name – even more so any superhuman intelligence – must be at least as interested in and at least as adept as we are in all the arts and games and sensual experiences and pleasures that we have. Its physical incarnation must therefore be at least as physically agile and as well-endowed with sensory organs as we are.

It will therefore not look in the least like a machine as we think of a machine, and even less be a thinking black box. Daniel Dewey’s idea that a superintelligence can be confined in a cage “whose only tool was a small speaker or a text channel” is self-contradictory – a product of the fashionable over-intellectualising of intelligence and ignoring of its embeddedness in the sensory world.

While it could take many physical forms, one of those forms will continue to be that of the human body because the body has so many modes of functionality – it can dance, play music, taste wine, have sex... (are people seriously supposing that a superintelligence worthy of the name will not be able to dance?). But it will be a body augmented with apparently godlike powers, notably telepathy (i.e. inbuilt wireless networking) and telekinesis (i.e. that networking with everyday objects such as doors and other electronic devices).

The quest for artificial intelligence involves exploring the human brain as well as making ever more sophisticated machines, and the two are beginning to join up. Kevin Warwick has already demonstrated a hardwired link between his own nervous system and a robot arm (on YouTube). Thus artificial intelligence research is being paralleled by cyborg research, suggesting that the future is one, not of computers overtaking human intelligence, but of the two merging together. And this is also discussed in the Daily Galaxy article.

So here I disagree with Davies: we are looking, not at a totalitarian machine future, but at one in which the distinction between biology and technology is increasingly blurred. While our bodies are increasingly augmented by mechanical aids (from spectacles to implanted digital computers), our most sophisticated machines (i.e. computerised robots) will increasingly resemble biology, and may even end up using the same basic machinery of organic molecules, if the recent interest in DNA for data storage is any guide (see the recent Centauri Dreams posting on this).

Not that humans will one day turn into cyborgs: we already are cyborgs. The integration of the biological human with the manufactured product has been going on for a long time now, from the Samurai warrior with his snicker-snee to car drivers, musicians, and obviously people with pacemakers, spectacles, artificial teeth and limbs and so on.

Hall bemoans the inability of computer programs to date (2007) to improve themselves: automatic programming has been “the single greatest failure of AI” (p.142); “The biggest lack of AI systems today is the ability to learn” (p.146). This is because general learning and the formation of new concepts is not a well-defined activity and requires more than just intelligence, if by intelligence is meant facility at manipulating symbols already given. It needs imagination, creativity, intuition, understanding.

As Hall says, “If this [unlimited self-improvement] isn’t possible, then any AI we create will just be a wind-up toy, albeit possibly one with a very big spring; but we won’t have to worry about it taking off into superintelligence” (p.120). He goes on (ch.7) to discuss whether human intelligence is on or above von Neumann’s “complexity barrier”, the threshold of unlimited self-improvement, but his discussion is hampered by the fact that he does not distinguish between unconscious evolutionary change and deliberate engineered change. In my view, this is a crucial distinction. Evolution and engineering are not at all the same thing!

To conclude: a digital computer as currently understood is a machine for executing programmed instructions. The AI project is founded on the hypothesis that intelligence can be completely codified in a sufficiently large number of instructions, like a sufficiently large Turing machine. But in reality, I claim, intelligence is the ability to see beyond programmed instructions. “Machine intelligence” is therefore a contradiction in terms: intelligence cannot be manufactured in a mechanical way, but must be allowed to emerge organically.
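
What “executing programmed instructions” amounts to can be stated exactly. Below is a minimal Turing-machine interpreter, an illustrative toy with an invented transition table that increments a binary number. Everything the machine will ever do is fixed in that table; nothing inside it can step back and ask whether the table is sensible:

```python
# A minimal Turing-machine interpreter (illustrative toy). The invented
# table below increments a binary number. Every behaviour the machine
# will ever show is fixed in the table; nothing inside the machine can
# step outside the table and judge it.

def run(table, tape, state="start", pos=0):
    cells = dict(enumerate(tape))           # sparse tape, "_" = blank
    while state != "halt":
        symbol = cells.get(pos, "_")
        write, move, state = table[(state, symbol)]
        cells[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(cells[i] for i in sorted(cells)).strip("_")

INCREMENT = {                               # walk right, then add 1 with carry
    ("start", "0"): ("0", "R", "start"),
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("_", "L", "carry"),
    ("carry", "0"): ("1", "L", "halt"),
    ("carry", "1"): ("0", "L", "carry"),
    ("carry", "_"): ("1", "L", "halt"),
}

print(run(INCREMENT, "1011"))               # -> 1100 (11 + 1 = 12 in binary)
```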

This conclusion is supported by the fact that intelligence is associated with consciousness – another phenomenon which science likes to claim it understands when in fact it does not. At present, science does not have any instrument capable of directly detecting the presence or establishing the absence of conscious awareness in any physical object (a computer, an animal, an alien, another human being), and therefore no grounds on which to establish a correlation between consciousness and any structure of physical matter.

On balance, I would say that the future vision offered by Davies and Shostak must be rejected. The progressive integration of biology with technology will of course continue. The logical outcome is not that biological intelligent life will necessarily be replaced by a machine order, but that the two will merge together organically. Engineering design will continue to make a critically important contribution, but the progress into the post-human realm will ultimately be determined by the growth of the total system in an irreducibly holistic and evolutionary fashion.


Book references

Paul Davies, The Eerie Silence (Allen Lane, 2010), quote on pp. 160-161.

J. Storrs Hall, Beyond AI: Creating the Conscience of the Machine (Prometheus, 2007).