[wordup] singularity

Adam Shand adam at personaltelco.net
Wed Jul 24 22:08:50 EDT 2002


Some of this is obviously complete BS but some of it's amazing.  For
some reason I am reminded of Ryumin in Schismatrix.

Adam.

From: http://sysopmind.com/tmol-faq/tmol-faq.html

1.1: What is humanity's place in the cosmos?

The same place held by all the other technology-using species now
briefly living on or around the ten billion trillion (1) stars in this
Universe:  Our role in the cosmos is to become or create our
successors.  I don't think anyone would dispute that something smarter
(or otherwise higher) than human might evolve, or be created, in a few
million years.  So, once you've accepted that possibility, you may as
well accept that neurohacking, BCI (Brain-Computer Interfaces),
Artificial Intelligence, or some other intelligence-enhancement
technology will transcend the human condition, almost certainly within
your lifetime (unless we blow ourselves to dust first).

  "Within thirty years, we will have the technological means to 
  create superhuman intelligence. Shortly after, the human era will 
  be ended." -- Vernor Vinge, 1993 

The really interesting part about the creation of smarter-than-human
intelligence is the positive-feedback effect.  Technology is the product
of intelligence, so when intelligence is enhanced by technology, you've
got transhumans who are more effective at creating better transhumans,
who are more effective at creating even better transhumans.  Cro-Magnons
changed faster than Neanderthals, agricultural society changed faster
than hunter-gatherer society, printing-press society changed faster than
clay-tablet society, and now we have "Internet time".  And yet all the
difference between an Internet CEO and a hunter-gatherer is a matter of
knowledge and culture, of "software".  Our "hardware", our minds,
emotions, our fundamental level of intelligence, are unchanged from
fifty thousand years ago.  Within a couple of decades, for the first
time in human history, we will have the ability to modify the hardware.

And it won't stop there.  The first-stage enhanced humans or artificial
minds might only be around for months or even days before creating the
next step.  Then it happens again.  Then again.  Whatever the ultimate
ends of existence, we might live to see them.

To put it another way:  As of 2000, computing power has doubled every
two years, like clockwork, for the past fifty-five years.  This is known
as "Moore's Law".  However, the computer you're using to read this Web
page still has only one-hundred-millionth the raw power of a human
brain, which performs around a hundred million billion (10^17)
operations per second (2).  Estimates on when computers will match the
power of a human brain vary widely, but IBM has recently announced the
Blue Gene project to achieve petaflops (10^15 ops/sec) computing power
by 2005, which would bring us within a factor of a hundred of that mark.
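That remaining gap can be sanity-checked in a few lines of Python.  This
is only a back-of-the-envelope sketch; the 10^15 and 10^17 figures and
the two-year doubling period are the estimates quoted above, not
established facts:

```python
import math

BRAIN_OPS = 1e17        # quoted estimate of brain ops/sec
BLUE_GENE_OPS = 1e15    # Blue Gene's petaflops target
DOUBLING_YEARS = 2      # assumed Moore's-Law doubling period

gap = BRAIN_OPS / BLUE_GENE_OPS       # the "factor of a hundred"
doublings = math.log2(gap)            # ~6.6 doublings still needed
years = doublings * DOUBLING_YEARS    # ~13 more years at this pace

print(f"gap: {gap:.0f}x, doublings: {doublings:.1f}, years: {years:.1f}")
```

So on these assumptions, Blue Gene-class hardware would be roughly
thirteen years of Moore's Law away from brain-scale raw power.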

Once computer-based artificial minds (a.k.a. Minds) are powered and
programmed to reach human equivalence, time starts doing strange
things.  Two years after human-equivalent Mind thought is achieved, the
speed of the underlying hardware doubles, and with it, the speed of Mind
thought.  For the Minds, one year of objective time equals two years of
subjective time.  And since these Minds are human-equivalent, they will
be capable of doing the technological research, figuring out how to
speed up computing power.  One year later, three years total, the Minds'
power doubles again - now the Minds are operating at four times human
speed.  Six months later... three months later...

When computing power doubles every two years, what happens when
computers are doing the research?  Four years after artificial Minds
reach human equivalence, computing power goes to infinity.  That's the
short version.  Reality is more complicated and doesn't follow neat
little steps (3), but it ends up at about the same place in less time -
because you can network computers together, for example, or because
Minds can improve their own code.
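The "neat little steps" form a geometric series: each doubling takes
half the objective time of the one before, so the total objective time
converges to four years.  A toy simulation of the idealized model in the
text (all numbers are the article's assumptions):

```python
def years_to_singularity(first_doubling=2.0, steps=60):
    """Sum the objective time for successive doublings; each interval
    halves because the Minds doing the research think twice as fast."""
    total, interval = 0.0, first_doubling
    for _ in range(steps):
        total += interval
        interval /= 2
    return total

# 2 + 1 + 0.5 + 0.25 + ... converges toward 4.0 objective years
print(years_to_singularity())
```

The sum never quite reaches four years in finitely many steps, but it
gets arbitrarily close, which is the point of the thought experiment.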

From enhanced humans to artificial Minds, the creation of
greater-than-human intelligence has a name:  Singularity.  The term was
invented by Vernor Vinge to describe how our model of the future breaks
down once greater-than-human intelligence exists.  We're fundamentally
unable to predict the actions of anything smarter than we are - after
all, if we could do so, we'd be that smart ourselves.  Once any race
gains the ability to technologically increase the level of intelligence
- either by enhancing existing intelligence, or by constructing entirely
new minds - a fundamental change in the rules occurs, as basic as the
rise to sentience.

What would this mean, in concrete terms?  Well, during the millennium
media frenzy, you've probably heard about something called "molecular
nanotechnology".  Molecular nanotechnology is the dream of devices built
out of individual atoms - devices that are actually custom-designed
molecules.  It's the dream of infinitesimal robots, "assemblers",
capable of building arbitrary configurations of matter, atom by atom -
including more assemblers.  You only need to build one general
assembler, and then in an hour there are two assemblers, and in another
hour there are four assemblers.  Fifty hours and a few tons of raw
material later you have a quadrillion assemblers (4)!  Once you have
your bucket of assemblers, you can give them molecular blueprints and
tell them to build literally anything - cars, houses, spaceships built
from diamond and sapphire; bread, clothing, beef Wellington...  Or make
changes to existing structures; remove arterial plaque, destroy
cancerous cells, repair broken spinal cords, regenerate missing legs,
cure old age...
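The fifty-hour figure is plain exponential doubling.  A quick check,
using the one-doubling-per-hour rate from the text and ignoring
real-world limits like raw material supply and waste heat:

```python
hours = 50
assemblers = 2 ** hours   # one assembler population doubling per hour

# 2^50 = 1,125,899,906,842,624 -- about 1.1 quadrillion
print(f"{assemblers:.2e} assemblers after {hours} hours")
```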

I am not a nanotechnology fan.  I don't think the human species has
enough intelligence to handle that kind of power.  That's why I'm an
advocate of intelligence enhancement.  But unless you've heard of
nanotechnology, it's hard to appreciate the magnitude of the changes
we're talking about.  Total control of the material world at the
molecular level is what the conservatives in the futurism business are
predicting.

Material utopias and wish fulfillment - biological immortality,
three-dimensional Xerox machines, free food,
instant-mansions-just-add-water, and so on - are a wimpy use of a
technology that could rewrite the entire planet on the molecular level,
including the substrate of our own brains.  The human brain contains a
hundred billion neurons, interconnected with a hundred trillion
synapses, along which impulses flash at the blinding speed of... 100
meters per second.  Tops.

If we could reconfigure our neurons and upgrade the signal propagation
speed to around, say, a third of the speed of light, or 100,000,000
meters per second, the result would be a factor-of-one-million speedup
in thought.  At this rate, one subjective year would pass every 31
physical seconds (5).  Transforming an existing human would be a bit
more work, but it could be done (6).  Of course, you'd probably go nuts
from sensory deprivation - your body would only send you half a minute's
worth of sensory information every year.  With a bit more work, you
could add "uploading" ports to the superneurons, so that your
consciousness could be transferred into another body at the speed of
light, or transferred into a body with a new, higher-speed design.  You
could even abandon bodies entirely and sit around in a virtual-reality
environment, chatting with your friends, reading the Library of
Congress, or eating three thousand tons of potato chips without
exploding.
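Both the factor-of-one-million speedup and the half-minute-per-year
figure fall out of simple division, using the two signal speeds given
above (the c/3 superneuron speed is, of course, the article's
hypothetical):

```python
NEURON_SPEED = 100.0       # m/s, roughly the fastest biological axons
SUPERNEURON_SPEED = 1e8    # m/s, about a third of lightspeed (assumed)
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # ~3.16e7 seconds

speedup = SUPERNEURON_SPEED / NEURON_SPEED            # 1,000,000x
physical_secs_per_subjective_year = SECONDS_PER_YEAR / speedup

print(speedup, physical_secs_per_subjective_year)     # 1e6, ~31.6 s
```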

If you could design superneurons that were smaller as well as being
faster, so the signals had less distance to travel... well, I'll skip to
the big finish:  Taking 10^17 ops/sec as the figure for the computing
power used by a human brain, and using optimized atomic-scale hardware,
we could run the entire human race on one gram of matter, running at a
rate of one million subjective years every second.

What would we be doing in there, over the course of our first trillion
years - about eleven and a half days, real time?  Well, with control
over the substrate of our brains, we would have absolute control over
our perceived external environments - meaning an end to all physical
pain.  It would mean an end to old age.  It would mean an end to death
itself.  It would mean immortality with backup copies. It would mean the
prospect of endless growth for every human being - the ability to expand
our own minds by adding more neurons (or superneurons), getting smarter
as we age.  We could experience everything we've ever wanted to
experience.  We could become everything we've ever dreamed of becoming. 
That dream - life without bound, without end - is called Apotheosis.
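The "eleven and a half days" conversion is worth checking: at the
claimed rate of one million subjective years per second, a trillion
subjective years is a million physical seconds.  A two-line verification
of that arithmetic (the rate itself is the article's speculation):

```python
SUBJ_YEARS_PER_SEC = 1e6   # rate claimed for the one-gram substrate
subjective_years = 1e12    # a trillion years

physical_seconds = subjective_years / SUBJ_YEARS_PER_SEC   # 1e6 seconds
physical_days = physical_seconds / 86400                   # ~11.57 days
print(physical_days)
```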

With that dream dangling in front of you, you'll be surprised to learn
that I do not consider this the meaning of life.  (Yes!  Remember how
you got here?  We're still talking about that!)  It's a big carrot, but
still, it's just a carrot.  Apotheosis is only one of the possible
futures.  I'm not even sure if Apotheosis is desirable.  But we'll get
to that later.  Remember, this is just the introductory section.

All this is far from being the leading edge of transhumanist
speculation, but I wouldn't want to strain your credulity.  Still, if
you want some of the interesting stuff, you can take a look at my
"Staring Into the Singularity", or the Posthumanity Page from the Anders
Transhuman Pages.  See also 5.4: Where do I go from here?

If, on the other hand, you're still in a future-shock coma over the
whole concept of improved minds in improved bodies, I recommend Great
Mambo Chicken and the Transhuman Condition, the book which was my own
introduction to the subject.  (For more on Great Mambo Chicken, see the
Bookshelf.)

Otherwise, we now return you to your regularly scheduled FAQ. 



