Uploaded intelligence

HareBrain

I've been watching the Netflix animated series Pantheon, in which, broadly, the minds of dying people are uploaded into machines. This has obviously been a staple of SF for some time, but I haven't read/watched much dealing with it.

For those who have, a couple of questions.

First, this has been touted by some as a way of surviving death (and so far is presented that way in the series). But if these people think the consciousness of the biological person transfers to the machine, what do they think happens if the data is uploaded into two machines at once? Will the consciousness be aware of being in two places at the same time?

Second, if a machine intelligence claims to be conscious, how will anyone know whether it is merely hallucinating in the same way that ChatGPT etc does?
 
I can't answer those questions specifically, but Iain M Banks deals extensively with backing up in the Culture books, and I'm sure that at one point the significance of the point at which the backing up takes place prior to death is discussed. Also, Richard K Morgan in the Altered Carbon series has characters who die and are placed into new skins from back-up, which certainly allows for multiple copies of the same person to exist at the same time.

However, I don't think either author believed in a religious "soul," so to them "consciousness" is not a non-material essence of a person; instead, identity and personality are something that can be written as a program, or as chemical equations, and memories are merely data to be stored as text and images.

There was a film Johnny Mnemonic (1995) from a William Gibson short story, where people experienced other people's memories written on computer chips. As far as I remember, they didn't share identity and personality, only memories, but maybe some Cyberpunk stories do develop these ideas and answer your questions. Maybe look to the works of William Gibson and Rudy Rucker?
 
For those who have, a couple of questions.

Oh boy, big topic!

First, this has been touted by some as a way of surviving death (and so far is presented that way in the series). But if these people think the consciousness of the biological person transfers to the machine, what do they think happens if the data is uploaded into two machines at once? Will the consciousness be aware of being in two places at the same time?

I guess it really depends on what you think consciousness is, or where it resides.

If you are a materialist you probably think that consciousness is an emergent property of the physical brain (and of its interaction with the physical world, but let's leave that aside for the moment). So, a bit like the teleportation issue, if you 'transfer' the various mind states from one body to a new one, I do think that is really the end of one entity and the creation of a new one. That's not immortality in my book: even if the new one has all the experience and memories of the old one, it is not the same consciousness. I guess one could argue that the 'pattern' that represents a mind has survived, so that could be an immortality of sorts.

In this case, having two bodies constructed for transfer wouldn't really be a problem. There would just be two copies of the consciousness, that would then go off and diverge from each other I presume.
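The fork-and-diverge picture can be illustrated with ordinary copy semantics. A toy Python sketch (my own illustration, not anything from the thread; the "mind state" dictionary is obviously a stand-in, not a claim about real minds):

```python
import copy

# A toy "mind state": some memories plus a mutable disposition.
original = {"memories": ["childhood", "first job"], "mood": "curious"}

# "Uploading" to two machines at once is just two independent deep copies.
upload_a = copy.deepcopy(original)
upload_b = copy.deepcopy(original)

# From the moment of copying, each instance accumulates its own history.
upload_a["memories"].append("woke up in machine A")
upload_b["memories"].append("woke up in machine B")

# Neither copy is aware of, or affected by, the other; they simply diverge.
assert upload_a["memories"] != upload_b["memories"]
# And neither copy's changes touch the original.
assert "woke up in machine A" not in original["memories"]
```

On this picture there is no puzzle about "being in two places at once": there are just two systems that happened to start from identical state.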

On the other hand, if you are a dualist and believe that the mind is separate from the body, then perhaps you could imagine the 'soul' floating over from the old body and attaching itself to the new body... but I do think you have an issue here. Namely, why is your mind connected to your body in the first place? What happens when you bin the old body? Does the soul flit off somewhere else? Why would it reconnect with something else? You'd have to answer these questions before moving on to: what would happen if there were two identical bodies?

I think a better way of (perhaps) attaining immortality, that does not involve transfer, would be to take a 'ship of Theseus' approach - if you are a materialist I guess.

So, say you have an advanced nanotechnology that can replace a neuron with an 'e-neuron'. An e-neuron is identical in function to a natural neuron, but, let's say for this thought experiment, cannot die and is effectively immortal. It can also be seamlessly injected into a person's brain and replace its target neuron with no harm. Thus, while someone's consciousness is still 'running', the substrate that produces the mind is slowly converted into immortal parts. Then I'd suggest that could provide a person's consciousness with a form of immortality.
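The key feature of this thought experiment is that the replacement is gradual and behaviour-preserving: at no step is the system switched off or copied. A toy Python sketch of that constraint (entirely my own illustration; `respond` stands in for whatever the brain does, and the "neurons" are arbitrary functions):

```python
# Ship-of-Theseus sketch: swap each "neuron" for a functionally identical
# "e-neuron" one at a time, checking the system behaves the same throughout.

def make_neuron(i):
    return lambda x: (x + i) % 7      # some fixed input/output behaviour

def make_e_neuron(i):
    return lambda x: (x + i) % 7      # identical function, "immortal" substrate

brain = [make_neuron(i) for i in range(5)]

def respond(brain, stimulus):
    # The "running" system: feed a stimulus through every neuron in turn.
    for neuron in brain:
        stimulus = neuron(stimulus)
    return stimulus

before = respond(brain, 3)

# Gradual replacement: after every single swap the brain is still fully
# functional and behaves exactly as it did before.
for i in range(len(brain)):
    brain[i] = make_e_neuron(i)
    assert respond(brain, 3) == before
```

The unanswered question from the post, of course, is hidden in `make_e_neuron`: whether a replacement that is *exactly* functionally identical is physically achievable.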

Of course could we really produce an 'e-neuron' that 100% accurately replicates the behaviour of your original neurons? Cells are tremendously complex things.


Second, if a machine intelligence claims to be conscious, how will anyone know whether it is merely hallucinating in the same way that ChatGPT etc does?

Quick answer: I'd argue you have no idea whether anything else in the universe, except yourself, is truly conscious. Being a subjective being, I'm afraid, is problematic in this regard.

If I am being very nice to ChatGPT, I'd say that its programming uses mechanisms (neural nets) that we believe our brains use, so one could argue it's related to our brains in some manner. But generally, right now, I don't even think it's hallucinating that it's conscious. It's just not conscious. We're just a bit excited that the process that produces ChatGPT's output gives us the impression of intelligence. (We can be easily over-excited...)
 
If you are a materialist you probably think that consciousness is an emergent property of the physical brain (and of its interaction with the physical world, but let's leave that aside for the moment). So, a bit like the teleportation issue, if you 'transfer' the various mind states from one body to a new one, I do think that is really the end of one entity and the creation of a new one. That's not immortality in my book: even if the new one has all the experience and memories of the old one, it is not the same consciousness. I guess one could argue that the 'pattern' that represents a mind has survived, so that could be an immortality of sorts.
This seems the most likely outcome to me -- and I agree, I don't think this would count as proper immortality in the way its real-life enthusiasts are hoping. It seems clear to me that they are expecting their experience (or consciousness) to continue, when it would actually terminate.

But what you would then have is a machine (or whatever) that as far as it's concerned, would indeed be the continuation of the experience of the first person. And I'm not sure how you could disprove that, except by uploading the mind to two hosts at once. If they share a consciousness with no clear means of doing so, then I guess we accept that it is a continuation. If they don't, then at least one can't be a continuation, and so logically we could infer that neither is.

I just wondered if that question specifically had been raised in much SF. (I've seen people posit that the Star Trek transporter actually kills the original and creates a copy, but they don't have the means to realise that.)

So, say you have an advanced nanotechnology that can replace a neuron with an 'e-neuron'. An e-neuron is identical in function to a natural neuron, but, let's say for this thought experiment, cannot die and is effectively immortal. It can also be seamlessly injected into a person's brain and replace its target neuron with no harm. Thus, while someone's consciousness is still 'running', the substrate that produces the mind is slowly converted into immortal parts. Then I'd suggest that could provide a person's consciousness with a form of immortality.
That's an interesting idea I've not come across before.
 
That's an interesting idea I've not come across before.

Of course, to be 'practical' you'd have to replace all your other cells with immortal versions as well, because although the body does a reasonably good job of replacing most of the rest of the body for a while, the whole natural replication mechanism does wear out and break down as we get old. (Although one could probably just rely on mechanical systems instead? For arms and legs, say?)

Which made me think that our individual sense of self, that sense of being an 'I', does seem to remain intact, even though our bodies and brains are continually changing over time anyway. I believe some neurogenesis has been observed and we do produce new neurons when we are old, but generally the brain and other systems just slowly degrade over time.

Yet I subjectively feel a strong sense of 'I' over a long time period. I wonder if that's just a blatant strong 'hallucination' our brains give us to stop us going mad. Maybe me 20 years ago really felt, thought and experienced things differently?

=========
 
My quick answer is: Beautiful Intelligence and No Grave For A Fox by Stephen Palmer.

My longer answer is here:
 
Quick answer: I'd argue you have no idea whether anything else in the universe, except yourself, is truly conscious. Being a subjective being, I'm afraid, is problematic in this regard.
This is a "problem" caused by your acceptance of the phenomenal consciousness hypothesis - Chalmers, Koch, Goff et al. The functional consciousness hypothesis (Dennett, Humphrey, Frankish et al) considers consciousness not as a thing, an entity or a phenomenon, but as a process.
 
Second, if a machine intelligence claims to be conscious, how will anyone know whether it is merely hallucinating in the same way that ChatGPT etc does?
We would need to agree on real-world (objective) knowledge. In last week's Substack post, I made a suggestion about this. (The Social Correlates Of The Representation Of Consciousness.)
 
Also, Richard K Morgan in the Altered Carbon series has characters who die and are placed into new skins from back-up

It's a long time since I read it, but Ken MacLeod's The Stone Canal (the second book in his Fall Revolution series) has a situation where, to quote the book's blurb on Goodreads,
Life on New Mars is tough for humans, but death is only a minor inconvenience.
 
Some of the best writing about uploaded intelligence I've ever read is by Greg Egan. The novel "Permutation City" is all about this, as are many short stories, and he touches on the subject in several other novels.
 
I think you can simply ignore the philosophy and look at the question mechanically. If you make a copy, the copy exists and doesn't make the first one not work. Doesn't matter if it is a photograph or a person.

The real problem, which is shared with the question of AI consciousness, is whether any sort of upload has true fidelity to the original conscious entity, or whether it is a convincing emulation of the external behavior of the original entity. The fact that we are increasingly relying on programs that write themselves, and whose inner workings we don't understand, makes this harder to verify. Such programs could easily pass the buck by finding the most convenient, rather than accurate, way of turning your "scan" into a version of you. What was discarded altogether? What was replaced with different but similar processing?
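The "convenient rather than accurate" worry has a familiar analogue in lossy encoding: a coarse copy can be internally self-consistent, so nothing *inside* the copy reveals what was thrown away. A toy Python sketch of that point (my own analogy, not from the thread; `lossy_scan` is just quantisation by rounding):

```python
# Toy illustration: a lossy "scan" that quietly discards fine detail.

original = [0.12, 0.34999, 0.35001, 0.99]   # stand-in for fine-grained state

def lossy_scan(values, places=1):
    # The "convenient" encoding: keep only one decimal place of each value.
    return [round(v, places) for v in values]

copied = lossy_scan(original)

# Detail has been discarded for good...
assert copied != original
# ...yet the copy is stable under its own encoding: re-scanning it changes
# nothing, so from the inside there is no trace of what went missing.
assert lossy_scan(copied) == copied
```

Whether anything analogous applies to a brain scan is of course speculation, but it shows why "the copy reports nothing missing" is weak evidence of fidelity.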

The other major issue, which scanned uploading might miss, is the effect of neural plasticity on consciousness over time. How much does the growth and repair of neural tissue affect our ability to cope and change our minds?



But two minds being linked by their initial identicalness is not even vaguely a scientific speculation. That's the realm of the supernatural, unless you have purposely built in some sort of quantum entanglement that simple molecules don't have.



The other question is: if 'you' don't actually survive the transfer, do you really care? You are mortal, and you have no other options. Is an electronic 'sibling' that believes it is you and acts like you a bad deal? It will certainly feel good to You 2.0 to be alive, and that is not so different from why people have children or put their names on endowments. The copy will continue your relationships and your interests, and find pleasure in the things you did.
 
The other question is: If 'you' don't actually survive the transfer, do you really care? You are mortal, and you have no other options
I guess there's an argument about opportunity cost: if huge resources are uselessly poured into such a project purely because it is understood to mean death-survival, when they might otherwise be used on something more beneficial. Whether billionaires (or whoever) would actually spend it on something more beneficial is of course arguable.
 
I guess there's an argument about opportunity cost: if huge resources are uselessly poured into such a project purely because it is understood to mean death-survival, when they might otherwise be used on something more beneficial. Whether billionaires (or whoever) would actually spend it on something more beneficial is of course arguable.
Like how we waste our time on sci fi when we could be working to solve world hunger?
 
