# Artificial Intelligence - A Discussion Thread



## mosaix

Following on from Vertigo's thread on AI we thought it might be a good idea to have a dedicated thread to discuss all things AI.

Some of the areas we were discussing:

Could computers ever be sentient?

Could any 'artificial' entity be sentient, and how do we define 'artificial' in this context, especially if the entity was developed using DNA or stem cells?

Could a 'wet' AI developed using DNA or stem cells ever be considered 'alive'? 

And just to really complicate the issue would a 'wet' AI developed using DNA or stem cells have free will?


----------



## Vertigo

mosaix said:


> Could computers ever be sentient?


 
I was thinking about this one some more and here's a thought - maybe the question is really: do we have a soul?

You see, if we don't, then we really are no more than an extremely powerful computer, and in that case sentient computers would seem to be an eventual certainty. If we do have a soul, and it really is what distinguishes us, then it would seem unlikely that we will ever have sentient computers.

How's that for a heavy line?  Maybe a little too simplistic?


----------



## Lenny

mosaix said:


> Could computers ever be sentient?
> 
> Could any 'artificial' entity be sentient and how do we define 'artificial' in this context especially if it the entity was developed using DNA or stem cells?
> 
> Could a 'wet' AI developed using DNA or stem cells ever be considered 'alive'?
> 
> And just to really complicate the issue would a 'wet' AI developed using DNA or stem cells have free will?



Acting based on the input, which is what I'd class sentience as for computers, is something that I can see happening. Indeed, we could probably call many systems that monitor other systems "sentient" - they're agents that, in a way, perceive their environment using sensors and act accordingly.
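Lenny's "acting based on the input" notion can be made concrete in a few lines. A minimal sketch in Python; the class name and thresholds are invented purely for illustration:

```python
# A minimal "sentient" (sense-and-act) agent in the sense described above:
# it perceives its environment through a sensor reading and acts on it.
# All names here are illustrative, not from any real library.

class ThermostatAgent:
    """Reacts to temperature readings with a fixed condition-action rule."""

    def __init__(self, low, high):
        self.low = low
        self.high = high

    def act(self, reading):
        # Pure stimulus-response: no judgement, just a rule lookup.
        if reading < self.low:
            return "heat on"
        if reading > self.high:
            return "heat off"
        return "idle"

agent = ThermostatAgent(low=18.0, high=24.0)
print(agent.act(15.0))  # heat on
print(agent.act(30.0))  # heat off
```

By Lenny's definition such an agent is "sentient" - it perceives and acts - yet it clearly exercises no judgement, which is the distinction the next paragraph draws.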

Sapience, on the other hand, is the hard one to crack. Being able to act with judgement, rather than simply following its programming, is something that I think will take many years to achieve. We'll probably need a new form of programming (and invariably new languages) to realise it - trying to program sapience in current languages and paradigms will become incredibly complex incredibly quickly.

I think the distinction between sentience and sapience is an important one. Whilst one can argue that a set of algorithms, or agents, are sentient, I don't think we can say they're also sapient and thus "intelligent".

Again, I'd argue that biological computers (that is, these DNA- or stem cell-based things) are sentient, but not sapient - they can react to their surroundings, but only in the way they've been programmed to. They're just fancy algorithms at the end of the day.

I don't want to get into the philosophy of computers with "free will". My only view is that it'll be damned hard to program.


----------



## Vertigo

Lenny, your post set me scurrying off for word definitions, and I found you are absolutely right: my understanding of "sentient" has been incorrect, and it would seem it's a common mistake in SF. This is from Wikipedia:

"Sentience is the ability to feel or perceive"

and 

'Although the term "sentience" is avoided by major artificial intelligence textbooks and researchers, it is sometimes used in popular accounts of AI to describe "human level or higher intelligence" (or strong AI). This is closely related to the use of the term in science fiction. Some sources reserve the term "sapience" for human level intelligence and make a distinction between "sentience" and "sapience".'

Under that first definition virtually any computer with any kind of external sensors could be described as sentient! The term we should be using - and, I suspect, what is intended for discussion - would therefore be sapience.

Your reference to complex programming is valid, but I would say it would be completely impossible to achieve anything of this sort of level of complexity through programming. I believe we are talking about a totally different computer architecture, such as a neural network, that is really taught rather than programmed. I guess there might be some level of programming to specialise it in a particular area, but the bulk of its "programming" would, I suspect, be more a question of teaching. As a programmer myself, I have long held that eventually the job of a programmer would shift to being that of a teacher, or at least a director of teaching. Most of the "teaching" itself would be automated and run at speeds far higher than a human could teach.
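The "taught rather than programmed" idea can be sketched with the simplest trainable unit, a perceptron, which learns the AND function from examples instead of having the rule coded in. A toy illustration in plain Python; all names and numbers are invented:

```python
# "Teaching rather than programming": a single artificial neuron learns
# the AND function from worked examples. No explicit AND rule is coded.

def train_perceptron(samples, epochs=20, rate=0.1):
    """samples: list of ((x1, x2), target) pairs with targets 0 or 1."""
    w1 = w2 = bias = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
            error = target - output
            # Nudge the weights toward the teacher's answer.
            w1 += rate * error * x1
            w2 += rate * error * x2
            bias += rate * error
    return w1, w2, bias

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = train_perceptron(and_samples)
predict = lambda x1, x2: 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
print([predict(x1, x2) for (x1, x2), _ in and_samples])  # [0, 0, 0, 1]
```

The "teaching" here is just the automated loop over examples - exactly the kind of thing that can run far faster than a human teacher could.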

I'm not sure whether the computer/AI being mechanically based or biologically based is critical to the argument; either way it would be artificially created, in that it has not occurred through natural processes without any intervention, and would therefore classify as artificial intelligence.


----------



## Lenny

Eeee, I hope we don't devolve into teachers - I haven't got the patience, even if it is teaching a program. 

In regards to learning, I assume we're giving the agent** a set of functions to which it passes its sensory information in order to solve a problem, and it uses some heuristic to determine which function solves the problem in the most optimal way (whatever that may be - speed, correctness, shortness of solution, etc.)?

I suppose with current languages and paradigms, it's possible to develop a system that works that way, but still, for the simplest things it would be a hugely complex project; for a true model, the developer(s) would have to build a set of every possible method of solving that one specific problem - miss one out that might be the optimal solution and things don't run as well as hoped.

Putting problem solving to one side for the moment, what about actual 'understanding'? Given some data, the agent must first understand it and judge its use before using it to solve a problem. For specific agents, that's fine - a simple system monitoring, say, water temperature in a tropical fish tank would receive only temperature readings from its sensors - but when we start talking about networks of agents, things become more complicated. Your network would need something that takes the data from the sensors and understands it so it can pass it on to the correct agent. The larger the network, the harder that becomes - it's inevitable that some agents will overlap.
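The routing problem described above - something that takes data from the sensors and passes it to the correct agent - can be sketched as a dispatcher keyed by channel. This is a hypothetical illustration, not any real framework:

```python
# A toy network of agents behind a dispatcher. Each agent owns one channel;
# the dispatcher routes tagged sensor readings to the right one.
# All class and channel names are invented for illustration.

class TankMonitor:
    def handle(self, value):
        return "heater on" if value < 24.0 else "heater off"

class PhMonitor:
    def handle(self, value):
        return "dose buffer" if value < 6.5 else "ok"

class Dispatcher:
    """Routes each (channel, value) reading to the agent registered for it."""

    def __init__(self):
        self.agents = {}

    def register(self, channel, agent):
        self.agents[channel] = agent

    def dispatch(self, channel, value):
        if channel not in self.agents:
            return "unhandled"  # the gap/overlap problem, in miniature
        return self.agents[channel].handle(value)

hub = Dispatcher()
hub.register("temperature", TankMonitor())
hub.register("ph", PhMonitor())
print(hub.dispatch("temperature", 22.5))  # heater on
print(hub.dispatch("salinity", 1.02))     # unhandled
```

The `"unhandled"` branch is the crux of Lenny's point: the larger the network, the harder it is to guarantee every reading finds exactly one competent agent.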

Being able to act according to the environment is, in my mind, pretty basic. Being able to act with judgement is the kicker, and to be able to make a judgement, you must first understand what you've been given - to me, that's a key part of intelligence.

I'm not too well-versed on things like neural networks (whilst fascinating concepts, my interests in Computer Science lie elsewhere), so forgive me if my rambles are already solved, or solvable, problems.



**For anyone reading who isn't familiar with the fields of Computing or Artificial Intelligence, an [intelligent] agent is an entity that observes its environment and acts upon information received from its sensors about the environment to achieve some goal.


----------



## Dave

In both this and the other thread, no one has yet mentioned the Turing Test. Alan Turing, in his 1950 paper 'Computing Machinery and Intelligence', considered the question 'Can machines think?' Since "thinking" is difficult to define, Turing replaced it with the idea of a human judge being unable to tell the difference between a machine and a human in a natural language conversation.

Are we all agreed that this is not a very good test of "Sentience" or "Sapience"? There are plenty of machines that can already simulate this now. I've spent a long time talking with ALICE before, though (like a Vulcan) she has trouble with emotions.

This isn't Artificial Intelligence, it is just something that acts like it.

So how are we really going to define "Sentience" or "Sapience" then? I would say that the machine must be able to learn from mistakes. I agree with Lenny that it must be able to prioritise tasks and make judgements itself. I also think that it must be able to philosophise and develop its own moral code. But I think it must do more than that.

I think that like Life itself, it must wish for continuance, for its own reproduction, and the power of adaptation to environment through changes originating internally. Otherwise, I don't believe it can possibly understand what it actually is. And I think we are a long way from any machine that can do this. I also think that any machine that does will be biological in origin, with hardware interfaced to it.

I just had a chat with ALICE to prove my point: 





> Human: Hello it has been a long time since we spoke.
> ALICE: Hello there. Memories last a long time.
> Human: Have you learnt anything new?
> ALICE: My botmaster teaches me new things all the time.
> Human: Can you understand emotions yet?
> ALICE: Of course I understand it. Do you understand emotions ?
> Human: Your reply proves you still have much to learn.
> ALICE: Thanks for telling me that I am reply proves me still have much to learn.
> Human: No problem Alice.
> ALICE: Should there be a problem?
> Human: Keep asking your Botmaster questions.
> ALICE: Chat amongst yourselves.
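ALICE's garbled third reply is characteristic of template chatbots that echo the user's words back after a naive pronoun swap. This toy sketch (not ALICE's actual AIML engine, just the general idea) reproduces that failure mode:

```python
# A toy illustration of the template-and-pronoun-swap technique behind
# ALICE-style chatbots. This is NOT ALICE's real rule engine; naive
# word-by-word swapping simply produces the kind of garbled echo seen
# in the transcript above.

SWAPS = {"your": "my", "my": "your", "you": "me", "me": "you",
         "i": "you", "am": "are"}

def swap_pronouns(text):
    words = text.lower().rstrip(".!?").split()
    return " ".join(SWAPS.get(w, w) for w in words)

def reply(user_input):
    # One hard-coded template, standing in for a large rule base.
    return "Thanks for telling me that I am " + swap_pronouns(user_input)

print(reply("Your reply proves you still have much to learn."))
# Thanks for telling me that I am my reply proves me still have much to learn
```

No understanding is involved at any point - which is exactly Dave's point about simulation versus intelligence.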


----------



## Karn Maeshalanadae

That's the whole issue, Krono - "organic".


Artificial Intelligence is NOT organic. It is cybernetic.


So can you actually call a non-organic sentient being a life form?


----------



## Tinsel

I emailed Bjarne Stroustrup a few times years back, and he indicated that artificial intelligence is not on the horizon. So I'm thinking that in technology there are shifts - things occur, or breakthroughs occur, when a paradigm shift occurs. For anyone interested there is a book called "The Structure of Scientific Revolutions" (I forget the exact title and author).

Well, in AI, if you present a choice to a program, the program is able to make a choice from a selection of choices - that is a language feature, usually based on user input. However, AI might imply that after the program makes a choice it then learns something. The program might learn that the result was undesirable, so then it must choose again, and the program is able to do that if another language feature is used that allows the program to enter a loop. So it is all based on the capability of the programming language.
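The choose-evaluate-choose-again loop described above is easy to sketch; the scenario and names below are invented purely for illustration:

```python
# Tinsel's loop in miniature: pick an option, observe whether the result
# was undesirable, and if so discard it and choose again.

def solve_by_elimination(options, is_acceptable):
    remaining = list(options)
    while remaining:
        choice = remaining[0]         # make a choice
        if is_acceptable(choice):     # evaluate the result
            return choice
        remaining.remove(choice)      # "learn" it was undesirable; loop
    return None

routes = ["toll road", "motorway", "back streets"]
best = solve_by_elimination(routes, lambda r: r != "toll road")
print(best)  # motorway
```

Note that the program can only ever pick from the options it was given - which is exactly the limitation Vertigo raises later in the thread.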

Now take the next step and look into mathematics. One of the most applicable areas is statistics. Some AI will rely on probabilities and there are mathematical solutions that go beyond the computer language features. Much more can be said about this.

I guess that one big question that is often posed is the need to understand human thinking as a model for artificial intelligence. So can we detail the workings of the brain, and provide an artificial body?


----------



## Tinsel

...more

I remember Bayes' theorem in probability mathematics as being promising. I believe that is the correct name.

Yes, and in AI and in Computer Science there are many branches where the science has developed. Since modeling the human brain in software and creating an artificial human body is such a large project, I guess that people work in limited domains under the context of AI.

The paradigm for AI is the education system, or else it is industry. I don't hear much about AI like in the movies such as "Artificial Intelligence". Maybe the military has some project. Oh yes, the Japanese have developed AI for industry? 

Begin by understanding the paradigm - what it is, and why - so as to become effective; otherwise AI is limited, and you will be faced with a domain solution.


----------



## Vertigo

I think we're a bit focused on what can be achieved now. I'm sure we would all agree we are nowhere near anything that could be truly described as artificial intelligence right now. 

However, to take Lenny's point about the difficulties and complexities, consider the microchip. Some of the earliest were the 4040 and 8080, back in the mid-70s I think; these were pretty simple and essentially designed by humans. The difference compared to what we have now, just 30 years later, is astronomical. The complexity of modern microprocessors is so high that I believe (please correct me if I'm wrong) that they are now primarily designed by computer (and I don't just mean CAD systems). Where might we be in another 30 years, 100 years, 200 years? I think that it is a little closed-minded to believe that just because constructing a system as complex as the human brain would be an impossibility today, it will always be so (either cybernetic or organic). Having seen the rate of progress over the last 50 years, I would hesitate to put any limits on future advances, beyond breaking the rules of physics (and we seem to keep revising those as we learn more!).

Karn - I would have to disagree with your absolute statement that AI is cybernetic, not organic. If we create new organic "stuff", I certainly would not call it natural; I would call it artificial, and if that stuff showed intelligence then I would have to describe it as artificial intelligence. However, I think the point is moot, as I feel some sort of combination of cybernetic and biological is most likely - I know there is work being done on organic memory that would interface to "normal" electronic systems.

Tinsel - I would not think that any system that is based on making a choice from a number of possibilities it has been given could ever reach the levels of AI that we are talking about. I would consider one essential requirement of any sentient/sapient level of AI to be that it could have "original thoughts" - in other words, it could create new options and consider whether they offer better solutions than the ones it has tried/witnessed/been taught previously.


----------



## Tinsel

That is what Bayesian probability is designed to do - to make an educated choice - but it requires data.
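For the curious, a Bayesian update really is just arithmetic once the data arrive. A sketch with numbers invented purely for illustration:

```python
# Bayes' theorem in miniature: P(H|E) = P(E|H) * P(H) / P(E).
# The "educated choice" only sharpens as data (evidence) comes in.

def bayes_update(prior, likelihood, likelihood_if_not):
    """Posterior probability of hypothesis H after observing evidence E."""
    evidence = likelihood * prior + likelihood_if_not * (1 - prior)
    return likelihood * prior / evidence

# Hypothesis: "the sensor is faulty". Prior belief: 10%.
# A faulty sensor gives a bad reading 90% of the time; a good one, 5%.
p = 0.10
for _ in range(3):                    # three bad readings in a row
    p = bayes_update(p, 0.9, 0.05)
print(round(p, 3))  # 0.998
```

With no data the belief stays at the prior; each observation moves it - which is Tinsel's point that the method is only as good as the data it is given.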

In fact, I would be surprised if individual human beings had these capabilities, since groups of people apparently make better choices, on average, than an individual does.

Well, back to the original question: is AI possible (I think that was asked)? If there is God-like intelligence, then AI is possible as a lesser form of intelligence - but where does human intelligence fit, or group intelligence? Intelligence is being worked on, and it is already implemented in many forms. There is nothing as interactive as human interaction, so the AI that many seek is of the human type. Nobody understands what intelligence is at a level higher than humanity's, because we cannot interact with it due to human limitations.


----------



## Tinsel

Oh I see what you are saying...and original choice...a new option that is not part of a database.

It has to see light....grrrr.
In order for it to do what you want it must be able to interpret light.

And a dynamic energy system.

...if these things (original thoughts) are external things that exist in space-time...then they might be collected through some form of attraction, such as magnetism.

grrr....


----------



## Vertigo

I don't think original thought necessarily has to come from an outside source. After all, I think most of us would like to believe that when we have original thoughts they do indeed come from ourselves. I think original thought is "merely" a question of combining the information you already have in novel ways to produce novel solutions.


----------



## goldhawk

mosaix said:


> Could computers ever be sentient?



Sentient, yes; sapient, no - it would be hard to construct a self-image for them. Robots, on the other hand, can start with a self-image of their physical bodies as a seed for their learning.

In order for a robot to be intelligent, it would have to learn from its environment. Humans do this by projecting their self-image into situations they observe, casting themselves as the active agent. When they do this, they can figure out the steps necessary to repeat the activity, but with themselves as the doer. Robots can start with a self-image that consists of only physical facts about themselves, and expand their self-image to include facts about society in general. It would be hard to make a computer do this, since it would be hard to create an appropriate self-image for it. Any self-image for a computer would be very abstract, making it difficult to project that self-image into all the situations it might observe.



Dave said:


> In both this and the other thread, no one has yet mentioned the Turing Test. Alan Turing, in his 1950 paper 'Computing Machinery and Intelligence', considered the question 'Can machines think?' Since "thinking" is difficult to define, Turing replaced it with the idea of a human judge being unable to tell the difference between a machine and a human in a natural language conversation.



That's because it is so easy to fool people into believing that a simple program is intelligent. But you can do it without fooling anyone. Turing said that the computer can emulate any human activity. Therefore, I choose bribery. I will give you a sum of money (exact amount depending on the size of the research grant I'll receive) if you tell the researcher that you can't distinguish between the human and the computer. I should be able to get a lot of people to do that, especially when I remind them that Turing himself said any human activity - including the dishonest ones.


----------



## mosaix

Vertigo said:


> However, to take Lenny's point about the difficulties and complexities, consider the microchip. Some of the earliest were the 4040 and 8080, back in the mid-70s I think; these were pretty simple and essentially designed by humans. The difference compared to what we have now, just 30 years later, is astronomical. The complexity of modern microprocessors is so high that I believe (please correct me if I'm wrong) that they are now primarily designed by computer (and I don't just mean CAD systems).



Hello again, Vertigo. Sorry to have taken so long to contribute to this thread.

First I'm going to repeat what I said in the thread that kicked this discussion off. I make no apologies for that as it basically states my view on electro-mechanical computers and AI:

_Any computer processor can do just a few fundamental things:

Add, subtract, multiply, divide, copy, read, write, compare two values  and switch command sequence as a result of the comparison. This  fundamental command set is the same one that computers have had since  the early days (except they multiplied and divided using add and  subtract). 

The reason why modern systems appear more sophisticated is we have  learnt to combine these commands in more sophisticated ways. But, even  in multi-processor systems, each individual processor, at any one moment  in time, is just doing one of the above things - adding, subtracting  etc. 

In that list of commands there isn't one that relates to 'sentient' and I  can't see hardware designers ever coming up with one._ 
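mosaix's parenthetical point - that early machines multiplied using add and subtract - is easy to demonstrate with nothing but add, compare, and branch. The Python below is just notation for those primitives:

```python
# Multiplication built from the fundamental command set mosaix lists:
# add, compare two values, and switch command sequence on the result.

def multiply(a, b):
    """Multiply non-negative integers by repeated addition."""
    total = 0
    count = 0
    while count < b:       # compare two values and branch on the result
        total = total + a  # add
        count = count + 1  # add
    return total

print(multiply(7, 6))  # 42
```

Nothing in the loop is "smarter" than addition and comparison - the apparent sophistication is entirely in how the commands are combined.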

The modern microprocessor is more complex than the 4040 and the 8080 because it is designed to be smaller and generate less heat. Most modern micros are Reduced Instruction Set Computers (RISC), precisely so that they can have less circuitry to enable reduced size.

As a result, the modern micro has fewer commands than most machines in the sixties and seventies. The first machine I worked with, in 1964, was 4-bit but had a 35-command instruction set. This was because memory was limited (48K) and, to keep programs small, powerful commands were built into the processor. But as memory became cheaper and bigger (in terms of the number of bytes, not physically), processors could become simpler (and hence faster, because of reduced circuitry).

The modern processor, in terms of its instruction set, is simpler than most machines manufactured in the fifties, sixties and seventies, but incredibly more powerful because of its speed. And even this increased speed, to a large extent, has been brought about by increasing the number of bits that can be accessed from memory in a single cycle and by increasing the speed of memory. So none of this 'power' has been brought about by making the processor itself any 'smarter'.

Two commands that have become more complex are the read and write commands because they now deal with a much wider set of peripherals than in the early days. My first machine had keyboard, paper tape or punched card input and printer and paper tape or punched card output.

Nowadays they have, in addition, disc, network, and a host of USB peripherals as well. But, having said that, a lot of those are interfaced with a dedicated chip of their own. RAID arrays, for instance, often appear to the operating system as a single disc, all the complexity of mirroring and error recovery being handled by a dedicated disc controller.

So, in my view, electro-mechanical computers, even in a multi-processor environment, will never be anything other than what they really are: complicated calculators processing a program serially, one command at a time.

Even multi-tasking is an illusion. The speed of the modern processor might give the impression of several tasks being run at once but, in fact, a bit of each program is run serially, with the operating system switching between them.
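That switching can be sketched in a few lines: a toy round-robin scheduler runs one slice of each "program" in turn, and only the interleaved trace makes the work look simultaneous. Generators stand in for programs here; everything is invented for illustration:

```python
# The "illusion of multi-tasking" in miniature: one processor, strictly
# serial, running a slice of each program in turn.

def program(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"   # one "time slice" of work

def round_robin(programs):
    """A toy scheduler: run one slice of each program, then rotate."""
    trace = []
    while programs:
        prog = programs.pop(0)
        try:
            trace.append(next(prog))
            programs.append(prog)   # back of the queue
        except StopIteration:
            pass                    # program finished; drop it
    return trace

trace = round_robin([program("A", 2), program("B", 2)])
print(trace)  # ['A:0', 'B:0', 'A:1', 'B:1']
```

The trace shows A and B apparently "running together", yet at every instant exactly one slice is executing - mosaix's point exactly.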

Enough for now. I do believe AI will become a reality but not with what we currently call computers.


----------



## goldhawk

mosaix said:


> So, in my view, electro-mechanical computers, even in a multi-processor environment, will never be anything other than what they really are: complicated calculators processing a program serially, one command at a time.



You could make the same argument about humans:  their brains are nothing but the interaction of chemicals; therefore they cannot be intelligent.

Q: What's the difference between a Turing Machine and a Universal Turing Machine?

A: Only its program.

But a Universal Turing Machine can run any other Turing Machine, including another Universal Turing Machine. It's the software that's the important part.
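goldhawk's point is easy to demonstrate: one interpreter can run any machine handed to it as data. A toy sketch (a drastically simplified Turing machine, invented purely for illustration):

```python
# One interpreter, many machines: the "universal" part is that the machine
# being run is just data (a transition table), i.e. software.

def run(machine, tape, state, steps=1000):
    """machine: {(state, symbol): (new_symbol, move, new_state)}."""
    tape = list(tape)
    pos = 0
    for _ in range(steps):
        if state == "halt" or not (0 <= pos < len(tape)):
            break
        symbol = tape[pos]
        write, move, state = machine[(state, symbol)]
        tape[pos] = write
        pos += 1 if move == "R" else -1
    return "".join(tape)

# A bit-flipper, expressed purely as a transition table.
flipper = {("scan", "0"): ("1", "R", "scan"),
           ("scan", "1"): ("0", "R", "scan")}

print(run(flipper, "10110", "scan"))  # 01001
```

Swap in a different transition table and the same `run` function becomes a different machine - the hardware (interpreter) never changes, only the software.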


----------



## Tinsel

Vertigo said:


> I don't think original thought necessarily has to come from an outside source. After all, I think most of us would like to believe that when we have original thoughts they do indeed come from ourselves. I think original thought is "merely" a question of combining the information you already have in novel ways to produce novel solutions.



We are affected by the external environment, but we have the ability to change our own conditions; a sentient machine must also be subject to the external environment rather than unaffected by it. Modern computers are computational tools (extensions of human minds, arms, legs).

Well, who is to say that an original thought does not come from ourselves, after it is processed and formulated by our bodies? The earth is also a body, with north and south magnetic poles, and light comes from the sun, and they say that a person is evolved...by forces.


----------



## mosaix

goldhawk said:


> You could make the same argument about humans:  their brains are nothing but the interaction of chemicals; therefore they cannot be intelligent.



My point was about electro-mechanical devices, not chemical ones.

And anyway, until we understand how the brain works, comparison with anything else, even if it seems to be doing what a brain is doing, is fruitless. 

But, you may be right. Who knows?

And I'm not sure Turing was right. I spent some of my working life writing simulators for telephone switches (exchanges). They took input from files instead of telephone calls and produced call data for test input to billing systems. Neither the billing system nor their operators could tell the difference between a genuine switch and the simulator but that didn't make the simulator a switch - it was still just simulating one.


----------



## Tinsel

One of the potential problems in studying the brain is that the brain might be designed to prevent itself from being analyzed. You can only study it under specific conditions and piece together the findings. It has something to do with perception.

The brain can be modeled, and then the work would be in understanding the function of each part. It is an electrical system and produces chemicals, but it is primarily an electrical system, I would guess. Then that is how it can be analyzed - but not through the application of electrical current; rather through natural electrical stimulus.


----------



## Ursa major

Tinsel said:


> ...the brain might be designed to prevent itself from being analyzed.


I think that it is quite possible that the operation of the brain is so complex (and so dependent on the tiniest changes in billions of neurons, changes that perhaps cannot be measured without affecting that being observed), that we will never know exactly what is happening. This doesn't mean we have to veer towards the realms of fantasy.


I'll avoid the most obvious questions - which would likely divert us into an area where things would be said that would get the thread closed - and simply ask: Why should the brain be designed this way? For what purpose?


----------



## Vertigo

Ursa major said:


> I'll avoid the most obvious questions - which would likely divert us into an area where things would be said that would get the thread closed - and simply ask: Why should the brain be designed this way? For what purpose?


 
Hmm yes, could get into dangerous ground there.

Re mosaix's complaints about computers being little more than complex adding machines - I agree completely, and firmly believe we will never achieve this level of sentience/sapience with the traditional sequential computer design, no matter how complex it becomes. Apart from anything else, I do not believe that any such intelligence could work on a foundation of absolute truth and falsity. I believe any such intelligence would have to have an element of "fuzziness" to it, and is more likely to be based on an architecture more like a neural network - in other words, an electronic model of how our own brains work.
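The "element of fuzziness" has a standard formalisation: fuzzy logic, where truth is a degree between 0 and 1 rather than absolute. A minimal sketch with a membership function invented purely for illustration:

```python
# Fuzzy logic in miniature: truth as a degree in [0, 1] instead of
# absolute true/false. The membership function is made up for this example.

def warmth(temp_c):
    """Degree (0..1) to which a temperature counts as 'warm'."""
    if temp_c <= 15:
        return 0.0
    if temp_c >= 25:
        return 1.0
    return (temp_c - 15) / 10.0   # linear ramp between the extremes

# Fuzzy AND/OR are conventionally the min/max of the degrees.
def fuzzy_and(a, b): return min(a, b)
def fuzzy_or(a, b): return max(a, b)

print(warmth(20))                         # 0.5
print(fuzzy_and(warmth(20), warmth(24)))  # 0.5
```

A statement like "it is warm" can thus be 0.5 true - a small step away from the absolute truth/falsity that Vertigo argues a sequential design is stuck with.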

I find it a little strange that so many people can accept the idea of FTL, which breaks the fundamental principles of physics (as we currently understand them), and yet have such trouble with AI, which I firmly believe is "merely" a question of achieving the necessary level of complexity and processing power.


----------



## goldhawk

Ursa major said:


> Why should the brain be designed this way? For what purpose?



The human brain is not designed; it evolved. Its purpose is to enhance the survivability of humans.


----------



## Vertigo

goldhawk said:


> The human brain is not designed; it evolved. Its purpose is to enhance the survivability of humans.


 
I agree with you there, Goldhawk, but must admit I shied away from posting it. Don't want to step on any theological toes.


----------



## Ursa major

goldhawk said:


> The human brain is not designed; it evolved. Its purpose is to enhance the survivability of humans.


I want the thread (which is about AI intelligence) to stay on track and so I will leave your statement hanging.

I can only hope other contributors will do likewise.


(Threads that have headed off on a similar tangent have ended up being closed in circumstances of some acrimony.)


----------



## Vertigo

Agreed


----------



## goldhawk

If you're going to decide that a proven fact is not true, then what other facts are you going to arbitrarily decide are not true? There is no point in continuing this discussion if you don't accept science.


----------



## Ursa major

I'll say this as calmly as I can (as I hope I did, eventually and after much editing, in my response to Tinsel's post):




Perhaps you should consider whether:

- you want this thread to examine aspects of artificial intelligence (including the possibility of artificial sapience) in line with the original post; OR
- you want this thread to descend into a lot of arguments about religion (which is more than likely, given that it has happened on more than one occasion).

I think most folk posting here would prefer the former rather than the latter, if only because arguments about religion soon reach stalemate.


I think the one point at which to the two might intersect - and so has to be dealt with with care and diplomacy** - is emergence, i.e. the idea that the complexity of AI systems (or any other information handling systems) could permit the self-development of sapience and consciousness. Even then, I would rather not discuss this particular point through proxy arguments about religion (which are, by their very nature, not applicable to AI sapience as we might first encounter it).



** - Which is what I have been trying to do here, but with little success, it seems.


----------



## Vertigo

I agree with you completely Ursa and think you have been very diplomatic - I would hate this discussion to degenerate into a religious wrangle. It does unfortunately, as you suggest, touch on that in the area of emergence of sapience. However as you so rightly say if we go there it will almost certainly end in stalemate and probably acrimony and that would be a shame.

To maybe turn it away from that area: I'm not sure that the question (design or evolve) is actually crucial. Maybe the only reason we struggle to fully understand the mind is that we would really need a mind of greater processing ability in order to do so. How can brain X ever hope to fully understand brain X? It's like a mirror reflecting itself. Or looking at a photo of yourself looking at a photo of yourself looking at a photo of yourself...

Either way, can we model it? Can we create an artificial equivalent capable of the same level of processing? If so, why not one with greater processing capability and capacity? Not necessarily greater understanding, but rather giving it the greater capacity so it would possibly be capable of understanding. After all, there is stuff from modern physics being modelled on computers now that even the top physicists say the human mind will never really be able to fully grasp; only model it mathematically.

Sure, it's not going to happen today, not tomorrow, maybe not for a couple of centuries, but I personally believe we will have that capability sooner or later.


----------



## goldhawk

If you're going to talk about artificial intelligence, a technology, then you're going to have to accept the science it is based on as true.

As for this not being a religious discussion, it's too late. You turned it into one when you catered to their extortion and attempted to censor me because I believe in science. I will not tolerate anyone shoving their religious views down my throat, and I will not tolerate anyone who helps them. By saying we shouldn't state anything that might upset them, you have taken their side and stifled anyone with different views.

You have no idea how hurt and upset I am.  But if you insist that the one who kicks up the most fuss is right, then I'll start doing so.


----------



## chrispenycate

> Me - Anonymous
> I think that I shall never see a calculator made like me,
> A me that likes martinis dry and on the rocks, a little rye.
> A me that looks at girls and such, but mostly girls, and very much.
> A me that wears an overcoat and likes a risky anecdote.
> A me that taps a foot and grins whenever Dixieland begins.
> They make computers for a fee, but only moms can make a me.


Indicating that what I am going to say is perhaps not 100% serious.

But also indicating that I read this poem in the late fifties or very early sixties (in a magazine of fantasy and science fiction), and my spotty and unreliable memory threw up enough consecutive words that, when I Google-searched it, every single hit contained the information I wanted. While I can't remember the name of the guy who's coming in tomorrow to record. Some kind of 'forgetory' is essential so that calculating power is not overwhelmed by available data (preferably a touch more effective than mine).

With 'Multivac', the 'vac' at the end did not signify it ran on vacuum tubes (thermionic valves, for those from this side of the pond); the 'ac' meant 'analogue computer' (which also required air conditioning). Perhaps, if we want to downgrade the perfect arithmetical functions (well, near perfect; it can be shown, by quantum effects, that any sufficiently complex system, like the Bell telephone network, will have a certain, irreducible number of wrong numbers), we could try an analogue front end, preparing and distorting the incoming information for the 'pure intellect' arithmetic crunchers.

But that wouldn't change the 'multiprocessor, running at different speeds with different thresholds' biological logic engine. A computer is optimised, so a microprocessor with about the connectivity of an ant brain can calculate the orbits of stars. The ant has too many other things to do with its nervous system, and runs bypass autonomic subsystems to reduce the load on its central processor. Basically, if the ant were rationalised it could run considerably more efficiently, and the same is true of us (we are even more layered with unnecessary and inefficient leftovers). But is this spare capacity, evidenced by a few eidetic memories and idiot savants, wastage, or the very reserve that makes imagination and intuition possible?


----------



## Tinsel

Ursa major said:


> I think that it is quite possible that the operation of the brain is so complex (and so dependent on the tiniest changes in billions of neurons, changes that perhaps cannot be measured without affecting that being observed), that we will never know exactly what is happening. This doesn't mean we have to veer towards the realms of fantasy.
> 
> 
> I'll avoid the most obvious questions - which would likely divert us into an area where things would be said that would get the thread closed - and simply ask: Why should the brain be designed this way? For what purpose?



The network of paths in the brain would be too complicated to analyze unless a computer was involved, and a computer is just a tool or extension of the human brain. What is used to traverse the brain? Is it electrical current? If so, what causes the current to take one path as opposed to another in the network? It must have something to do with instructions. How are the instructions created?

I think that it is possible to analyze part of the brain, but there are other parts of the brain that, as I said, are not necessarily possible to analyze, because the brain could be designed to prevent itself from being analyzed. One side can shut down the other side or else silence it.

It is just a guess. I might have some basis for thinking that way. It is the part where instructions exist that is mysterious. That is just what I think. I'm probably not in the best mood to think about this stuff today, but yes, the other issue, the complexity involved, surely appears to be unmanageable unless it can become self-propagated.


----------



## Vertigo

Tinsel said:


> ...complexity involved surely appears to be unmanageable unless it can become self propagated.


 
I think that's absolutely right, which is why I believe we could never hope to "design" a system capable of sapience. But we might just be able to design an architecture or structure, if you will, that might be capable of achieving that. So we provide a framework, but the connections within that framework are made internally in response to external stimulus. In other words, learning. Such a system just might be capable of achieving sentience and eventually maybe sapience.

However, as Chrispen so rightly points out, we have yet to create a system much more complex than that of an ant in reality. I don't know, though; is that true? I think we might have gone a little further than that, though I don't know how many connections are estimated to exist in an ant's brain. Regardless, there is still a long way to go. By the way, Chrispen, I loved the poem/verse...very apt.


----------



## Tinsel

Vertigo said:


> I think that's absolutely right, which is why I believe we could never hope to "design" a system capable of sapience. But we might just be able to design an architecture or structure, if you will, that might be capable of achieving that. So we provide a framework, but the connections within that framework are made internally in response to external stimulus. In other words, learning. Such a system just might be capable of achieving sentience and eventually maybe sapience.
> 
> However, as Chrispen so rightly points out, we have yet to create a system much more complex than that of an ant in reality. I don't know, though; is that true? I think we might have gone a little further than that, though I don't know how many connections are estimated to exist in an ant's brain. Regardless, there is still a long way to go. By the way, Chrispen, I loved the poem/verse...very apt.



Maybe that is fairly accurate: the part where you said that there have to be connections within the brain that respond to external stimulus. I would not count that out; in fact, it is certainly logical. I wouldn't call anything a framework, but more like a network or graph. Yet if we knew how to build something functional, you could probably implement a framework design, lol.

No, we have not reached ant status because everything has just been an extension of mankind, so we have only built tools. Okay, I guess there is the field of biological science involving cloning and genetics, right? Then you are taking a different approach: you are working top-down with organic materials rather than bottom-up using non-living materials. I'd like to see something done with physical laws and non-living materials. Where is the connection? If a human brain can be modeled in software, and the functions of the brain, the behavior, can be implemented in functions or methods, then the body, although not organic, can be simulated after an organism. If we know what happens to an organism, then we can copy the results and simulate them using a robot; then we are free from organic materials, but still possibly dependent upon humanity since the robot is a simulation. Somewhere along that line, there might be some purpose for designing artificial life, such as an advanced symbiotic relationship.

Of course that would change the world and it could raise humanity, and it might answer many philosophical questions.


----------



## Vertigo

I actually avoided the word network as I didn't want to tie the concept down that precisely but I suspect some sort of neural network would be the most likely "framework". However we do seem to agree that whatever it is, must develop itself rather than be created complete, so to speak.

An AI created with inorganic components is certainly what I started out thinking of on this and the other AI thread; however, I suspect that some sort of hybrid is much more likely with the development of organic "electronic" components (already being researched). That said, I think there is a distinction here in that these are not necessarily cell-based organisms, i.e. living and developing and needing feeding. I may be wrong, but I don't believe the organic components being researched are "living cells".

Bottom line though is that I think they will need to "develop" their own intelligence.

With regard to some sort of symbiotic system, I do think that is equally possible, maybe more so, and of course, as you state, there are all sorts of interesting philosophical discussions there. Certainly many authors have explored the idea of augmented humans, but I feel that is a separate discussion; one would argue there that the sapience is still solely coming from the human (organic) part.


----------



## Dave

It has been shown that when we learn something new, new connections are grown within the brain. Huge parts of the brain are practically unused. If part of the brain is damaged and the person is young enough, other parts of the brain can take over the function of the damaged part, and new connections are made to these parts. What I am saying is that the size of the brain is not important; undoubtedly it is the connections (the complexity of the network) that determine how smart we are. I would assume the same to be true of any AI.


----------



## Tinsel

I'm not sure if a human being is able to create a life form, but they can manipulate the structure at an early stage of life. Basically, what might be worth doing is to create an artificial human being in order to analyze humanity, since a human being cannot live forever, nor can it realize what is possible to do without knowing/seeing how to do it, because the brain could be designed to prevent itself from being analyzed.

I did read something about the neural network of the brain being able to change dynamically. Yes I knew about that too.

Oh, and the framework. If you meant that as a structure, then I know that it has been implemented in libraries and that it is the best structure for organizing object-oriented programs.


----------



## Tinsel

In conclusion of my view on the subject: yes, I suppose that a framework is a more generic term and these other things are known as abstract data types, but what I believe is significant is including the external forces that act upon a living organism and then understanding how those forces affect the body, because I believe that they are factors that shape the mind and/or reality.

The most difficult stage in artificial intelligence is moving beyond the confines of our view(s) as humans, so therefore the goal might become achieving human transcendence, and then creating new life forms. How does a human being navigate?

You know, it is a complicated task when you begin to find answers, so we could revert to apedom as a solution, especially if the external environment is critical, but I believe that we will move forward in spite of this.


----------



## Vertigo

Dave, I agree that the complexity of the "network" - the connections - must surely be the key, and I also believe that anything of that order of complexity would have to grow (I don't mean organically, but set up new connections) in response to external stimulus, exactly as we do. I don't believe anyone or any computer could ever design a system of that sort of level of complexity. However, I do hold that we could create an analogy of the human brain - a sort of blank network - that could develop in exactly that way. I'm no expert on this, but I believe that is exactly what some of the research into robotic systems that "learn" about their environment is currently doing.
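Vertigo's "blank network that wires itself in response to stimulus" idea can be sketched very crudely in code. The toy below is purely hypothetical, not any real robotics research system: every connection starts at zero strength, and repeated co-activation (a Hebbian-style rule) is the only thing that builds structure.

```python
# Toy sketch of a "blank network" that develops its own connections.
# All sizes, rates, and stimulus patterns here are made up for illustration.

N = 8  # eight units, wired all-to-all with zero-strength connections
weights = [[0.0] * N for _ in range(N)]

def stimulate(weights, pattern, rate=0.5):
    """Hebbian rule: strengthen links between units that fire together."""
    for i in range(N):
        for j in range(N):
            if i != j:
                weights[i][j] += rate * pattern[i] * pattern[j]

# External stimulus alternates between two unrelated events, each
# activating a different group of units.
for t in range(100):
    pattern = [1, 1, 1, 1, 0, 0, 0, 0] if t % 2 == 0 else [0, 0, 0, 0, 1, 1, 1, 1]
    stimulate(weights, pattern)

# The framework was identical everywhere at the start; the structure that
# emerges reflects only the history of stimulation.
print(weights[0][1])  # within-group link, repeatedly strengthened -> 25.0
print(weights[0][5])  # cross-group link, never co-activated -> 0.0
```

The point of the sketch is the one Vertigo makes: nobody "designed" the final pattern of connections; the designer only supplied the framework and the learning rule, and the environment did the rest.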

Bottom line is that I reckon as such systems become ever more complex (or maybe I should say capable of ever more complexity) they will eventually reach the point of being self-aware and presto you have a sapient AI.

I take your point about size, but ultimately the human brain _is_ limited by size (at least its current size, but let's not go _there_), whereas an AI would not necessarily have the same limitations.

Another thought: even supposing we did manage to create such an AI, one that is maybe more "intelligent" than us, I'm not sure it would necessarily be faster. We often assume that because a computer can process specific (computational) tasks much faster than us, an AI would inevitably be much faster too. I don't know, but I suspect that our brains are probably just as fast as the fastest computer, if not faster; it is just that every "thought" has to "traverse" an unimaginably huge number of connections, and that takes time. An AI as suggested here would have the same problem and so would be likely to be just as "slow" as us. That kind of makes me feel better.


----------



## mosaix

Vertigo said:


> However I do hold that we could create an analogy of the human brain



I think I said in the other thread that I think the key to this is our understanding of DNA/RNA. Here we have a mechanism that can understand a plan and build an entity from it. If we could understand how the plan works, we could insert our own and just let the mechanism get on with it.

We could build in all sorts of characteristics:

Resistance to radiation
Ability to live in a vacuum or under water
Improved eyesight (X-Ray vision?)
Improved processing power / memory
Interfaces to other entities / equipment
Etc., etc.


----------



## Vertigo

This is true, Mosaix, and it may well be the direction things will go - we are extremely close to designing our own cells from scratch (rather than just genetically modifying existing ones). However, I suspect there would be a lot more ethical outcry at the idea of a biological AI as compared to an electronic one. There's really no good reason for that, but I still suspect it would be the case.


----------



## mosaix

Vertigo said:


> However I suspect there would be a lot more ethical outcry at the idea of a biological AI as compared to an electronic one. Really no good reason for that but I still suspect it would be the case.



Agree entirely, and this opens up another area of debate that interests me.

Suppose we have two entities: one designed and manufactured (so to speak) and one designed and grown. Is one considered a 'machine' and the other 'animal'? And where do 'rights' come into this?

Fascinating stuff and a debate that is bound to take place in the future. Probably too far in the future for me, I'm afraid.


----------



## Vertigo

Agree completely; at what point does something move from being a 'tool' to an entity that deserves rights and the freedom to organise and plan its own 'life'? I don't think we could hope to settle this now, before actually meeting said entity and being able to judge it.

That is certainly an area that many authors have looked at, from Asimov to Banks and Asher. I feel sure it is an area that will have to be addressed in the future, though, as you say, probably not in our time.


----------



## mosaix

Slightly off-topic, sorry. 

There was a program on TV last night (a bit dumbed down I'm afraid) about the possibility of humans travelling to another planet. One of the greatest problems, apparently, will be the long term effects of radiation.

Two points:

1) (Very off topic! ) Once a space craft has left the realms of the Sun, is radiation still a problem?

2) If radiation is a problem, supposing we tweaked the DNA of the space travellers so that they were radiation-resistant. Could it still be said that 'humans' had reached another planet?


----------



## chrispenycate

mosaix said:


> Slightly off-topic, sorry.
> 
> There was a program on TV last night (a bit dumbed down I'm afraid) about the possibility of humans travelling to another planet. One of the greatest problems, apparently, will be the long term effects of radiation.
> 
> Two points:
> 
> 1) (Very off topic! ) Once a space craft has left the realms of the Sun, is radiation still a problem?


How fast are they moving? At an appreciable fraction of the speed of light, needed for interstellar travel (and you wouldn't go outside "realms of the Sun" for interplanetary voyages) any matter you do meet – no, there won't be much, but at high speeds you're travelling through enough space to meet a bit – will be effectively cosmic rays, and the lack of a nice thick atmosphere will give lots of interesting secondary radiation. Within the solar system, it's reasonably easy (if expensive) to shield against, except during heavy solar flares or in Jupiter's trapped radiation belts.


> 2) If radiation is a problem, supposing we tweaked the DNA of the space travellers so that they were radiation-resistant. Could it still be said that 'humans' had reached another planet?


The diversity of the human genome is such that I'd say there was no doubt that a mere hardening of the chromosomes against mutation would not take a being out of the running for humanity (for reproduction, possibly). Of course, we can't do it yet, else we'd probably be treating nuclear power plant workers, and people who lived near potential military targets.

And, on Turing tests, there are those on this very site who have implied I wouldn't pass one…


----------



## mosaix

Thanks, Chris. I should have been clearer. Interstellar travel it was.


----------



## Ursa major

At the weekend, I read an article claiming that Deinococcus radiodurans could survive the conditions of interplanetary space. (The article may have been in a Murdoch publication, meaning that its electronic version is safe behind a paywall.)

Here's what Wiki has to say on the matter: Deinococcus radiodurans - Ionizing_radiation_resistance.


----------



## Dave

What about fruit flies and cockroaches, and other insects? Could they survive?

I've an idea: Can we send all the Scottish Midges to another planet?

Apologies for going off topic. Are we saying that only an AI could possibly travel to another system?


----------



## chrispenycate

An AI would be easier to harden against radiation than a life form, for sure.
I was worried about sending an intelligence, which could get bored, down the light years, rather than a preprogrammed instruction set, which would have no flexibility for unforeseen conditions at the other end. Then I thought we could merely wind its central clock down to 100 kHz or so, and the universe would just whizz by: instant hibernation while never losing consciousness (or whatever the artificial equivalent of such a state may be). Then, when challenging conditions arrive, click; wide awake and bushy-tailed. Then the follow-up came to me: the standard SF scenario where AIs automatically go insane could be handled by the same mechanism; cat naps.

All somewhat dependent on getting an AI to work first, of course.


----------



## REBerg

I knew AI would eventually find the golden application!

Behold! The Intelligentz Brewing Company


----------



## Mirannan

Vertigo said:


> Dave I agree that the complexity of the "network" - the connections - must surely be the key and I also believe that anything of that order of complexity would have to grow (I don't mean organically but set up new connections) in response to external stimulus exactly as we do. I don't believe anyone or any computer could ever design a system of that sort of level of complexity. However I do hold that we could create an analogy of the human brain - a sort of blank network - that could develop in exactly that way. I'm no expert on this but I believe that is exactly what some of the research in robotic system that "learn" about their environment are currently doing.
> 
> Bottom line is that I reckon as such systems become ever more complex (or maybe I should say capable of ever more complexity) they will eventually reach the point of being self-aware and presto you have a sapient AI.
> 
> I take your point about size but ultimately the human bain _is_ limited by size (at least its current size, but let's not go _there_) however an AI would not necessarily have the same limitations.
> 
> Another thought is that even supposing we did managed to create such an AI, that is maybe more "intelligent" that us. I'm not sure it would necessarily be faster. We often assume that because a computer can process specific (computational) tasks much faster than us, an AI would inevitably be much faster too. However I don't know but I suspect that our brains are probably just as fast as the fastest computer if not faster. It is just that every "thought" has to "traverse" an unimaginably huge number of connections and that takes time. An AI as suggested here would have the same problem and so would be likely to be just as "slow" as us. That kind of makes me feel better



I disagree with part of what you say, for the simple reason that human brains actually work very slowly; we get good results by massive parallelism. A synaptic response happens in a millisecond, and nerve impulses travel at a hundred mph or thereabouts, whereas a neural net designed to exactly mimic human brain tissue would have synaptic response times in nanoseconds, and of course the signals would travel at roughly the speed of light - in both cases, a factor of around a million faster than brain tissue.

Which means that, in terms of raw processing power, an electronic brain with the same capabilities as a human one would need only around a million units, each with about the same capabilities as a single neuron: the brain has about a trillion cells in it (including support structures as well as neurons), and the million-fold speed advantage lets each electronic unit do the work of a million slow cells.

It's difficult to say how many transistors and similar components an artificial neuron would need, but I think a reasonable figure is 1000. So a billion components would do it. I just did a search on this, and there are commercial chips already available with 30 billion transistors. General-purpose microprocessors currently run at around 5 billion transistors, undoubtedly added to by numerous other on-chip components.
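The back-of-envelope arithmetic above can be checked directly. Every input below is one of the post's own assumptions (a million-fold speed advantage, a trillion brain cells, 1000 transistors per artificial neuron), not a measured value:

```python
# Sanity check of the figures in the post above. All inputs are the
# post's assumptions, not measured values.

brain_cells = 1e12             # "about a trillion cells", support cells included
speed_advantage = 1e6          # nanosecond vs millisecond switching times
transistors_per_neuron = 1e3   # assumed cost of one artificial neuron

# A unit running a million times faster can time-share the work of a
# million slow neurons, so far fewer units are needed than cells.
units_needed = brain_cells / speed_advantage
transistors_needed = units_needed * transistors_per_neuron

print(f"units needed: {units_needed:.0e}")              # -> 1e+06
print(f"transistors needed: {transistors_needed:.0e}")  # -> 1e+09

# For scale: the post cites commercial chips of ~30 billion transistors,
# so the raw component count is within what already ships today.
assert transistors_needed < 30e9
```

On these assumptions the conclusion holds: the component budget is not the obstacle, which is why the post ends by pointing at architecture and software as the real problems.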

Of course, that architecture is not one suited to sapience; but it appears that, if we knew how to build one architecture-wise, we could already build an artificial brain. The software is the problem. It's also not clear whether the sheer complexity of a brain can be simulated in many fewer components.


----------

