I'm new here, so I apologise if this is an old, much-discussed topic.
Having just finished Neal Asher's Gridlinked, I find myself returning, as I often do, to wondering about AIs in science fiction. I have never really had an opportunity to discuss this and thought some of you good folk here might be interested.
Many authors postulate AIs in one form or another, and I am particularly interested in sentient AIs and the problems they present for me. Asher just has his AIs, Banks has his Minds, Simmons has the TechnoCore (if I remember it correctly), and Hamilton dodges the issue by having his AIs throttled before sentience emerges, barring one sentient example that is presented as a thoroughly enigmatic character. In many cases they seem to end up (not unreasonably) pretty much running all human affairs for us.
Most if not all of these AIs have processing abilities so far beyond humans as to be virtually god-like. Given sentience and reasonably presumed advances in processing power, this is not unreasonable. However, I can think of no modern authors putting forward any kind of artificial constraint on these AIs akin to Asimov's Laws of Robotics. Given that, I simply struggle to understand why such intelligences would be interested in, or tolerant of, the trivial goings-on of their sluggish and limited human creators, let alone still seem to be dependent on them in any way.
Then you have the likes of Asher's androids and Banks' drones. Again they are sentient AIs, but this time in bodies far more mobile, faster and tougher than human bodies, and yet it seems they still need the humans as well. Frequently they are paired up, for example Banks' Skaffen-Amtiskaw and Diziet Sma, or Asher's Golem and human Sparkind soldiers. The androids'/drones' abilities seem so far above those of the humans that you are really left wondering why bother with the humans at all.
Bottom line: why would such AIs bother with or even tolerate us? We seem to just get in the way and mess things up! Actually, in the case of Simmons' TechnoCore he seems to be thinking along the same lines, with his AIs (or at least some of them) effectively becoming sinister enemies of humanity, and I guess you could also cite examples like The Matrix or Terminator.
Incidentally, as a software engineer myself, I do personally believe that sentient AIs are inevitable sooner or later.