JonH
Refreshed and Renewed
It seems to me there are several possible approaches, and we're a long way off all of them.
The first I thought of is functional neuroimaging, an extension of fMRI, PET, EEG, etc. At the moment you can only get a very crude idea of what's going on. Will this get substantially better, or is it a dead-end technique?
Second, people will agree to have implanted chips with a neural interface. Prof. Warwick has been implanting simple chips into his body for some years. He connected himself to the Internet over a decade ago and predicted we'll all have chips in our brains within 30 years. Personally, I think those are the same ever-receding 30 years in the future that commercial fusion is always predicted to be.
Third, prediction by observation. If you know someone really well, you often know what they're thinking. You can finish each other's sentences. Will machines be able to read all of us this way? With the advent of increasingly good sensors to monitor body movement, and better voice recognition and parsing, will what we do and say give away our thoughts with any level of reliability?
Any other approaches come to mind? [Just think into the interface.]
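For what the "finishing each other's sentences" idea looks like in its very simplest machine form, here's a toy sketch (my own illustration, not anyone's actual system): a bigram model that predicts the next word purely from what it has observed someone say before. Real prediction by observation would fuse many sensor and speech streams; this just shows the statistical core of the idea.

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count word-to-next-word transitions across observed sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Return the most frequently observed word after `word`, or None."""
    counts = follows.get(word.lower())
    return counts.most_common(1)[0][0] if counts else None

# A few (made-up) observed utterances from one person.
corpus = [
    "I am going to the shops",
    "I am going to bed",
    "I am going to the cinema",
]
model = train_bigrams(corpus)
print(predict_next(model, "going"))  # "to" follows "going" in every example
```

Even this crude version "knows" the person a little; the open question is whether richer observation ever gets from likely-next-word to actual thoughts.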