After reading an article in this month's New Scientist, a few thoughts have popped into my silly little head about programming languages.
Is it possible to have an infallible programming language? One that can not only always perform the tasks it was designed to do, but also one that cannot be misunderstood. If we look at a natural language like English, it has evolved over hundreds if not thousands of years, yet in conversations where the concepts discussed are detailed, intangible and ultimately quite complex, we still often have to ask for clarification. Could a language based on the logical functions of microchips ever surpass this level of universal understanding?
Programming languages (at least high-level ones) use keywords to express ideas that can be linked together in chains to form complex functions, much like a sentence can be formed from words to convey a complex idea. But with sentences, the economy of word use is directly related to the speaker's or writer's vocabulary: if I'm talking to someone and they use a word I don't understand, I may ask for its meaning, and they will use more words to describe that single word to me. In a similar sense, where a microchip can only perform a small set of basic operations (really not sure on the number here, something like 8 to 16, maybe fewer), the keywords used in programming languages are made up of several (if not hundreds) of these operations in a particular order.
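As a rough illustration of that decomposition (bytecode is an intermediate layer, not the chip's own instructions, and the variable names here are made up, but the principle is the same), Python's dis module can show how a single high-level line expands into a sequence of smaller steps:

    # A sketch of the idea: one high-level line of code is really
    # many smaller low-level operations carried out in a fixed order.
    import dis

    # Disassemble a single (hypothetical) assignment statement and print
    # the sequence of bytecode operations it turns into.
    dis.dis("total = price * quantity + tax")

Running that prints a dozen or so individual load, multiply, add and store operations for what reads as one "sentence" of code.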
Programmers pride themselves on efficient code, in the same way that writers or linguists pride themselves on efficient word use and sentence structure (I haven't really mentioned the syntax of languages, which is vitally important and relevant to the discussion). Could a programming language ever reach the level of a natural one?
All these thoughts bring me on to the idea of sentience (although it is not yet known what it really is or how it arises), the idea of AI sentience, and the philosophical Sorites paradox. This has made me wonder (partially for a WIP) about the line that could, in the future, be drawn between AI sentience and just plain computing power.
Say that taking away a single point of IQ from my brain power would not make me non-sentient; but if my brain power were removed in increments, eventually I would drop below the idiot threshold (see the very interesting link from Ursa in a different thread) and become non-sentient (or indeed lower). I'm interested in how this would relate to AI sentience, as it wouldn't be too great a leap to assume that the sentience of future AIs would and could be measured in terms of MHz or THz or whatever, or teraflops, or GB of RAM (obviously ignore the wildly different kinds of quantity those are and assume they are relevant to the sentient AIs of the future). If I take one single flop from my sentient AI's mind, it wouldn't suddenly be considered non-sentient, but at some point (if I kept removing flops) it would descend into regular AI/computer status.
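A toy sketch of that worry, assuming (purely for illustration) that "sentience" could be read off a single capability number against an arbitrary cut-off, which is exactly the assumption the paradox pokes at; the numbers below are invented and not claims about real minds or machines:

    # Hypothetical, arbitrary cut-off for this toy model only.
    SENTIENCE_THRESHOLD = 1_000_000  # "flops"

    capability = 1_000_100  # start comfortably above the toy threshold
    while capability > 0:
        if capability < SENTIENCE_THRESHOLD:
            # The model flips from "sentient" to "not" over one single flop.
            print(f"First judged non-sentient at {capability} flops - "
                  f"but why should one flop either side matter?")
            break
        capability -= 1  # remove a single flop at a time

The sharp flip at one exact value is what feels wrong, which is the Sorites point: any single-number threshold for sentience looks arbitrary.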
Anyhoo, I wonder if you guys/gals have any thoughts on the matter (assuming I haven't dulled you into a state of moronic non-sentience).