The future of AI

Here’s a question that bothers me.
How can anybody using AI be certain that the program was not intentionally biased by the original programmers? This could be done to push a particular ideology over other options.

It seems to me that, in a future where there is a selection of AI systems available, we might need something analogous to a virus checker to verify AI impartiality and make an informed choice about which application to use.
I was exploring ChatGPT. I wanted to see if it was good at editing, so I used an old short story to try it out. The plot revolved around drug use and sex with aliens. ChatGPT flashed up some red text saying my story was in violation of its moral code. If it has a moral code, it probably has a variety of other, maybe more subtle and less obvious, prejudices.

Artificial intelligence doesn't exist, and the machines are not the danger. It is the humans behind the machines, and the possibility of installing subtle influences on a global population, that is the worry.
 
