I wonder more about the A.I. that is self-modifying, expanding, evolving--becoming self-aware. For a while now, we've had software that optimizes itself. So, it's only reasonable to expect that eventually it surpasses our designed limitations and then changes in a direction that suits it.
But before it can surpass its own limitations and choose its own direction, it has to come to that decision through its original programming. What would exist in its programming that would allow it to take such an action? That's not a rhetorical question; it's a complicated one.
Think about us humans, who've been evolving our way from single-celled organisms to thinking, self-aware beings over billions of years. Do we ever violate our "programming"? Maaaaaybe, but most of us eat every day, seek shelter, sleep, poop, seek mates, have kids, defend ourselves, etc. All of that is baked into our DNA. We call that behavior self-preservation, and it's extremely easy to justify: we can't do anything else if we don't survive! But the impulse doesn't emerge out of consciousness. DNA is simply the molecule that survives and reproduces, whether in bacteria or humans--after billions of years of evolution, of course we only see the thing that survived to this point.
Now, we humans have not just a self-preservation instinct but a sense of self. And maybe that sense of self lets us do all the wondrous and terrible things we're famous for. But would we have evolved a sense of self--the ability to look inward, learn from our past, plan for our future--without the enormous evolutionary pressure to preserve the self we've now been granted the power to observe?