> Is there any good reason to assume that future AGI will share our sense of morality?
I think it would be surprising if it did. Just as our morality is shaped by our understanding of the world and our capabilities, a future AGI's morality would be shaped by its own understanding and capabilities. It might do something we consider terrible that isn't actually terrible, simply because we lack the capacity to understand why it's doing what it's doing. I'm thinking of how a dog might experience a trip to the vet as punishment, when we're actually acting out of love.