What is moral uncertainty? Why does it matter for moral decision making?
Moral uncertainty is uncertainty about which moral theory or set of moral claims is correct. It matters because rival theories can recommend different actions in the same situation; without some way of handling that conflict, we cannot say which action is morally best overall.
Why is it important to address moral uncertainty when developing AI systems?
What are some potential advantages of using a moral parliament over hard-coding values into an AI?
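One common way to picture a moral parliament is as a weighted voting procedure: each moral theory receives delegates in proportion to our credence in it, the delegates vote, and the winning option is chosen. The sketch below is a minimal, purely illustrative version; the theory names, credences, and options are invented examples, and plurality voting is just one of many possible aggregation rules.

```python
def moral_parliament(credences, preferences, options, n_delegates=100):
    """Toy moral parliament: allocate delegates to each theory in
    proportion to credence, let every delegate vote for that theory's
    top-ranked option, and return the plurality winner with the tally."""
    votes = {opt: 0 for opt in options}
    for theory, credence in credences.items():
        delegates = round(credence * n_delegates)  # proportional allocation
        favorite = preferences[theory][0]          # theory's top-ranked option
        votes[favorite] += delegates
    winner = max(votes, key=votes.get)
    return winner, votes

# Hypothetical credences and rankings, for illustration only.
credences = {"utilitarian": 0.6, "deontological": 0.4}
preferences = {
    "utilitarian": ["donate", "volunteer"],
    "deontological": ["volunteer", "donate"],
}
winner, tally = moral_parliament(credences, preferences, ["donate", "volunteer"])
# With these numbers, "donate" wins 60 delegates to 40.
```

Unlike hard-coding a single value system, the credences here can be revised as our moral views change, and minority theories still influence close decisions rather than being ignored entirely.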