1. Overview of Catastrophic AI Risks
0.1 Preface
1.1 Overview of Catastrophic AI Risks
1.2 Malicious Use
1.3 AI Race
1.4 Organizational Risks
1.5 Rogue AIs
1.6 Discussion of Connections Between Risks
2. AI Fundamentals
2.1 AI Fundamentals
2.2 Artificial Intelligence & Machine Learning
2.3 Deep Learning
2.4 Scaling Laws
2.5 Speed of AI Development
2.6 AI Fundamentals Conclusion
3. Single Agent Safety
3.1 Single Agent Safety
3.2 Monitoring
3.3 Robustness
3.4 Alignment
3.5 Systemic Safety
3.6 Safety and General Capabilities
3.7 Conclusion
4. Safety Engineering
4.1 Safety Engineering
4.2 Risk Decomposition
4.3 Nines of Reliability
4.4 Safe Design Principles
4.5 Component Failure Accident Models and Methods
4.6 Systemic Factors
4.7 Tail Events and Black Swans
4.8 Conclusion
5. Complex Systems
5.1 Complex Systems
5.2 Introduction to Complex Systems
5.3 Complex Systems for AI Safety
5.4 Conclusion
6. Beneficial AI and Machine Ethics
6.1 Beneficial AI and Machine Ethics
6.2 Law
6.3 Fairness
6.4 The Economic Engine
6.5 Wellbeing
6.6 Preferences
6.7 Happiness
6.8 Social Welfare Functions
6.9 Moral Uncertainty
7. Collective Action Problems
7.1 Collective Action Problems
7.2 Game Theory
7.3 Cooperation
7.4 Conflict
7.5 Evolutionary Pressures
7.6 Conclusion
8. Governance
8.1 Governance
8.2 Growth
8.3 Distribution
8.4 Corporate Governance
8.5 National Governance
8.6 International Governance
8.7 Compute Governance
8.8 Conclusion
9. Appendices
9.1 App. A: Normative Ethics
10.1 App. B: Utility Functions
11.1 App. C: Reinforcement Learning
12.1 App. D: Long-Tailed and Thin-Tailed Distributions
13.1 App. E: Evolutionary Game Theory
14.1 App. F: Other Cooperation Mechanisms
15.1 App. G: Intrasystem Conflict Causes
16.1 Acknowledgements
15.1 App. G: Intrasystem Conflict Causes
This appendix provides further detail on intrasystem goal conflict, which was briefly discussed in the Evolutionary Pressures section of the Collective Action Problems chapter. Such conflict may cause an AI system not to behave as a unified agent.