We review a variety of AI risks that have the potential to lead to catastrophic societal outcomes. These risks are organized into four categories: malicious use, in which individuals or groups intentionally use AIs to cause harm; AI race, in which competitive environments compel actors to deploy unsafe AIs or cede control to AIs; organizational risks, in which human factors and complex systems increase the chances of catastrophic accidents; and rogue AIs, in which we confront the inherent difficulty of controlling AI systems that may outperform humans in many tasks. For each category, we examine specific hazards and present stories that illustrate how such risks might play out.
Citation:
Dan Hendrycks. Introduction to AI Safety, Ethics, and Society. Taylor & Francis, (forthcoming). ISBN: 9781032798028. URL: www.aisafetybook.com