AI systems, and the societies in which they operate, belong to the class of complex systems. This has important implications for ensuring AI safety. Complex systems exhibit surprising behaviors and resist conventional analysis methods that examine individual components in isolation. We explore how interventions in complex systems often produce unintended side effects. To develop effective strategies for AI safety, we must adopt approaches that account for the distinctive properties of complex systems and enable us to anticipate and address AI risks.
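To make the idea of unintended side effects concrete, here is a minimal, hypothetical Python sketch (not drawn from the book) of a simple two-species feedback loop, a toy discrete predator-prey model with made-up parameters. An "intervention" that culls predators, intended only to protect the prey, ends up reshaping the dynamics of the whole system rather than just the targeted variable.

```python
# A minimal, hypothetical sketch illustrating how intervening on one
# component of a coupled system changes system-wide dynamics.
# Toy discrete predator-prey model; all parameters are illustrative.

def simulate(steps=200, cull_predators_at=None):
    prey, predators = 40.0, 9.0
    history = []
    for t in range(steps):
        if cull_predators_at is not None and t == cull_predators_at:
            predators *= 0.2  # intervention: remove 80% of predators
        # Discrete Lotka-Volterra-style update.
        births = 0.1 * prey
        predation = 0.002 * prey * predators
        predator_growth = 0.001 * prey * predators
        predator_deaths = 0.1 * predators
        prey = max(prey + births - predation, 0.0)
        predators = max(predators + predator_growth - predator_deaths, 0.0)
        history.append((t, prey, predators))
    return history

baseline = simulate()
intervened = simulate(cull_predators_at=50)

# Compare peak prey populations in the two scenarios: the intervention
# alters the boom-and-bust cycle of the entire system, not just the
# predator population it directly targeted.
print("baseline peak prey:   ", max(p for _, p, _ in baseline))
print("intervened peak prey: ", max(p for _, p, _ in intervened))
```

The point of the sketch is not the specific numbers but the structure: because the two variables are coupled by feedback loops, a local intervention propagates through the system and produces effects that component-by-component analysis would miss.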
Citation:
Dan Hendrycks. Introduction to AI Safety, Ethics and Society. Taylor & Francis, (forthcoming). ISBN: 9781032798028. URL: www.aisafetybook.com