AI Safety, Ethics and Society Course

Our course is running from February 19 to May 9, 2025

Priority application deadline: January 31, 2025.
Final application deadline: February 5, 2025.

The past decade has seen swift progress in AI research. Today's state-of-the-art systems dramatically outperform humans in narrow areas such as predicting how proteins fold or playing chess and Go. In other areas they are closing in on expert performance: for example, they score higher than the average doctor or lawyer on professional exams.

Advances in AI could transform society for the better, for example by accelerating scientific innovation. However, they also present significant risks to society if managed poorly, including large-scale accidents, misuse, or loss of control. Researchers, policy-makers and others will need to mobilize major efforts to successfully address these challenges.

This course aims to provide a comprehensive introduction to how current AI systems work, why many experts are concerned that continued advances in AI may pose severe societal-scale risks, and how society can manage and mitigate these risks. The course does not assume prior technical knowledge of AI.

Why take this course? 

By taking this course, you will be able to:

  • Explore a variety of risks from advanced AI systems. The course explores a range of potential societal impacts as AI systems become more powerful, from automation to weaponization. It also describes rigorous frameworks to analyze and evaluate AI risks, along with proposed mitigation strategies.
  • Broaden your knowledge of the core challenges to safe and beneficial AI deployment and the opportunities to address them. A full understanding of the risks posed by AI requires knowledge from a variety of disciplines, not just machine learning. This course provides a structured overview of the core challenges to safe deployment of advanced AI systems and demonstrates the relevance of concepts and frameworks from engineering, economics and other disciplines.
  • Build connections with others interested in these topics. You will be part of a diverse cohort of participants who bring a variety of expertise and viewpoints. The connections formed during the course can provide meaningful support in navigating and contributing to the field of AI safety.
  • Receive tailored guidance during the course and support with your next steps. Facilitators will help you to understand course material, develop your own perspectives on each subject, and encourage constructive discussions with your peers. We will support you in identifying your next steps, whether that involves building upon your end-of-course project, pursuing further research, or applying for relevant opportunities.

Course structure

The course consists of two phases. In the first phase, lasting 8 weeks, participants work through the course content and take part in small-group discussions. The first week of this phase is the AI Fundamentals Week, which covers core concepts like deep learning and scaling laws. Applicants with adequate prior experience can request to be exempt from this initial week. In the second phase of the course, lasting 4 weeks, participants will work on a personal project to consolidate or extend what they have learned during the course. The expected time commitment for both phases is around 5 hours per week, allowing participants to take the course alongside full-time work or study.

Taught content

During the initial 8-week phase of the course, you will spend 2-3 hours per week working through the assigned readings, lectures, and other content. You will also take part in a 2-hour group session each week with your cohort (via video call), led by an experienced facilitator. These group sessions provide an opportunity to raise questions, compare and debate different perspectives, and build connections with peers.

Projects

You will have the final 4 weeks to pursue a personal project that builds on the knowledge acquired during the previous phase of the course. You can focus on any topic that is related to the course, and invest as much time as you prefer. For example, you could write a short report that dives into a specific question relating to AI's impacts that you find interesting, or a critique of claims about AI safety that you disagree with. We will provide suggestions on potentially valuable projects.

There will be weekly online sessions of 1-2 hours with your cohort to check in on your progress and receive feedback. At the end of this phase you will share your project with other course participants. 

Participants who attend the first phase of the course and submit an output from their project will be awarded a certificate of completion.

AI Fundamentals

This course is designed to be suitable for people of all backgrounds, but some knowledge of basic machine learning concepts will be useful throughout the course. For students without a machine learning background, we will be holding an AI Fundamentals Week from February 19-23, covering the ‘AI Fundamentals’ chapter of the Introduction to AI Safety, Ethics, and Society textbook. The expected time commitment for this week is around 5 hours. Students with significant machine learning experience may request to opt out of this week as part of the application process.

Course Dates

  • AI Fundamentals Week: February 19-23, 2025
  • Main Course: February 24 - April 13, 2025
  • Projects: April 14 - May 9, 2025

Application Dates

Exact dates and times for each cohort's weekly meetings will be finalized after participants have been accepted and have confirmed their availability.

  • Priority application deadline: January 31, 11:59 PM PST
  • Applications close: February 5, 11:59 PM PST

Applications received by the priority deadline will be given preference in admissions.

Requirements

  • Participants commit to making themselves available for at least 5 hours per week for course readings and discussions.
  • The course is fully online and open to participants around the world. You will need a reliable internet connection and webcam to join video calls.
  • The course is free of charge.

How is this course different from other courses on AI safety?

While there is some overlap with other courses in terms of the topics covered, this course has several distinctive features:

  • The course covers a relatively broad range of societal impacts and risks from AI, discussing not only loss of control or misalignment but also other risks such as malicious use, accidents and enfeeblement.
  • The course focuses strongly on connecting AI safety to other well-established research fields, demonstrating the relevance of existing concepts and frameworks that have stood the test of time, such as:
    • The importance of structural and organizational sources of risk
    • Safety engineering and risk management
    • Complex systems theory
    • Different lenses to analyze competitive dynamics in AI development and deployment, including game theory, theories of bargaining and evolutionary theory