AI Safety Careers Course

What is AI Safety and why should you care?

AI safety focuses on developing technical and governance interventions to prevent both short-term and long-term harm caused by AI systems. India is increasingly relevant in the global AI ecosystem: in 2023, it was one of the 28 nations that attended the UK AI Safety Summit. The summit concluded with the Bletchley Declaration, which recognized the need for global cooperation to research, understand, and mitigate the risks of frontier Artificial Intelligence (AI) technologies.

While advanced AI systems hold the potential for unprecedented benefits to humanity, they also come with massive risks. Shaping the future trajectory of this technology for human welfare requires highly skilled talent working on its most important problems. The AI Safety Careers Course (AISCC) aims to address this need by upskilling Indian talent interested in using their technical skills to advance the state of research in AI safety.
We need talent like you to contribute to the solution!

Program Overview

The AI Safety Careers Course is designed to introduce the key concepts in AI safety and equip you with the knowledge to pursue a range of career paths in the domain.

  • The course is free of cost and runs in a fully online format for 8-10 weeks, with the option of pursuing an applied project for an additional 1-2 weeks.
  • It is designed to equip advanced undergraduates, graduate students, and working professionals with foundational education on the potential risks and management strategies associated with rapid AI advancement.

This course has been developed with support from experts at leading organizations such as FAR.AI and BlueDot Impact.

Why do this course

Structured learning program on AI Safety concepts. When learning about AI safety for the first time, it can be difficult to know where to start. The program gives you the structure and accountability you need to explore a wide variety of AI safety concepts and build a rich conceptual map of the field.

Experts in AI Safety facilitate your learning. You will be guided by an expert facilitator who will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.

You’ll meet like-minded people. Your cohort will comprise people who are similarly new to AI safety but will bring a wealth of different expertise and perspectives to the discussions. Many participants form long-lasting and meaningful connections that support them in taking their first steps in the field.

You’ll be supported to take your next steps. This could involve doing further work on your end-of-course project, establishing an independent research agenda, applying for programs and jobs at AI safety labs, think tanks, and governments, or pursuing further independent study through a PhD or postdoc at a top institute. We also maintain relationships with many individuals and organizations working on AI alignment and will share relevant opportunities with you.

Priority consideration for our AI Safety Research Fellowship. Top-performing students will be directly considered for our full-time research fellowship. This program includes a competitive stipend and the opportunity to work at the London Initiative for Safe AI (LISA) in the UK. Additionally, fellows will receive mentorship from top researchers in AI safety.

Who is this course for

The course is intended for the following audience:

  • Working professionals: some machine learning experience (optional) and an interest in exploring AI safety careers or in managing or supporting technical AI researchers. We also welcome professionals without a tech background who would like to understand the AI safety landscape.
  • PhD/Masters/Undergraduate/High School students with STEM backgrounds who are considering a career in technical AI safety to reduce risk from advanced AI.
  • The course is primarily aimed at Indian citizens, NRIs, OCI card holders, and Indians living, studying, or working abroad. That said, we encourage candidates of all nationalities and regions to apply, especially those from other Global South countries.

If none of these descriptions fits you, but you’re still interested in technical AI safety research, we encourage you to apply anyway. The research field needs people from a range of backgrounds and disciplines, and we can’t capture all of them in this list.

What this course is not

This course might not be right for you if you are looking for:

  • A course to teach you general programming, machine learning or AI skills. Our resources page lists a number of courses and textbooks that can help with this. Note that these skills are not hard prerequisites to taking our AI safety course.
  • A course that teaches general ML engineers common techniques for making systems safer. Instead, this course is for people involved or interested in AI safety and technical governance research, e.g. investigating novel methods for making AI systems safe.
  • A course that covers all possible AI risks and ethical concerns. Instead, our course focuses primarily on catastrophic risks from future AI systems. That said, many of the methods that address catastrophic risks can also be applied to other areas of AI safety.
  • A course for government policymakers and related stakeholders to learn about AI governance proposals.

Program structure

The program curriculum is meticulously designed by experts to challenge you to think about open problems in AI safety, help you intuitively grasp the nature of these risks, and inspire you to contribute to a positive AI future for humanity.

Participants must complete the reading materials each week. In addition, they attend a 90-minute seminar-style discussion with their cohort, convened by our in-house facilitators.

Topics covered during the course include:

  • Introduction to Machine Learning (Optional)
  • Artificial General Intelligence
  • Reward misspecification and instrumental convergence
  • Goal misgeneralization
  • Task decomposition for scalable oversight
  • Adversarial techniques for scalable oversight
  • Interpretability
  • Agent foundations / Technical governance
  • Careers in AI safety

Details and logistics

Course Dates: June 17 - August 18, 2024

Optional Applied Project: August 19 - September 1, 2024

Application Details:

  • Applications Open: April 23, 2024
  • Applications Close: May 19, 2024, 23:59 IST.
  • Notification of Acceptance: May 27, 2024

What are the requirements

  • Availability of at least 5 hours per week to study the readings assigned from the AI Safety Careers Curriculum.
  • Availability of 1.5 hours per week to attend the weekly facilitated sessions with your cohort.
  • A reliable internet connection and webcam (built-in is fine) to join video calls.
  • English language skills sufficient to constructively engage in live sessions on technical AI topics.

Application Process

[Applications are now closed]

We aim to make most application decisions by May 27, after applications close at the end of day on May 19, 2024. Keep an eye on your email during this period, as we’ll need you to confirm your place if accepted. All legitimate emails regarding the course will come from axiomfutures@impactacademy.org.

Contact us if you have any questions about applying for the course.

About Us

📖 Who We Are

Since 2023, our team has been working on AI safety field-building by running cutting-edge educational programs and research fellowships. We believe we can support technical talent from across the world, especially from regions like India, to contribute to safeguarding humanity's long-term future by aligning AI systems with human values. Our India-based AI safety research field-building initiative was previously called 'Axiom Futures'.

💻 What We Do

Our focus is to provide opportunities for advanced undergraduates, graduate researchers, and young professionals to develop their skills in AI safety and explore challenging ideas in the field. We also offer mentorship for career development and support for job placements. To achieve this, we conduct intensive, high-quality short courses and fellowships, along with networking opportunities with leading experts.

Starter Resources

Feel free to refer to the following resources to familiarize yourself with AI Safety:

With Collaborators From

FAR.AI · BlueDot Impact

Testimonials

Era

The AISCC played a crucial role in my introduction to AI Safety. It has significantly shaped my career interests in AI Safety Research or Technical AI Governance. I'm deeply grateful to the facilitators for their ongoing support, particularly our mentor, who assisted us in every possible way—from research ideas and resources to discussions beyond the formal meetings.

Era Sarda is pursuing a BTech in Mathematics and Computing at the Indian Institute of Technology (IIT) Delhi. She participated in the AI Safety Careers Course 2024.

Rohan

I found this course to be super illuminating at getting me caught up to the broad areas of research in AI safety, and being able to talk to and understand the work of AIS researchers. The field has a lot of jargon and is rather unapproachable from the outside, and this course helped me feel like more of an insider in the field.

Rohan Kapoor is pursuing a PhD in Mathematics at Dartmouth College. He participated in the AI Safety Careers Course 2024.

Urja

Loved the discussions each week, and I feel ready to pursue alignment research.

Urja Pawar is an AI governance and safety researcher with a PhD in Explainable AI from Munster Technological University, Ireland. She participated in the AI Safety Careers Course 2024.