AI Safety Careers Course India

What is AI Safety and why should you care?

AI safety focuses on developing technology and governance interventions to prevent both short-term and long-term harm caused by AI systems. India is increasingly relevant in the global AI ecosystem. In 2023, India was among the 28 nations that attended the UK AI Safety Summit. The summit concluded with the Bletchley Declaration, which recognized the need for global cooperation to research, understand, and mitigate the risks of frontier Artificial Intelligence (AI) technologies.

As much as advanced AI systems represent the potential for unprecedented benefits to humanity, they also come with massive risks. To shape the future trajectory of this technology for human welfare, we need highly skilled talent working on its most important problems. The AI Safety Careers Course (AISCC) aims to address this by upskilling Indian talent interested in using their technical skills to advance the state of research in AI safety.
We need talent like you to contribute to the solution!

Program Overview

The AI Safety Careers Course is designed to introduce the key concepts in AI safety and equip you with the knowledge to pursue a range of career paths, such as research, focused on creating safe and responsible AI.

  • The course is free of cost and runs in a completely online format for 8-10 weeks, with the option of pursuing an applied project for an additional 1-2 weeks.
  • It is designed to equip advanced undergraduates, graduate students, and working professionals with foundational education on the potential risks and management strategies associated with rapid AI advancement.

The course is run by Axiom Futures. It is incubated by Impact Academy, an educational non-profit that enables global talent to become leaders, thinkers, and doers who use their careers to mitigate global catastrophic risks and contribute to a better future.

This course has been developed with support from experts at leading organizations such as FAR AI and BlueDot Impact.

Apply Here

Why do this course

Structured learning program on AI safety concepts. When learning about AI safety for the first time, it can be difficult to know where to start. The program will give you the structure and accountability you need to explore a wide variety of AI Safety concepts and give you a rich conceptual map of the field.

Experts in AI safety facilitate your learning. You will be guided by an expert facilitator who will help you navigate the course content, develop your own views on each topic, and foster constructive debate between you and fellow participants.

You’ll meet like-minded people. Your cohort will comprise people who are similarly new to AI safety but bring a wealth of different expertise and perspectives to the discussions. Many participants form long-lasting and meaningful connections that support them in taking their first steps in the field.

You’ll be supported to take your next steps. This could involve doing further work on your end-of-course project, establishing an independent research agenda, applying for programs and jobs at AI safety labs, think tanks, and governments, or continuing independent study by pursuing a PhD or postdoc at top institutes. We also maintain relationships with many individuals and organizations working on AI alignment and will share relevant opportunities with you.

Priority consideration for our 2-month Winter AI Safety Research Fellowship. Top-performing students will be directly considered for our fully funded 8-week winter fellowship program. The fellowship includes a competitive stipend and the opportunity to work at the London Initiative for Safe AI (LISA) in the UK. During the second leg of the program, fellows will collaborate full-time in a co-working space in Hyderabad. Fellows will also receive mentorship from top researchers in AI safety.

Who is this course for

The course is intended for the following audience:

  • Working professionals: some machine learning experience is helpful but optional; you should be interested in exploring AI safety careers or in managing or supporting technical AI researchers. We also welcome professionals without a tech background who would like to understand the AI safety landscape.
  • PhD/Masters/Undergraduate/High School students with STEM backgrounds who are considering a career in technical AI safety to reduce risk from advanced AI.
  • The course is primarily aimed at Indian citizens, NRIs, OCI card holders, and Indians living, studying, or working abroad. That said, we encourage candidates of all nationalities and regions to apply, especially from other Global South countries.

If none of these sound like you, but you’re still interested in technical AI safety research, we still encourage you to apply. The research field needs people from a range of backgrounds and disciplines, and we can’t capture all of them in this list.

What this course is not

This course might not be right for you if you are looking for:

  • A course to teach you general programming, machine learning, or AI skills. Our resources page lists a number of courses and textbooks that can help with this. Note that these skills are not hard prerequisites for taking our AI safety course.
  • A course that teaches general ML engineers common techniques for how to make systems safer. Instead, this course is for people involved or interested in AI safety and technical governance research, e.g. investigating novel methods for making AI systems safe.
  • A course that covers all possible AI risks and ethical concerns. Instead, our course primarily focuses on catastrophic risks from future AI systems. That said, many of the methods targeting catastrophic risks can also be applied to other areas of AI safety.
  • A course for government policymakers and related stakeholders to learn about AI governance proposals.

Program structure

The program curriculum is meticulously designed by experts to challenge you to think about problem statements in AI safety, intuitively grasp the nature of these risks, and be inspired to contribute to a positive AI future for humanity.

Participants must complete the reading materials each week. In addition, they need to attend a 90-minute seminar-style discussion with their cohort, convened by our in-house facilitators, who are experienced researchers working in AI safety.

Topics covered during the course include:

  • Introduction to Machine Learning (Optional)
  • Artificial General Intelligence
  • Reward misspecification and instrumental convergence
  • Goal misgeneralisation
  • Task decomposition for scalable oversight
  • Adversarial techniques for scalable oversight
  • Interpretability
  • Agent foundations / Technical governance
  • Careers in AI Safety

Details and logistics

Course Dates: June 17 - August 18, 2024

Optional Applied Project: August 19 - September 1, 2024

Application Details:

  • Applications Open: April 23, 2024
  • Applications Close: May 19, 2024, 23:59 IST.
  • Notification of Acceptance: May 27, 2024


What are the requirements

  • Availability of at least 5 hours per week to study the readings assigned from the AI Safety Careers Curriculum
  • Availability of 1.5 hours per week to attend the weekly facilitated sessions with your cohort.
  • A reliable internet connection and webcam (built-in is fine) to join video calls
  • English language skills sufficient to constructively engage in live sessions on technical AI topics

Application Process

[Applications are now Closed]

We aim to make most application decisions by May 27th, after the application deadline closes at EOD on May 19th, 2024. Keep an eye on your email during this time, as, if accepted, you’ll need to confirm your place. All legitimate emails regarding the course will come from

Contact us if you have any questions about applying for the course.

About Axiom Futures

📖 Who We Are

Axiom Futures is an India-based startup that enables talent in India to join the AI safety field by running cutting-edge educational and research programs with professionals from leading organizations. We believe we can support India in playing an important role in creating safe and ethical AI, especially by enabling Indian talent to contribute to safeguarding humanity's long-term future through aligning AI systems to human values.

💻 What We Do

At Axiom Futures, we collaborate with talented individuals in India to address technical and policy issues related to AI safety. Our focus is to provide opportunities for advanced undergraduates, graduate researchers, and young professionals to develop their skills in AI safety and explore challenging ideas. We also offer mentorship for career development and support for job placements. To achieve this, we conduct intensive, high-quality short courses and fellowships, along with networking opportunities with leading experts from around the world.

Starter Resources

Feel free to refer to the following resources to familiarize yourself with AI Safety:

With Collaborators From


Participant Testimonials

Basil Labib

The AI Safety Careers Program at IIT Delhi was very engaging and provided a solid introduction to the fundamentals of AI safety. The facilitators were helpful and responsive and made sure that everyone had a productive time.

Basil is a final-year B.Tech student pursuing Textile Engineering with a minor in Computer Science at IIT Delhi.

Shivam Gupta

The AI Safety course run by Axiom Futures at IIT-D was instrumental in introducing me to the field of alignment research. It has proved pivotal in consolidating my interest in pursuing research in AI safety as a full-time career. I would like to thank the facilitators for their continued support in refining my research agenda around AI safety.

Shivam is a final-year B.Tech student pursuing Computational Mechanics at IIT Delhi.