Announcing winners of Future Academy v2: India Edition!

This month, we are thrilled to announce the winners of our Future Academy program and celebrate the achievements of our Fellows!

Future Academy is our flagship fellowship that aims to equip students and young professionals from all over the world with tools, knowledge, and networks to pursue ambitious careers contributing to a better future. The second iteration of Future Academy was conducted in India and ran from November 2023 to March 2024, culminating in a final Impact Summit between 29–31 March 2024 in Bengaluru, India. 

Fellows from the three tracks of the fellowship (Incubator, Research, and General Career) delivered presentations on their projects. During the Impact Summit, before the final submission in April, they had the opportunity to pitch their ideas in real time, receive feedback, and improve their work.

Some of the teams also had the opportunity to present during the summit in front of a larger audience of guests, which included individuals working in AI safety research, startups, think tanks, and social impact organisations in India, as well as members of the EA Bangalore, ACX, Global Shapers, and Emergent Ventures India communities.

The final written project submissions were assessed by a three-member jury of Vilhelm Skoglund, Kristian Rönn, and Varun Deshpande, who distributed a prize pool of $50,000 among six teams. Submissions were scored against specific criteria covering both a project's impact potential and how it furthered the fellow's career goals.

Given the limited prize pool, we could only recognise a small number of projects, even though many other projects also reflected high-quality work.

Winners of Future Academy v2.0

The following projects stood out for their impact potential and won prizes.

The first prize ($30,000) was claimed by one team:

  • ‘AI Chip Enclosure Project’, by Riccardo Varenna and their technical co-founder - [Incubator Track]

The risks posed by advanced artificial intelligence are significant and growing, making the need for effective governance mechanisms more urgent than ever. In response to this challenge, Riccardo and their co-founder's AI Chip Enclosure project aims to start an organisation that will develop novel technology for compute governance, analogous to how seismometers enabled enforcement of the Comprehensive Nuclear-Test-Ban Treaty. The technology involves a secure layer embedded with a "physical unclonable function" around AI chips, which ensures that security features on the chips cannot be disabled, tampered with, or replicated without detection.

Such a breakthrough would enable reliable verification of compliance with compute governance regulations, which is essential for building trust among international stakeholders. This would make international regulations on AI development enforceable, potentially preventing the unchecked spread of powerful AI technologies. By providing a way to ensure compliance, this technology could foster international cooperation and help maintain a balance of power, thereby mitigating the risk of an arms race in AI development. You can read more about their project and how to support them here.
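For intuition, here is a minimal Python sketch of the kind of challenge-response verification such an enclosure could support. Everything in it is a hypothetical stand-in rather than the team's actual design: a real physical unclonable function derives its responses from uncontrollable manufacturing variation in the hardware, which the `SimulatedPUF` class below merely fakes with a stored secret.

```python
import hashlib
import secrets

class SimulatedPUF:
    """Hypothetical stand-in for a physical unclonable function (PUF).

    A real PUF's responses arise from physical manufacturing variation and
    cannot be read out or copied; a stored secret merely mimics that here.
    """

    def __init__(self):
        self._device_secret = secrets.token_bytes(32)  # stands in for physical variation

    def respond(self, challenge: bytes) -> bytes:
        # Tampering with a real enclosure would alter its physical structure,
        # changing the responses and so breaking verification.
        return hashlib.sha256(self._device_secret + challenge).digest()

def enroll(puf: SimulatedPUF, n: int = 8) -> list[tuple[bytes, bytes]]:
    """Record challenge-response pairs while the chip is known to be genuine."""
    pairs = []
    for _ in range(n):
        challenge = secrets.token_bytes(16)
        pairs.append((challenge, puf.respond(challenge)))
    return pairs

def verify(puf: SimulatedPUF, enrolled: list[tuple[bytes, bytes]]) -> bool:
    """Replay enrolled challenges later to detect cloning or tampering."""
    return all(puf.respond(c) == r for c, r in enrolled)

genuine = SimulatedPUF()
table = enroll(genuine)
print(verify(genuine, table))         # True: the same physical device
print(verify(SimulatedPUF(), table))  # False: a clone cannot reproduce responses
```

The appeal of a scheme like this is that verification requires little trust in the chip's owner: whoever holds the enrolled challenge-response table can later check that the original, untampered hardware is still in place.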

The second prize ($7,000 each) went to two projects:

  • Catalyze, an incubation program for new AI safety research organisations

The AI safety research field is currently small, with roughly 400-1,000 researchers and around 20 research organisations focused on preventing severely negative outcomes from AI. Resources for AI safety (talent and money) are becoming increasingly available, and additional organisations can help harness these resources and translate them into high-quality research. Such organisations can increase the number of thoroughly explored research bets, expand the number of jobs for research talent, and make the AI safety field more scalable. To speed up and improve the growth of the field's outputs, this project aims to create an organisation that enables the founding of additional AI safety research organisations.

Catalyze aims to pilot a 3-month Charity Entrepreneurship-style incubation program for technical AI safety research organisations, run partly in person in London (LISA). The program would help participants with a promising organisation idea find a complementary co-founder, and would give them access to a pool of mentors and advisors, a seed funding circle, tailored support, a community of fellow founders, and many networking opportunities.

Overall, Catalyze aims to increase the number of impactful AI safety research organisations by bringing together strong researchers, entrepreneurs and resources in an intensive incubation program. You can read more about their project here.

 

  • An analysis of improving indoor air quality for pandemic preparedness in sub-Saharan Africa

The team conducted a quantitative and qualitative analysis of the potential of improving indoor air quality as a means of protecting against pandemics in sub-Saharan Africa. Poor indoor air quality (IAQ), laden with infectious bioaerosols, poses a significant challenge to global public health systems, leading to seasonal illnesses and even pandemic outbreaks. The costs of respiratory illness in sub-Saharan Africa are staggering, amounting to about 60.2 million DALYs (11.8% of global DALYs) in 2019 alone. Additionally, the rising concern over a future airborne pandemic is particularly threatening to this region, whose population is projected to triple by 2100. These pressing concerns drove the team's investigation into the viability of improving IAQ as an intervention to bolster health, well-being, and pandemic preparedness efforts in the region.

The rationale for IAQ improvement in the region rests on several key assertions: first, that poor IAQ contributes to high rates of morbidity and mortality in sub-Saharan Africa; second, that it poses a significant barrier to pandemic preparedness efforts, imperiling millions of lives across near- and far-future generations in Africa and beyond; third, that tractable ways to improve IAQ in the region exist; and lastly, that hits-based donors, who typically embrace higher-risk bets, may find supporting this cause particularly attractive. They also lay out the case against improving IAQ in sub-Saharan Africa, including concerns over feasibility and tractability, inadequate real-world data on its promise, enforcement and compliance challenges in the region, and the risk of exacerbating the climate crisis. You can read more about their project here.

Three projects were awarded the third prize ($2,000 each):

  • ‘Modeling Cooperation Between Super-Intelligences and Humans’, by Mariana Meireles - [Research Track]

Mariana's project investigates the dynamics of cooperation among intelligent agents, drawing on interdisciplinary research that identifies cooperation as a key attribute of human success globally. Her study focuses on potential interactions between humans and artificial general intelligences (AGIs) if both exist as distinct intelligent entities on Earth. This includes exploring possibilities for beneficial collaboration as well as the risks of competition and conflict over resources, in which AGIs could engage in harmful power-seeking behaviours.

To analyze these interactions, Mariana uses the Iterated Prisoner's Dilemma (IPD) within a game-theoretic framework, a proven method for modelling strategic interactions that involve conflict and cooperation, such as historical geopolitical conflicts. This approach simulates repeated interactions between highly capable AGIs and less capable humans, aiming to determine how differences in intelligence influence cooperative or exploitative behaviour. The project plans to integrate model simulations with behavioural experiments using large language models to identify conditions that promote cooperation over exploitation. You can read more about their project here.
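As a rough illustration of this setup (a sketch under assumed strategies, not Mariana's actual model), the Python snippet below runs an iterated prisoner's dilemma between an asymmetric pair: a "capable" agent that can analyse the opponent's full history and exploit unconditional cooperators, and a "limited" agent restricted to noisy tit-for-tat. The strategies, payoffs, and noise level are all illustrative assumptions.

```python
import random

# Standard IPD payoffs with T > R > P > S: (my_move, their_move) -> my_payoff.
PAYOFFS = {
    ("C", "C"): 3,  # mutual cooperation (R)
    ("C", "D"): 0,  # sucker's payoff (S)
    ("D", "C"): 5,  # temptation to defect (T)
    ("D", "D"): 1,  # mutual defection (P)
}

def capable_agent(own_history, opp_history):
    # Illustrative "more intelligent" strategy: detect and exploit an opponent
    # that has cooperated unconditionally, otherwise mirror their last move.
    if len(opp_history) >= 3 and all(move == "C" for move in opp_history):
        return "D"
    return opp_history[-1] if opp_history else "C"

def limited_agent(own_history, opp_history):
    # Illustrative "less capable" strategy: tit-for-tat that occasionally
    # forgets the opponent's last move and defaults to cooperating (10% noise).
    if opp_history and random.random() > 0.1:
        return opp_history[-1]
    return "C"

def play(rounds=100):
    capable_hist, limited_hist = [], []
    capable_score = limited_score = 0
    for _ in range(rounds):
        c_move = capable_agent(capable_hist, limited_hist)
        l_move = limited_agent(limited_hist, capable_hist)
        capable_score += PAYOFFS[(c_move, l_move)]
        limited_score += PAYOFFS[(l_move, c_move)]
        capable_hist.append(c_move)
        limited_hist.append(l_move)
    return capable_score, limited_score

# Scores accumulate over 100 rounds; any capability gap shows up in the totals.
print(play())
```

Sweeping parameters such as the noise level or the exploitation threshold is one way a model like this can probe which conditions tip the dynamic from cooperation toward exploitation.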

  • A reference report on AI governance in India

This project aimed to provide a comprehensive analysis of AI governance in India, intended to serve as a reference for the AI safety community and to guide India's strategic role in shaping global AI governance. The report emphasized India's importance as a major economy with significant market size, talent, and data resources, noting its influence in international AI safety discussions and partnerships. Domestically, India is focused on developing indigenous AI capabilities and infrastructure and on attracting global tech investment, despite challenges in semiconductor development, ethical AI use, and addressing bias and inclusion.

The report references key policies and initiatives, including NITI Aayog's National Strategy for AI and various expert group reports, and highlights Bhashini's role in enhancing digital public infrastructure for better service delivery. It also covers perspectives from influential industry groups and think tanks, mapping out India's AI discourse. Suggested next steps include soliciting expert feedback, engaging with key stakeholders, and promoting open forum discussions. Longer-term strategies involve fostering dialogue on AI safety, partnering with think tanks and government bodies, enhancing public discourse, and expanding community building by scaling up Impact Academy's programs on AI safety and governance in India. You can read more about their project here.

  • An organisation to boost Global South participation in AI safety and governance

This team's project proposes an organisation focused on addressing critical issues in AI safety and governance, with a broader interest in global catastrophic risks. Recognizing the disproportionate impact of these risks on the Global South, where 88% of the world's population resides, the project aims to boost the region's involvement in shaping future policies and strategies. Historical injustices and economic disparities exacerbated by globalization have left many communities vulnerable, particularly amid rapid technological advancement.

To tackle these challenges, the team plans to implement outreach campaigns, develop educational resources, and establish capacity-building partnerships. They will engage the public through social media to raise awareness of AI safety, and create digital resources and introductory courses to educate and identify talent. By partnering with developed countries and offering scholarships and mentorships, they intend to empower individuals from underserved regions to participate actively in global governance and address global risks, promoting a more inclusive and equitable global landscape.

Once again, we congratulate all the winners and fellows on their unwavering dedication and hard work throughout the five-month program. We appreciate their efforts and hope that they continue to make a positive impact in their future endeavours and contribute to making the world a better place.
