
Addressing AI Safety’s Talent and Coordination Gaps

Untapped Talent in AI Safety: Many Eager Contributors, Few Opportunities

The AI safety field is witnessing a surge of interest and capable individuals, but available positions have not kept pace. Across major AI safety training programs, acceptance rates are often in the single digits. Collectively, AI safety fellowships accept <5% of applicants, meaning they turn away over 95% of aspiring contributors. In fact, these fellowships now receive more applications each year than the total number of people currently working in AI safety. This imbalance has been termed the “bycatch” problem – a vast pool of would-be researchers and engineers left on the sidelines simply due to limited slots. Many of these rejected candidates are still highly talented and motivated; in a field this consequential, it is striking that we are not trying harder to incorporate as much of this talent as possible.

Awareness of AI existential risks has grown globally, creating far more potential talent than traditional pipelines can absorb. BlueDot, an AI safety education initiative, plans to train 100,000 people in alignment fundamentals over the next few years, reflecting the millions now aware of AI risk. Yet only a tiny fraction of those enthusiasts can progress to advanced research roles under the current system. As was noted only a couple of years ago, there may be only on the order of 300–500 people worldwide actively working on AI safety research today, even though the pool of individuals with the requisite intelligence or skills likely numbers in the hundreds of thousands. In other words, the field remains small and elite, while an untapped army of capable people stands ready to contribute if given the chance. Many are even willing to work for little or no pay initially, driven by altruism and the desire for career capital. Indeed, volunteer-driven projects in the AI community have shown that passionate contributors can produce significant results when given structure – for example, the AI Safety Camp (a volunteer-based research program) has “kickstarted high-impact projects with volunteer effort,” where some teams produced enough early results to later secure funding. Volunteers can gain experience and prove their value: one survey found that regular volunteering correlates with 27% better odds of employment, and 60% of hiring managers value volunteer experience as much as paid work. In short, there is a substantial reservoir of global talent eager to help on AI alignment – often willing to start as unpaid volunteers – if only we create the channels to engage them.

Limitations of Existing AI Safety Fellowship Programs

Traditional AI safety fellowships and internships (SERI MATS, Cambridge’s ERA fellowship, Anthropic’s Fellows program, AI Safety Camp, etc.) provide valuable training, but they reach only a select few. There are now 20+ full-time AI safety fellowship programs worldwide, yet spots in these programs are extremely scarce relative to demand. Many programs have acceptance rates well under 10% – the typical AI research fellowship accepts fewer than 5% of applicants, which means that for every admitted fellow, roughly twenty or more qualified candidates are turned away. Notably, opportunities for non-research roles are even more limited. Current fellowships overwhelmingly focus on training researchers, while talented engineers, managers, and communicators are largely “locked out” of the pipeline. Programs that do target policy, operations, or other non-technical roles receive an onslaught of applications, demonstrating huge unmet demand. This competitive filter not only leaves many capable people without a path in, but also sends a discouraging signal that “(technical) research is the only way in” to have an impact. As a result, valuable skill sets in policy, advocacy, and organization-building get underutilized while everyone chases a few research spots.

Another problem is the fragmentation and short horizon of work done in these fellowships. Most AI safety fellowships are brief (8 weeks to a few months) and project-focused. Dozens of small, independent research projects run in parallel, but there is often no overarching coordination among them. Each fellow or team pursues its own idea, which encourages exploration but also means efforts can be scattered and duplicative. Crucially, many fellowship projects end once the program ends – with work left unfinished or papers unpublished – because the participants must return to school or jobs unless they secure further funding. Organizers explicitly acknowledge a “hits-based” approach where some projects succeed and others fail or fizzle out. Without a permanent institutional home, promising research threads risk dying on the vine. Furthermore, the lack of a unified strategy can lead to gaps in coverage: important alignment problems might fall through the cracks if no team happens to pick them up. In the current model, we effectively have “100 independent mini-research groups” each semester, rather than one coordinated effort. This fragmentation has genuine advantages, but it also has downsides that a complementary structure could address. The status quo produces a lot of activity, but in a piecemeal, transient way – leaving much talent unused and many projects incomplete or unaligned with the bigger picture.

The research-centric nature of fellowships has also created talent gaps in the broader AI safety ecosystem. Non-research expertise – in areas like management, operations, communication, and policy – is in short supply relative to the field’s needs. There seems to be a consensus that “non-research roles are more important to recruit for at this time” in AI safety organizations. However, very few training programs exist for these roles, and the field has struggled to integrate people who don’t fit the “researcher” mold. This is a structural problem: when nearly all fellowships signal that research is the main path to impact, many who might excel in policy or coordination either try to force themselves into research (and often get filtered out), or give up on the field. The result is an underutilization of people who could be top contributors in non-research domains. For instance, someone in the 94th percentile of research ability (not quite making the cut for a fellowship) might be in the 99th percentile at government or advocacy work – yet current introductory programs offer them little avenue into the field. This mismatch leaves critical functions understaffed. Experts have pointed to shortages of organizations and leaders in AI safety as key bottlenecks, not just a shortage of technical ideas. In other words, the community needs more builders, organizers, and specialists in implementation to translate research into impact. Existing fellowships do very little to cultivate that broader talent pool.

Toward a Coordinated, Inclusive Model for AI Safety Research

The challenges above underscore why Theomachia Labs’ approach may be valuable. By creating a volunteer-powered, long-term research organization, Theomachia aims to solve the talent utilization and coordination problems with a new format.

Theomachia Labs recognizes that the AI safety community’s greatest asset may be the thousands of capable individuals eager to contribute outside the tiny elite of fellowship winners. Rather than letting this “bycatch” go to waste, Theomachia provides an open door for anyone globally who is motivated and qualified to help – including those who can only contribute part-time or cannot relocate. This inclusive ethos meets people “where they are,” much like recent part-time programs (e.g. TARA) have done to accommodate professionals with other commitments. By structuring as a volunteer organization, Theomachia taps into altruistic energy that already exists in droves. History shows that volunteers, when well-coordinated, can significantly amplify a field’s capacity. For example, AI Safety Camp has a many-year track record of incubating new researchers through volunteer-led projects; as of 2024, alumni from its volunteer teams went on to found 10 organizations and land 43 jobs in AI safety, proving the model’s efficacy. Theomachia Labs extends this concept by giving volunteers a permanent home to continue contributing beyond a short sprint. This benefits the individuals (who gain experience and a network) and the field (which gains their labor and ideas). As one AI Safety Camp mentor noted, many high-neglect areas can be “kickstarted with volunteer effort” and then attract funding after initial successes. Theomachia Labs is built to systematically unlock that volunteer potential at scale, globally.

Unlike the ad-hoc project selection in many fellowships, Theomachia Labs will pursue a unified research agenda guided by domain experts and prediction markets. This centralized prioritization ensures that volunteer researchers aren’t each reinventing the wheel or chasing pet projects in isolation. Instead, efforts will align with the most pressing unsolved problems in AI alignment, as identified by expert consensus. This addresses the critique that current fellowship outputs are scattershot and lack a clear strategic focus. By operating as one cohesive organization rather than disparate cohorts, Theomachia can direct dozens of contributors toward common goals with clarity and purpose. Clear structure and internal accountability mean projects are less likely to fall through the cracks. Moreover, a permanent lab can undertake multi-phase or long-term research that an 8-week fellowship simply cannot. Promising work won’t be abandoned for lack of next-step support – Theomachia provides the scaffolding to carry research from initial idea to published result and beyond. In essence, the lab’s coordinated model turns what are currently fragmented efforts into a synergistic program. This responds directly to calls in the community for more organizational capacity: field-builders have noted a shortage of structured institutions to absorb and organize new talent.

A key innovation of Theomachia Labs is its ecosystem approach: welcoming volunteers in operations, outreach, HR, and other support roles, not only technical research. This is crucial because effective AI safety work is multidisciplinary and requires more than just theorists; it needs project managers, communicators, engineers, policy analysts, community-builders, and more. By explicitly valuing every function, Theomachia can leverage talents that other programs overlook. By providing pathways for people with diverse backgrounds – whether an HR specialist or a social media manager – the lab builds out the robust support structure that a growing field demands. For example, even tasks like coordinating research efforts and improving organizational processes can have outsized impact on AI alignment progress. Theomachia Labs’ volunteers in operations and management will directly address those needs, amplifying the productivity of the research teams. In the long run, this creates a more resilient talent pipeline: someone who starts in an ops or communications volunteer role can later transition into a paid leadership position as they gain experience and prove their dedication.

Finally, Theomachia Labs explicitly serves as a launchpad for careers – a response to the frustration many feel about “no way in” unless you get a top fellowship. Contributors to Theomachia will gain real project experience, mentorship from expert advisors, and demonstrable achievements, all of which make them strong candidates for paid roles in the wider AI safety ecosystem. It is well-known that many existing organizations have hired staff who initially came from volunteer or fellowship backgrounds. Theomachia formalizes this pathway: volunteers who show impact and leadership can advance to core team roles with compensation as the lab grows. This creates an incentive for talented people to participate even if unpaid at first – there is a clear meritocratic ladder to climb. Additionally, by rotating volunteers through different functions and projects, Theomachia will help them build a broad skill set. This addresses the “experience gap” problem: after programs like SERI MATS, many alumni still struggle to find jobs because of limited publication records or niche expertise. In Theomachia’s model, however, a volunteer might spend a year or even two contributing and end up with a few co-authored papers, a network of professional contacts, and leadership experience organizing a team – all of which significantly improve their employability. By bridging the gap between eager beginner and hired expert, Theomachia Labs could vastly increase the flow of talent into the AI alignment field.

We believe that the value proposition of Theomachia Labs is strongly supported by current data and trends in AI safety. The field is overflowing with capable people who want to help with the problem, but far too many are currently left out or underutilized. Existing fellowship programs, while helpful, are insufficient along some dimensions in ways that complementary institutions can fix: they cherry-pick a few individuals, splinter efforts into short projects, and leave systemic talent gaps in their wake. Theomachia Labs’ coordinated, volunteer-centric model directly addresses these issues by scaling opportunities to everyone globally, focusing efforts on priority research, and nurturing an inclusive community where all roles can contribute. This approach aligns with expert recommendations to widen the AI safety pipeline and build more lasting infrastructure for the field. By converting latent enthusiasm into organized action, Theomachia Labs aims to produce alignment research at greater scale and with greater consistency – and to turn today’s passionate volunteers into tomorrow’s leaders in the fight for safe AI.

The Team

We are actively recruiting for all roles, both in operations and research!

Express interest here.


Ihor Kendiukhov

CEO

Ihor holds degrees in math, bioinformatics, economics, and biology. He founded a business analytics startup, did research in quant finance and AI for biology, and worked on AI safety at AI Safety Camp and SPAR.

  • LinkedIn
  • Twitter

Sidrah Hassan

PauseAI Projects Manager

Sidrah has an academic background in International Politics and an MA in Public Policy, with experience leading AI ethics and AI governance in corporate organisations; most recently, she co-founded her own AI governance platform. Sidrah is passionate about AI policy and governance as an important defence against societal harms and x-risks.

  • LinkedIn

Noah Chaikin

AI Governance Project Manager

Noah is completing a Master’s in International Relations at Harvard, where his work focuses on AI governance, geostrategic risk, and global policy. He is the founder of The Diplomatic Wire, a platform that translates complex global and technological issues into accessible intelligence. Noah has experience in consulting, humanitarian work, and policy research, and is particularly interested in how cooperative AI governance can strengthen international stability and mitigate existential risk.

  • LinkedIn

John Lund

Director of Operations

John has a BA and MA in philosophy specializing in applied ethics and philosophy of science. He has founded multiple community organizations and small businesses. Most recently, he worked as a technical director building apps for Fortune 500 companies before transitioning into AI safety.

  • LinkedIn

Joe Taraz

Operations Manager

Joe has degrees in applied mathematics and physics. Before transitioning to AI safety, he worked on scientific machine learning, computational chemistry, and statistical physics.

  • LinkedIn

Advisors


Holly Elmore

Executive Director of PauseAI US


Trevor Lohrbeer  

Independent Researcher


Jaime Raldua

CEO of Apart Research

Partners


Join our mailing list for updates on publications and events

© 2025 by Theomachia Labs
