Unifying the AI Safety Research Landscape

AI Safety Connect was founded with a simple observation: brilliant minds working on AI safety are scattered across academia, the Effective Altruism community, and the rationalist sphere. Despite shared goals, these communities often work in isolation, missing opportunities for collaboration and cross-pollination of ideas.

Our Mission

Accelerating AI Safety Through Connection

We believe that the path to beneficial AI requires the collective wisdom of researchers from diverse backgrounds. Academic rigor, EA's focus on impact, and LessWrong's epistemic culture each bring unique strengths to the table.

By mapping the landscape of AI safety research, we make it easier for researchers to find collaborators, discover relevant work, and build on each other's insights rather than duplicating efforts in isolation.

Connect Communities

Break down silos between academic institutions, EA organizations, and the rationalist community.

Surface Insights

Make it easy to discover relevant research, regardless of where it was published.

Enable Collaboration

Facilitate meaningful partnerships that accelerate progress on AI safety.

The Problem We're Solving

AI safety research is fragmented across different platforms, publication venues, and communities with distinct cultures and terminologies.

  • 70% of researchers report difficulty finding relevant work outside their primary community
  • 3x duplication rate for foundational concepts across communities
  • 18 months average delay for ideas to cross community boundaries

Our Data Sources

We aggregate and index content from three primary ecosystems, each with its own strengths.
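
To make "aggregate and index" concrete, the sketch below shows one way records from all three ecosystems could be normalized into a single shape before indexing. It is a minimal sketch under our own assumptions; the field names and the Source labels are illustrative, not a published schema.

```python
from dataclasses import dataclass, field
from datetime import datetime
from enum import Enum


class Source(Enum):
    """The three ecosystems described below."""
    ACADEMIC = "academic"      # arXiv, ML conferences, journals
    EA_FORUM = "ea_forum"      # Effective Altruism Forum
    LESSWRONG = "lesswrong"    # LessWrong


@dataclass
class ResearchItem:
    """One indexed paper or post, normalized across ecosystems."""
    source: Source
    source_id: str                 # e.g. an arXiv ID or a forum post ID
    title: str
    authors: list[str]
    published: datetime
    url: str
    abstract: str = ""
    citations: int | None = None   # academic signal, when available
    karma: int | None = None       # forum signal, when available
    tags: list[str] = field(default_factory=list)
```

A shared shape like this is what lets a single search or relevance model run across sources with very different native metadata.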

Academic Sources

Papers from arXiv, top ML conferences (NeurIPS, ICML, ICLR), and AI safety-focused journals, backed by rigorous peer review and formal methodology.

  • 35,000+ papers indexed
  • Real-time arXiv sync (sketched after this list)
  • Citation network analysis
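
The arXiv sync item above maps naturally onto arXiv's public Atom API (https://info.arxiv.org/help/api/). The sketch below polls it for recently submitted papers; the search query and the returned fields are illustrative choices, not a description of our production pipeline.

```python
from urllib.parse import urlencode

import feedparser  # pip install feedparser

ARXIV_API = "http://export.arxiv.org/api/query"


def fetch_recent(query: str = 'all:"AI safety" OR all:"AI alignment"',
                 max_results: int = 25) -> list[dict]:
    """Return the most recently submitted arXiv entries matching `query`."""
    params = urlencode({
        "search_query": query,
        "start": 0,
        "max_results": max_results,
        "sortBy": "submittedDate",
        "sortOrder": "descending",
    })
    feed = feedparser.parse(f"{ARXIV_API}?{params}")
    return [{
        "arxiv_id": e.id.rsplit("/", 1)[-1],  # e.g. "2405.01234v1"
        "title": " ".join(e.title.split()),   # collapse Atom line breaks
        "authors": [a.name for a in e.authors],
        "published": e.published,
    } for e in feed.entries]
```

Run on a schedule, a poller like this keeps the index close to real time.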

EA Forum

Posts from the Effective Altruism Forum, focused on impact-driven approaches to AI safety and practical interventions.

  • 8,000+ posts indexed
  • Karma-weighted relevance (see the sketch below)
  • Organization mapping
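
Karma-weighted relevance can be read as: rank posts by textual relevance, then boost posts the community has vetted. The function below is one plausible weighting, a sketch under our own assumptions rather than the forum's or our actual ranking formula.

```python
import math


def karma_weighted_score(text_relevance: float, karma: int) -> float:
    """Combine a 0-1 text-relevance score with a post's karma.

    log1p damping keeps a 500-karma post from drowning out a highly
    relevant 20-karma one; the exact weighting is an assumption.
    """
    return text_relevance * (1.0 + math.log1p(max(karma, 0)))
```

For example, a 0.9-relevance post with 20 karma and a 0.5-relevance post with 500 karma both score roughly 3.6, which is the balance the log damping aims for.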

LessWrong

The rationalist community's intellectual home, with deep dives into alignment theory, decision theory, and AI risk scenarios.

  • 12,000+ posts indexed
  • Sequence tracking
  • Concept graph building (see the sketch below)
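
Concept graph building, the last item above, suggests linking ideas that recur across posts. Below is a minimal co-occurrence version using networkx; the assumption that each post already carries a `concepts` list (from tags or keyword extraction) is ours.

```python
from itertools import combinations

import networkx as nx  # pip install networkx


def build_concept_graph(posts: list[dict]) -> nx.Graph:
    """Connect concepts that appear on the same post.

    Edge weight counts how many posts mention both concepts,
    so heavily co-discussed ideas form strong links.
    """
    graph = nx.Graph()
    for post in posts:
        for a, b in combinations(sorted(set(post["concepts"])), 2):
            if graph.has_edge(a, b):
                graph[a][b]["weight"] += 1
            else:
                graph.add_edge(a, b, weight=1)
    return graph
```

Heavily weighted edges (e.g. between "inner alignment" and "mesa-optimization") then become candidates for cross-post navigation.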

Built by Researchers, for Researchers

AI Safety Connect is developed by a team with roots in both academic AI research and the EA/rationalist communities. We understand the challenges of navigating this landscape firsthand.