Media

Talks and interviews on organic alignment and the future of AGI as cooperative beings. For essays and research, see our writing.

Scaling Up Organic Alignment

A technical overview of how Softmax uses multi-agent reinforcement learning to evolve cooperation.

Softmax Introduction · July 28, 2025

Why Nature Holds the Answer to Alignment

Exploring how biological systems achieve alignment through collective identities and why we should mimic this in AI.

Win-Win Podcast · September 29, 2025

Building AI That Actually Cares

Emmett challenges the steering-and-control paradigm and explains why AGI must be treated as a teammate rather than a tool.

a16z Podcast · November 17, 2025

Controlling Tools or Aligning Creatures?

A deep dive into multi-agent simulations, theory of mind, and the moral status of advanced systems.

The Cognitive Revolution · December 27, 2025

Explaining AI to the Humanities

A philosophical exploration of AI, discussing interiority, art, and why alignment is a shared direction, not a restriction.

The Hope Axis · January 31, 2026