Inside AI Warfare
A newsletter that provides in-depth research and analysis of AI risk and related safety topics.

Latest

May 02
Is The UK's AI Safety Institute Dominated by X-Risk Advocates?

The UK and the US are simultaneously standing up AI Safety Institutes. Full disclosure is needed to avoid regulatory capture by the AI-as-Existential-Risk lobby.
4 min read
Apr 26
Exploring the Esoteric Pathways to AI Sentience (Part One)

Exploring Shoggoths, Tulpas, and other supernatural entities as vehicles for consciousness in a human-level artificial intelligence.
3 min read
Apr 12
OpenAI Fires Two Researchers For Leaking Information

On April 11, 2024, The Information reported that Leopold Aschenbrenner and Pavel Izmailov were fired from OpenAI for leaking information. Exactly what they stand accused of leaking wasn't reported.
2 min read
Apr 11
AI Experts Differ Widely On Existential Risk

3 min read
Apr 01
An Open Source Investigation into the Future of Life Institute

For reasons unknown, the Future of Life Institute doesn't want to be associated with Effective Altruism, even though its President and its Board have repeatedly been involved with EA and CFAR over the years.
5 min read
Mar 28
The Open Philanthropy Maze Fueling the Speculative Harms of AI

Every time I learned of a non-profit AI safety research organization, I checked whether it had received a grant from Open Philanthropy and whether its founder(s) were associated with the AI Alignment Forum, the EA Forum, or LessWrong. It had, and they were.
1 min read