The Open Philanthropy Maze Fueling the Speculative Harms of AI

A partial draft of the EA funding ecosystem in the U.S. and UK

When I first started researching my chapter on the risks associated with artificial intelligence for the third edition of Inside Cyber Warfare, I was under the impression that existential risk (x-risk) was a legitimate concern, and that the various non-profits doing research on the Alignment problem had independently arrived at the same conclusion.

It wasn't until OpenAI's Thanksgiving massacre that I became aware of the role that Effective Altruism and Open Philanthropy played in propagating that concern.

Every time I learned of a non-profit AI safety research organization, I checked to see if it had received a grant from Open Philanthropy, and if its founders were associated with the AI Alignment Forum, the EA Forum, or LessWrong.

They were, and they had.

I checked with a few AI computer scientists who I knew weren't affiliated with EA and asked for their opinions on AI safety and existential risk. All of them agreed that too much emphasis was being placed on the Alignment problem, at best a speculative harm, instead of on present-day threats from AI such as cognitive warfare (disinformation and misinformation), bias, autonomous weapons, privacy, and more.

I've started this newsletter to help researchers, policymakers, and the general public understand the messaging around existential risk and make a more informed judgment about which dangers are real and which are hypothetical.

I'll be publishing on a more or less weekly basis, as well as providing fresh drafts of the EA MindMap. If you think this effort is worthwhile, please consider becoming a subscriber.