
E12: Effective Accelerationism and the AI Safety Debate with Bayeslord, Beff Jezoz, and Nathan Labenz

Podcast: "Moment of Zen"
Pub date: 2023-03-11 • 2h 12m



The anonymous founders of the Effective Accelerationist (e/acc) movement, @Bayeslord and Beff Jezoz (@BasedBeff), join Erik Torenberg, Dan Romero, and Nathan Labenz to debate views on AI safety. We record our interviews with Riverside. Go to https://bit.ly/Riverside_MoZ and use code ZEN for 20% off.

(3:00) Intro to effective accelerationism

(8:00) Differences between effective accelerationism and effective altruism

(23:00) Effective accelerationism is bottom-up

(42:00) Transhumanism

(46:00) "Equanimity amidst the singularity"

(48:30) Why AI safety is the wrong frame

(56:00) Pushing back against effective accelerationism

(1:06:00) The case for AI safety

(1:24:00) Upgrading civilizational infrastructure

(1:33:00) Effective accelerationism is anti-fragile

(1:39:00) Will we botch AI like we botched nuclear?

(1:46:00) Hidden costs of emphasizing downsides

(2:00:00) Are we in the same position as Neanderthals, before humans?

(2:09:00) "Doomerism has an unpriced opportunity cost of upside"

More show notes and reading material are available on our Substack:
https://momentofzen.substack.com/

Thank you to Secureframe for sponsoring (use code "Moment of Zen" for a 20% discount) and to Graham Bessellieu for production.
