Discussion about this post

Avik Garg

What always sold me on effective altruism was its claim that most things people think make the world better either aren't cost-effective or are outright counterproductive, because making the world better is hard and counterintuitive. (As Caplan says, EA is a rebuke to social desirability bias.)

I think the root of the problem is fast timelines. In fast-timeline worlds, people feel forced to take epistemic shortcuts. There's no time to do the math, much less double-check the assumptions, and so the relatively quick judgments of experts have to take over.

I’m worried this will go quite wrong, so if anything, I think you understate the issue here. As EA principles get relegated to the back burner, to be revisited once smart EAs have power and resources, the internal debate needed to handle complex decisions is lost. It’s not sufficient for leaders to believe in EA; there needs to be a place to identify, research, and debate problems.

This doesn’t mean being paralyzed by every uncertainty, but as you mention in the post, it means constantly doing cause prioritization and having the institutional ability to shift in response to innovations in cause prioritization.

Habryka

Thank you! I disagree with you on elevating veganism as a moral standard or as a measure of anything going well in EA, but agree with you on a lot of the rest. Thanks for putting into clear (and less frustration-laden) language a lot of stuff I've been trying to express.

Re veganism: it's really not cost-effective! It's among the least cost-effective interventions out there! The whole obsession with veganism as moral signaling was one of the things that most put me off the historical EA community. Why do people keep elevating it so strangely?

15 more comments...