What always sold me on effective altruism was that it said most things people think make the world better either aren't cost-effective or are even counterproductive, because making the world better is hard and counterintuitive (as Caplan says, EA is a rebuke to social desirability bias).
I think the root of the problem is fast timelines. Because in fast-timeline worlds, people feel forced to take epistemic shortcuts. There's no time to do the math, much less double-check the assumptions. And so the relatively quick judgments of experts have to take over.
I'm worried this will go quite wrong, so if anything, I think you understate the issue here. As EA principles get relegated to the back burner, saved for when smart EAs have power and resources, the internal debate needed to handle complex decisions is lost. It's not sufficient for leaders to believe in EA; there needs to be a place to identify, research, and debate problems.
This doesn't mean being paralyzed by every uncertainty, but, as you mention in the post, it means constantly doing cause prioritization and having the institutional ability to shift in response to innovations in it.
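As a toy illustration of what "constantly doing cause prio" could mean in practice, here is a minimal sketch using the importance-tractability-neglectedness framing. Every cause, score, and update below is a hypothetical number chosen only to show that a ranking needs to be able to move when estimates change, not anyone's real estimate.

```python
# Toy cause-prioritization sketch. All scores are hypothetical, 0-10 scale.

def itn_score(importance, tractability, neglectedness):
    """Crude importance x tractability x neglectedness product; higher = more promising."""
    return importance * tractability * neglectedness

# Initial (made-up) estimates.
causes = {
    "global health":  {"importance": 8, "tractability": 7, "neglectedness": 4},
    "animal welfare": {"importance": 7, "tractability": 5, "neglectedness": 6},
    "AI safety":      {"importance": 9, "tractability": 3, "neglectedness": 5},
}

def rank(causes):
    """Return cause names ordered from most to least promising under current estimates."""
    return sorted(causes, key=lambda c: itn_score(**causes[c]), reverse=True)

print("before update:", rank(causes))

# New (again hypothetical) information arrives: AI safety looks less neglected as
# funding and talent pour in. The institutional question is whether the ranking,
# and the resources behind it, can actually shift in response.
causes["AI safety"]["neglectedness"] = 2
print("after update: ", rank(causes))
```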
Completely agree. I also think there's a bit of a chicken-and-egg dynamic with fast timelines: there were already big shifts toward AI-first thinking before short timelines became a dominant talking point, and you're much more likely to be sold on short timelines once you've already committed to AI as the thing and are looking for ways to build alliances.
Thank you! I disagree with you on elevating veganism as a moral standard or as a measure of anything going well in EA, on any view, but agree with you on a lot of the rest. Thanks for putting in clear (and less seeping-with-frustration) language a lot of stuff I've been trying to express.
Re veganism: It's really not cost-effective! It's among the least cost-effective interventions out there! The whole obsession with veganism as moral signaling was one of the things that always put me most off of the historical EA community. Why do people keep elevating it so weirdly?
I completely agree on the effectiveness of personal veganism specifically. I'm gesturing at two things. One is just using veganism as shorthand for animal welfare generally. The other is using it as an example of a personal sacrifice that suggests a given person is emotionally in tune with the moral world (which is not distinctly EA at all, but points to something I think is foundational for EA).
Oh, yeah, I do think it's a decent-ish proxy for that. I personally feel like the GWWC pledge or something in that space is a better proxy. (Relatedly, I am particularly sad that many of the people ending up working on AI safety at very high-paying labs are doing relatively little in terms of charitable giving, though I have a bias here in that Lightcone would be a natural recipient of those donations, so I trust my reasoning here less than usual.)
Last summer, I stumbled upon 80,000 Hours. More accurately, they showed up on my Instagram feed as an advertisement. I never click on ads, but something about 80k called to me and I ended up on their website. I read through their guides, implemented their advice in my own life, and eventually scheduled a call with one of their advisors and received great advice. I spent much of the summer trying to get my friends to do the same, alongside reading up on Effective Altruism and slowly growing "EA-adjacent."
I no longer recommend 80,000 Hours to people. Their AI rebrand has stopped me from doing so: while I still occasionally share the career guide, 80k now primarily exists to funnel people into the AI safety space. I can't in good conscience direct people to the site, not when I know they don't want to become AI-safety-pilled and wouldn't appreciate an attempt on my part to sway them. I myself am no longer a huge x-risk worrier, and have accordingly drifted away from the EA space as it has drifted away from me.
I wish things were different, but I can't help but wonder if this is the natural conclusion of a fundamentally rationalist and high-modernist project. You write that AI safety identitarians "care so much about putting every last chip down on the highest-marginal-EV bet that they risk losing themselves," but I don't think they've lost themselves at all. If you really buy the arguments they make about longtermism and AI Doom, how could you do anything else?
Like you, EAs simply don't have to buy the AI x-risk arguments. They can just stop, hedge more, do something else. EA ideas endorse doing this. The actions of the most legible, high-status community members don't have to define the ideas.
Have you seen Probably Good (https://probablygood.org/)? It might be more your vibe.
Thank you for writing this, Matt. It articulates many of the thoughts that I've been having and writing here and there (privately or publicly), and it takes courage to share them the way you do.
Thanks Alix! Your post from last year on the free-rider problem with EA association was very good and animates a lot of my thinking here.
Thank you! I'm glad it resonated with your thinking!
I think it would be a grave error to reduce EA's "what should one do" to a solved problem: all in on AI Safety. I am yet to be convinced, and even if I were, there are some people, myself included, who feel their skills don't lend themselves well to addressing that particular issue. And pragmatically, it is much better to become a more effective altruist than you were, thanks to EA, than to "bounce off" EA entirely because it looks like it's all about one issue.
Thank you for writing this Matt! I intend to refer many others to it in the near future.
I totally agree with Avik that a key crux here is how fast your timelines are.
I would be VERY tempted to write the antithesis of this post if I felt strongly about ~5 year timelines.
This should be a point in favor of arguing more for intellectual humility and higher levels of uncertainty regarding AI and cause prioritization.
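To make the crux concrete, here is a minimal sketch of how the calculus flips with your probability of short timelines. The payoff numbers are entirely hypothetical and only chosen to show the shape of the argument: going all-in on AI safety looks great if timelines really are short and costly if they aren't, while a diversified portfolio does moderately well either way.

```python
# Toy expected-value comparison under timeline uncertainty. All payoffs are hypothetical.

def expected_value(p_short, payoff_if_short, payoff_if_long):
    """Expected value of a strategy given a probability of short timelines."""
    return p_short * payoff_if_short + (1 - p_short) * payoff_if_long

for p_short in [0.1, 0.3, 0.5, 0.7, 0.9]:
    # Made-up payoffs: all-in wins big if timelines are short, loses (neglected
    # causes, burned credibility) if they are long; diversified is steady.
    all_in = expected_value(p_short, payoff_if_short=100, payoff_if_long=-20)
    diversified = expected_value(p_short, payoff_if_short=40, payoff_if_long=30)
    better = "all-in" if all_in > diversified else "diversified"
    print(f"P(short) = {p_short:.1f}: all-in = {all_in:6.1f}, "
          f"diversified = {diversified:6.1f} -> {better}")
```

With these illustrative numbers the recommendation flips somewhere between P(short) of 0.3 and 0.5, which is exactly why the whole debate hinges on how much confidence your timeline estimate can really bear.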
"They care so much about putting every last chip down on the highest-marginal-EV bet that they risk losing themselves"
I don't think this is accurate.
If a mistake is being made, it is that people consider whether to shift their focus to AI safety through an individual lens, without considering the extent to which others are weighing the same decision, which leads to a bigger collective shift than anyone was anticipating.
To me the salient angle on this story is the funding perspective: Open Phil funds people who agree with its perspective on AI issues, and its GCR team mostly focuses on AI, so orgs outside that paradigm (like ALLFED) wither. The result is a feedback loop in which what OP believes spreads, because that cluster of beliefs is the one that most supports people working on it full time, while other people move away from EA or are not as successful. You could imagine a version of this that is benign, but it becomes really dangerous if OP's beliefs are wrong, because the feedback loops select for fixation.
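A toy model of that loop, with every number and the reinvestment rule invented purely for illustration: a funder gives a little more each round to the cluster it already believes in, funded people stay and reinforce the belief, and everything else slowly withers even though nothing about the underlying merits has changed.

```python
# Toy simulation of a funding feedback loop concentrating on the dominant belief cluster.
# The starting shares and the 0.3 reinvestment bias are hypothetical.

funding_share = {"AI safety": 0.5, "other GCR work": 0.3, "everything else": 0.2}

def step(shares, reinvestment_bias=0.3):
    """Each round, the largest cluster attracts a slice of everyone else's share."""
    leader = max(shares, key=shares.get)
    new = {}
    for cause, share in shares.items():
        if cause == leader:
            new[cause] = share + reinvestment_bias * (1 - share) * 0.2
        else:
            new[cause] = share * (1 - reinvestment_bias * 0.2)
    total = sum(new.values())
    return {cause: share / total for cause, share in new.items()}  # keep shares summing to 1

for year in range(6):
    print(year, {cause: round(share, 2) for cause, share in funding_share.items()})
    funding_share = step(funding_share)
```

The point is not the specific numbers but the dynamic: whichever beliefs start with the most funding keep gaining share, regardless of whether they are right.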
If you find the arguments of "AI 2027" convincing (and I find them very convincing), *we are simply running out of time on AI*.
It's not that doom is inevitable... It's that the growth in capabilities is far outstripping our ability to evaluate these systems. By late 2027, we may easily lose the ability to evaluate them at all.
(If you haven't read "AI 2027", read it. If you have read it but don't find it convincing, put your reasoning out in the open.)