19 Comments
Bella

Hey Matt!

First off, thanks for the thoughtful, good-faith critique of 80k, and thanks for sending us an outline of this post a couple of days ago so that we could put together a response ahead of time if we wanted. That was kind of you, and it prompted some good discussion between 80k folks on the issues you raise.

A broad reaction I have is that these issues are really thorny, and I’m not sure that 80k is striking the right balance on any of them — though I do, overall, feel better about both our epistemics and transparency than it seems you do.

I wanted to briefly comment on each of the issues raised by your post:

1. Epistemics & deference internally (did 80k pivot more of our effort to preventing harms from AGI because of community insularity and in-group signalling?)

- It’s very hard to accurately introspect about one’s reasons for holding particular beliefs, and I personally am very wary about how I am (or might be) influenced by social dynamics, epistemic cascades, and (unconscious) desire to signal in-group status.

- I will say that there’s been a lot (hundreds of hours? Certainly 100s of pages written) of internal discussion at 80k over the 3.5 years of my tenure about whether to focus more on AGI. So I feel confident in saying that we didn’t make our strategic pivot without thinking through alternatives and downsides like the ones you mention.

- We definitely care a lot about trying our best to make high-stakes decisions well & on the basis of the best available evidence and arguments. It’s hard to know whether we’ve successfully done that, because these topics are so complicated and reasonable people disagree about how to approach them.

- Speaking very personally for a second here, I feel like the world at large is catching up to the small in-group you're talking about, in a pretty major way. AI companies make headline news regularly. My random British aunt is a ChatGPT power-user and is really worried about what it's going to do to her kids. Politicians from JD Vance to Bernie Sanders have voiced concerns. I think this is a sign, albeit a pretty weak one, that we're not totally getting taken in by niche, in-group worries.

2. Epistemics & deference with respect to our audience (Insofar as 80k is “giving advice” and stating claims like those made in the AI 2027 video, are we being insufficiently humble about what it takes to get an audience to come around?)

- I don’t think it’s manipulative to an audience to tell them things that we think are true and important, and I think everything we say in the video [1] meets that bar.

- The AI 2027 scenario was written by people with impressive track records as an exercise in storytelling and forecasting. I think it's fine to tell a story as a story (I think 'there isn't a model presented here' because presenting one isn't what the video is trying to do or be), and I think the video shows real emotions and reactions that Aric and others have had in response to the story.

- Personally, I find illustrations / stories / “fleshed-out possibilities” like this helpful — like how politicians use wargames — but you might disagree.

- As a more general point, we do really care about good epistemics & helping people think well — that’s why we try to present clear arguments for our conclusions, give counterarguments, and give readers a sense of how we ended up with our beliefs / why they might disagree.

3. Transparency ("hiding" our contingent empirical beliefs behind frameworks and examples)

- Your post points to the underlying tension between 80k giving out a framework for how to think about doing good in the world, and 80k having particular beliefs about how to do that (which are partly grounded in not-universally-held empirical beliefs, like “AGI might come soon”).

- This presents a communication challenge: how do we honestly tell readers things we think are true and important about the world, while also trying to provide value through a career framework that we think is useful regardless of whether they share (all of) those beliefs?

- I think this is a real tension. It’s genuinely unclear to me how to be “upfront” about all of your beliefs when there are lots of load-bearing beliefs you hold, and understanding exactly how you came to them could require a hundred hours of reading; you can’t tell newcomers everything at once.

- We're still figuring out how best to balance more AI-focused empirical claims & more framework-y ones; but I don’t think it’d be best for us to go all in on one or the other (because I think both provide a bunch of value to our users and the world!).

4. 80k will alienate its most important target audience as a result of this shift

- I think this is possible, but neither of us has enough empirical results yet to feel confident.

- I think we still have lots to offer these kinds of people, and I think those we lose might well be worth what we've gained in terms of honesty and ability to have much better resources on our top cause area — but time will tell :)

[1] Probably also our site, but there’s so much there and I haven’t read all of it. I do generally think we have good research principles and do our best to say things we think are true based on our read of the best available evidence.

Rubi Hudson

I first got pointed to 80k in ~2017 as a site with good general career advice, not even necessarily for trying to do good. That was one of several exposures to the EA community that collectively changed the direction of my career.

Well before the pivot to center AI safety, I got the impression that 80k was pivoting to EA more broadly. It became a better resource for those who already wanted to do good, at the cost of being less appealing to people who were uncertain what they wanted out of their career. I see the shift towards AI safety more as a continuation of that trend, rather than a sharp change of direction, sacrificing broader appeal to focus on the people who are almost there. I don't think that prioritization is necessarily wrong, though I expect the Pareto frontier would allow for improvements in both directions.

Matt Reardon

Strong theoretical argument for the pivot: focus in on a growing subset of more tractable users.

80k was founded in 2011 by Ben Todd and Will MacAskill as a project within the Centre for Effective Altruism, though. It remains possible they wanted broader appeal earlier and let their opinionatedness grow over time, but my read is that EA was always the guiding philosophy.

Avik Garg

I have a lot of critiques of 80k’s transition, but I’m skeptical of this take. Making AI 2027 claims (and having them on TikTok) seems a great way of reaching many smart 19-year-olds. AI 2027 itself was quite successful, and AI Safety clubs at unis are far more appealing than principles-first EA. And from my experience with AI Safety clubs, it’s far more persuasive to give prospective members extreme messages than the most intellectually honest case.

I agree talking about AI in 2018 wouldn’t have been so successful; there are still weirdos like me who like podcasts about deworming more than anything about AI; and I agree that talking about longtermism as a philosophy would turn viewers away.

But AI is sexy, and I would be shocked if this shift doesn’t get more people in the door. I think there are smart people that 80k will lose, but on net they’ll probably get more smart people in the door, especially among those likely to go work and succeed in technical AI safety (being drawn to AI off the bat is a predictor of future success in the field).

Matt Reardon

This is a fair criticism. How much does footnote 2 resolve it? It seems better to me to just make aicareers.com (though why even constrain yourself to careers if hype is the point) than to repurpose 80k, and that's the straightforward analogy to student groups.

I'll also say that it depends on your model of success here. Do you think ~80% of 80k's future impact will come from its top 50 plan changes or its top 500 plan changes? I think this hurts the top 50 model much more than the top 500 model.

But yes, the risk that I'm pining for times gone by is real. I think the test of much of the EA -> AIS rebranding will be how well projects maintain focus on x-risk, and obviously full marks to 80k and the video on that score.

Avik Garg

I think your point that another org could have done this (and the rest of what new 80k is aiming for!) is persuasive… the Futures Project itself could get some video people, for example.

The idea that this is turning away top 50 talent for top 500 talent seems wrong to me. If you want the future top 50 academics (specifically academics outside of STEM) to spend their time on AI, this video isn’t gonna do the job. But for the best mathematicians, programmers, future leaders, policy researchers, entrepreneurs, ops people, etc., this video is far better than the intellectually rigorous version.

I just think the AI versus EA student club experience is pretty good evidence here. AI clubs across top universities do a better job of getting top talent.

Matt Reardon

Yep. There's a good chance I'm just way too rooted in how people thought about opportunities and what was possible in the world as of 2015. Maybe with Trump, Covid, and the deep nihilism of the internet, not to mention AI itself, young people now are just much more willing to entertain extreme possibilities than people who came of age during the end of history.

Ironically, this might mean the pivot is good for young people but bad for people in their 30s-40s who are seen as most valuable for short timelines (but less tractable for intellectual openness reasons under any model).

Daniel Kokotajlo

Thanks for this critique, it stings & presumably is supposed to. I think it's pretty unfair to us & to 80k and want to plead with you a bit for mercy/understanding. You say:

"Watch the video yourself, but my read is that it is essentially epistemically bankrupt. Aside from some disclaimers about how this is just one model and things are very unlikely to play out in exactly this way (and, erm, neither 80k nor the authors of the paper even believe the prediction), the lion’s share of the video is an immersive dramatization of the AI 2027 scenario, complete with:

...

I don’t think playing a few clips of people who disagree and noting this is “unlikely, but plausible” lets you get away with how you spend the other 80% of the video. There isn’t a model presented here or many “why” or “how” questions raised or answered – just an exchange of assertions from authorities about how we should project lines on graphs forward and the ghost story one such projection might imply. It’s hard to imagine an intelligent, capable potential contributor to AI risk mitigation *who hasn’t already heard and accepted the premises of AI risk* seeing this video as their first introduction to the concept coming away compelled to learn more."

Is this a criticism of AI 2027 or of 80k's video? Insofar as it's a criticism of AI 2027, I mean, what would you do in our position? If you thought that sometime by the end of this decade, probably but not necessarily, one or more of these companies would succeed in building superintelligence (which is literally what they say they are trying to do)... what would you do? Our thought was "Let's try to take a best guess at what that might look like, and really think hard about it and write it up, and then publish it. More people need to be thinking about this."

Matt Reardon

Sorry if I didn't make it clear enough that this post is really about 80k and whether it's appropriate for 80k to be investing a bunch in the scenario portion of AI 2027 specifically.

AI 2027 is an okay product that adds value to the discourse. I think the explicit forecasts and methodology are defensible and even the scenario is defensible when taken in the context of the forecasts and methodology.

What disappoints me (and what is obviously zero percent the fault of the AI Futures Project) is that you're the only ones to have put out something this thorough and put this much energy into disseminating it. Your timelines are way out on the tail of the AI Safety community, but the community hasn't been able to produce anything close to this in terms of reach and authority. This is despite more moderate timelines (and implied scenarios) being far more defensible in my view.

Perhaps you agree that given the differences of opinion you have with others around here, it would make more sense for them to take AI 2027 as a challenge to make something better (by their lights) rather than adopting AI 2027 as their cultural lodestar. It's the adopters – 80k in this case – I'm upset with, not you.

Daniel Kokotajlo

OK, thanks. I totally agree that people should take AI 2027 as a challenge to make their own scenario forecasts that they think are better; we explicitly said when we launched that part of our goal was to inspire people to do that (especially critics).

But it kinda got buried amongst all the other things we were saying. If I could do it all over again I would have changed around what things I emphasized and repeated most.

Daniel Kokotajlo

More detailed thoughts, less important than my high-level question above, in case interested:

--You say it's epistemically bankrupt. But basically your only argument is that it's an immersive dramatization of a scenario. Are you opposed to immersive dramatizations of scenarios in principle, or do you think there's a way to do them that's not epistemically bankrupt?

--They don't answer many why or how questions, but the AI 2027 website does, insofar as it's possible to do so given the format. (It's a scenario! It has lots of footnotes, but it doesn't justify literally every choice made. But it overall does quite a lot of work to explain and justify itself.)

--You say it's hard to imagine this compelling anyone to learn more unless they've already heard and accepted the premises. Maybe, but also maybe not; it's gotten a lot of views and positive feedback already, and I think we can just wait and see empirically whether it reaches people.

Matt Reardon

Again, this is not about AI 2027 as a whole, just the video as an 80k product. I do think scenarios without models generally have zero epistemic content, and I do expect epistemic content from 80k. Sorry if "bankrupt" makes it sound malicious on anyone's part, especially yours since this isn't about AI 2027!

Alix Pham

I'm late to the party, but I finally got around to reading this. Thanks, Matt, for writing this! It encapsulates a lot of my critical thoughts about 80k's shift. I miss principles-first 80k. 2020 80k resonated a lot with 2020 me, and I don't think I would have made my way to where I am now with 2025 80k, as it took me a lot of time and processing to make the choices I made and end up in the AI space. I think I would have dismissed it much earlier. I expect 2020 me is not too dissimilar to your average "proto-EA" looking for an impactful job, who might now bounce off instead of digging in.

Very well written!

Dylan Richardson

This is correct. 80k was one of the major orgs that stood for cause neutrality, and I'm sad to see this happen (particularly in regard to the podcast!). I still don't entirely understand why it happened - was it a handful of senior people that shifted views? Or did it value-shift itself over time by hiring AI people disproportionately?

Matt Reardon

"Cause neutrality" always meant "cause impartiality." You can't be truly neutral between causes and maintain a coherent commitment to effectiveness. One way to frame the 80k shift is to say they've stopped doing *cause prioritization* to do role prioritization or intervention prioritization within AI safety because AI safety is far and away the most important cause.

As a conceptual matter, cause impartiality endorses this. If an asteroid is five years away from Earth and no one knows it and you need at least 10,000 of our best and brightest working on it, it makes sense to focus almost exclusively on that absent some galaxy-brained reasons for taking a different approach. If you accept the AI risk premises that 80k does, I'm the one arguing for something galaxy brained here.

Dylan Richardson

That's fair. I suppose it may well be the case that they've still retained the important foundational principles and frameworks. I just worry that they may not be persistent. It seems easier to narrow focus than to re-widen it (I don't expect them to do that, but hopefully they prove me wrong if AI dev lags). Cause neutrality/impartiality just seems like a particularly fragile value in general. Once institutions shift away, individuals may follow and EA as a collective project may trifurcate.

Seems worthwhile to keep a show of doing prioritization to some extent, even if you are clearly leaning one way.

Noah Birnbaum

Great points all around. A few things to add:

1) 80k is a general funnel for EA (see the 2022 survey). We just won’t funnel the right people if we’re funneling for a specific take on AI rather than general EA first principles. Another funnel is uni groups, and uni groups rely on 80k to give career advice! As a UChicago organizer, I cannot honestly tell people to go to 80k for EA career advice when I know they’re just trying to AI-pill everyone.

2) AI is a means - not an end. The end is EA first principles. If you maximize too hard on an application of your principles rather than your principles themselves, you just become another specific ideology. My favorite thing about EA by far is the idea that they take ideas seriously wherever they go and are willing to shift in light of new evidence - this is MUCH harder to do if you put your eggs in one basket.

3) EAs generally make a good point about diminishing returns not really existing on doing good (at least in our position, probably). The issue is (as you point out) that reputation does have diminishing returns, massive ones in fact. EA getting AI wrong will be an SBF-level PR catastrophe. Consider: what if EA is wrong about AI? What happens to EA? If you think EA is likely to diminish the chance of some x-risk by x%, then losing EA is itself x% (plus more) of an existential risk!

To defend 80k, I had a call with them (after you recommended doing so despite the transition to AGI), and they framed it differently than the way you’re describing it here (and my impression of their post); it was less "you should go into AI" and more "you should at least take this seriously when you’re thinking about careers" (i.e. animals, etc.). Tbh, I’m a little skeptical that this is how they say it to everyone (maybe I shouldn’t be), but if it is, that doesn’t seem nearly as bad.

Matt Reardon

I think EA values endorse letting EA get subsumed by the top problem if it's sufficiently pressing and subsuming EA itself is worth it. This will be the topic of episode 8 of Expected Volume. On this telling, I am (and maybe you are) too enamored with the ideas themselves.

What your advisor said seems super wrong to me by the way. There's a 50% chance we go through the looking glass on AI, so think about what that means for factory farming (which will either have the same shape it does now or be totally unrecognizable through the looking glass??)

Richard Meadows

Good post. Rob is a goated podcast host; the change of direction was so disappointing (although I understand why they did it). Hopefully they'll be able to course-correct in a few years' time rather than keep doubling down.
