Discussion about this post

Bella

Hey Matt!

First off, thanks for the thoughtful, good-faith critique of 80k, and thanks for sending us an outline of this post a couple of days ago so that we could put together a response ahead of time if we wanted. That was kind of you, and it prompted some good discussion between 80k folks on the issues you raise.

A broad reaction I have is that these issues are really thorny, and I’m not sure that 80k is striking the right balance on any of them — though I do, overall, feel better about both our epistemics and transparency than it seems you do.

I wanted to briefly comment on each of the issues raised by your post:

1. Epistemics & deference internally (did 80k pivot more of our effort toward preventing harms from AGI because of community insularity and in-group signalling?)

- It’s very hard to accurately introspect about one’s reasons for holding particular beliefs, and I personally am very wary about how I am (or might be) influenced by social dynamics, epistemic cascades, and (unconscious) desire to signal in-group status.

- I will say that there’s been a lot (hundreds of hours, and certainly hundreds of pages written) of internal discussion at 80k over the 3.5 years of my tenure about whether to focus more on AGI. So I feel confident in saying that we didn’t make our strategic pivot without thinking through alternatives and downsides like the ones you mention.

- We definitely care a lot about trying our best to make high-stakes decisions well & on the basis of the best available evidence and arguments. It’s hard to know whether we’ve successfully done that, because these topics are so complicated and reasonable people disagree about how to approach them.

- Speaking very personally for a second here, I feel like the world at large is catching up to the small in-group you're talking about, in a pretty major way. AI companies make headline news regularly. My random British aunt is a ChatGPT power-user and is really worried about what it's going to do to her kids. Politicians from JD Vance to Bernie Sanders have voiced concerns. I think this is a sign, albeit a pretty weak one, that we're not totally getting taken in by niche, in-group worries.

2. Epistemics & deference with respect to our audience (insofar as 80k is “giving advice” and stating claims like those made in the AI 2027 video, are we being insufficiently humble about what it takes to get an audience to come around?)

- I don’t think it’s manipulative to tell an audience things that we think are true and important, and I think everything we say in the video [1] meets that bar.

- The AI 2027 scenario was written by people with impressive track records as an exercise in storytelling and forecasting. I think it's fine to tell a story as a story (I agree there isn't a model presented here, but that's because presenting a model isn't what the video is trying to do or be), and I think the video shows real emotions and reactions that Aric and others have had in response to the story.

- Personally, I find illustrations / stories / “fleshed-out possibilities” like this helpful — like how politicians use wargames — but you might disagree.

- As a more general point, we do really care about good epistemics & helping people think well — that’s why we try to present clear arguments for our conclusions, give counterarguments, and give readers a sense of how we ended up with our beliefs / why they might disagree.

3. Transparency ("hiding" our contingent empirical beliefs behind frameworks and examples)

- Your post points to the underlying tension between 80k giving out a framework for how to think about doing good in the world, and 80k having particular beliefs about how to do that (which are partly grounded in not-universally-held empirical beliefs, like “AGI might come soon”).

- This presents a communication challenge: how do we honestly tell readers things we think are true and important about the world, while also trying to provide value through a career framework that we think is useful regardless of whether they share (all of) those beliefs?

- I think this is a real tension. It’s genuinely unclear to me how to be “upfront” about all of your beliefs when there are lots of load-bearing beliefs you hold, and understanding exactly how you came to them could require a hundred hours of reading; you can’t tell newcomers everything at once.

- We're still figuring out how best to balance more AI-focused empirical claims & more framework-y ones; but I don’t think it’d be best for us to go all in on one or the other (because I think both provide a bunch of value to our users and the world!).

4. 80k will alienate its most important target audience as a result of this shift

- I think this is possible, but neither of us has enough empirical results yet to feel confident.

- I think we still have lots to offer these kinds of people, and I think losing some of them may well be worth what we've gained in terms of honesty and the ability to have much better resources on our top cause area — but time will tell :)

[1] Probably also our site, but there’s so much there and I haven’t read all of it. I do generally think we have good research principles and do our best to say things we think are true based on our read of the best available evidence.

Rubi Hudson

I first got pointed to 80k in ~2017 as a site with good general career advice, not even necessarily for trying to do good. That was one of several exposures to the EA community that collectively changed the direction of my career.

Well before the pivot to center AI safety, I got the impression that 80k was pivoting to EA more broadly. It became a better resource for those who already wanted to do good, at the cost of being less appealing to people who were uncertain what they wanted out of their career. I see the shift towards AI safety more as a continuation of that trend than a sharp change of direction, sacrificing broader appeal to focus on the people who are almost there. I don't think that prioritization is necessarily wrong, though I expect the Pareto frontier would allow for improvements in both directions.
