Author’s note: I worked at 80,000 Hours for three years, but everything I write here is based on public content or personal reflections
Update: I’m [more] convinced that post-AGI-shift 80k will succeed and be a good and valuable project. There’s a lot of highly-tractable talent looking for AI-specific takes and the ‘do good-first’ approach is substantially less likely to catch them. I stand by this article as a testament to the costs of the shift being very high. It would have been better to preserve old-80k to appeal to first-principles thinkers and start a separate project to do what 80k is doing now. Thanks to Avik for his comment and Chana (80k video producer) for this conversation.
80,000 Hours’s new AI 2027 video is the latest embodiment of 80k’s pivot to focusing more on AGI. I think the video’s goal is to shock and evoke some productive fear about AI risk and maybe some amount of sadness that we aren’t thinking big and paying enough attention to the fate of the world. My sadness is for 80k itself though, as I watch it abandon its roots and perhaps its principles. It may have already failed on my terms, but I worry it may fail on its own terms too.
Watch the video yourself, but my read is that it is essentially epistemically bankrupt. Aside from some disclaimers about how this is just one model and things are very unlikely to play out in exactly this way (and, erm, neither 80k nor the authors of the paper even believe the prediction), the lion’s share of the video is an immersive dramatization of the AI 2027 scenario, complete with:
Appeals to the authority of AI 2027’s author and fellow travelers
Frequent admonitions that you the viewer will be in the dark about what’s really going on
[Not-argued-for] assertions about model-to-model scale and capabilities progression, dramatized by dumping blocks on a table
Ominous music and shadows
Another authority cited to say that refusing to grapple with superintelligence (i.e. disputing the premise of this video) is a sign of “total unseriousness”
A final plea to talk to your family and congress member
I don’t think playing a few clips of people who disagree and noting this is “unlikely, but plausible” lets you get away with how you spend the other 80% of the video. There isn’t a model presented here or many “why” or “how” questions raised or answered — just an exchange of assertions from authorities about how we should extend lines on graphs and the ghost story one such extension might imply. It’s hard to imagine an intelligent, capable potential contributor to AI risk mitigation *who hasn’t already heard and accepted the premises of AI risk* seeing this video as their first introduction to the concept and coming away compelled to learn more.
And that’s really the rub — what does someone learning about these ideas for the first time think? Particularly someone who, if they thought a lot more about this, could end up doing vital work in the space. That profile of person is why 80,000 Hours exists and most particularly why the viral and splashy parts of 80,000 Hours exist. This video, the new home page, the queue of Constellation guests on the podcast, and other elements of the shift to AGI make me think 80k is losing touch with the audience it exists to serve. I don’t know what other org is positioned to make the case for understanding AI risks to the people who don’t understand them. I don’t think we make progress on AI risk or any other problem by dispensing with humility about how easily people will come around.
My history with (thinking about) 80k
On that note, maybe I should say a bit about how I came around and what 80k had to do with that.
I stumbled on EA through reading Scott Alexander (ironically the chief dramatizer of AI 2027) in 2014, but don’t remember how I first heard about 80k. I never read the articles or the guide, but got quickly hooked on the podcast in its early days around 2017-18.
I think listening to these conversations (at 1x speed no less!) was deeply formative for me intellectually and I’ll always be grateful for that. I skewed towards a mix of the philosophy, global health, animal, and meta episodes (the poor-audio-quality #21 with Holden is still essential listening). The 2019 Peter Singer episode probably sowed the seeds of my sympathy for longtermism even more than two years of running a law school EA group with Cullen O’Keefe, but I was still far from bought in, for more or less outside view/tractability reasons.
Flash forward to 2020 when the 80k advising team reached out to me about doing AI policy work. I had been such a fan of the philosophy and recent Covid episodes of the podcast that I was quite excited to engage with the org directly and take their ideas seriously. Both AI and DC policymaking seemed daunting and far outside my technical wheelhouse though. Still, my advisor (and later advisors, plural) stuck with me and helped me generate ideas to contribute somehow. In retrospect, it's wild how much time 80k and 80k-adjacent folks gave me despite my personal nervousness around diving into something unfamiliar as a job.
80k itself ended up being the place. Having absorbed the ideas and argumentative rigor for so long, I felt like I already lived a big part of my intellectual life there and was proud to come in and contribute to shaping that vital discourse, scary as moving countries and taking a 65% pay cut was.
The job itself was as a one-on-one career advisor. I would get to talk to smart, altruistic people about arguments and ideas in a relatively unfiltered way and help them make big decisions — from a position of relative authority no less. Talk about a dream job. Crucially, I did see it as discourse as opposed to sales or evangelism. Discourse had been my experience as a podcast listener and I felt respected the whole way along. Rob and his guests had an incredible knack for understanding the breadth of intelligent perspectives and predispositions in the world, and for framing their self-awarely unusual ideas faithfully in those terms. This aspect of things was central to my willingness to take a call — and later a job — with 80k.
Of course, it wasn’t all impartial, arms-length discourse. 80k had a mission and its own particular conception of the good. This isn’t academia — 80k is here to drive career-plan changes. I think that’s a good thing, in part because I agree with their conception of the good and in part because I don’t think there’s any such thing as true neutrality on these things and too great a pretense of neutrality is often net-bad for transparency.
On this point, 80k was reasonably transparent about its longtermist priorities, but many — especially those like me whose consumption of 80k was weighted towards the podcast — didn’t perceive this. The advising team also tried to be transparent about this on its FAQ, but I think it’s reasonable to say we leaned into the ambiguity. We were excited about pitching people who understood EA principles on reading their first longtermist/AI book or blog. I think dissonance played a sizable role in my personal excitement here (and I think dissonance gets a bad rap generally), but even so, I thought of myself as someone who benefitted from just such a pitch.
The 80,000 Hours Model
That segues nicely into my basic model of how 80k has driven its biggest and best plan changes: being the first place highly capable and altruistic people encounter EA ideas and turning them on to important problems with a rewarding, two-way intellectual exchange.
On my telling, this starts by offering the carrot of career advice: something ~everyone worries about, but for which good general resources are scarce. Among the most common user experiences is “I googled ‘how to do good in a career’ or ‘job with high social impact’ and found 80k.”
The first thing a user sees from there is the Career Guide: a compelling and logical framework for thinking about careers that unpacks the important cruxes from clarifying terminal goals, to prioritizing issue areas, to reflecting on your own skills, to tactically impressing an org you’re excited about.
Smart people will notice how well 80k has given structure to the uncertainty ahead of them, helpfully ordering and taxonomizing ideas the reader has probably thought about for a while but never put into words as sharp and well-organized as 80k’s.
Crucially, readers can validate (and feel validated by) this framework *regardless of their empirical beliefs about the world.* Comprehensive, logical, good decision making processes don’t discriminate on that basis.
Only after clarifying the framework (with some helpful examples) does 80k turn toward its contingent empirical views. “If you like our framework, you might also be curious to see how we apply it.” And there you get into problem profiles and rankings and podcast guests and job board selections, all with a fairly wide remit in terms of EA priorities: animals and global health were not neglected in those products, though they were on the website and in advising, at least when compared to some abstract notion of parity across causes.
Ultimately, 80k was set up to have its biggest wins be new leaders in important fields. People who — through their engagement, refinement, and endorsement of the principles of impact laid out in 80k content — thought of themselves as stakeholders in a grand moral project to seed and execute on scalable ideas to improve the world as much as possible. Of course it’s great to inspire an individual contributor to join a great project as the best hire for their particular role and leave it at that, but the real prize was people who understood deeply and independently — people who could see what was needed and build it, or move others to it when the time came.1
One way to look at 80k is as the outgrowth of an undergrad philosophy student group. That group hit on some key questions and concepts, but knew they didn’t have all the answers. For that, they would need to convey the questions so compellingly that others were inspired to come on board and lead projects that members of the Oxford class of 2010 couldn’t have built or even conceived of themselves. Those others would need to be individually smarter and more capable than the staff at 80k if 80k’s true ambitions were to be achieved and so the tone struck was a humble one, focused on the questions and cruxes around what it meant for a problem, role, or intervention to be impactful, with 80k’s own opinions taking a relative backseat.
Despite all the growth and success, on the scale of the world, we as EAs and 80k itself are still as undergrads to all who might really make a difference.
The AGI shift
On this reading, the AGI shift did more than just bring the podcast and some web content in line with the primary articles and advising. While 80k’s forum post on the shift leans into this innocuous framing, this move is inherently threatening to the model I presented above. The video program is obviously a big, new investment and so far (one video) a complete 180 from the old model. It’s an example of how opinionated, goal-driven 80k is swallowing deliberate, analytical, framework-first 80k.
It has always been kind of easy for this to happen. There were never clear lines or guardrails (even in the abstract) to stop it. The goals are the goals after all and the deliberative (one might say cautious, nervous) vibe was just a vibe — the product of 80k staff viewing themselves in a social context where they were small and niche and the people whose plans they hoped to change were vast in number and generally skeptical. 80k worked so well for so long in large part because of this norm of defending the framework-and-examples-first approach from encroachment by inside-view 80k. The AGI pivot has expressly blown up that guardrail. First by putting AGI prominently on the navigation bar and homepage, then by having only 4 of 19 podcast episodes this year cover non-AI topics, and now by kicking off the video program with the AI 2027 video.
I don’t think a skeptical new reader is going to have enough time to find resonance with fundamental ideas around prioritization, trade-offs, and epistemics before they’re beaten over the head with AGI stuff and, understandably, bounce off. I fear we’ll lose many of the people we most need working on AGI (and other problems) over this.
And for what? Were 80k users really missing this completely before? They may well have misgauged how strong a priority it was for 80k staff, but that isn’t the relevant question for the user. The user certainly noticed when it was at the top of the problem profiles or the subject of 40% of podcast episodes (rather than 75%). If they flowed through to the EA Forum or EAG, there’s no way they could avoid engaging with AI arguments to some degree.
So the impression the user takes away from the change is: 80,000 Hours is here to help you choose to work on any problem you want, as long as it’s AGI. Less flippantly, the impression is “we have a nice framework for thinking about career choice, but we haven’t thought about how it would cash out in anything other than an AGI safety career for a long time.” The upshot here is that smart users will suspect the framework is secondary to the org’s desired output from the framework, and disengage from it earlier rather than later.
Where is this coming from?
The charitable read of this shift2 is that 80k is just being deeply honest about its priorities. Yes the framework is nice and we’ll keep it, but at this point, the framework recommends urgent AI safety work even more strongly than it recommends itself, so let’s lean into that. Moreover, we were increasingly uneasy about the old model in light of how much more strongly we endorsed AI safety work compared to work on other causes. Sure, you couldn’t miss AI safety, but merely not-missing it failed to reflect how much we were doing everything else in service of it. Honesty requires us to let you know how freaked out we are and we’ll accept the cost of you potentially bouncing off to do that (assuming Matt is right).
A less charitable read says this is a product of insularity — the highest status EAs have durably prioritized working on AI in some form since 2011 or 2012. Over time, they got more and more comfortable making the basic arguments and building a lot of internal social momentum around them. More projects and blogs and podcasts and videos each year than the last, more empirical evidence, more money from early EAs going to work at increasingly massive and powerful AI companies. AI being the thing feels more and more like the inevitable march of history for highly engaged EAs, so why are we pussyfooting around pretending it’s still 2016 making introductory arguments and drawing comparisons to other problems? Aren’t we past that?
A related and more dangerous insularity is the idea that 80,000 Hours’ main route to impact is directing its users rather than providing them a resource. 80k is a powerful intellectual brand that people defer to. If that’s the case, then 80k should be clear about what they want people doing. Orienting too much around the framework and talking across less-important causes risks confusion and there is high opportunity cost to failing to further improve AI safety discourse.
I think this view is understandable, but it has two major problems. Most fundamental is that 80k’s social capital is a product of the framework-first approach and could decline rapidly if that approach is neglected in favor of spending down the capital it built by trying to direct people. The other is a sampling problem: though one hears 80k frequently invoked as an authority on what jobs and orgs matter, people citing 80k alone without reference to some deeper model of the world are more likely to be EA/80k/AI safety bandwagoners than potential top plan changes (who I model as more critical and independent-minded).
The scariest thought of all is that 80k has jumped on its own bandwagon. I think it’s reasonably clear that community-building work takes a backseat to research or policy or grantmaking in terms of intra-EA status. Indeed, my own story from above reflected a worry that I couldn’t hack it in DC, so I’d better just talk about EA at a safe EA org and set up my advisees to do the real work.
So maybe 80k sees itself not as an independent research organization giving you their arduously-debated and well-worked-out models of the world and of careers, but rather as a mere popularizer of the must-be-good views of serious thinkers at think tanks and dedicated research orgs which 80k isn’t qualified to process or critique. And which researchers does 80k defer to? Whichever ones have the most clout and energy behind their projects. And right now, that’s AI 2027 — an easy-to-critique model written by someone with a strong claim to being an in-community authority.
Compounding this worry is a common feature of groupthink where extreme claims are socially elevated because they can serve as a litmus test for loyalty (i.e. if you’re willing to publicly declare X, and X seems especially crazy to most people outside the group, it’s going to be harder for you to join some other group or leave this one — you aren’t keeping your options open, a credible sign of commitment). Now, in the video and in AI 2027 itself, we’ve retained the virtuous tick of noting that this is unlikely (and indeed that no one even believes this), but it still troubles me that people put so much effort and pride into celebrating the literally-most-extreme and different-from-outsiders version of their beliefs.
I worry that the smart 19 year old googling how to do good with their career is getting this impression much more than they’re getting scared or finding resonance in the argument about secret “agents” and Chinese spies and surprise bioattacks. I think we’ll all suffer for that.
You don’t have to literally be a founder for this, but rather be a strategic leader of some kind who can multiply your impact and keep others inside and outside of your org on track towards a robust conception of impact.
It’s possible the most charitable read is something like “we need to make splashy content to cast a wide net and get people talking about AGI, even if it costs us some intellectual credibility — good arguments will win out in an ecosystem where more parties come to discuss this.” And of course, “We need to very-visibly stake ourselves out as powerful-AI obsessed early, to enhance our credibility later when new capabilities arrive.” Both are pretty fair if pretty risky and I would have rather seen a different org do them (it seems like both effects would still be achieved without the costs to 80k I contemplate here).
Hey Matt!
First off, thanks for the thoughtful, good-faith critique of 80k, and thanks for sending us an outline of this post a couple of days ago so that we could put together a response ahead of time if we wanted. That was kind of you, and it prompted some good discussion between 80k folks on the issues you raise.
A broad reaction I have is that these issues are really thorny, and I’m not sure that 80k is striking the right balance on any of them — though I do, overall, feel better about both our epistemics and transparency than it seems you do.
I wanted to briefly comment on each of the issues raised by your post:
1. Epistemics & deference internally (did 80k pivot more of our effort to preventing harms from AGI because of community insularity and in-group signalling?)
- It’s very hard to accurately introspect about one’s reasons for holding particular beliefs, and I personally am very wary about how I am (or might be) influenced by social dynamics, epistemic cascades, and (unconscious) desire to signal in-group status.
- I will say that there’s been a lot (hundreds of hours? Certainly 100s of pages written) of internal discussion at 80k over the 3.5 years of my tenure about whether to focus more on AGI. So I feel confident in saying that we didn’t make our strategic pivot without thinking through alternatives and downsides like the ones you mention.
- We definitely care a lot about trying our best to make high-stakes decisions well & on the basis of the best available evidence and arguments. It’s hard to know whether we’ve successfully done that, because these topics are so complicated and reasonable people disagree about how to approach them.
- Speaking very personally for a second here, I feel like the world at large is catching up to the small in-group you're talking about, in a pretty major way. AI companies make headline news regularly. My random British aunt is a ChatGPT power-user and is really worried about what it's going to do to her kids. Politicians from JD Vance to Bernie Sanders have voiced concerns. I think this is a sign, albeit a pretty weak one, that we're not totally getting taken in by niche, in-group worries.
2. Epistemics & deference with respect to our audience (Insofar as 80k is “giving advice” and stating claims like those made in the AI 2027 video, are we being insufficiently humble about what it takes to get an audience to come around?)
- I don’t think it’s manipulative to tell an audience things that we think are true and important, and I think everything we say in the video [1] meets that bar.
- The AI 2027 scenario was written by people with impressive track records as an exercise in storytelling and forecasting. I think it's fine to tell a story as a story (I think 'there isn't a model presented here' because that's not what the video is trying to do or be), and I think the video shows real emotions and reactions that Aric and others have had in response to the story.
- Personally, I find illustrations / stories / “fleshed-out possibilities” like this helpful — like how politicians use wargames — but you might disagree.
- As a more general point, we do really care about good epistemics & helping people think well — that’s why we try to present clear arguments for our conclusions, give counterarguments, and give readers a sense of how we ended up with our beliefs / why they might disagree.
3. Transparency ("hiding" our contingent empirical beliefs behind frameworks and examples)
- Your post points to the underlying tension between 80k giving out a framework for how to think about doing good in the world, and 80k having particular beliefs about how to do that (which are partly grounded in not-universally-held empirical beliefs, like “AGI might come soon”).
- This presents a communication challenge: how do we honestly tell readers things we think are true and important about the world, while also trying to provide value through a career framework that we think is useful regardless of whether they share (all of) those beliefs?
- I think this is a real tension. It’s genuinely unclear to me how to be “upfront” about all of your beliefs when there are lots of load-bearing beliefs you hold, and understanding exactly how you came to them might require a hundred hours of reading — you can’t tell newcomers everything at once.
- We're still figuring out how best to balance more AI-focused empirical claims & more framework-y ones; but I don’t think it’d be best for us to go all in on one or the other (because I think both provide a bunch of value to our users and the world!).
4. 80k will alienate its most important target audience as a result of this shift
- I think this is possible, but neither of us has enough empirical results yet to feel confident.
- I think we still have lots to offer these kinds of people, and I think those we lose might well be worth what we've gained in terms of honesty and ability to have much better resources on our top cause area — but time will tell :)
[1] Probably also our site, but there’s so much there and I haven’t read all of it. I do generally think we have good research principles and do our best to say things we think are true based on our read of the best available evidence.
I first got pointed to 80k in ~2017 as a site with good general career advice, not even necessarily for trying to do good. That was one of several exposures to the EA community that collectively changed the direction of my career.
Well before the pivot to center AI safety, I got the impression that 80k was pivoting to EA more broadly. It became a better resource for those who already wanted to do good, at the cost of being less appealing to people who were uncertain what they wanted out of their career. I see the shift towards AI safety more as a continuation of that trend, rather than a sharp change of direction, sacrificing broader appeal to focus on the people who are almost there. I don't think that prioritization is necessarily wrong, though I expect the Pareto frontier would allow for improvements in both directions.