Effective Altruism presents itself as an objective arbiter of where marginal resources can do the most good. Its framework emphasizes cause-neutrality and cost-effectiveness, aiming to direct support to whatever opportunities are most impactful at the current margin. Yet EA is also a movement with its own institutions, career paths, and organizational interests. This dual identity—as both impartial judge and institutional player—creates a fundamental tension that grows more acute as the movement expands.
This is best observed by zooming in on neglectedness. When EA is small, thinking on the margin poses no problem—individual EAs can freely direct their resources to whatever cause seems most compelling at the current margin. But as EA grows, any cause it champions becomes significantly less neglected. In theory, this is not a problem but a sign of success: solving problems quickly and effectively frees up bandwidth to look for the next problem to solve. In practice, as the movement grows larger, this dynamic is likely to put EA in the way of its own stated objectives.
Consider the fact that EA jobs are currently significantly oversubscribed. Operations roles at EA organizations that pay well under $100,000 receive several hundred, if not thousands, of applications. EA orgs can afford long, drawn-out evaluation processes spanning three to six months. Some in EA defend this oversubscription with arguments about power laws, claiming these roles are so much higher impact that the marginal value of an additional applicant doesn't diminish even with thousands of applicants.
But this seems implausible for most roles with bounded autonomy, even at exceptionally impactful organizations. The exceptions are high-leverage roles such as leading an organization or filling specialized technical positions. For a marketing manager or operations coordinator, it's hard to argue that, in a pool of 2,000 qualified applicants, the delta between the best and second-best candidate justifies this insistence on working for an EA organization.
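The crux is the tail shape of candidate quality for the role in question. Here is a minimal simulation sketch of that order-statistics intuition; the distributions, parameters, and notion of "quality" are illustrative assumptions, not anything EA organizations actually measure. If quality for a bounded-autonomy role is thin-tailed, the gap between the best and second-best of 2,000 applicants is negligible; only if quality is genuinely heavy-tailed does the power-law defense get traction.

```python
import numpy as np

rng = np.random.default_rng(0)

def top_two_gap(sampler, n_applicants=2000, n_trials=5000):
    """Average relative gap between the best and second-best candidate
    across many simulated applicant pools of size n_applicants."""
    gaps = []
    for _ in range(n_trials):
        quality = sampler(n_applicants)
        second, best = np.sort(quality)[-2:]
        gaps.append((best - second) / best)
    return float(np.mean(gaps))

# Thin-tailed quality: the plausible picture for a bounded-autonomy role.
normal_quality = lambda n: rng.normal(loc=100, scale=15, size=n)
# Heavy-tailed quality: the "power law" picture of individual impact.
pareto_quality = lambda n: rng.pareto(a=1.5, size=n) + 1

print("thin-tailed gap: ", top_two_gap(normal_quality))   # typically well under 1%
print("heavy-tailed gap:", top_two_gap(pareto_quality))   # often tens of percent
```

On these toy assumptions, the whole argument reduces to an empirical question about which distribution better describes the role, which is exactly the question the power-law defense tends to assume away.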
More importantly, this rationalization illustrates precisely how institutional forces can warp even well-intentioned movements. Even if we grant the power law argument, we should be wary of how conveniently it justifies the existing institutional structure. It's a perfect example of how EA organizations, despite their commitment to cause neutrality, can develop self-perpetuating logics that resist external scrutiny.
This points to a deeper challenge for EA, one best seen through the lens of public choice theory. EA is not just a handful of grantmakers trying to allocate resources; it is also the social and intellectual capital of the movement. Consider how a substantial portion of EA's intellectual capital is now building careers in AI safety. These individuals, whether motivated by career advancement or genuine belief in the cause's importance, aren't immune to institutional incentives.
When you build a career in a specific domain, you naturally become slower to update downward on its relative importance. You see more arguments for its significance, develop a deeper understanding of its complexities, and become better positioned to articulate why it matters. Even with purely altruistic motives, expertise breeds advocacy: "I understand this deeply now, so I need to make sure others appreciate its importance."
This creates a form of intellectual and institutional lock-in. When EA identifies a cause area and invests in it, it isn't just allocating money; it's creating careers, expertise, and institutional infrastructure. Any movement sufficiently large and invested in specific causes will face pressure to maintain these structures, potentially at the expense of pure cause neutrality.
One potential solution is to transform EA into a movement that primarily focuses on raising and allocating capital, rather than providing subsidized labor to "important causes." Under this model, EA would leverage market mechanisms and incentives to pay for results, with movement-building efforts centered primarily around earning to give.
While some might object that ambitious EA projects require high-trust, value-aligned teams since impact can't be tracked purely through metrics, this argument deserves more scrutiny. Yes, at the highest level corporations have a clearer optimization target in profit, but at each lower level of the hierarchy they face the same challenges of incentive alignment and Goodharting that EA organizations do. Despite this, good companies manage to build effective hierarchies and get important things done. EA could similarly harness incentives and competitive dynamics to its advantage.
The challenge facing EA isn't just theoretical—it's structural. As long as EA tries to be both an objective arbiter of impact and a builder of permanent institutions, it will face increasingly difficult tensions between these roles. The movement's future effectiveness may depend on choosing a clearer path: either embracing its role as an institution-builder with all the path dependencies that entails, or transforming into a lean capital allocator that can remain truly neutral about where resources should flow.
One thing that jumps out to me is how much potential EA is leaving on the table by not doing more to push idealistic, talented people into the institutions that already shape the world—government agencies, major media outlets, high-profile companies. These are the places with real power to nudge the future in better directions, yet they’re practically starved of EA-aligned perspectives. Think about it: there are probably only a handful of EAs working in U.S. foreign policy agencies, OMB, Treasury, or major newsrooms. These institutions are massive levers for change, but they’re not exactly overflowing with people who think seriously about doing the most good.
EA has built this big, dynamic community, but its focus feels strangely narrow. The prevailing model seems to be about taking a tiny handful of exceptional individuals and steering them into hyper-specific, high-priority fields like AI alignment or biosecurity. Fine, those are important. But what happens to the rest of the talented people EA attracts—the ones who don’t land one of these highly competitive roles? Right now, it doesn’t seem like there’s much effort to help them channel their energy and skills into the other critical institutions that could use them.
The world has more than a few problems, and EA has the scope to think bigger—not just in terms of maximizing impact per person but in deploying more people across a broader range of challenges. It’s not about abandoning AI or biosecurity; it’s about realizing that embedding EA-aligned thinking in influential institutions—places that already hold enormous sway—could massively expand the movement’s impact. EA has built the network; now it needs to act like it.