I’ve been on Substack for just over a month. My favorite articles to write—and statistically, your least favorite articles to read, dear audience—have been about effective altruism and international politics. These are the ones where I think through what different findings and theories of international politics might have to tell us about how to do the most good in the world.
For example: Should increasing fertility be an EA cause area? The security studies literature supports the widely held position in the EA community that raising fertility probably doesn’t do much to lower existential risk, and there are lots of more impactful things that we can do instead. The one large-N study on war and population aging actually finds that younger and more fertile societies tend to enter conflicts more often than older and less fertile societies, as they have more resources to devote to military spending and their populations have more aggressive foreign policy preferences. Another body of literature, hegemonic stability theory, suggests that if we do want to raise fertility, we should encourage births in whichever country we expect to be the most powerful, as that would bolster the hegemon’s relative capabilities and better deter great power wars.
Another example: If OpenAI’s o1 model is a sign that AI is about to get really good at solving highly technical engineering problems, then we should expect it to get a lot easier for states to imitate other states’ advanced military systems, so long as they have the defense-industrial capacity to produce them at scale. That portends ill for global stability, because it points toward a much more even distribution of power, which raises the chance that secondary states will challenge the hegemon and start a great power war.
(The basic rule here is that, all else being equal, a more highly skewed distribution of power means a lower risk of great power conflict, while a more even distribution of power means a higher risk of conflict. This may go against your priors if you’re a Millennial or a Zoomer and your paradigm for world order is the Iraq War, but hegemony is actually a very good thing.)
These questions strike me as being obviously significant both to EA stakeholders and political scientists. Government is an immensely powerful institution, and it affects just about every potential EA cause area you can think of. Global health and poverty? Factory farming? AI? Climate change? Nuclear weapons? Stable totalitarianism? Political science theory can help you think about:
What variables make some of these issues better or worse candidates for systemic change compared to others?
What makes some political strategies more likely to be successful than others?1
Under what conditions should we expect transnational cooperation to promote optimal outcomes?
Under what conditions should we expect international organizations to work or fail to work toward their intended goals?
How should we calculate existential risks from issues affected by international politics, like nuclear war?
While political science can give us insight into some key considerations in EA, however, I’m not aware of any trained political scientist who focuses directly or even tangentially on EA in their professional work. And when I search the EA Forum, I can find only one poster who’s a political scientist who talks about political science, and they haven’t written anything of substance since before the FTX collapse.
This is deeply puzzling. In general, EA is well represented across the disciplines. There are tons of philosophers (obviously). There are economists, social psychologists, and probably a few sociologists and historians—at least, I’ve seen EA-aligned activists engage seriously with sociology and history. There are also computer scientists, biologists, and all other manner of experts across different practical fields. There are even policy experts and lobbyists, and both animal welfare and AI safety are increasingly embracing politics as a movement strategy. As Toby Ord suggests in The Precipice, addressing existential risks and advancing EA principles is going to require broad and deep interdisciplinary collaboration. But there seems to be very little practical engagement within EA with the scholars who study the political institutions and processes that EA activists inevitably will have to deal with.
Take the section on nuclear risk from Ord’s book: It’s heavy on history but completely devoid of theory. Will MacAskill’s What We Owe The Future does slightly better: MacAskill cites international security research on conflict risk and transnational norm adoption, but he barely scratches the surface. The 80,000 Hours podcast, to its credit, has hosted political scientists Bear Braumoeller and Allan Dafoe for in-depth discussions. Beyond that, however, it doesn’t look like EA organizations are actively seeking to collaborate with political scientists. Nor are political scientists rushing to get involved with EA in any meaningful numbers, as far as I can tell.
I can’t say exactly why this is. You might think that political scientists just really, really enjoy abstraction and theorizing—and that’s true, as I’ve written before, often to a fault. EA is a practical movement, and naturally it doesn’t have a lot of overlap with some academic fields. Yet if that were the only issue, then as political science increasingly comes to care about policy relevance, you would expect more political scientists to embrace EA. But even as new organizations like Bridging the Gap have sought to bring researchers into dialogue with nonprofits and government policymakers, EA stakeholders have remained outside of the conversation. Even as academics increasingly have expressed concerns about theorizing from the ivory tower, and it’s become more acceptable for scholars to make normative claims, few have begun talking about EA. I know of only one who’s gotten close: Jan Dutkiewicz, who is a solid public-facing advocate for animal rights and veganism. But there isn’t anyone like Dutkiewicz for EA more broadly!
I don’t have a PhD in political science, and I’m torn about whether to get one—the tiebreaker will probably be how much patience I have for grading hundreds upon hundreds of inchoate undergrad essays about (sigh) the paradigms. If I did have a PhD, though, I’d think that trying to reconcile political science and EA looks like some very juicy, very tasty—possibly very well-funded—and very low-hanging academic fruit.
1 Here I linked to some works in my subfield, international relations, but there are also rich literatures in American politics and comparative politics that are much more relevant to potential EA political strategies.