
An Interdisciplinary Conversation

A conversation with Tracy Dennison, cross-posted from Broadstreet, a new blog devoted to historical political economy.

Scott: Tracy, this is the first post at Broadstreet for both of us. Welcome!

Tracy: And welcome to you! It feels a bit like cheating to make the first post a shared effort, but it’s very much in the interdisciplinary spirit of the blog itself.

Scott: A historian and a political economist talking to each other…sort of like one of Gail Collins and Bret Stephens’ periodic conversations.

Tracy: Just like that, but different! Better, even. Like political economy without the politics.

Scott: In fairness, we’ve had practice. Last year, in Madison, we held the first annual Summer Workshop in the Economic History and Historical Political Economy of Russia. Economists, historians, and political scientists all in the same room together, discussing each other’s papers. It was mostly wonderful, though I recall we did have a spirited conversation about “the relationship” at the end.

Tracy: It’s inevitable. I think it’s part of the peace process. I can only assume there were no hard feelings, as everyone wanted to come back and do it again this year!

Scott: And that’s what we would have done—in Paris, moreover—if the world hadn’t turned upside down this past spring. In the end, we moved the workshop online, like most every other part of our daily existence.

Tracy: It was a big disappointment to have to move to Zoom. Not only because it meant giving up Paris, but because the inaugural meeting in Madison had fostered so many productive in-person interactions. It wasn’t clear how we’d be able to recreate that atmosphere online.

Scott: That’s an interesting question—whether we were able to do so. We opened up the workshop to anybody in the field, which attracted a much larger audience than we were aware existed. But do you think we were talking to each other or only past each other?

Tracy: I think the constraints of Zoom were evident—it’s difficult to hash out those thorny questions of methodology or interpretation without being in the same room with one another and without being able to continue those conversations later in the bar. That said, I was delighted by how well it went in the end.

Scott: We had some great papers, and an online poster session to boot. (Here’s the program.) Mark Harrison, one of the deans of the field, led off the workshop with a keynote address on the KGB in Lithuania.

Tracy: Yes, it was the perfect opening; it really set up the interdisciplinary context nicely. The paper examines a question that has long interested historians of the Soviet Union: how effective were attempts to prevent deviant—as defined by Soviet authorities—behavior? And how can we measure these effects? It touched on a number of important issues, including the reliability of Soviet source materials, variation across space and time, and the changing priorities of both officials and Soviet citizens.

Scott: Once we moved to the paper sessions, it was interesting to see certain themes emerge. We had two papers on deportations under Stalin, and two on what we might call “rural political economy.” I don’t think that was an intentional choice on our part. It just reflects the state of the field.

Tracy: It was exciting to see so much new work on some of the bigger problems in Soviet history—industrialization, markets and finance, political repression, collectivization—from younger colleagues in the field. I was impressed by the ways in which they were using new sources and new methods to approach these questions. They certainly provided the group with much to discuss and debate.

Scott: You and I have been in the room where these discussions happen for a while now. One recurring debate has centered on the use of digitized archival records by economists and political scientists. Roughly speaking, I would characterize the conversation as, Historian: You have no idea how noisy these data are and how biased is their selection into the archives. Social scientist: I understand at least some of that, and my fancy research design is intended to compensate.

Tracy: That sounds about right. I think it often boils down to one really fundamental issue: historians’ skepticism about numbers and social scientists’ doubts about textual evidence (please don’t call it anecdotal!). Somehow our “relationship discussions” always seem to revolve around some version of this problem. This is something I hope to explore in greater detail in future blogposts.

Scott: We had some variant of this in our session devoted to a new paper by Andrei Markevich, Natalya Naumenko, and Nancy Qian on the Stalin-era “Great Famine.” It’s really fascinating work that addresses an old question—whether Ukrainians, who died in enormous numbers during the famine, were targeted deliberately or merely caught up in a monumentally catastrophic policy. Many well-known historians have weighed in on this—Stephen Kotkin, Timothy Snyder, and Anne Applebaum come to mind—but Andrei, Natalya, and Nancy are the first to try to answer this question econometrically. (The working paper is online for those interested in their answer.)

Tracy: Yes, it generated quite a lively discussion—heated at times (friendly heated!), but exactly the sort of thing we are striving for with this workshop. It is worth noting that many of those who participated in the discussion were continuing a debate on the same set of questions that had been initiated in person last year in Madison. I interpreted this as strong evidence that the workshop is facilitating sustained interactions across battle—er, disciplinary—lines.

Scott: Part of that discussion was whether we should be looking for comments by Stalin or others as evidence of intent, or whether we should let the numbers do the talking. We ended up in this sort of absurd situation where we could not agree on the meaning of the word “anecdote.”

Tracy: True. But it highlighted some important ways in which we do talk past one another. Social scientists see the use of text to support an argument as arbitrary and unreliable, since, in their view, one can just look through a source and cherry-pick some quotes. Historians feel the same way about quantitative evidence. Each side assumes the other understands the value of its methodology and the reasons for skepticism, but this is almost never the case. We need more group therapy, I guess.

Scott: Or more joint training. My History colleague Stephen Pincus and I have discussed teaching a graduate course on “reading across disciplinary divides,” or something like that. We don’t need to turn historians into economists or political scientists, and vice versa, but we do need to be able to read each other’s work critically. You’re one of the few scholars I know who really invested in learning the language of more than one discipline. How did you do it?

Tracy: That is a great idea for a graduate course, and I completely agree about the value of being able to critically read each other’s work. I did my doctoral research in the UK, where there are economic history departments in which a large number of historians and social scientists are forced to cohabit. It was a conscious decision to train as a historian, but along the way I seem to have internalized a social science way of looking at the world.

But Scott, you are also a scholar of this kind. What’s your story?

Scott: I remember struggling before graduate school with the path I would take: political science or economics. Ultimately, I enrolled in a political science program, but I never really resolved the dilemma. I ended up doing as much coursework in economics as in political science, and my degree is in both. And then I discovered history as a field of study when I was a newly tenured professor. But I’m still working to learn the language of history as a discipline. The workshop and all of our conference panels help.

Tracy: I think having intellectual relationships with people from other disciplines is particularly important. As I recall, one of our aims with this workshop was to pave a way to interdisciplinary engagement for younger colleagues in our respective fields, to ensure they would have the ability to make connections across disciplines. Their participation in the project has been invaluable, and gives me hope for enduring collaborative efforts of this sort.

Scott: Since we have the megaphone for a day, we should give a shout-out to the many junior scholars who participated: Gerhard Toews, Natalya Naumenko, Dmitrii Kofanov, Giovanni Cadioli, Otto Kienitz, Viktor Malein, Brendan McElroy, Timur Natkhov, Natalia Vasilenok, Matthew Reichert, Eugenia Nazrullaeva, and Imil Nurutdinov. And we had presentations by a quartet of outstanding senior scholars: Ekaterina Zhuravskaya, Pierre-Louis Vézina, Mark Harrison, and Andrei Markevich. The state of the field is strong!

Tracy: It is! Fingers crossed we’ll be able to revisit these debates in person at next year’s workshop. In the meantime, there will be plenty of interdisciplinary collaboration on display here at Broadstreet. Stay tuned!

Leaving Paris

On a warm summer day in late August 2019, we moved into a third-floor apartment in Paris’s residential 15th arrondissement. Standing on our balcony and looking to the left, we could see the district’s mairie—its town hall—and behind it the Eiffel Tower. Directly across the small park in front of us was a row of apartment buildings and a contemporary office structure. Just up the street was the Vaugirard metro station: our gateway to the rest of Paris.

With a bit more foresight, I might have anticipated that we would lose access to that gateway as one of France’s periodic transit strikes disrupted transportation throughout the nation’s capital, and beyond. No understanding of French political economy, however, could have prepared me for the city’s complete lockdown in March as a pandemic hit Paris with particular ferocity. Even after the country’s reopening in May and June, I experienced Paris as Baron Haussmann had envisioned—as a pedestrian.1

In The Walkable City, Mary Soderstrom posits a conversation between Georges-Eugène Haussmann, who under Emperor Napoleon III transformed Paris from medieval city to modern metropolis, and Jane Jacobs, who fought against a similar effort to clear New York’s “slums” a century later. Baron Haussmann was the Robert Moses of his day: an ambitious planner unencumbered by nostalgia for the city’s past. Jacobs would have hated him. And yet, while American urban planning has turned sharply against Moses’s vision, Paris survives as a model for the walkable city.

The difference, I suspect, has something to do with technology. Not only did Haussmann work in a pre-automobile age, but steel-frame construction had yet to emerge as America’s seminal contribution to modern architecture. The quintessential Parisian neighborhood is thus capped at seven stories. With numerous such buildings, Paris is dense, but it is rarely crowded, and it is always light. Many neighborhoods have the feel of a village.

Parisians did not take to the Eiffel Tower at first, but it proved such a draw for tourists that they learned to like it. (The same cannot be said for the unfortunate Tour Montparnasse.) Today, the Eiffel Tower is the iconic symbol of a city that annually draws tens of millions of sightseers. No living Parisian could remember Paris without either tourists or, for those old enough, foreign troops—until this spring. Sometime in March, the city emptied out. During the height of the confinement, I would go for long runs down deserted streets, jogging in the roadway to avoid the few pedestrians about. Even after restrictions were lifted and well-off Parisians returned to the city, the tourists were absent. We could sit on the Champ de Mars in early July with only a handful of picnicking locals in proximity.

People ask if I feel cheated that my sabbatical coincided with the pandemic. Au contraire! I got to see Paris as few have. We were scared at first, but the fear passed even before the danger did, leaving solidarity in its wake. Every evening at 8:00, we joined our neighbors on the balcony to applaud the city’s healthcare workers—and ourselves. For weeks, this daily ritual was our primary social interaction outside of the home. Elderly couple waving a white scarf, woman on distant balcony with long black hair, man in glasses (Nicolas, we eventually learned) one building over…we shared the confinement with you.

By July, even as infections spiked anew at home, we felt secure enough to travel around France by car. After a whirlwind tour of WWI battlefields and the Normandy beaches, we settled into a comfortable rhythm, staying 2–3 nights in each region. We joined our friends Sergei and Katia in Brittany, and we met up with our friends Mike and Brandi in Fontainebleau and Valery and Sonya in Provence. We ate fresh oysters on the Atlantic shore and fresh trout in the Alps, we visited chateaux in the Loire Valley and hillside villages in Provence, and we drank the local wines wherever we went. It was wonderful.

Memory works in interesting ways. If I fast-forward through part of an audiobook that I have already heard, I can remember where I was at the precise moment that a particular phrase was spoken. But when I again walk the street where I listened to that book, my mind is elsewhere—no visual cue prompts remembrance of the spoken word.

What will I remember when I can finally visit Paris again? I do not know, but I can guess: the embankment of the Seine during the morning hours when running was allowed, the square in front of the mairie where I played tag with my son as we took a break from online schooling, the view from our balcony and the sound of the city applauding. Paris, j’espère te voir bientôt. Tu me manques déjà.

Place Adolphe Chérioux. Artwork by Mike Duncan.


  1. Underground rail came late to Paris: the first Metro line opened only in 1900.

Nondemocracy (Second Edition)

When I wrote the first edition of my textbook on Formal Models of Domestic Politics, I made a conscious decision not to include models of autocracy. The literature was too new, the big picture insufficiently clear. There was enough to cover on more established topics. Nondemocracy could wait.

That first edition went to press in 2012. In the eight years since, there has been an explosion of interest in authoritarian politics. I see it at academic conferences, where panels on autocracy draw overflow audiences. Creative empirical work has cast new light on the governance of nondemocratic regimes. Simultaneously, and in dialogue with empirical research, political scientists and economists have explored the institutions of autocracy using the language of game theory. The themes of the theoretical literature are now sufficiently clear to deserve a chapter in the second edition of my textbook.

Of all the topics in the text, this is the one on which I have worked most directly. I have drawn substantially on a review article I published a few years ago with Konstantin Sonin and Milan Svolik, though I haven’t stopped there. In summarizing the literature, I have tried to be a faithful ambassador, including important models that may or may not reflect my personal views on the nature of autocracy. Nonetheless, where the literature has branched—in how to model information manipulation, for example—I have made some choices that inevitably reflect my personal modeling experience, even as I provide signposts to other parts of the literature.

With that as background, let me offer a brief survey. The chapter begins with Daron Acemoglu, Georgy Egorov, and Konstantin Sonin’s model of coalition formation in nondemocracies. Theirs is an explicitly institution-free setting: politics is governed only by the relative power of competing factions. The model thus serves as a benchmark against which more institutionally rich models can be evaluated. Anybody who has seen the brilliant film The Death of Stalin will immediately recognize the environment.

The chapter then moves to incorporate institutions into the analysis. The selectorate model of Bruce Bueno de Mesquita, James Morrow, Randolph Siverson, and Alastair Smith appears here, transplanted from the chapter on coalitions in which it previously appeared. The basic idea is to show how coalition choice and economic policy depend on the institutional environment, as measured by the relative size of the “selectorate” and the “winning coalition.” A complementary approach is to ask where institutions come from to begin with. Here, I present the simplified version of Roger Myerson’s model of institutions as a commitment device that Konstantin, Milan, and I developed for our review piece.

Myerson’s model brings information into the story; this is the focus for the remainder of the chapter. To set the stage, the chapter includes a general discussion of Bayesian persuasion, the explicit or implicit framework for many models of information manipulation in autocracies. What distinguishes this framework from the cheap-talk model of Crawford and Sobel is the assumption that the sender can commit to a probability distribution over signals for every state of the world—say, to send the signal “economy is weak” with probability 0.6 and the signal “economy is strong” with probability 0.4 if the economy is in fact weak, and to send the signal “economy is strong” with certainty if the economy is in fact strong. Is this assumption reasonable? Here is what I write:

The commitment assumption that characterizes models of Bayesian persuasion is not without controversy. It is reasonable to ask in which settings it is likely to hold. In their seminal contribution, Kamenica and Gentzkow (2011) provide the example of a prosecutor attempting to persuade a judge of a defendant’s guilt. The prosecutor might call a witness, not knowing exactly what she will say. Or she might order a DNA test, hoping for a positive result but constrained by law to share exculpatory evidence. In either case, the prosecutor de facto commits to a distribution of signals for each possible state of the world. Similarly, an autocrat might choose to disqualify candidates from an election (Ma, 2020), not knowing with certainty the distribution of voters’ preferences and thus the mapping from candidates to election outcomes. Alternatively, we can think of commitment as reflecting delegation to an actor of more or less sympathy with the sender’s point of view (think of how reporting on incumbents’ economic performance is affected by the identity of cable news personalities), where the actor is costly to replace (e.g., because charismatic hosts are in short supply). The latter environment may reflect the reality that any dictator relies on others to disseminate her message.

As this discussion illustrates, the commitment assumption is useful in two important environments in nondemocracies: government control of the media and electoral manipulation. The chapter includes some discussion of each, focusing on my model of media control with Konstantin Sonin and on work by Arturas Rozenas and by Alberto Simpser and me on electoral manipulation. This leads naturally to the final section of the chapter—a simplified version of Sergei Guriev and Daniel Treisman’s model of “informational autocracy,” which incorporates information manipulation into the models of political agency covered elsewhere.
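The arithmetic behind the weak/strong-signal example above is just Bayes’ rule applied to the committed signal distribution. Here is a minimal sketch, assuming (hypothetically) a uniform prior over the two states; the function name and prior are my own, not from the text:

```python
# Receiver's posterior under the committed signal structure described above:
# if the economy is weak, send "weak" w.p. 0.6 and "strong" w.p. 0.4;
# if the economy is strong, send "strong" with certainty.
def posterior_weak(prior_weak, signal):
    p_signal_given_weak = {"weak": 0.6, "strong": 0.4}[signal]
    p_signal_given_strong = {"weak": 0.0, "strong": 1.0}[signal]
    numerator = prior_weak * p_signal_given_weak
    denominator = numerator + (1 - prior_weak) * p_signal_given_strong
    return numerator / denominator

print(posterior_weak(0.5, "weak"))    # 1.0: "weak" is fully revealing
print(posterior_weak(0.5, "strong"))  # 2/7: "strong" leaves residual doubt
```

The asymmetry is the point of the construction: the sender never falsely reports weakness, so the damaging signal is perfectly credible, while the favorable signal is partially discounted.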

As with other chapters, there is a lot of theory in the exercises as well as the main text. Alongside various extensions to the models discussed above, I include a model of “ex post” (i.e., after the election result is known to the incumbent) electoral fraud borrowed from a recent paper by Zhaotian Luo and Arturas Rozenas. (The full paper jointly examines ex ante and ex post manipulation.) Building on work by Gary Cox and by Andrew Little, I also consider authoritarian elections as a mechanism for gathering information about potential challenges to the incumbent regime.


Someone recently thanked me for my “service” in writing Formal Models of Domestic Politics. Oddly, I have never thought of it in those terms. Writing a textbook is one of the most enjoyable things I have done in my academic career. It satisfies various compulsions: to figure things out, to tinker with models, to write as clearly and efficiently as possible. As Gregory Mankiw has recently argued, it’s not for everyone—but it is for me. All the better that others have found the text useful. If things go as planned, the next edition will be out next summer.

Reform and Rebellion in Weak States

Sometime during my first year as a junior faculty member, I was wandering the stacks in Wisconsin’s Memorial Library. I can’t remember what I was looking for, but I remember what I found: a multi-volume chronicle of the peasant movement in Imperial Russia. That serendipitous discovery aroused an interest in economic history and historical political economy that has become one of the pillars of my academic personality. With Evgeny Finkel, Tricia Olsen, Paul Castañeda Dower, Steve Nafziger, and Dmitrii Kofanov, I have examined peasant unrest during the Great Reforms of Alexander II and the Bolshevik Revolution. And now, this week, Evgeny and I have published a short book on Reform and Rebellion in Weak States that returns to the themes of our very first paper with Tricia.1

We begin with the question that motivated that paper: When does reform provoke unrest among its intended beneficiaries? It is an old question, one that arguably dates to Alexis de Tocqueville’s observation (in The Ancien Régime and the Revolution) that “The regime that a revolution destroys is almost always better than the one that immediately preceded it, and experience teaches that the most dangerous time for a bad government is usually when it begins to reform.” Our examination of peasant unrest before and after Russia’s emancipation of the serfs in 1861 suggests a possible answer.

To understand our argument, consider a puzzle: How is it possible that the tsar and his ministers—overseers of a weak state that had just suffered a humiliating loss in the Crimean War—were able to effect a radical transformation of state and society that not only granted legal freedom to twenty million serfs but initiated a lengthy process of land reform that extended into the twentieth century? The short story is that, with innovation born of necessity, they relied on the very nobility whose land was to be transferred to the newly freed peasantry. Empowered by the reform statutes with control over the process, numerous landowners jumped at the opportunity to keep the estate’s best land for themselves and to ensure that former serfs received as little valuable land as possible. This opened a gap between what peasants thought they had been promised in the tsar’s Emancipation Manifesto (read out in village churches across European Russia) and what they actually received. That grievance translated into the largest wave of peasant unrest in the nineteenth century.

Our intuition, then, is that the delegation of reform’s implementation to those with a stake in the status quo plants the seeds of rebellion. But this raises further questions: To what extent do those with the power to block implementation internalize the resulting unrest? And is there always a trade-off between stability and reform, or is it possible to pursue the latter without risking the former? These are the sort of questions best answered with a model. In the book, we provide a simple one that builds on recent work in economics on contracts as reference points. In our context, the promise of reform represents an implicit contract against which its subsequent implementation is measured. From the tsar’s perspective, this promise is a double-edged sword. On the one hand, by allowing for disappointment, reform raises the possibility of unrest. On the other, the possibility of unrest encourages those with control over implementation—local agents, in our telling—to abide by the spirit of reform. A very general result is that any reform that incentivizes local implementation necessarily also increases the risk of rebellion.

This theoretical framework provides a lens through which to reexamine the Russian case. Relative to our earlier work, the analysis in the book pays more attention to the dynamics of unrest over the two-year period that follows publication of the Emancipation Manifesto in early 1861. A key finding is that it took time and the arrival of “peace arbitrators” (Leo Tolstoy was one) for peasants’ understanding of the intent of reform to settle. Once it did, and the actual process of negotiating terms began in earnest, landowner opportunism and peasant unrest were joined at the hip.

But wait: there’s more. In addition to a detailed analysis of the Russian case, for which we have rich data and a setting that allows for causal identification (emancipation directly affected only one of two large classes of peasants distributed unevenly across European Russia), we provide an extended discussion of four additional cases, each of which supplies some form of empirical leverage. The Tanzimat Reforms of the Ottoman Empire are a “most likely” case for our argument, given similarities in the nature of reform and the broader political context with the Russian case that motivates our theory. In contrast, the Lex Sempronia of Tiberius Gracchus—perhaps the world’s first major land reform—exhibits important internal variation in reform design, with top-down implementation among Roman citizens but not Rome’s Italian allies. (A big shoutout to my podcaster friend Mike Duncan of History of Rome and Revolutions fame for suggesting that we look at Ancient Rome.) Our extended riff on Tocqueville demands that we reexamine the French Revolution, where a detailed historical record allows us to trace the process by which the National Assembly’s declaration of the end of feudalism ultimately led to increased unrest. Finally, we briefly analyze several instances of land reform in twentieth-century Latin America, exploiting important variation not only across but within states.

It’s a fun read, and a short one. And it’s available for free download through June 17. Check it out.


  1. Technically speaking, what Evgeny and I have written is an “Element”: what Cambridge University Press calls their new short-book format. Ours is in David Stasavage’s series on Cambridge Elements in Political Economy.

Political Agency (Second Edition)

One of the pleasures of revising my textbook on Formal Models of Domestic Politics has been discovering work that speaks to our current politics. I began a series of posts on forthcoming changes to the text with a discussion of Wiola Dziuda and Antoine Loeper’s model of dynamic veto bargaining, which helps to explain Republican resistance to economic aid that might outlast the COVID crisis. Last month, I wrote about the Krasa/Polborn model of issue ownership, which suggests that a “Downsian” change in electoral platform may not resolve the GOP’s dilemma in the face of shifting demographics. Today, I turn attention to the politics of natural disasters, of which the COVID pandemic is a prime example.

Existing work—empirical and theoretical—suggests that a pandemic could have at least three possible effects. First, one might expect voters to react “irrationally,” punishing incumbents for events beyond their control, just as voters are alleged to have responded to shark attacks or losses in college football games. Even if one is willing to assume, however, that nothing about the course of the pandemic is within the control of governing elites, recent scholarship casts doubt on whether claims in the empirical literature generalize or hold in the cases originally examined. Second, it is in fact not obvious that voter “competence” is necessary for political accountability. Indeed, the strategic incentives of politicians who want to communicate that they are deserving of reelection may rely on the inability of voters to observe incumbents’ actions or to respond rationally.

Third, and perhaps most relevant to our times, the presence of a disaster such as a pandemic may provide an exceptional opportunity for voters to evaluate an incumbent politician’s fitness for office. Scott Ashworth, Ethan Bueno de Mesquita, and Amanda Friedenberg argue this point in a model of pure selection that is a primary addition to my chapter on political agency. In the Ashworth-Bueno de Mesquita-Friedenberg model, the possible impact of a disaster on an incumbent’s fortunes depends on whether she was previously “ahead” or “behind” in a race for reelection. If voters ex ante believe that an incumbent is more competent than her challenger, then a disaster—which may reveal the incumbent to be either competent or incompetent—can only reduce her probability of reelection. In contrast, those (possibly exceptional) incumbents who are disadvantaged relative to their competitors may benefit from events beyond their control: in essence, nature has gambled on their resurrection, providing them an opportunity to reveal their competence, should that be present.
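The ahead/behind logic can be illustrated with a toy simulation—my own parameterization, not the Ashworth-Bueno de Mesquita-Friedenberg model itself. Competence is binary, the voter compares her posterior belief about the incumbent to the challenger’s known quality, and a disaster fully reveals the incumbent’s type:

```python
import random

def reelection_prob(prior, challenger, disaster, trials=100_000, seed=0):
    """Share of simulated races the incumbent wins.

    prior      -- voter's prior that the incumbent is competent (type 1)
    challenger -- challenger's known quality, between 0 and 1
    disaster   -- if True, the disaster reveals the incumbent's type
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        competent = rng.random() < prior                       # true type
        belief = (1.0 if competent else 0.0) if disaster else prior
        wins += belief > challenger                            # reelect on posterior
    return wins / trials

# An "ahead" incumbent (prior 0.6 > challenger 0.5) always wins absent a
# disaster, so revelation can only hurt her. A "behind" incumbent (prior
# 0.4) never wins absent a disaster, so revelation is her only hope.
```

In this sketch, `reelection_prob(0.6, 0.5, disaster=False)` is exactly 1, falling toward the prior when a disaster reveals the type, while `reelection_prob(0.4, 0.5, disaster=False)` is exactly 0 and rises with revelation.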

Whether you think the pandemic increases or decreases Donald Trump’s chances of reelection thus depends on whether you think he entered 2020 the favorite or the underdog. If you believe, as I do, that he was behind, then COVID likely increases the probability of both a landslide loss and a narrow victory. Before you dismiss the latter possibility, ask yourself the following question: How will the American electorate respond if Trump moves beyond Lysol-gate and somehow manages to badger the FDA and the pharmaceutical industry into an effective vaccine ahead of schedule? It’s maybe not the most likely outcome, but it’s the sort of outcome that is more likely in the presence of a pandemic.

One more question: Why does it take so long to develop a vaccine anyway? I am no expert, but Bill Gates asserts that, had we invested heavily in vaccine production over the past decade, we could have been across the finish line in a year. Empirically, we know that voters fail to reward politicians for disaster preparedness, even though the returns to such investments are enormous. In the exercises (one could learn an enormous amount of theory just working through the exercises), I present a model by Sean Gailmard and John Patty that makes sense of such behavior. In the Gailmard-Patty model, voters prefer prevention spending but worry that politicians are investing in preparedness for the wrong reasons—say, to steer government contracts to friends or supporters. In equilibrium, even honest politicians may therefore fail to invest in prevention, as doing so may convey the impression that they are corrupt.


So, that’s about it. I have one more blog post planned on changes to my textbook, and it’s a big one: a discussion of a new chapter on models of nondemocracy. Stay tuned.

Delegation (Second Edition)

I consider the chapter on delegation in Formal Models of Domestic Politics to be unusually coherent. This is not patting myself on the back. Rather, the literature that this chapter summarizes, with its origins in the seminal work of Holmström, is of a piece. One paper follows another; all I had to do was follow the bread crumbs.

What, then, to add? As I have discussed previously, the second edition of my textbook features new models, new exercises, and clarifications. The new models push on both substantive and methodological frontiers, providing a mix of novel argument and novel technique. With this chapter, I was able to extend the theoretical narrative in an interesting direction while introducing a class of models that belonged somewhere in the text.

In particular: In the first edition, the chapter on delegation closed with a discussion of Gilligan and Krehbiel’s extension of the Crawford-Sobel cheap-talk model to delegation and information transmission in legislatures. This seemed a natural segue to a recent literature on cheap talk with multiple senders, where Galeotti, Ghiglino, and Squintani’s model of strategic information transmission networks has provided a framework for analysis of numerous political environments. Happily, a paper by Torun Dewan and Francesco Squintani provided the hook: their model of “Leadership with Trustworthy Associates” examines delegation to leaders. My chapter, which began with a model of delegation to bureaucratic agencies and grew to encompass delegation to legislative committees, thus now incorporates delegation by party elites to a leader tasked with making decisions on their behalf. The trail of bread crumbs is intact.

I do have one regret with this chapter. A pedagogical principle that guides the text is to present no more analysis than can be written on the blackboard and understood by Ph.D. students who have taken a semester of game theory and a semester of calculus. When that proves impossible with the original version of a model, I typically strip it down to something simpler; I try to avoid statements of the sort "X and Y show that Z is true." I wasn't quite able to do that with the Dewan-Squintani model without abandoning entirely the theoretical framework I wanted to demonstrate. There are a couple of hand waves. I hope the accompanying discussion compensates, but readers will be the judges.

Coalitions (Second Edition)

Halfway through my posts on the second edition of Formal Models of Domestic Politics, I am reaching the finish line for the manuscript itself. If all goes as planned, Cambridge will have the draft by next weekend. If you are reading this post, you might be reading the manuscript itself in a few weeks. Thank you in advance for your comments!

One of the joys of writing, and now rewriting, this book has been discovering papers that otherwise I might not have known. This is perhaps especially the case for the literature on legislative bargaining and coalition formation, which is not an area in which I have worked. I remember writing the first edition and discovering this amazing paper by Hülya Eraslan that showed that, notwithstanding the multiplicity of equilibria in the Baron-Ferejohn bargaining model, any stationary equilibrium (given particular recognition probabilities and discount factors) generates the same vector of expected payoffs. This is a remarkable result with great potential for applied work, where one might want to “plug in” a legislative bargaining game to proxy for the stakes in some contest for political power.
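For the symmetric case, Eraslan's payoff-uniqueness result can be checked arithmetically. The sketch below verifies the standard closed-rule parameterization with equal recognition probabilities and a common discount factor (my simplification for illustration, not her general setting with arbitrary recognition probabilities and discount factors): every player's ex ante expected payoff is exactly 1/n.

```python
# Numerical check of the symmetric case (a sketch, not Eraslan's proof):
# in the closed-rule Baron-Ferejohn model with n (odd) players, equal
# recognition probabilities, and common discount factor delta, the
# stationary equilibrium gives each player an ex ante payoff of 1/n.

def expected_payoff(n, delta):
    """Ex ante payoff in the symmetric stationary equilibrium.

    The proposer keeps 1 - delta*(n-1)/(2n), paying delta/n (the
    discounted continuation value) to each of (n-1)/2 coalition
    partners; a non-proposer is included with probability 1/2.
    """
    proposer_share = 1 - delta * (n - 1) / (2 * n)
    partner_share = delta / n
    return (1 / n) * proposer_share + ((n - 1) / n) * 0.5 * partner_share

for n in (3, 5, 101):
    for delta in (0.5, 0.9, 0.99):
        assert abs(expected_payoff(n, delta) - 1 / n) < 1e-12
print("expected payoff equals 1/n for every (n, delta) checked")
```

The proposer premium and the discounted partner payments cancel exactly in expectation, which is why the "plug-in" use of the model in applied work is so convenient.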

My reaction upon stumbling across a recent paper by Vincent Anesi and Daniel Seidmann was more bittersweet. The Baron-Ferejohn model that Eraslan examines assumes that bargaining is once-and-for-all: there is just one pie to be divided. In many contexts, however, a budget is negotiated annually or biannually, with the outcome in one period the default in the next. (I discussed a similar environment in my post on veto players last month.) Anesi and Seidmann show that, in this setting, most anything can happen in equilibrium. Contra Riker, winning coalitions may be more than minimal; in some equilibria, everyone gets a piece of the pie. Even more surprisingly, in period after period, some of the pie may be left on the table. (I have a hard time envisioning this with rhubarb pie, but appeals to welfare maximization will not get you far in this environment.) This is an “anything goes” result more typical of equilibria with history dependence, but Anesi and Seidmann work in a world of stationary strategies. It is a brilliant paper that fits naturally in the chapter on coalitions. It is a disappointing result, if you were hoping for predictive power similar to that of the Baron-Ferejohn model.

As with other chapters, I use the exercises to present related work. There is so much that could be included here; I have a long list of ideas for future exams. For the moment, I have stopped at two new exercises—one based on Maggie Penn’s model of farsighted voting, in which the status quo is endogenous but broad societal forces rather than legislators drive the agenda, and one on Craig Volden and Alan Wiseman’s model of bargaining over public and private goods. If I include any more, I will be past my deadline and over my page limit.

Electoral Competition (Second Edition)

I have been describing the changes in store for my textbook on Formal Models of Domestic Politics. Let me now turn to the three opening chapters: electoral competition under certainty, electoral competition under uncertainty, and special interest politics (the last of these consisting largely, though not exclusively, of applied models of electoral competition).

The formal analysis of elections was among the first applications of game theory to the study of politics. The field is well developed, and at times it seems there must not be much more to say. In fact, the study of elections has been reinvigorated with the incorporation of assumptions about parties’ (or candidates’) “immutable” characteristics—their honesty or competence, for example—which both intuition and empirical evidence suggest citizens weigh in deciding how to vote. When voters’ preferences over these characteristics are homogeneous and separable from their preferences over policy, we say that parties may possess a “valence” advantage. Numerous models explore platform choice in the presence of such advantages. I discuss some key findings briefly as an exercise in the chapter on electoral competition under certainty, and then more extensively in the following chapter on electoral competition under uncertainty.

The real payoff from this approach, in my opinion, comes when we drop the assumption that preferences over parties’ immutable characteristics and preferences over policy are separable. Consider a phenomenon well known to students of political behavior: the “ownership” of issues by one party or another. In the context of American politics, for example, the Republican Party historically has been more trusted on national defense, the Democratic Party more on education and healthcare. This is not about a party’s general competence but about its competence in producing some policy outcomes over others.

Stefan Krasa and Mattias Polborn have an elegant model of electoral competition with issue ownership; it is the most substantial addition to the first three chapters of Formal Models of Domestic Politics. The environment is straightforward—voters differ in the weight they place on different public goods; parties differ in their efficiency in providing those goods, to which they allocate more or less spending—but equilibrium behavior is truly surprising for anyone schooled in the Downsian tradition. Parties propose distinct bundles of public goods, even though they are office-seeking rather than policy-seeking, with each party providing more of the good that it produces more efficiently. And the positions they adopt are independent of uncertainty about voters’ preferences. Shift the expected median to the right or the left, and parties stay put; it is their probability of winning that changes, not their platforms. Applied to American politics, one can see the dilemma of the GOP in the face of changing demographics: so long as voters are constant in their evaluation of parties’ ownership of various issues, no shift in platform can prevent the erosion of support among an electorate increasingly attuned to “Democratic” issues.

There is, of course, another tradition of modeling elections as a mechanism for selecting politicians according to their type: the formal analysis of “political agency,” in which an incumbent politician plays agent to voters’ principals. But that is another chapter, a topic for another post.

Regime Change (Second Edition)

In previous posts, I began to describe the changes to my textbook on Formal Models of Domestic Politics, with a second edition planned for next year. Most of those changes involve new material: additional models and exercises, a new chapter on nondemocracy. There are, however, a handful of clarifications—small edits suggested by eagle-eyed students at UW and instructors using the textbook elsewhere, but also one big one in the chapter on regime change. A large portion of that chapter is devoted to the Acemoglu-Robinson model of regime change: an important theoretical framework in its own right, and also an opportunity to teach Markov games and the associated solution concept, Markov perfect equilibrium. Unfortunately, there was an error in the analysis, which nobody seemed to catch until Acemoglu and Robinson posted a correction in 2017.

I discussed this correction shortly after it first circulated. I wrote then:

Daron Acemoglu and Jim Robinson recently posted a correction to the key proposition in “Why Did the West Extend the Franchise? Democracy, Inequality, and Growth in Historical Perspective,” the seminal paper in what has proven to be an enormously influential research enterprise. That proposition characterizes equilibrium in terms of the parameter q, which measures the probability of future unrest in an undemocratic regime. When q is large, then promises of future redistribution are fully credible and democratization is unnecessary, whereas when q is small the elite democratize to prevent revolution the first time that the poor pose a credible threat of unrest.

Those results still hold in the corrected proposition, but it turns out that for intermediate values of q, the unique equilibrium is in mixed strategies: the elite democratizes with probability strictly between zero and one, and revolution occurs on the equilibrium path. Technically, this correction is driven by a failure in the original analysis to check for all possible deviations. Substantively, the issue arises because institutional change in the Acemoglu-Robinson model is treated as a discrete choice: democratize/not. This discreteness implies that democratization, when it takes place, leaves the poor with strictly more than their payoff from revolution, thus creating scope for the deviation that Acemoglu and Robinson discuss in their correction.

The basic idea, as I wrote in a subsequent paper with Paul Castañeda Dower, Evgeny Finkel, and Steven Nafziger, is that

[O]ffering maximal redistribution whenever the poor pose a credible threat of unrest, holding constant the elite’s equilibrium strategy to extend the franchise the first time that the poor subsequently have de facto political power…is profitable to the elite if the poor respond by not revolting. That this may be possible in principle follows from the fact that democratization only works as a commitment device when the value to the poor from democracy, in which distribution is maximal in every period, is greater than that from revolution. If the poor are sufficiently patient, maximal distribution in the current period, while deferring franchise expansion to the next time that the poor pose a credible threat of unrest, is sufficient to prevent revolution.
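The arithmetic behind this deviation can be illustrated with a stylized calculation. The payoffs below are hypothetical numbers of my own choosing, not the model's actual expressions: when the poor are sufficiently patient, accepting maximal redistribution today, with democratization deferred to the next credible threat of unrest, beats revolting now.

```python
# Stylized arithmetic for the deviation described above (illustrative
# numbers, not the Acemoglu-Robinson parameterization). The poor get
# flow payoff d under maximal redistribution, 0 in periods with no
# redistribution, and a one-time payoff R from revolution; a credible
# threat of unrest recurs with probability q; beta is the discount factor.

def value_of_waiting(d, R, q, beta):
    """Poor's value from accepting redistribution today, with
    democratization deferred to the next period of unrest.

    V_dem = d / (1 - beta) is the value of democracy (maximal
    redistribution forever). Before democratization the poor get 0,
    so the continuation value X solves X = q*V_dem + (1-q)*beta*X.
    """
    v_dem = d / (1 - beta)
    x = q * v_dem / (1 - (1 - q) * beta)
    return d + beta * x

d, R, q, beta = 1.0, 10.0, 0.3, 0.95
print(value_of_waiting(d, R, q, beta) > R)  # True: patient poor accept
```

With these numbers the value of waiting is about 18, well above the revolution payoff of 10, so the deviation is profitable for the elite, just as the correction shows.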

As I also overlooked this, the main text and a couple of exercises in Formal Models of Domestic Politics required correction. Done. And making lemonade out of lemons, there is some pedagogic value in the revised discussion. In my experience, the one-deviation property is typically not learned on first exposure. What better way to illustrate its application than to walk through a deviation sufficiently subtle that it was missed for nearly twenty years?

What else? Well, the reason I knew about this correction is that Evgeny discovered it just as our article on “Collective Action and Representation in Autocracies” was going to press. In that paper, by way of setting up the empirical work, we generalize the Acemoglu-Robinson model to allow for a continuous institutional choice, which serendipitously—and instructively—sidesteps the issue discussed above. There is a brief discussion of that in the second edition of Formal Models of Domestic Politics. I also riff a bit on equilibrium multiplicity in “global games,” building on work by Mehdi Shadmehr and Ethan Bueno de Mesquita, and I provide a couple of new exercises. But mostly the revisions to this chapter—in contrast to all other chapters in the text—are about fixing what was previously incorrect.

Veto Players (Second Edition)

In my last post, I discussed in broad terms work on a second edition of my textbook, Formal Models of Domestic Politics. Beginning with this post, I will lay out the specific changes I have made to the text.

For the existing chapters, most changes fall into one of three categories: new models, new exercises (which sometimes cover new models), and clarifications. The chapter on veto players has a bit of each. In the first edition, this covered a grab bag of models in which some players had the ability to block change in policy from the status quo. This included both the social-choice theoretic treatment of George Tsebelis, with its normative emphasis on “stability,” and Keith Krehbiel’s theory of pivots, with its mirror-image focus on “gridlock.” One contribution of the textbook, I believe, is to show the close relationship between these two perspectives. Beyond that, the chapter presented models of portfolio allocation in parliamentary systems, à la Austen-Smith and Banks and Laver and Shepsle, as well as bargaining between veto players and special interests, as in my work with Eddy Malesky.

For each of these models, the analysis assumes an exogenous status quo; the motivating question is what movement, if any, is possible from that policy. But whence the status quo? A good answer is that the status quo is the outcome of some unmodeled prior bargaining process—for example, when government spending is “mandatory” rather than “discretionary,” as in Bowen, Chen, and Eraslan. Yet just as veto bargaining in the past may affect political behavior in the present, so might veto bargaining today affect political behavior tomorrow. Taking this idea seriously requires an explicitly dynamic framework, in which the political outcome in one period becomes the status quo in the next.

The big addition to the chapter on veto players is thus a discussion of dynamic veto bargaining. But what to include? Here it may be useful to articulate a basic principle I have tried to follow: textbooks are not literature reviews. I make no claim of comprehensiveness in either presentation or citation, though I do try to get seminal contributions right. Rather, in approaching a topic, I am typically looking for a single model with a sharp message that can be written on a blackboard, with extra points if the mechanics are novel.

Wiola Dziuda and Antoine Loeper’s wonderful model of “Dynamic Pivotal Politics” fits the bill. In contrast to other work in the literature, the model emphasizes shifts in the preferences of veto players rather than their identity or agenda-setting power. For concreteness, think of two veto players—two pivots, if you will—who have “left” and “right” preferences, respectively. Much of the time the two actors will disagree about whether a shift from the status quo is desirable, but with a sufficiently large shock to the political-economic environment, their preferences will align. (Congressional action during the 2008 financial crisis may be a case in point.) Critically, whether the actors disagree tomorrow depends on the policy chosen today, which with the passage of time becomes the status quo. Each actor therefore has an incentive to reject changes to the status quo that from a purely static perspective are preferable, as by doing so they can lock in policy benefits in the future. In contrast to the environment of Krehbiel (or Tsebelis), gridlock in the Dziuda-Loeper model can be inefficient, as there are situations in which the status quo prevails even though the veto players would agree to overturn it, if only they could commit to the change for a single period.
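The lock-in incentive at the heart of the model can be conveyed with a two-period toy example. The payoffs below are my own stylized numbers, in the spirit of Dziuda and Loeper's dynamics rather than taken from their paper: a veto player rejects a reform it prefers today because, once enacted, the reform becomes tomorrow's status quo and cannot be undone.

```python
# A two-period, two-veto-player toy example (stylized payoffs of my
# own, not the Dziuda-Loeper model): Left vetoes a reform it prefers
# statically because the reform would become tomorrow's status quo.

def left_total_payoff(accept_today):
    """Left veto player's two-period payoff.

    Period 1 (crisis): the reform pays Left +1 relative to the status quo.
    Period 2 (normal times): preferences diverge again; the reform, now
    the status quo, pays Left -3, and Right vetoes any repeal.
    """
    if accept_today:
        return 1 + (-3)   # statically preferred reform locks in a bad status quo
    return 0 + 0          # gridlock today, acceptable status quo tomorrow

assert left_total_payoff(False) > left_total_payoff(True)
print("Left vetoes the reform it statically prefers")
```

If Left could commit to the reform for period 1 only, both players would be better off, which is the sense in which gridlock here is inefficient.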

As is typical, presenting this model in textbook form required some simplification, which in this case implied focusing on a two-actor, two-period setting. The first of these is just a special case of the N-player model in Dziuda and Loeper. The second is in fact a change to the environment, which in the original paper is an infinite-horizon setting. Moving to a finite horizon is not always my first instinct—I simplify the Acemoglu-Robinson model, for example, by stripping away much of the economy while retaining the dynamics—but in this instance it simplifies the analysis considerably, while retaining the core insights of the original model.

There are other approaches to dynamic veto bargaining beyond Dziuda and Loeper’s. In the exercises, I include Peter Buisseret and Dan Bernhardt’s model of the “Dynamics of Policymaking,” in which the identity rather than the preferences of veto players may shift over time. I like using exercises to introduce models that can be presented efficiently, and in this chapter I also add Chiou and Rothenberg’s discussion of pivots in a bicameral setting and Dragu, Fan, and Kuklinski’s analysis of constitutional review as a veto-players game. Oh, and I have tinkered with the original discussion of pivots, with what I hope is an even clearer presentation than before.

Next up: The chapter on regime change, which for surprising reasons required some fairly substantial revisions.