What is it that the most senior national security policymakers want from international relations scholars? An answer to this question matters, write Paul Avey and Michael Desch, because there has been recurrent interest among policymakers in drawing upon academic social science expertise in support of more effective national security policymaking.
Despite this high-level interest, there has also been enduring frustration on both sides of the “theory-policy gap” with our inability to bridge it. One of the primary obstacles to building this bridge is the lack of systematic data about when and how academic social science is useful to policymakers.
- This post originally appeared on The Monkey Cage. The introduction above has been adapted from the opening paragraphs of the original piece.
As early as 1971, a National Academy of Sciences study concluded that “what are required are assessments of the research needs and resources from the point of view of policymakers” (Advisory Committee on the Management of Behavioral Science Research in the Department of Defense, 1971: 28).
Working with the Teaching and Research in International Politics (TRIP) project at the College of William and Mary, we have taken a first step to get a better sense of when and under what conditions policymakers pay attention to the work of academic social scientists. Our unique survey of nearly 1,000 current and former national security decision-makers (of whom 25 percent responded) provides the most systematic evidence to date of what the highest-level national security decision-makers want from academic international relations scholars.
Among many other things, we asked about the usefulness of various approaches or methodologies for conducting social science research. For policymakers, the most useful approaches included area studies, contemporary case studies, historical case studies and policy analysis (see Figure 1). As one respondent put it in the open-ended responses, “most of the useful writing is done by practitioners or journalists. Some area studies work is useful as background material/context.” Another wrote that “any analysis (e.g., in area studies) that gets at the UNDERLYING causes, rather than current symptoms, of problems has deep policy value.” A third listed “case studies — Kennedy School, Maxwell School, Georgetown-Pew” as an example of social science research that has been, is, or will be useful to policymakers in the formulation and/or implementation of foreign policy.
Conversely, the more sophisticated social science methods such as formal models, operations research, theoretical analysis and quantitative analysis tended to be categorized more often as “not very useful” or “not useful at all.” Indeed, the only methodology that more than half of the respondents characterized as “not very useful” or “not useful at all” was formal models. To be sure, one respondent observed that “the work of scholars such as Howard Raiffa and Thomas Schelling in the area of game theory and systems analysis has been of great utility.”
But more typical were the negative responses to our open-ended invitation for policymakers to “list an example of social science research that you believe has NOT been, is NOT, or will NOT be useful to policymakers in the formulation and/or implementation of foreign policy.” Some replies:
— “Most formal modeling”
— “Large-N studies”
— “The time spent on computer modeling of international systems or conflict resolution is a complete waste. Much of the theory work is as well”
— “Highly theoretical and quantitative analysis that seems to be more concerned about the elegance of the model than the policy utility”
— “Many micro-economic models and fitting of history into larger theories is not very useful. Many professors do not want to influence contemporary policy.”
— “Highly theoretical writings[;] complex statistical analysis of social science topics (except Economics). Writings that use arcane academic jargon”
— “Most any quantitative study; virtually every article in APSR”
— “Formal/game theoretical work and quant in Political Science — most of what passes as ‘methodologically sophisticated’ international relations work.”
One exception to policymakers’ aversion to quantitative social science was in the area of public opinion analysis. One respondent included among “useful” approaches “public opinion research/analysis of foreign audiences by whomever.” A second argued that “polling data and its analysis is perhaps the most basic and certainly among the most useful such products.” Multiple policymakers specifically cited the Pew Research Center as doing useful survey research. And another agreed that “opinion polling can be very useful in trying to determine what populations think, especially in countries where freedom of expression is limited.”
Overall, we find that policymakers do regularly follow academic social science research and scholarship on national security affairs in hopes of drawing upon its substantive expertise. But our results also call into question the direct relevance to policymakers of the most scientific approaches to international relations. It is worth noting, as the TRIP project reports, that the majority of articles in top international relations journals utilize these approaches, and graduate programs increasingly emphasize formal and quantitative training. To be clear, we agree with Farrell that “methodological sophistication is not a natural enemy of public interest.” Rather, we are making a more nuanced argument: that policymakers often find contemporary scholarship less than helpful when it employs such methods across the board, for their own sake, and without a clear sense of how such scholarship will contribute to policymaking.
Farrell raises several additional reasonable questions about our findings, many of which, as readers will see, we acknowledge in the piece itself. For example, it is highly likely that the senior national security policymakers of 10 years from now will have very different attitudes toward the Internet as a source of information. Indeed, our results showed that policymakers rely heavily on newspapers, which suggests that blogs associated with prominent newspaper outlets – such as FiveThirtyEight and the Monkey Cage – could have a great deal of influence in coming years (if they do not already). In addition, while we did not survey high-level domestic policymakers, we have come across anecdotal evidence that they have on occasion found relevant the scholarship of colleagues who use the most sophisticated techniques, including one of our colleagues here at Notre Dame. It would be useful, then, to have more systematic data on when and under what conditions such work is of use. Perhaps our approach might be replicated on that sample?
Such information would be invaluable to political scientists who would like not only to produce rigorous scholarship but also to have it influence policies that they care about. Likewise, at a time when there is a widespread sense that money spent on political science might be better spent on finding a cure for cancer, it would be useful to have hard data showing that, in fact, our scholarship can advance the commonweal.
[…] Original source – Manchester Policy Blogs […]