The Art of Modeling Dynamic Systems by Foster Morrison
Cities and Complexity by Michael Batty
Data Analysis Using Regression and Multilevel/Hierarchical Models by Andrew Gelman
Trying to survive the end of the semester. Dealing with a lot of (continuing) dislocations in my living situation, and with construction projects that make, say, the F-35 acquisition cycle look like a model of program management. So this post is a chance to come up for air before I dig in and finish the remaining two weeks.
Some big news: I’m moving from my current university and PhD program (international relations) to George Mason University’s School of Computational Social Science (CSS) for a PhD in CSS. I’ll be joining fun people like David Masad, Jackie Kazil, and Russell Thomas, and I’m excited to learn from pioneers of computational analysis like Robert Axtell and Maksim Tsvetovat.
To make it short: I learned a lot as an IR PhD student, but my research interests have changed, which won’t be a surprise to readers of this blog. Aaron Frank was kind enough to introduce me to the folks at GMU CSS in March. Nonetheless, there are some things I will really appreciate about my time as an IR PhD student: learning comparative politics, qualitative and quantitative research design, philosophy of science, social theory, and a broad-ranging view of IR theory.
I’m not sure I will stay in IR per se—right now it seems fairly limiting compared to the interdisciplinary work in the place I will be entering. But who knows? It’s what I was trained in, what I’ve pursued in some shape or form since college, and it’ll likely stay with me for the rest of my life even if I make my way in something more interdisciplinary.
The heading on this blog will thus change to “Doctorate in Computational Social Science.”
On to a more mundane subject: books received since last update (super-cheap used books sites for the win!)
Michael Woolfson, Mathematics for Physics
Peter E. Kennedy, A Guide to Econometrics
James P. Sethna, Entropy, Order Parameters, and Complexity
Cosma Shalizi, Advanced Data Analysis from an Elementary Point of View
—PhD dissertation, “Causal Architecture, Complexity, and Self-Organization in Time Series and Cellular Automata”
Diran Basmadjian, Modeling Physical Systems
SAGE Monographs, A Mathematical Primer for Social Statistics
Clarke and Primo, A Model Discipline
Shumway and Stoffer, Time Series Analysis and Its Applications
Simon Benninga, Financial Modeling
Nino Boccara, Modeling Complex Systems
Richard McElreath, Mathematical Models of Social Evolution
Patrick Billingsley, Probability and Measure
Geoffrey R. Grimmett, Probability and Random Processes
Geof Givens, Computational Statistics
Sheldon Ross, An Introduction to Probability Models
Craig Enders, Applied Missing Data Analysis
Mantegna and Stanley, Introduction to Econophysics
Frederick S. Hillier, Introduction to Operations Research
Barbara Tabachnick, Using Multivariate Statistics
Alexandre Chorin and Ole H. Hald, Stochastic Tools for Mathematics and Finance
Alex Smola, Introduction to Machine Learning
Laurens De Haan, Extreme Value Theory
Dennis Wackerly, Mathematical Statistics with Applications
William J. Kennedy, Statistical Computing
A lot of these are mathematics, science, and statistics books. Earlier entries recorded purchases of books (and downloads of free ones) on network analysis, complexity modeling, physics, ecology, mathematical finance, computer science, statistics, and political methodology. I have more books on the way about mathematical thinking (proofs, formal logic, etc.) as well as stochastic analysis.
I haven’t forsaken my research interests, despite all this methodological book-buying and downloading. But those interests are very amorphous at the moment, and knowing more of the underlying “how” of the methods will spur more creativity about how to structure my free-floating ideas. That, and reaching a point where lemmas and Greek letters excite rather than intimidate you is immensely satisfying. I have a mathematical notation dictionary that helpfully explains things like formal logic, set theory, and linear algebra.
In terms of my classes this semester, my final paper for my social theory class is (as per instructions) a debate between myself, Robert Cox, Emile Durkheim, Charles Tilly, Max Weber, and Fernand Braudel, structured as an ISA panel. One of the more interesting experiences this semester was finding Cox much more interesting than I expected going in. I’ll be looping in my own thoughts, influenced by Peter Turchin, the big history literature, my reading in computational social science, and Dan Sperber’s cultural model.
I also completed a large quantitative research project for a research design class; the paper is on the military effectiveness literature in IR. I learned a really hard lesson about real-world data analysis vs. idealized data analysis, particularly in making even venerable datasets like the Correlates of War behave. That, and my experience with Stata has reinforced my conviction to learn open-source statistical computing and programming tools so I will never again have to deal with some of my more ridiculous 4 AM debugging experiences.
And I’m currently finishing up a paper on regional IR institutional theory, comparing security organizations in Europe and Asia, for a class on that field. I have to admit that I found a lot of the literature lackluster, but the notions of the “region” and “metageography” did spur some interesting thoughts and research ideas for later on. I just wish much of it were more social-scientific, particularly in light of reading mid-00s paeans to ASEAN in the midst of some major regional crises in Asia right now.
I’m not entirely sure what I’ll do with my summer, beyond a lot of reading and self-taught programming. Taking some classes on statistics, programming, and machine learning would be fun. I’ll also try to edit my ISA 2013 paper into something publishable. I got good feedback from my discussants, and I think it was the first real academic paper (as opposed to policy and military- or police-specific theoretical writing) I’ve written for ISA that I feel comfortable with.
Finally, I’ve had some major challenges this semester from a host of convergent life difficulties that unfortunately decided to cluster themselves in this particular time and space. My outlets have been reading snippets of interesting books (I save the bulk of the actual reading for when I am free) and engaging on Twitter, as my free time to go out has drastically declined as a PhD student.
This blog has also been a valuable place for me to get out complicated and dense thoughts that don’t fit with my policy-oriented writing in other places. So thanks to everyone who reads and engages. It has helped get me through some very rough times and feelings that usually come fairly late in the PhD process—not the first year.
Now, back to those papers.
Exploring causal modeling and DAGs at the kabob place
Reading many of Turchin’s papers spurred me to re-read this and the more technical version. I’m much better equipped to understand it now that I have done a bit of reading in mathematical dynamics.
It turns out that I have some more material from my first ISA plane flight, which I did some light editing on after doing some reading. I was spurred to edit and post after having some mixed thoughts about this Guardian op-ed on social science. It does, however, ask a valuable question: why does social science have such a hard time explaining itself?
I’ve been thinking a bit about W.K. Winecoff’s blogs and tweets on the NSF defunding, as well as some of the faulty arguments for and against political science funding. So this post will take on social science more generally, with an emphasis on building on some of Winecoff’s and Phil Arena’s constructive critiques of the analytical problems with disciplinary justifications.
The op-ed makes a strong case to differentiate political science and focus on the solving of difficult problems:
If we don’t start to see how social science broadly differs from natural science it will be easy to relegate the former to a deservedly poor cousin of the latter. A better answer would be to focus on the nature of the problem domains that each of the many disciplines are engaged with – and to point out that social science is just harder because the data is more unruly. As Albert Einstein once put it “understanding physics is child’s play compared to understanding child’s play”.
To try to understand child’s play (or wellbeing, or conflict resolution, or social mobility, or the causes of crime, political persuasion, racism or, indeed, the end of the cold war) is to grapple with “wicked problems”. These, while critically important to analyse, are human problems which don’t often have right or wrong answers and don’t tend to offer up easy scientific laws. But they can have better or worse answers and their study can cumulatively deepen our understanding over time, even if the impact is often relatively slow, diffuse and hard won. Along the way social scientists often introduce concepts that articulate and frame public debates and encourage critical, nuanced thinking.
A social scientific scrutiny of the human, rather than natural, world doesn’t easily lend itself to generalisable laws, cast-iron predictions, nor can it always preserve a distinction between fact and value. Defenders of social science need to say that, and to argue that careful, theoretically and methodologically rigorous exploration of these subjects are fundamental to a healthy society even if finding unarguable evidence is extremely difficult.
I don’t think this is particularly helpful, and to explain why I’ll touch on a little intellectual history and some philosophy of social science.
The idea that rigorous, government-funded study of human beings could serve the public interest is actually fairly recent in human history—an artifact of modernity. There is a fair degree of evidence that the 19th century saw social change accelerate on a massive scale in many parts of the world. European states were also accumulating overseas empires and had become intimately aware of other peoples and their differing ways.
Such a rise in social complexity, and the rapid and disruptive shifts it created, generated demands that had not existed before for a science of man. Running a complex, industrialized society required forms of knowledge qualitatively different from the classical arts taught to elites since the tutoring of Alexander. Whether you are reading Auguste Comte or Emile Durkheim, the fathers of social theory thought humans and human organizations could be studied scientifically, in a manner similar to the emerging “hard” sciences.
Since then, social science has cast itself as a means of genuinely improving the human condition through scientific analysis. In the US, anti-intellectualism and populism are historically continuous, but social scientists were once respected in a way they are not now. They benefited from a general deference to technical experts that was once more plentiful in American society than it is today.
Herbert Hoover envisioned America’s adjustment to high modernity as a harmonious, managed society in which captains of industry, government, and technical experts would collaborate to manage the transition. Franklin Delano Roosevelt’s New Deal represented not a break with this vision but an acceleration of it in response to dire social conditions. The postwar consensus era was a boom time for experts of all stripes, who enjoyed broad deference.
But this consensus era was rooted in a rather fragile material-political foundation. What could very well have been political violence on a European scale was averted by elite agreement about the basic nature of economic, cultural, and political life that was rather unprecedented in American history. As George Packer noted in a Foreign Affairs op-ed, this agreement had a dark side as well: social conformity and politics as elite bargaining. Of course, this was also the time in which the modern, federally funded American research university really emerged.
But let’s move away from history for a bit and look at the Guardian op-ed again. It states a hoary cliche: that human beings are really complex, therefore the social sciences are different. The biggest difference, however, between social science and natural science is not that humans are too complex to quantify. This, like many humanist critiques, often sets up false oppositions between social science and natural sciences that face similar challenges of explanation and prediction. As Peter Turchin notes in his eloquent call for an analytical, predictive science of history, social scientists have a lot in common with natural scientists that try to trace out causally challenging relationships, complex and interactive systems, and aggregate macrobehavior.
The chief difference lies instead in the unique temptations faced by social scientists, and how those temptations can lead social scientists astray. To understand this, I’m going to once again return to Max Weber.
Weber strictly differentiated “science as a vocation”—the objective pursuit of scientific explanation—from “politics as a vocation”—the practical art of gaining a preferred political end. Like any Weberian concept, science and politics are ideal-types; the real world makes these sorts of distinctions fuzzy. In order to gain resources and status in the first place, social scientists had to claim that science would advance politics. The problem is that social science—as a form of scientific explanation—mainly offers post-hoc explanations for social events or probability-based predictions of future ones.
This is altogether different from the ability to use knowledge to effect large-scale political and social designs. Weber noted in his writing on objectivity in the social sciences that how we select topics of interest, and what we make of them, will always be shaped by our own value-perceptions. We can make scientific explanations and predictions, but those are not the same thing as a science of action.
It is one thing to achieve scientific consensus on an issue. But it is another to formulate a policy concept of what to do about it. The latter process inherently cannot be completely scientific (although it can be rigorous, which can occur without science) and always will be contestable. Political and social change produces winners and losers, and always threatens one set of values and beliefs while reinforcing another.
Social science as it should be practiced is often unpopular because it clashes with what humans want to believe about ourselves. We think that we know ourselves best, and that we should ultimately be the arbiters of our own destinies. The stories social scientists tell often clash with common sense, and often concern limits to human agency. Witness, for example, the clash between political scientists, the statistician Nate Silver, and the traditional DC political press. The political press thought they knew politics.
Years of pounding the pavement and chasing down stories taught them that they just knew the race was a toss-up, dagnabbit! Those pointy-headed professors and that stats nerd hadn’t gotten the access they had, never gone on a junket, and never had chased down a scoop. But the professors had decades of research on American politics that suggested that the race was not, in fact, a toss-up.
The belief in agency—and in knowledge as a tool that grants it—is so deeply ingrained that Dani Rodrik found himself questioning his own study of political economy because it did not allow for meaningful ideas about how to effect political change. “To change the world, we need to understand it,” Rodrik wrote. But there are a lot of intervening variables between knowledge of something and the ability to alter or manipulate it. Even GI Joe recognized that knowing is only half the battle.
To understand how deeply ingrained our refusal to accept this idea may be, consider Rodrik’s op-ed in the context of a rather crude analogy. Both human societies and “natural” subjects are complex and often put up fierce resistance to ambitious projects. But this is easier to recognize when the tough project is, say, sending an astronaut to Mars. We know that complex technical projects have “normal accidents,” that many things can go wrong in space, and that it is incredibly hard for humans to survive in such a hostile environment. Just as the sciences that underpin the applied practice of space travel tell us about the inherent difficulties, risks, and limitations of putting a man on Mars, the accumulated wisdom of political economy tells us about similar limits on the capacity of certain political projects.
Pause for a bit while I walk this back. I am not saying that political economy is the same thing as, say, celestial mechanics or the myriad other scientific disciplines that go into making a spaceflight to Mars happen. But assume for the sake of argument that the risk, uncertainty, and potential for unanticipated outcomes in flying to Mars and in, say, a process of macrosocial change are roughly equal. It is still more likely that we would accept a NASA official’s statement that putting a man on Mars is impossible while rejecting a political scientist’s assessment that ___ social change presents unacceptable risks. It would be unusual, at the very least, to see a critique of the science of the Mars trip based on the idea that the science, while sound, didn’t give us a means of going to Mars. Yet this is more or less implicit in Rodrik’s op-ed.
Up until this point, I’ve written about basic research. This is by no means an argument that applied research in social science is wrong or impossible. Social scientists, with their scientific training, can be extremely good at finding specific solutions to narrowly defined technical problems. For example, an international security expert with quantitative training employed at a defense research agency could develop a specific network targeting method capable of destroying a certain insurgent group. An economist could recommend a specific fiscal policy. Basic research creates fruitful areas for applied research defined by a client’s specific, targeted need. Scientifically rigorous basic research can yield targeted, customizable applications.
It’s also true that social science can be used effectively, within some strict limits, for probabilistic prediction. As crude as prediction may be relative to popular perception, it is not impossible. Human systems, like any other dynamical systems with interacting parts, are not immune to measurement and prediction. But anyone who has looked at the famous logistic map and its period-doubling route to chaos knows that the nature of such systems necessitates some epistemological modesty about what kinds of prediction can be achieved. An extreme best case for social science would be to achieve the rigor of weather forecasting, a field still characterized by substantial uncertainty.
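To make the logistic map’s lesson concrete, here is a minimal Python sketch (my own illustration, with illustrative parameter values, not anything from the sources discussed here). At r = 3.2 the map settles into a stable period-2 cycle; at r = 4.0, two trajectories that differ by one part in a billion at the start diverge completely within a few dozen iterations.

```python
def logistic_orbit(r, x0, n):
    """Iterate the logistic map x_{k+1} = r * x_k * (1 - x_k) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Period-doubling regime: the orbit settles into a stable 2-cycle.
stable = logistic_orbit(3.2, 0.2, 300)
print(stable[-2], stable[-1])  # the orbit alternates between two values

# Chaotic regime: a one-in-a-billion difference in initial conditions
# produces completely different trajectories within ~30 iterations.
a = logistic_orbit(4.0, 0.2, 50)
b = logistic_orbit(4.0, 0.2 + 1e-9, 50)
print(max(abs(x - y) for x, y in zip(a, b)))
```

Sensitivity of this kind means that even a perfect model cannot deliver long-horizon point predictions from imperfectly measured initial conditions: that is the formal version of the epistemological modesty at issue.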
The sad case of the Italian geoscientists convicted for failing to predict a devastating earthquake also illustrates that most people do not understand the concept—and limits—of scientific prediction. As Clarke and Primo note, many political scientists do not necessarily understand the concept of models either. Even fewer people on Wall Street understand the role of models as heuristics—maps, not the territory:
The world of markets doesn’t exactly match the ideal circumstances Black-Scholes requires, but the model is robust because it allows an intelligent trader to qualitatively adjust for those mismatches. You know what you are assuming when you use the model, and you know exactly what has been swept out of view.
All models have a lot of things “swept out of view,” and model thinking goes hand-in-hand with intuition, substantive knowledge, and practical judgment. And we know that humans, while not “irrational,” are not really good in practice at using all three in an optimal manner.
Finally, even the best prediction does not create the consensus necessary for political or social action. The literature on intelligence and strategic surprise is filled with political leaders blocked from acting on useful information about impending attacks by internal political divisions or larger geostrategic calculations. Nor does prediction naturally endow leaders or military commanders with the wisdom or skill to properly exploit such information.
But what about normative/critical theory, you ask? Well, even a theory intended to shed light on the sources of oppression in order to help humans grow more free must not confuse the scientific explanation of oppressive discourse with a plan to overcome it. Reading Foucault will give you, first and foremost, a historical understanding of how certain power structures came to be. It is not intended as a manual for how to overcome said social forces.
When social scientists move toward the engineering of social or political change—governance, policy, activism, and other vehicles of politics—they often find themselves on equal footing with other professionals who have developed a long base of expertise through less self-consciously scientific, experiential, project-based learning. An academic who specializes in, say, Afghanistan is not necessarily a better policy adviser than a native politician or a militarily skilled warlord.
Tacit knowledge, or local knowledge, can be just as powerful as scientific knowledge in dealing with “wicked problems.” Most specialists in “bridging the gap” recognize this. Alexander George prescribes a minimalist role for political scientists in his book on policy relevance in academia: the academic makes the policymaker aware of scientific knowledge in an effort to help frame the policymaker’s intuitive practice. But if it matters at all for the policymaker, it is as heuristic knowledge that will never really be applied in a scientific manner.
The problem is that many social scientists have never really recognized the distinction between scientific explanation and prediction on one hand and the far messier and more difficult process of politics on the other. And social scientists, because their expertise is the social world, have a unique opportunity to occupy positions of authority over politics and society that other scientists do not. Hence the temptation to confuse science for politics, or politics for science, is greater.
Confusing science for politics encourages theory to become “theology”—more influenced by what ought to be than by what is. Besides being unscientific, such theories also tend to be structurally inconsistent: they offer elaborate accounts of the constitution and dynamics of social forces while nonetheless hewing to an optimistic view of human agency. If the social world is so complex and heterogeneous, then the ability to exercise control over it is by definition limited.
Some try to get around this problem by limiting their theoretical scope to variables a policymaker can control, as opposed to detailing all of the relevant causal factors. For example, the Pentagon was extremely receptive to technology- and organization-centered variants of the Revolution in Military Affairs/Military Revolution literature. Technology and organization were things the US government could shape—as opposed to the thornier problem of generating the political and social change that military historians see as feedback mechanisms that, in concert with technology and organization, create advances in the character of warfare. Eliot Cohen also threw another wrench into the works by noting that the RMA’s connection to the external geopolitical system—another variable the USG could not hope to control—was undertheorized.
The social world is ruled by process and structure and yet remains contingent. That makes human agency—and using knowledge to advance agency—fraught with difficulty. Human agency certainly doesn’t disappear, but academics are generally not the kind of people best equipped to exercise it. Tetlock’s famous study of foxes vs. hedgehogs is, in the end, a story of experiential knowledge and heuristics trumping formal expertise. This isn’t because the “common man” is better or smarter than a PhD, as some have wrongly interpreted it; it is that being adaptable to change and skilled at creating change is distinct from scientific explanation and prediction. Knowledge helps set the stage for action, but action itself follows the logic of practice. The best practitioners possess the right combination of technical skill and the ability to continuously reflect on and reassess the application of said techne.
Now to return to my historical tale. That social scientists could be regarded as uniquely capable of contributing to the common good was an artifact of a time in which experts enjoyed wide deference. Now we live in a different era, distinguished by rabid populism and suspicion of all scientific knowledge. This is a time in which substantial proportions of the US population believe in aliens, giant lizards, the New World Order, black helicopters, vaccines that supposedly cause autism, and intelligence agencies deliberately manufacturing crack cocaine.
How this change happened is, like any social process, complex and thorny. But it means that the deference social scientists once enjoyed is gone. If many people are willing to doubt the scientific consensus on vaccines and autism, what hope is there for social science to regain its former prominence in the current climate? Not everyone has an opinion on vaccines. But everyone, after all, has an opinion about politics or society.
The Guardian op-ed, to me, offers exactly the wrong advice. It tells social scientists to reject their strengths: scientific explanation and prediction. It cannot decide whether it wants to emphasize science or politics, noting that so-called “wicked problems” have no right or wrong answers, only better or worse ones. Sure, this is good and correct, but…
In the practical world political scientists want to influence, this is a distinction without a difference. “Better” is, from the perspective of a harried policymaker, “right,” and is often remembered as such—especially if it leads to what is politically defined as an optimal outcome. This is how, for example, an ambiguous venture like the Iraq War surge gets cast as a Caesar-in-Gaul-style military triumph rather than a gamble that was better than the alternatives and could, depending on many variables outside American control, easily have failed.
Sure, we can plausibly argue that better knowledge of the social world is necessary for a healthy society. But if we let the dependent variable be “healthy society” and the independent variable be “knowledge,” we would likely find a fairly weak correlation at best. The op-ed cannot make a strong case for social science without repeatedly qualifying itself. That is a bit of a downer in terms of the optics of crafting a persuasive case for why we need social science.
So what might we do to adapt to these trends? Well, it would be great if there were better mechanisms for doing applied research that could produce tangible products. This, not necessarily policy relevance per se, is what political science and IR need more of. In essence, a political science equivalent of corporate applied technical research groups that could turn out discrete technical widgets and solutions.
Much of the energy and engagement that could have gone into useful applied research has been misdirected toward overly broad conceptions of policy relevance and gap-bridging that emphasize participation in the professional wrestling match that is domestic and foreign politics as the chief criterion of relevance. Gap-bridging should not be seen as beneath political scientists; it is useful and rewarding.
But experienced, policy-fluent academics like George rightly emphasize a far more minimalist conception of policy engagement than more policy-oriented academics who blame their colleagues for not essentially doing more politics. That’s not bridging the gap—it’s crossing it to the other side. Bridge-crossing is not bad, as long as it is recognized as such and embarked on with the knowledge that Tetlock’s foxes hold some powerful advantages.
Second, I do not recall who first made this analogy, and I’m much too tired to look it up, but political science and IR need good popularizers like Neil deGrasse Tyson or Carl Sagan, the latter of whom was tremendously effective at introducing Americans of all generations to the beauty of the universe. Stephen Hawking performs this function today. A popularizer would communicate—in beautiful, simple language—what is so fascinating about the political and social worlds that it motivated us to devote so much sweat and tears to getting our doctorates, and would also engage in politics to rebut challengers.
Whatever the flaws of Freakonomics or Malcolm Gladwell’s books, they sell well, and their simplifications of the research are not nearly as bad as, say, Tom Friedman’s various adventures in Flatland. Gladwell and the Freakonomics authors, unlike Friedman, are not heavily engaged as advocates for particular policies and worldviews; they derive more success from simply introducing novice audiences to the fascinating nature of economic and behavioral science. And unlike Fareed Zakaria, the popularizer should be clearly seen as a working political scientist rather than a journalist.
I freely admit that both of these ideas are weak, and perhaps as weak as the Guardian op-ed I’ve critiqued. But it’s close to 5 AM EST, and political science won’t be saved in a single night. Consider this just a sketch for some future thoughts on this subject once I’ve advanced further in my studies.
On the flight back to DC from the International Studies Association, I read a blog post about one of my favorite non-polisci fields, ecology. It finally gave me an outlet for some long-brewing thoughts I’ve had since Steve Saideman, Dan Tdaxp, David Lake, and others reflected on the decline of “grand theory” in international relations. I spent all of today on a thoroughly awful air trip with screaming and sneezing babies, heavy turbulence, and a set of unruly Korova Milk Bar-esque teens in my row who seemed determined to wage a war of attrition against the flight staff. So I ended up just saying “oh, what the hell” and free-writing my long-suppressed thoughts while dodging reams of infant germs and protecting my laptop screen from spilled coffee as the plane rocked violently to and fro.
This is a TL;DR post with equal parts personal anecdote and arcane intellectual discussion. Steer clear unless you really are interested in some foundational philosophy-of-science and methods issues in international relations and strategic studies. Since you’re reading this research blog, I assume you are.
So what’s the kicker? Ecology blogger Jeremy Fox lambasted the eminent sociobiologist E.O. Wilson for declaring that a solid grasp of math is not essential to doing great science. The most important parts of science, Wilson argued, are creativity, intuition, and theory construction; scientists should focus on those and just add a statistician to do the grunt work. Wilson pointed to Charles Darwin as an example of a great scientist who never used an equation.
I should begin by noting what Wilson gets right here. First, theory absolutely precedes and dictates methods. As Phil Arena notes, empiricists cannot escape theory. Second, great scientists have created useful theory with impressionistic methods or without the benefit of any kind of higher education whatsoever. One of the fathers of modern genetics was a friar for whom the scientific study of inheritance was not by any means a “day job.” And interdisciplinary work is the norm in most sciences, even if it is not encouraged in political science or IR.
I think Fox is a tad bit too harsh on Wilson, who notes that he did swallow his pride in the end and learn calculus. As someone who has struggled with quantitative topics in the past and is making up for lost time with my own self-study, I can certainly relate to Wilson’s personal story of constant struggles with Greek symbols and equations. I appreciate his desire to help students that want to study scientific subjects but lack quantitative facility. But many of his points are at best banal truths and at worst misleading.
Perhaps the first place to start is Wilson’s *meh* towards quantitative skills. It certainly tracks with a larger environment in which celebrating one’s ignorance of mathematics is socially encouraged. It’s hard to find people in academic settings who brag openly about flunking composition classes, English, or IR 101. But math? Who needs it anyway?
Of course, math is just a language we can use for simplifying relationships and making them more internally consistent. It is one of many communicative languages in academic research. Code, statistics, and formal logic all grew out of the raw power offered by mathematics to represent certain kinds of complex relationships simply. They are also all translatable into verbal description. People familiar with the history of analytic philosophy can understand in particular the role of formal logic in fields ranging from metaphysics to ethics.
Verbal descriptions—like any kind of language—have many advantages. But they also carry important limitations in their ability to represent certain kinds of processes. I often hear the lazy critique that “X is too complex to quantify” when I advance this line of argument offline (and sometimes on Twitter), but the truth is that there are some subjects that are too complex to describe in words alone. This is why nonlinearity—beyond mere metaphor—is commonly expressed in the mathematical language of dynamics. Even generative simulations for mathematically intractable problems depend on knowledge of concepts like the Lorenz curve—key to Robert Axtell and Joshua Epstein’s “Sugarscape” method of agent-based modeling of inequality.
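To make the point concrete: the Lorenz-curve logic behind Sugarscape-style inequality measures is clumsy to state in prose but trivial in code. Here is a minimal sketch of my own (the function names and the trapezoid-rule shortcut are my illustration, not Epstein and Axtell’s actual code): the curve plots cumulative population share against cumulative wealth share, and the Gini coefficient is twice the area between that curve and the 45-degree line of perfect equality.

```python
def lorenz_points(wealth):
    """Cumulative (population share, wealth share) points of a Lorenz curve.

    Assumes at least one agent holds positive wealth.
    """
    w = sorted(float(x) for x in wealth)
    total = sum(w)
    n = len(w)
    points = [(0.0, 0.0)]
    running = 0.0
    for i, x in enumerate(w, start=1):
        running += x
        points.append((i / n, running / total))
    return points


def gini(wealth):
    """Gini coefficient: twice the area between the Lorenz curve and equality."""
    points = lorenz_points(wealth)
    # Trapezoid rule for the area under the Lorenz curve.
    area = sum((y0 + y1) / 2 * (x1 - x0)
               for (x0, y0), (x1, y1) in zip(points, points[1:]))
    return 1 - 2 * area


# Perfect equality gives 0; one agent holding everything approaches 1.
print(gini([1, 1, 1, 1]))    # 0.0
print(gini([0, 0, 0, 100]))  # 0.75
```

The verbal gloss (“the curve bows away from the diagonal as wealth concentrates”) and the ten lines of code say the same thing, but only the latter can be checked, replicated, or run on a million simulated agents.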
I suppose in a nation where monolingualism is the norm, the idea of eschewing a critical language for social research because it’s hard and different isn’t anything unusual. That, and my own experience of STEM classes, like Wilson’s, was mostly painful and unsuccessful until a fairly late time in my education (it’s still painful, just vastly more rewarding). But the problem really does come down to language. And here we come to Wilson’s idea that an aspiring scientist can really farm out most of the actual computation to what is essentially a human calculator.
But let’s flip this around. A China expert would not be taken seriously without the ability to converse and research in Mandarin, or at the very minimum speak good enough Mandarin to do extensive fieldwork in China. Sure, they could hire a native speaker to do that grunt work for them. Good, productive research could be done. But how could they know whether their partner’s interpretation of, say, a crucial historical document from the Cultural Revolution was valid? The researcher takes a very big gamble in entrusting a crucial aspect of their project to someone whose methods and techniques they cannot understand, validate, or verify.
When I read Wilson’s op-ed, I visualize a scientist generating a theory and then saying to his partner, “OK, now insert statistical model here in that Matlab thingy of yours” before he goes off to battle foul-mouthed preteens in Black Ops 2. To use an out-of-left-field analogy: Lincoln’s job was to craft the grand design. But he took an active interest in military affairs and stretched himself in order to articulate exactly what he wanted Grant to accomplish.
Additionally, Razib Khan relates that Wilson himself ended up attaching his name to a paper that he may not have completely understood. I have been proud to work on Mexican organized crime with John P. Sullivan with the full knowledge that I will never know as much about narcos as him or Robert Bunker. But I would not have felt comfortable working with either of them if I didn’t understand the core aspects of their theories. Why should math be any different?
Indeed, I also empathize with Fox’s frustration with a skill-based double standard as expressed below:
And if you say, “So how come theoreticians write for that narrow audience, instead of making more effort to communicate with the non-theoreticians who want to test their theories?”, my response is, “How come non-theoreticians don’t learn more math?” Understanding is a two-way street. And understanding, not citation, or “impact”, or even “communication”, is the ultimate issue here. Yes, absolutely, theoreticians vary in how good they are at helping readers understand their work.** But exactly the same thing is true of non-theoreticians! Any scientific paper, theoretical or not, necessarily assumes a lot of background knowledge on the part of the reader. Scientific papers are written for people with Ph.D.’s, not undergraduates. Many Ph.D. ecologists’ and evolutionary biologists’ last math class was a now-forgotten first-year undergraduate calculus course. Why should theoreticians be under a special obligation to write their papers at that level? Why are non-theoreticians not under a similar obligation to write their papers for an audience whose last courses in natural history, field methods, zoology, botany, statistics, etc. were early in their undergraduate careers?
There is no way around the fact that academia is really, really hard. The “soft sciences” are in fact “hard” sciences that demand a diverse base of expertise because their subject—the individual, social, and political behavior of human beings—is studied at a frenetic and sometimes soul-crushing level of intensity befitting the social complexity of human civilization. Human social life both influences and is influenced by a multitude of things that frustrate and ruin intellectual monocultures. A political scientist that doesn’t have at least a passing grasp of modern economics, foreign policy/national security, the philosophy of science, and social and political theory will struggle to truly understand the thrust of contemporary IR. But people who can effortlessly analyze complex social processes somehow still freak out at the sight of a summation symbol or a sigma.
In particular Wilson places a great deal of emphasis on creative and intuitive processes of observation and deep immersion into a subject. But he unfortunately separates this process from method, neglecting some crucial things about how we frequently go far beyond description to make useful discoveries. I’ll let Fox, again, take it from here:
Wilson says that theory is only useful when it describes the actual world. To be useful, theory has to concern “the possible permutations that actually exist on earth.” ….The claim that only descriptions of our actual world are useful, everything else being an irrelevant hypothetical, will come as news not only to every theoretician, but to every empiricist who’s ever conducted a manipulative experiment. The way you learn how the world works is by manipulating it so that it works differently than how it actually does? Because that’s what an experiment does–it creates unrealistic conditions, conditions that quite literally would not have occurred if not for the intervention of the experimenter. That’s the whole point of experiments. And it’s the whole point of piles of mathematical models too. When Fisher famously asked “Why does most every sexual species only have two sexes?”, he answered that question by modeling species with three or more sexes, a condition never observed in nature. When Fisher asked “Why is the sex ratio usually 1:1?”, he answered that question by modeling what would happen if it wasn’t 1:1.
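Fisher’s sex-ratio logic is easy to demonstrate computationally. Below is a toy replicator-dynamics sketch of my own (the strategy values, starting frequencies, and generation count are arbitrary illustrations, not Fisher’s formalism): each heritable strategy p is the fraction of sons a parent produces, and because a parent’s expected share of grandchildren is proportional to p/m + (1 − p)/(1 − m), where m is the population-wide son fraction, producing the rarer sex always pays.

```python
def fisher_sex_ratio(strategies, freqs, generations=200):
    """Deterministic replicator dynamics for Fisher's sex-ratio argument.

    Each strategy p is the fraction of sons a parent produces. With the
    population producing sons at overall rate m, a parent's expected
    share of grandchildren is proportional to p/m + (1 - p)/(1 - m),
    so the rarer sex is always the better investment and selection
    pushes the population sex ratio toward 1:1.
    """
    f = list(freqs)
    for _ in range(generations):
        # Population-wide fraction of sons this generation.
        m = sum(fi * p for fi, p in zip(f, strategies))
        # Fitness of each strategy given the current sex ratio.
        fitness = [p / m + (1 - p) / (1 - m) for p in strategies]
        # Replicator step: strategies grow in proportion to fitness.
        total = sum(fi * w for fi, w in zip(f, fitness))
        f = [fi * w / total for fi, w in zip(f, fitness)]
    return sum(fi * p for fi, p in zip(f, strategies))


# Start with 80% of parents following a heavily son-biased strategy;
# the population sex ratio is nonetheless driven back toward 0.5.
mean_ratio = fisher_sex_ratio([0.2, 0.5, 0.8], [0.1, 0.1, 0.8])
```

The punchline is exactly Fox’s: the model earns its keep by exploring counterfactual sex ratios that we rarely or never observe in nature.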
Description is a useful tool, but if quantitative methods are often criticized for trying to simplify the complex, such critiques often implicitly betray an equally dubious and naive faith in verbal description to capture causality and social complexity. Fox, in his critique of a seminal paper by the self-consciously literary Stephen Jay Gould, provides an example of just how wrong qualitative scholars without firm ideas of method can go. Gould simply argues by analogy and metaphor, often without a firm grounding in the original fields from which he draws said analogies and metaphors. Without rigor and discipline, intuition and creativity often become the basis for theories that are muddled beyond recognition.
The catch is that untangling the muddle requires wading through confused text and verbiage. With the data or code used in, say, a logit regression on military effectiveness, one can at least catch sloppy work by attempting to replicate the results. Basil Liddell-Hart’s deceptive claims that he was the father of Blitzkrieg, however, fooled military historians for decades. For every “garbage in, garbage out” formal model, there is a muddled social theory that allows a scholar to cast themselves as Morpheus giving a cast of Neo-like followers a just-so story disguised as the blue pill. That is, as Fox points out, exactly what Gould did under the guise of criticizing “adaptationist” just-so stories.
This isn’t to say there aren’t difficult intellectual problems of theory, logic, and evidence to sort through. Social studies of science, as well as critical approaches to social science, do not invalidate the idea of the scientific method. But they do point to the immense barriers to practicing good science not unduly biased by larger social prejudices, power relations, and paradigmatic fault lines. Again, it should be emphasized that the kind of sensemaking, heuristics, and interpretation Wilson alludes to is useful and even crucial to every part of the process of research and analysis. But this brings me to what is probably the most problematic derivation of Wilson’s separation of theory from method: the valorization of grand theory without recognition of its inherent difficulty.
Grand theory is theory that is, well, grand. And Wilson drastically underrates just how complex, difficult, and, for most people, well-nigh impossible truly useful grand theory can be. Figures such as Darwin are rightly recognized because they are rare, just as there is a distinct paucity of Great American Novelists, NBA players, Carnegie Hall-worthy violinists, and supermodels relative to the population attempting to reach such rarefied heights. The failure rate of revolutionary science is incredibly high. It is also, as Fox notes, much easier to accomplish grand theory when the intellectual community is small, the literature homogenous, and the tools of inquiry much less sophisticated. For E.O. Wilson to cite Darwin as an example suggests a lack of contextual appreciation for why a modern successor to Darwin would not do as he did.
Perhaps the most ironic thing about Wilson’s advice to young scientists, as Fox notes, is that Darwin himself stretched himself to learn many other fields to better his own. Grand theory is a work of active synthesis that requires intellectual engagement with a dizzying array of fields. So, paradoxically, in telling his hypothetical young student just not to worry about the maths and let the statisticians handle it, Wilson is undermining the very cross-disciplinary learning that his young charge would have to accomplish in order to actually make grand theory. Most grand theorists were intellectual sponges, sucking up whatever knowledge they came into contact with. An a priori rejection of an important discipline or tool—or, worse yet, delegating all of the thought processes related to it to a glorified technician—is not a recipe for making grand theory.
Several examples of this can be found in IR and strategy. One dominant problem with structural realism is its reliance on fairly aged and fragile concepts from economics. Kenneth Waltz, though inspired by many disciplines, borrowed fairly heavily from the economics of his day. The 2008 crash helped accelerate nearly 30 years’ worth of criticism of dynamic stochastic general equilibrium theory. It says something that an entire discipline—international political economy—has been invented largely to explain why real-world global macroeconomic outcomes do not agree with predicted outcomes in economics textbooks.
So if economics isn’t the ticket (or at least the economics of Waltz’s day), what is? You might *gasp* have to read a physics textbook! But that is exactly what one of the fathers of constructivism—Alexander Wendt—did for his new book project.
Similarly, when Carl von Clausewitz set out to make his grand theory of the nature of war, he did not turn to a retrospective study of military history alone. He borrowed metaphors from the then-dominant science of Newtonian mechanics to help him generate a framework for thinking about political-military interactions. But I suppose a man who was lucky enough to survive one of the most destructive pre-WWI European systemic wars without any serious mental or physical injuries did not see stretching himself to create something genuinely new and interesting as inherently Mission Impossible.
The problem with the kind of “grand theory” in international relations whose decline John Mearsheimer and Stephen Walt lament in their paper is that it leads to a dead end. Mearsheimer and Walt’s structural realism has run into so many empirical roadblocks over the last 30 years that it would take another post to even begin to summarize the literature. Grand theorists had difficulty answering historian John Lewis Gaddis’ critique of why IR theory missed the largest and most important event of the 20th century—the end of the Cold War. Likewise, Barry Watts has pointed out that national-level theories of coercion also get fundamental elements of causality in war wrong.
No one is served by what essentially became a battle of dueling political theologies. Critical theorists were right that neorealism’s “is” often becomes an “ought,” in large part because prominent structural realists such as Walt have assiduously resisted the formalization of their concepts, a step that might help cast some light on their internal consistency. But this is not to bash neorealists alone. I can think of just as many liberal or constructivist ideas that have weak microfoundations, missing causal mechanisms, and lack empirical validity.
On the subject of strategy, one can see even more problematic derivations of grand theory. I vehemently disagree with On Violence’s post on strategy and intellectual progress. Clausewitz is still the best work on the nature and general conduct of war. Of course, the ancillary implication of this is that virtually no one has tried to actually build on Clausewitzian ideas through valid methods of scientific inquiry invented after the 1830s.
Why? It’s not just intellectual conservatism. In fact, the biggest problem may be the exact opposite. The majority of “new theories” of war leap straight for the “grand theory” level without the process of normal science—a process that might enrich Clausewitzian strategic thought and take it to a new level. It is no wonder, for example, that the vast majority of people who have tried to surpass the old Prussian have failed—the failure rate of revolutionary science is high and many competitors skip the crucial step of first operationalizing the dominant approach and giving it a fair analysis.
Instead, Liddell-Hart, Martin van Creveld, and the “New Wars” movement simply dynamited the entire house from foundation to roof without at least bothering to inspect, explore, and incrementally modify its contours. It’s just grand theory against grand theory. It’s not a recipe for making strategy a field capable of intellectual progress beyond the day-to-day (and budget-influenced) battles of landpower, airpower, and seapower theologies.
Grand theory is useful. The best parts of grand theory are indeed creativity, intuition, and synthesis. But there is also the unsexy work of model-checking, validation, and verification. They are symbiotic, and it is difficult to have one without the other. Separating theory from method is simply a recipe for methodologically disjointed grand theory and mindless methodological rigor without a useful research question or design.
Lastly, grand theory is also largely an elite privilege. As Khan notes, E.O. Wilson, as a venerable old scientist, has options to find mathematically skilled collaborators to operationalize his theories that others less senior may not. Similarly, Steve Saideman and Dan tdaxp both wrote about the way that the old world of grand IR theory was a difficult place to succeed for those who lacked access to an old-boy network. Great work if you can get it, but even pop-thinkers like Gladwell recognize the structural conditions necessary for outliers.
There are a lot of things wrong with the study of international relations today. From differing angles, Patrick Jackson and Phil Schrodt capture some of the philosophy of science and methodological considerations that need to be tackled going forward. But we should look at the overwhelming positives. There is space to actually advance theory without passing an ideological litmus test of whether it justifies grand theories of realism, constructivism, or liberalism.
People like my blogfriends W.K. Winekoff and Anton Strezhnev are the future, looking at exciting new ways of conceptualizing macro and micro elements of the international system. More established players like Wendt, Lars-Erik Cederman, and Bear Braumoeller are looking at ways to take systemic IR theory beyond the confines of the 20th century social science model.
And I thank E.O. Wilson, whose work and legacy I respect despite this critique, for giving me a prompt that helped me really work through all of these somewhat disorganized reflections. I suppose I should also thank a bunch of thuggish teenagers, snot-nosed babies, and the specific weather conditions that ensured a nearly non-stop roller coaster ride from San Francisco to BWI. There’s opportunity for both grand and “normal” theory. We just have to avoid the kind of advice that E.O. Wilson is offering on this specific issue of theory and method.
The back of the Starbucks line. Political scientists and caffeine are a bad combo.
A true classic, so far.