The publication of King, Keohane and Verba’s ‘Designing Social Inquiry: Scientific Inference in Qualitative Research’ (hereafter KKV) popularized many methodological terms in political science, such as descriptive and causal inference, and contributed to a more systematic approach to research design in the discipline.
Since its publication, however, qualitative researchers in political science have moved far beyond the implicit regression assumptions behind KKV’s recommendations. In this World Politics article, James Mahoney reviews five important books on “new qualitative methods” in political science, all of which attempt to go beyond KKV:
- Henry Brady and David Collier eds (2004), Rethinking Social Inquiry: Diverse Tools and Shared Methods
- Alexander George and Andrew Bennett (2005), Case Studies and Theory Development in the Social Sciences
- John Gerring (2007), Case Study Research: Principles and Practices
- Gary Goertz (2006), Social Science Concepts: A User’s Guide
- Charles Ragin (2008), Redesigning Social Inquiry: Fuzzy Sets and Beyond
As we have been discussing throughout this course, the new qualitative research paradigm adopts analytic tools to make causal inference using the “case study” method. This is not to suggest a consensus within qualitative political science (for example, these approaches are interested in causal inference and not just analytic interpretations of the social world).
Today we will discuss these approaches, and assess whether they are preferable for “theory (hypothesis) development” and/or “theory (hypothesis) testing”, and how these are related to within case study, and cross case study analyses.
Process-tracing and causal process observations (CPOs)
Process tracing has emerged as the most dominant approach to qualitative research in political science. Recent work by Brady and Collier (2004) is an explicit attempt to offer guidelines and criteria on how and when to use this method.
Process tracing, it is argued, contributes to causal inference primarily through the discovery of causal process observations (CPOs). CPOs can be distinguished from dataset observations (DSOs) in that each contributes an “insight or piece of data that provides information about context, process or mechanism”.
Think of the pieces of evidence a detective might use when she has a “theory”, and the process of “seeking out” corroborative evidence to assess its merits.
DSOs are observations in a rectangular dataset. Statistical analyses are largely concerned with increasing the number of DSOs; process tracing is all about increasing the number of CPOs.
CPOs are often incomparable across cases and do not lend themselves to cross-case dataset analysis (the same murder could not have occurred in two places).
For KKV this strategy simply increases the number of variables (rather than observations), and leads to an infinite regress. Regression assumes an X-Y correlation across cases, whereas process tracing is interested in sequential processes within a historical case.
Qualitative researchers rely on CPOs, not DSOs, because CPOs are primarily used to develop, elaborate, or more precisely specify a hypothesis or theory within a case.
Put simply: CPOs are non-comparable observations bearing on the link between cause and effect within a case; DSOs are comparable observations across cases.
They form a different evidential basis for causal inference.
Why are these non-comparable observations (which might be generated through elite interviews or archival research), pieces of evidence used to assess a process, considered “causal” here, and not just “observations”?
Can we really assume that the process is causal? Surely evidence is not inference?
The use of CPOs for theory development is widely acknowledged within the political science community (statistical researchers often think of case study researchers as historians generating CPOs, which they then ‘scientifically test’). But CPOs can also be used for theory testing. For Mahoney there are three types of theory-testing CPOs:
- Independent variable CPOs
- Mechanism CPOs
- Auxiliary (outcome) CPOs
Independent variable CPOs provide information about a controversial “cause”.
The cause of a given outcome is contested, and independent variable CPOs provide information about the existence (or not) of this contested independent variable. For example, one theory to explain the extinction of the dinosaurs is a meteorite collision.
A process-tracing observation within this case is the discovery of iridium.
Can you think of any independent variable CPOs that are contested in the political and social sciences?
Independent variable CPOs would provide data to support the existence of this collision (iridium in the Earth’s crust). Similar examples apply to the germ theory of disease or the big bang theory, and similar issues arise in political and social science.
Mahoney cites the research by Nina Tannenwald on the non-use of nuclear weapons.
Her qualitative research (using elite interviews to trace key decision-making episodes) suggests that a “normative taboo” stigmatized nuclear weapons. The presence of this normative taboo among elites explains their non-use.
To evaluate her argument, the critical issue is whether or not the nuclear taboo actually existed among policymakers.
Mechanism CPOs, on the other hand, provide information about whether an intervening event posited by a theory is actually present. Even if the causal mechanism is contested by researchers, mechanism CPOs should lead researchers to some sort of convergence as to what really matters when trying to explain a given outcome.
Consider Theda Skocpol’s “States and Social Revolutions”. She argues that vanguard movements are not important causes of social revolutions. They are certainly present in all cases of social revolution, but they usually take advantage of a structural crisis and are not critical causes, in themselves, of social revolutions. They are an intervening variable.
Can you think of any other “causal mechanisms” that are contested in the political and social sciences? What data would corroborate them, or not?
Auxiliary CPOs do not provide information about the existence of independent or intervening variables; rather, they trace occurrences that should emerge if a theory works in the posited fashion.
They are traces, or markers, that should be left behind if the theory or hypothesis is true.
Mahoney cites the classic book by Gregory Luebbert, “Liberalism, Fascism, or Social Democracy”. Luebbert argues that a red-green alliance between the socialist party and the agricultural peasantry was a key cause of social democracy in interwar Europe (urban socialists and farmers working together).
Luebbert then points to auxiliary CPOs left behind that support his hypothesis: an unwillingness to challenge the distribution of wealth in the countryside.
For KKV this is simply increasing the number of observable implications of a theory. But for the new qualitative case-based researcher, this is a case of theory testing, one which relies upon a Bayesian rather than a frequentist logic of causal inference.
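The Bayesian logic invoked here can be made concrete with a toy calculation (the probabilities below are invented purely for illustration, not drawn from the article): observing a CPO raises our confidence in a hypothesis in proportion to how much more likely that evidence is if the hypothesis is true than if it is false.

```python
# Toy Bayesian updating: how a single CPO shifts belief in a hypothesis.
# All probabilities here are illustrative assumptions.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior probability of the hypothesis after observing the evidence."""
    numerator = prior * p_evidence_if_true
    denominator = numerator + (1 - prior) * p_evidence_if_false
    return numerator / denominator

# Suppose we start agnostic about the meteorite hypothesis (prior = 0.5),
# and an iridium layer is very likely if a collision occurred (0.9)
# but unlikely otherwise (0.1).
posterior = bayes_update(0.5, 0.9, 0.1)
print(round(posterior, 2))  # prints 0.9
```

Note that a CPO that is equally likely under both hypotheses leaves the prior unchanged, which captures the intuition that only discriminating evidence tests a theory.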
It is not based on the assumptions of regression analysis.
Methods using set-theory and logic
Remember, qualitative case study researchers adopt a causes-of-effects approach (whereas regression analyses adopt an effects-of-causes approach).
That is, they seek to explain why cases have certain outcomes.
A new methodological approach to this type of causal inference in cross-case analyses is called qualitative comparative analysis (QCA) and/or fuzzy-set analysis.
These approaches analyze logical types of causes, including necessary, sufficient, and INUS causes, and rely on set-theoretic methods and/or Boolean algebra.
Fuzzy-set analysis, associated with Charles Ragin’s “Comparative Method”, does not rely on dichotomous Boolean measurement (1/0, yes/no).
Rather, it attempts to identify probabilistically necessary and/or sufficient causes using continuous membership scores between 0 and 1 (e.g. 0.75, 0.50, 0.90), which are then used to construct “truth tables”.
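A minimal sketch of what a fuzzy-set calculation looks like, assuming Ragin’s standard consistency measure for sufficiency (the sum of min(x, y) divided by the sum of x across cases); the membership scores themselves are invented for illustration:

```python
# Fuzzy-set consistency of the claim "X is sufficient for Y":
# consistency = sum(min(x, y)) / sum(x), taken over all cases.
# Membership scores below are hypothetical.

def sufficiency_consistency(x_scores, y_scores):
    """How consistently fuzzy membership in X implies membership in Y."""
    numerator = sum(min(x, y) for x, y in zip(x_scores, y_scores))
    return numerator / sum(x_scores)

# Hypothetical membership of five countries in the sets
# X = "strong labour movement" and Y = "generous welfare state".
x = [0.9, 0.75, 0.5, 0.2, 0.1]
y = [1.0, 0.8, 0.6, 0.4, 0.0]

print(round(sufficiency_consistency(x, y), 2))
```

A score close to 1 indicates that cases’ membership in X is consistently matched or exceeded by their membership in Y, i.e. X behaves as a (near-)sufficient condition.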
QCA is an attempt to use Boolean algebra to extend the logic of case studies to comparative analysis. It allows researchers to make comparisons on the basis of “a lot versus a little” rather than “more or less”.
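As a rough sketch of what a crisp-set QCA truth table involves (the cases, conditions, and codings below are hypothetical, not Hicks et al.’s actual data): each case is coded dichotomously on its causal conditions, cases sharing the same configuration are collapsed into a single truth-table row, and a row is consistent only if all of its cases share the same outcome.

```python
# Crisp-set QCA sketch: collapse dichotomously coded cases into a truth table.
# The cases, conditions, and 1/0 codings here are hypothetical.
from collections import defaultdict

cases = [
    # (case, strong_labour, catholic_party, outcome: welfare consolidation)
    ("Sweden",  1, 0, 1),
    ("Norway",  1, 0, 1),
    ("Austria", 0, 1, 1),
    ("USA",     0, 0, 0),
]

truth_table = defaultdict(lambda: {"cases": [], "outcomes": set()})
for name, labour, catholic, outcome in cases:
    row = truth_table[(labour, catholic)]   # group by configuration of conditions
    row["cases"].append(name)
    row["outcomes"].add(outcome)

for conditions, row in sorted(truth_table.items()):
    consistent = len(row["outcomes"]) == 1  # contradictory rows mix outcomes
    print(conditions, row["cases"], "consistent" if consistent else "contradictory")
```

Boolean minimization would then reduce the consistent positive rows to a parsimonious expression of the routes to the outcome, which is the logic behind findings like Hicks et al.’s “three routes”.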
Using this method, Hicks et al. (1995) found “three routes to consolidating the welfare state” (Bismarckian, liberal-labour and Catholic paternalistic).
But does this approach really go beyond DSO regression? Might it not be better to use DSOs and regression analyses rather than QCA for a medium- to large-N study?
Gerring (2007) strongly advocates this mixed-methods approach. For Gerring and others, small-N cases are best utilized to improve regression findings: large-N analysis establishes a correlation (comparable), while small-N analysis traces the causal mechanism (non-comparable).
This also happens in reverse. Quantitative research can be used to supplement qualitative findings. A qualitative researcher develops a hypothesis or theory, and quantitative studies “test” the theory. In reality, such a strict division of labour rarely exists.
Theory development and theory testing are iterative steps in all research projects.
Theory development and theory testing
In comparative politics, most researchers do not have readily testable hypotheses drawn from general research programs/paradigms, as is often the case in international relations (liberalism, realism, Marxism, etc.).
This means that comparative analysts try to develop testable hypotheses, which can often leave a powerful legacy. Think about Robert Michels’s 1911 book “Political Parties”, which established the “iron law” that “who says organization, says oligarchy”.
Extracting ideas at close range can lead to powerful hypotheses.
For example, Kathy Thelen (2004) has convincingly argued that the German Handicraft Protection Law of 1897 was designed to win support from a reactionary artisanal class and was critical to the development of the German vocational training system.
Qualitative case study researchers, using within case study analyses, regularly identify “critical junctures” and “path dependent” processes of institutional change.
Theory development is closely related to concept formation and the development of typologies: types of democratic regimes, types of welfare state, types of market capitalism.
Most of these “types” emerged from fine-grained case study analysis.
Their resilience over time is a testament to avoiding the “coding errors” endemic in statistical data. By developing contextualized knowledge of cases, qualitative researchers are less likely to exclude key variables or mis-specify the interrelations among variables.
But from the perspective of quantitative methodology, this means that these cases cannot be used both to develop and to test a theory. New qualitative methodologists disagree: they argue that within-case and cross-case analyses provide causal inference precisely because they test theories on the basis of CPOs.
In-class assignment: think about your MSc/PhD research project and write down the core hypothesis. Is this a case of theory development or theory testing?