Seminar 10: Using Formal Game Theory in Case Studies

In this seminar we consider the link between case study methods, research design and formal models of politics. We show how spatial models of politics can be used as a bridge between in-depth case study research and the more formalised models of politics developed by game theorists, when conducting research on the strategic interactions between actors in a political system.

The seminar explains spatial models of politics, using the research design of the “Decision-making in the European Union” (DEU) project as an example.

This case study explores the inputs and outputs of the legislative process in the EU, using spatial models as a framework in which to capture data on actor policy positions and policy outcomes.

Spatial models can be used as a concrete theoretical framework in which actor policy positions within a given EU negotiation can be captured and related to one another. The data for each individual EU negotiation is collected by researchers through a series of in-depth semi-structured interviews with stakeholders involved in the negotiations of interest.
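To make this concrete, here is a minimal sketch of how positional data can be represented in a one-dimensional spatial model. This is an illustration only, not the DEU project’s actual code: the actors, scales and scores are all invented, and the salience- and power-weighted mean is just one simple baseline prediction of the kind used in this literature.

```python
# Actors on a single policy dimension scaled 0-100, each with a
# negotiating position, a salience score and a power score.
# All numbers are hypothetical.
actors = {
    "Commission": {"position": 80, "salience": 0.9, "power": 0.6},
    "Council":    {"position": 40, "salience": 0.7, "power": 1.0},
    "Parliament": {"position": 60, "salience": 0.8, "power": 0.8},
}

# A simple "compromise"-style baseline: the predicted outcome is the
# mean of actor positions, weighted by power x salience.
weights = {a: v["power"] * v["salience"] for a, v in actors.items()}
predicted = (sum(weights[a] * actors[a]["position"] for a in actors)
             / sum(weights.values()))

print(round(predicted, 1))  # predicted outcome on the 0-100 policy scale
```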

Each legislative negotiation considered by the DEU project can be seen as a detailed case study of the negotiations surrounding an individual legislative proposal put forward by the EU Commission. Questions about how interview data were translated into positional data in a spatial modelling framework were raised and discussed.

Methods for triangulating interview data against legislative records are shown to be one way to verify the collected data. The collected data can later be used to test a whole host of formal models of decision-making and strategic interaction between negotiators.

Having discussed the strengths and weaknesses of the spatial modelling framework as implemented in the DEU project, we can consider how case-study methods and formal modelling approaches share some epistemological assumptions about how to approach the study of political systems.

In political science, spatial models and formal analysis can complement case-study methods and act as a tool for theory building, advancing political research when scholars are interested in the logical implications of a given set of assumptions in a particular setting.

Texts:

See syllabus

Seminar 9: Content and Discourse Analysis

Content and discourse analysis methods are used throughout the social sciences. They involve in-depth engagement with content – words, images, film, etc. – or discourse – linguistic interactions that seek to establish and advance meaning. These approaches can be useful for both qualitative and quantitative research, and can be an appropriate methodological choice for a vast range of research questions. Guest seminar with Muireann O’Dwyer.

The Philosophy of Content and Discourse Analysis

Understanding the application of content and discourse analysis requires, first, an appreciation of the relationship of language to reality. Depending on your epistemological standpoint, the type of questions you can ask of content or discourse will vary. More positivist approaches can identify prominent, or absent, ideas through content analysis, and can explore the ways that norms are established and enforced through discourse. More constructivist approaches involve an appreciation of the role of content and discourse in shaping the very reality under examination. Further, for political scientists, it is crucial to appreciate the relationship between language (content and discourse) and power dynamics.

Meaning in content can be described as manifest or latent. Manifest meaning is easily observed and can essentially be “read” from the text. For example, the salience of an issue can be examined by identifying the frequency of references, the location of those references and their connection to other salient topics. Latent meaning cannot be so easily “read” from the text. Latent meaning is the meaning embedded within the content or the discourse: the underlying normative assumptions, the establishment of categories or boundaries, and the implicit subjects and objects of the text (or other content).
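As a concrete illustration of manifest meaning, here is a toy sketch of frequency counting against a simple coding scheme; the scheme and the text are invented for illustration.

```python
# A toy sketch of manifest content analysis: counting how often terms
# from a simple coding scheme appear in a text.
import re
from collections import Counter

coding_scheme = {"austerity", "growth", "reform"}

text = """The budget prioritises growth, but critics argue that
austerity measures and structural reform will dominate. Growth
forecasts remain uncertain."""

# Lowercase and tokenise, then count only the terms in the scheme.
tokens = re.findall(r"[a-z]+", text.lower())
counts = Counter(t for t in tokens if t in coding_scheme)

print(counts)  # Counter({'growth': 2, 'austerity': 1, 'reform': 1})
```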

Berg describes content analysis as “a passport to listening to the words of the text, and understanding better the perspectives of the producer of those words” (Berg 2001: 242). This description captures the way in which we can use an analysis of content to understand our cases, or the processes under study. Content analysis allows the researcher to systematically study the available data – it is this systematic nature of the analysis that allows for the testing of theoretically based hypotheses or claims, and it also brings validity to the methodology.

Some Examples

Content and discourse analysis can be used in various areas of social and political science. It has been used to examine the way that gender equality is understood within the EU (Lombardo and Meier 2006), to study political manifestos (Laver and Garry 2000) and to examine the role of the media in economic crises (Mercille 2014). Content and discourse analyses are therefore not limited by subject matter, but rather are chosen on the basis of the research question, the research design and the data available.

The coding scheme used in content analysis will reflect your theoretical framework. It is from that framework that you will develop observable implications of the theory, which can then be tested against the content or discourse. For example, working within a Foucauldian framework would lead to asking how truth is constructed in a text, who or what is included or excluded, what identities are at play and what is being normalised or problematised.

An important aspect of choosing this methodology is data management. It is important to ensure you can access sufficient and suitable data, and that you keep a clear and consistent record of your data and of your analysis. Coding schemes, as well as the actual coded documents, should be retained and, if possible, published along with your research, either as an appendix or as an online addition. This enhances the validity of your research and demonstrates again the systematic nature of the approach.

 

Berg, B. L. (2001). “Content Analysis” (Ch. 11), in Qualitative Research Methods, 4th ed. London: Pearson/Allyn and Bacon, pp. 238-268.

Gee, J. P. (2014). “Introduction” and “What Is Discourse Analysis?” (Chs. 1 and 2), in An Introduction to Discourse Analysis: Theory and Method, 4th ed. Routledge, pp. 1-29.

Krippendorff, K. (1980). Content Analysis: An Introduction to Its Methodology. Beverly Hills, CA: Sage Publications.

Laver, M., & Garry, J. (2000). Estimating policy positions from political texts. American Journal of Political Science, 619-634.

Lombardo, E., & Meier, P. (2006). Gender mainstreaming in the EU: Incorporating a feminist reading? European Journal of Women’s Studies, 13(2), 151-166.

Mercille, J. (2014). The role of the media in sustaining Ireland’s housing bubble. New Political Economy, 19(2), 282-301.

 

 

Seminar 8: How to Do Causal Process Tracing – Historical

What is “systematic process tracing analysis”? Why has this method become so popular in political science studies that use a small number of cases?

Most qualitative researchers in the process tracing tradition adopt a comparative-historical approach and tend toward a positivist perspective of the social sciences.

First, they consider causal inference as a problem of identifying a configurative set of variables (whose values “vary” across time and space, i.e. X1…Xn) that exert a causal impact on a set of outcomes (Y1…Yn).

Second, they generally hypothesise or specify a ‘theory’ on how and why these variables interact in the way that they do, and affect the outcome in question.

This is what we call the “causal mechanism”.

Peter A. Hall identifies three distinctive approaches within this systematic process tracing tradition in the social sciences: historically specific, multivariate and theory-oriented.

As discussed previously, historically specific modes of explanation try to identify the full set of causal factors important to an outcome (y), and they try to understand why the outcome occurred in a specific time and place.

Historians tend to give priority to a very particular context, and to the spatial or temporal specificities affecting their cases. The study of politics and the study of history have always been closely connected. But even historians are rarely just “listing one damned thing after another”.

From a political science perspective, to say that the arbitrary efforts of King Charles to raise taxes caused the English revolution in 1640, usually implies that, under a given set of conditions, raising arbitrary taxes will tend to cause political discontent.

Multivariate explanations identify a small set of variables that cause an outcome, which are independent of other factors feeding into the causal chain. The objective is to measure the precise magnitude of the effect of each variable, and the confidence with which we can assert its effect, such that it generates precise parameter estimates.

Theory-oriented explanations construe the task of causal explanation as one of elucidating and testing a theory. The task is to specify and hypothesise the causal mechanism, and the regularities in the causal process through which the relevant outcome is generated.

As we have discussed at length, under a given set of conditions, regression analyses and statistical methods are more effective for causal inference. For example, basic socio-structural factors such as per capita income, literacy and economic development have been found to be sufficient to stabilise democratic regimes. Here it is better to use marginal effect analyses.

But when comparative case studies began to show that stable democracy was really a product of complex strategic interactions among reformists, extremists, and defenders of the old regime, statistical methods became less useful in assessing the causal chain.

Instead, theorists turned to historically specific methods to test theories, now understood as causal mechanisms. Discuss.

This “mechanism” approach to social science gave birth to “process tracing”, in which many facets of the causal chain are intensively investigated, to test and formulate theories.

  1. Step 1: the investigator begins by formulating a set of theories that identify the principal causal variables said to conduce to a specific type of outcome to be explained. The object is to test one theory against another. It is a three-cornered fight among a theory, a rival theory, and a set of empirical observations.
  2. Step 2: for each of the theories to be considered, the investigator then derives a set of predictions about the patterns that will appear if the theory is valid or false. This is a process of deriving predictions that are consistent with one theory but not another. In the course of the research these predictions will often be specified as hypotheses to be examined.
  3. Step 3: observations relevant to these predictions are then made. An observation consists of a piece of data drawn from, or observed in, the case, using whatever technique is appropriate: documentary research, field work, interviews or computation. The observations are designed to assess whether the hypothesised process is present in the cases being investigated. Observations are ‘clues’ about events expected to occur if the theory is valid; the sequence of those events; the specific types of actions taken by various actors; and statements by those actors about why they took those actions.
  4. Step 4: observations drawn from the cases are compared against the predictions of each theory, to reach a judgement about the relative merits of each. It is about weighing the plausibility of each theory against the validity of the observations. Effective theory building is as important as gathering empirical data (see the sketch below).
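The logic of step 4 is often described in Bayesian terms: each observation shifts our relative confidence in rival theories according to how strongly each theory expected it. Here is a minimal sketch of that updating logic, with invented theories, clues and probabilities.

```python
# Two rival theories, a set of observed "clues", and the likelihood each
# theory assigns to each clue. All numbers are invented for illustration.
priors = {"theory_A": 0.5, "theory_B": 0.5}

likelihoods = {  # P(clue | theory)
    "theory_A": {"memo_found": 0.8, "timeline_fits": 0.7},
    "theory_B": {"memo_found": 0.2, "timeline_fits": 0.6},
}

posteriors = dict(priors)
for clue in ["memo_found", "timeline_fits"]:
    # Weight each theory by how strongly it expected the clue...
    for t in posteriors:
        posteriors[t] *= likelihoods[t][clue]
    # ...then renormalise so the two posteriors sum to one.
    total = sum(posteriors.values())
    posteriors = {t: p / total for t, p in posteriors.items()}

print(posteriors)  # theory_A ends up far more plausible than theory_B
```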

In-class exercise: consider the case of John Owen’s (1994) ‘Democratic Peace Theory’, which specifies an ideational causal mechanism for why liberal states do not go to war with other liberal states, whilst also arguing that those same ideas often spur liberal states to wage war with non-liberal states.

Process tracing analysis is most useful when a researcher is theory-oriented and interested in comparative-historical modes of explanation. This is especially true of processes that are path dependent or rooted in rational choice or strategic interaction (i.e. game theory).

Think about Andrew Moravcsik’s explanation for European integration.

But what is “path dependence” in the study of politics? How useful is the concept of “increasing returns” in explaining path dependence? Discuss with reference to Paul Pierson’s article ‘Increasing Returns, Path Dependence, and the Study of Politics’ (2000).


Seminar 7: Using Case Studies for Hypothesis Testing

The publication of King, Keohane and Verba’s ‘Designing Social Inquiry: Scientific Inference in Qualitative Research‘ (hereafter KKV) popularized many methodological terms in political science, such as descriptive and causal inference, and contributed toward a systematic approach to research design in the discipline.

Since its publication, however, qualitative researchers within political science have gone far beyond the implicit regression assumptions behind KKV’s recommendations. In a World Politics review article, James Mahoney reviews five important books on “new qualitative methods” in political science, all of which attempt to go beyond KKV:

  1. Henry Brady and David Collier, eds. (2004), Rethinking Social Inquiry: Diverse Tools, Shared Standards
  2. Alexander George and Andrew Bennett (2005), Case Studies and Theory Development in the Social Sciences
  3. John Gerring (2007), Case Study Research: Principles and Practices
  4. Gary Goertz (2006), Social Science Concepts: A User’s Guide
  5. Charles Ragin (2008), Redesigning Social Inquiry: Fuzzy Sets and Beyond

As we have been discussing throughout this course, the new qualitative research paradigm adopts analytic tools for making causal inferences using the “case study” method. This is not to suggest a consensus within qualitative political science (these approaches, for example, are interested in causal inference and not just in analytic interpretations of the social world).

Today we will discuss these approaches, assess whether they are preferable for “theory (hypothesis) development” and/or “theory (hypothesis) testing”, and consider how these relate to within-case and cross-case analyses.

Process-tracing and causal process observations (CPOs)

Process tracing has emerged as the most dominant approach to qualitative research in political science. Recent work by Brady and Collier (2004) is an explicit attempt to offer guidelines and criteria on how and when to use this method.

Process tracing, it is argued, contributes to causal inference primarily through the discovery of CPOs (causal process observations). CPOs can be distinguished from dataset observations (DSOs) in that they contribute an “insight or piece of data that provides information about context, process or mechanism”.

Think about those pieces of evidence a detective might use when she has a “theory”, and the process of “seeking out” corroborative evidence to assess the merits of that theory.

DSOs are observations in a rectangular dataset. Statistical analyses are largely concerned with increasing the number of DSOs. Process tracing is all about increasing CPOs.

CPOs are often incomparable across cases and do not lend themselves to cross-case dataset analysis (the same murder could not have occurred in two places).

For KKV this strategy simply increases the number of variables (rather than observations), and leads to an infinite regress. Regression assumes an X-Y correlation across cases, whereas process tracing is interested in sequential processes within a historical case.

Qualitative researchers rely on CPOs, not DSOs, as these are primarily used to develop, elaborate, or specify more precisely a hypothesis or a given theory within a case.

Put simply: CPOs are non-comparable observations related to a link between cause and effect within a case. DSOs are comparable observations across cases.

They form a different evidential basis for causal inference.

Why are those non-comparable observations (which might be generated through elite interviews or archival research), those pieces of evidence used to assess a process, considered “causal” here, and not just “observations”?

Can we really assume that the process is causal? Surely evidence is not inference?

 


The use of CPOs for theory development is widely acknowledged within the political science community (statistical researchers often think of case study researchers as historians generating CPOs, which they then ‘scientifically test’). But CPOs can also be used for theory testing. For Mahoney there are three types of theory-testing CPOs:

  1. Independent variable CPOs
  2. Mechanism CPOs
  3. Auxiliary (outcome) CPOs

Independent variable CPOs provide information about a controversial “cause”.

The cause of a given outcome is contested, and independent variable CPOs provide information about the existence (or not) of this contested independent variable. For example, one theory to explain the extinction of the dinosaurs is a meteorite collision.

A process-tracing observation within this case is the discovery of iridium.

Can you think of any independent variable CPOs that are contested in the political and social sciences?

Independent variable CPOs would provide data to support the existence of this collision (iridium in the earth’s crust). Similar examples apply to the germ theory of disease or the big bang theory. In the political and social sciences, similar issues arise.

Mahoney cites the research by Nina Tannenwald on the non-use of nuclear weapons.

Her qualitative research (using elite interviews to trace key decision-making episodes) suggests that a “normative taboo” stigmatized nuclear weapons. The presence of this normative taboo among elites explains their non-use.

To evaluate her argument the critical issue is whether or not the nuclear taboo actually existed among policymakers.

Mechanism CPOs, on the other hand, provide information about whether an intervening event posited by a theory is actually present. Even if the causal mechanism is contested by researchers, mechanism CPOs should lead researchers to some sort of convergence as to what really matters when trying to explain a given outcome.

Consider Theda Skocpol’s “States and Social Revolutions”. She argues that vanguard movements are not important causes of social revolutions. They are certainly present in all cases of social revolution, but they usually take advantage of a structural crisis and are not critical causes, in themselves, of social revolutions. They are an intervening variable.

Can you think of any other “causal mechanisms” that are contested in the political and social sciences? What data would corroborate or not?

Auxiliary CPOs do not provide information about the existence of independent or intervening variables; rather, they trace occurrences that should emerge if a theory works in the posited fashion.

They are traces, or markers, that should be left behind if the theory or hypothesis is true.

Mahoney cites the classic book by Gregory Luebbert, “Liberalism, Fascism, or Social Democracy?”. Luebbert argues that a red-green alliance between the socialist party and the agricultural peasantry was a key cause of social democracy in interwar Europe (urban socialists and farmers working together).

Luebbert then provides auxiliary CPOs that were left behind, which support his hypothesis: an unwillingness to challenge wealth distribution in the countryside.

For KKV this is simply increasing the number of observable implications of a theory. But for the new qualitative case-based researcher, this is a case of theory testing, which relies upon a Bayesian rather than a frequentist logic of causal inference.

It is not based on the assumptions of regression analysis.

Methods using set-theory and logic

Remember, qualitative case study researchers adopt a causes-of-effects approach (whereas regression analyses adopt an effects-of-causes approach).

That is, they seek to explain why cases have certain outcomes.

A new methodological approach to this type of causal inference in cross-case analyses is called qualitative comparative analysis (QCA) and/or fuzzy-set analysis.

These approaches analyze logical types of causes, including necessary, sufficient and INUS causes, and rely on set-theoretic methods and/or Boolean algebra.

Fuzzy-set analysis, associated with Charles Ragin’s “Comparative Method”, does not rely on dichotomous Boolean measurement (1/0, yes/no).

Rather, it attempts to identify probabilistically necessary and/or sufficient causes, coded continuously between 0 and 1 (e.g. 0.75, 0.50, 0.90), which are then used to construct “truth tables”.

QCA is an attempt to use Boolean algebra to extend the logic of case studies to comparative analysis. It allows researchers to make comparisons on the basis of “a lot versus a little” rather than simply “more or less”.
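To make the set-theoretic idea concrete, here is a minimal sketch of the consistency-of-sufficiency measure associated with Ragin’s fuzzy-set approach. The membership scores are invented, loosely echoing the Luebbert red-green alliance example above.

```python
# Degree of membership (0-1) of each case in a condition X ("strong
# red-green alliance") and an outcome Y ("social democracy").
# All membership scores are invented.
X = {"Sweden": 0.9, "Norway": 0.8, "Germany": 0.3, "Italy": 0.2}
Y = {"Sweden": 0.95, "Norway": 0.85, "Germany": 0.2, "Italy": 0.3}

# Consistency of "X is sufficient for Y": sum of min(x, y) over sum of x.
# Values near 1 mean cases' membership in X is matched or exceeded by Y.
consistency = sum(min(X[c], Y[c]) for c in X) / sum(X.values())

print(round(consistency, 2))  # 0.95 for these invented scores
```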

Using this method, Hicks et al. (1995) discovered there are “three routes to consolidating the welfare state” (Bismarckian, liberal-labour and Catholic paternalistic).

But does this approach really go beyond DSO regression? Might it not be better to use DSOs and regression analyses rather than QCA for a medium- to large-N study?

Gerring (2007) strongly advocates this mixed-methods approach. For Gerring and others, small-N cases are best utilized to improve regression findings. Large N establishes a correlation (comparable observations). Small N traces the causal mechanism (non-comparable observations).

This also happens in reverse. Quantitative research can be used to supplement qualitative findings. A qualitative researcher develops a hypothesis or theory, and quantitative studies “test” the theory. In reality, such a strict division of labour rarely exists.

Theory development and theory testing are iterative steps in all research projects.

Theory development and theory testing

In comparative politics, most researchers do not have readily testable hypotheses drawn from general research programs or paradigms, as is often the case in international relations (liberalism, realism, Marxism, etc.).

This means that comparative analysts try to develop testable hypotheses, which can often leave a powerful legacy. Think about Robert Michels’ 1911 book “Political Parties”, which established the “iron law” of “who says organization, says oligarchy”.

Extracting ideas from cases studied at close range can lead to powerful hypotheses.

For example, Kathy Thelen (2004) has convincingly argued that the German Handicraft Protection Law of 1897 was designed to win support from a reactionary artisanal class and was critical to the development of the German vocational training system.

Qualitative case study researchers, using within-case analyses, regularly identify “critical junctures” and “path dependent” processes of institutional change.

Theory development is closely related to concept formation and the development of typologies: types of democratic regimes, types of welfare state, types of market capitalism.

Most of these “types” emerged from fine-grained case study analysis.

Their resilience over time is testament to the avoidance of the “coding errors” endemic in statistical data. By developing contextualized knowledge of cases, qualitative researchers are less likely to exclude key variables or mis-specify interrelations among variables.

But from the perspective of quantitative methodology, this means that these cases cannot be used both to develop and to test a theory. New qualitative methodologists disagree: they argue that within-case and cross-case analyses provide causal inference precisely because they test theories on the basis of CPOs.

In-class assignment: think about your MSc/PhD research project and write down the core hypothesis. Is this a case of theory development or theory testing?

Seminar 5: How to Select Cases and Make Comparisons

Introduction

Comparative case studies offer detailed insight into the causal mechanisms, processes, policies, motivations, decisions, beliefs and constraints facing actors – which statistics, large-scale surveys and cultural historiographies often struggle to explain.

As we discussed in week 1, case-oriented approaches place the integrity of the case, not variables, center-stage.

The language of variables, not the case, dominates the research process of variable-oriented comparative work. In case-oriented research, the configuration of explanatory factors within the case is what matters in terms of explaining the “outcome” of interest.

It is “Y” centred research.

What distinguishes the “case study approach” from “analytic narratives” is that the researcher operates from the assumption that their “case” reveals something about a broader population of cases. It shines a light on a bigger argument.

For example, generally, few people will care about your case on Ireland, Switzerland or Belgium; what they care about is its broader theoretical relevance.

Case selection 

Since the case is often constructed on the basis of a specific outcome or theory of interest, case selection is purposive i.e. it is not based on random sampling. It is theory driven.

In case studies, researchers want to explain a given outcome, such as the re-emergence of far-right politics in Europe, and therefore they must violate the statistical injunction against “choosing cases on the dependent variable”.

But actively selecting cases on the dependent variable can lead to accusations of selection bias. How can purposive case selection be justified?

Political scientists require methodological justification for their case selection. It is not sufficient to say you are studying Irish politics because you speak the language and know the country. Nor is it sufficient to pick a case in order to ‘prove’ your theoretical claim.

What is your case study a case of?

The central question facing any case study researcher is “what is my case a case study of?”. Small N qualitative case studies inform the scholarly community about something larger than the case itself, even if the case cannot result in a complete generalization.

Case studies make a powerful contribution toward theory testing and theory building, something we will discuss in more detail in week 7.

Usually it is assumed that case studies are “countries”. But they can be anything from a person, a time period, a company, an event, a decision or a public policy.

What matters is how you construct the case study.

But what is a case? Is it an observation? 

Methodologically, case studies should be bounded in time and space, related to a wider population of cases, and theoretically relevant.

Depending on the research question you are asking, or the puzzle that interests you, cases can be:

  • Identified and established by the researcher (networks of elite influence)
  • Objects that exist independently of the researcher (nation-states)
  • Theoretically constructed by the researcher (benevolent tyranny)
  • Theoretically accepted conventions (post-industrial societies)

Hancké (2009) uses the example of the Law and Justice Party in Poland, from 1995-2005, as a case study of rising populism in Eastern Europe. The case study is an in-depth analysis of the causal mechanisms that enabled populism to emerge in Poland, but it is framed against a broader universe of cases: the rise of populism in central and eastern Europe.

Single case studies 

The weakest case studies are perhaps those selected to illustrate a theory.

A case study that challenges a scholarly community to think differently about the relevant dimensions of an existing theory is a much better contribution to social science debate.

These types of cases are often called “critical” or “crucial” case studies.

In terms of single case studies, causal process-tracing is the most widely used methodological strategy in political science. Causal process-tracing (on which we will spend an entire seminar in week 9) attempts to unpack the precise causal chain or intermediate steps, or set of functional relationships, leading x to cause y.

They actively select their dependent variable in order to trace the causal process leading from x to y.

This is why we describe small-N case study research as ‘purposive’. Researchers purposively select their case in order to explain a given “outcome” of interest.

Causal mechanism 

For example, if we say that “democratic countries are wealthier”, we could unpack the causal mechanism into the following steps (with distinct empirical observations):

  • Step 1: the median voter in a market economy has an income below the median
  • Step 2: these voters support and elect parties that redistribute income
  • Step 3: this redistribution leads to higher spending among the low-income majority
  • Step 4: this results in higher consumption and aggregate demand
  • Step 5: higher aggregate demand leads to higher employment and economic growth

This is not designed to be an empirical statement of fact. It is a reconstruction of a purported causal mechanism. Most importantly, each step can be empirically tested against other proposed theories of why democracies are wealthier.

This is an essential point. In case study research, one needs a counterfactual, and an engagement with alternative hypotheses/explanations for the same outcome. It is not simply a matter of “telling a story” or making an “I told you so” argument.

Critical case studies

Critical or crucial case studies challenge an existing theory.

Imagine you find a case where all existing theories suggest that given conditions X1, X2, X3, X4, we should expect to find a specified outcome Y1. Instead, we find a case with the opposite outcome.

Centralized wage-setting in a liberal market economy: the case of Ireland.

The researcher engages an existing theory, stacks the cards against herself, and then explains why the existing theory cannot explain the aberration observed.

It is not designed to generalize but to problematize.

Consider another example: almost all OECD countries experienced the common shock of declining interest rates and the expansion of cheap credit, but not every country experienced the emergence of an asset-price or housing bubble.

The same pressure in different institutional settings leads to different outcomes. Why?

Most different/most similar 

Case studies are hard work and require a lot of careful reasoning by the researcher to ensure they are making valid comparisons that meaningfully speak to a wider population of cases, and which are of theoretical interest to a broader scientific community.

The most powerful techniques of comparison in the qualitative case study approach are those that make the dimensions of their case studies explicit.

The basic idea behind this approach originates in John Stuart Mill’s “A System of Logic”, and it is usually referred to as the “Method of Difference” and “Method of Agreement” approach.

Alternatively, it is often referred to as a “most different or most similar” research design.

In the method of difference you select cases that are similar in every relevant characteristic except for two: the outcome you are trying to explain (y – the dependent variable), and what you think explains this outcome (x – the independent variable).

Table 1 illustrates the logical structure of this comparative approach.

Examine the table. In this analysis, what explains the variation in house price inflation between case A (Netherlands) and case B (Ireland)?

 

Table 1: Method of Difference

Explanation                          Case A (Netherlands)   Case B (Ireland)
Explanation 1 (credit)               Present                Present
Explanation 2 (loan-to-value, LTV)   High                   High
Explanation 3 (interest rate)        Low                    Low
Explanation 4 (EMU)                  Present                Present
Explanation 5 (growth)               High                   High
Explanation N (IV – income)*         Absent (low)           Present (high)
Outcome (DV – housing inflation)     Absent                 Present

The method of agreement works the other way around: everything between the two cases is different except for the explanation (x) and the outcome (y).

Table 2 illustrates the logical structure in this type of comparative analysis.

What explains the collapse of social partnership in Ireland and Italy in this example?

 

Table 2: Method of Agreement 

Explanation                                    Case A (Italy)   Case B (Ireland)
Explanation 1 (size of economy)                Large            Small
Explanation 2 (type of market economy)         Coordinated      Liberal
Explanation 3 (problem-load)                   Pensions         Wages
Explanation 4 (partisanship)                   Technocratic     Centrist
Explanation N (union power)*                   Weak-insiders    Weak-insiders
Outcome (DV – collapse of social partnership)  Present          Present
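A toy sketch of the comparative logic behind both tables: code each case on the same factors and list which factors vary and which are shared. The data below mirror Table 1; everything is stylised for illustration.

```python
# Two cases coded on the same factors (mirroring Table 1 above).
netherlands = {"credit": "present", "LTV": "high", "interest_rate": "low",
               "EMU": "present", "growth": "high", "income": "low",
               "housing_inflation": "absent"}
ireland     = {"credit": "present", "LTV": "high", "interest_rate": "low",
               "EMU": "present", "growth": "high", "income": "high",
               "housing_inflation": "present"}

def compare_cases(case_a, case_b):
    """Split the factors into those that vary and those that are shared."""
    varying = [f for f in case_a if case_a[f] != case_b[f]]
    shared = [f for f in case_a if case_a[f] == case_b[f]]
    return varying, shared

varying, shared = compare_cases(netherlands, ireland)
# Method of difference: everything is shared except the candidate cause
# (income) and the outcome (housing_inflation). The method of agreement
# is the mirror image: only the cause and the outcome are shared.
print("varies:", varying)
print("shared:", shared)
```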

 

Conclusion 

The essential point to remember – and the main takeaway of this seminar – is that you need to defend your case selection, and think systematically about your comparisons.

Causal process tracing is a technique that will enable you to do this (week 8/9).

Gerring & Seawright (2008) suggest seven case selection procedures, each of which facilitates a different strategy for within-case analysis. These case selection procedures are:

  1. Typical (cases that confirm a given theory)
  2. Diverse (cases that illuminate the full range of variation on X, Y or X/Y)
  3. Extreme (cases with extremely unusual values on X or Y)
  4. Deviant (cases that deviate from an established cross-case population)
  5. Influential (cases with established and influential configurations of X’s)
  6. Most similar (cases are similar on all variables except X1 and Y)
  7. Most different (cases are different on all variables except X1 and Y)

I would add “crucial or critical” cases to this list (cases that problematise a theory).

Discuss these case selection procedures and their methodological justification, and identify which is most appropriate to your research design.

Seminar 4: Measurement Validity in the Social Sciences

Introduction 

Over the last three weeks we discussed how to construct a research question, compare cases, form concepts, and the importance of theoretical puzzles.

We are now confronted with the question of how to operationalise our concepts, develop indicators, obtain good data and make valid measurements.

Thinking about data and measurement is integral to all steps in a research project.

Data can be defined as “any form of systematic empirical observation that will enable you to answer your research question”. This link between concept formation, measurement and data collection is crucial for all researchers: qualitative and quantitative.

Measurement validity 

The process of linking concepts, indicators and operationalisation is the problem of measurement validity: making sure you measure what you think you are measuring.

In addition to valid measurement, as a researcher you will also be expected to present reliable evidence (I should see what you see if I look at the data in the same way) and replicable evidence (I should be able to replicate your results).

Validity, the central topic of this seminar, is about making sure the concepts you use are correctly expressed in the measurements you use.

Sometimes this is simple: daily calorie intake can be taken as a good indicator of diet. It is also comparable across the population.

But some concepts (in fact, a lot of concepts) in political and social science do not easily translate into comparable data or measures. Think about wealth, democracy, inequality, informal labour, unpaid work, competitiveness, economic freedom, structural reform.

The fuzziness of many social science concepts suggests that we need to be particularly careful about measurement validity in comparative research.

If a researcher cannot present valid measures of a core concept in their research project then it’s going to make communication with their supervisor very difficult.

Discuss: why is measurement important?

For example, what indicators can we use to measure the health of the economy? GDP, unemployment, the employment rate, the current account balance, happiness?

Operational definitions 

Last week we discussed concept formation. But when it comes to measurement we have to assume relative agreement of the systematised concept in order to operationalise it.

Hence, this week we are moving away from the process of interpreting the meanings and understandings associated with a concept to the process of developing measures.

In comparative case study research, this involves generating the operational definitions employed in classifying cases, and then developing scores for these cases.

For example, many international organizations have developed synthetic indicators to try to capture complex multidimensional economic concepts such as competitiveness.

But what do we mean when we say a country has “lost competitiveness”? Can we really use this concept for describing the economy of nation-states and/or regions?

Concept formation is a philosophical conflict over meaning. Measurement validity is about trying to find good indicators to operationalise a concept within a scientific community.

Let’s continue with the example of “competitiveness”. What indicators would we select to score countries on this measure? What would it look like if we saw it?

Now compare this with a concept such as “enterprise or industrial policy”.

Break into groups of 3 and discuss how you would operationalise and develop indicators for both of these concepts.

Measurement validity is not a philosophical debate; it is about making sure that we measure what we think we are measuring, using an adequate set of indicators to score cases.

Data collection 

If we cannot measure a concept directly using a given set of indicators, we should try to measure its observable consequences.

For example, how do we assess the theory that a comet hit the earth and wiped out the dinosaurs if we cannot observe the event itself? What are the observable implications of this hypothesis? Answering this question is the process of data collection.

Reliability and replicability are much more complicated in discursive research settings because the data literally do not exist without the researcher’s interpretation.

But this makes systematic data collection all the more important; archival, interview and content analysis material requires ordering and reporting, such that your reader (or examiner) can study and examine the data you have used to make your argument.

Equivalence and contextual specificity 

The number of public databases and official statistics has increased exponentially over the past few years, because of advances in technology and communication systems.

These databases are invaluable for researchers but they were usually not designed for the question a researcher has in mind.

For example, countries measure unemployment in different ways, but the OECD uses a standardized rate, by imposing a uniform definition across all cases. Standardising indicators in comparative politics creates problems of equivalence and contextual specificity.

Think about this in the Irish case today, what is the impact of using ‘unemployed’ rather than ‘joblessness’ in the measurement of the unemployment rate?

To take another example: GDP figures in Ireland are infamously unreliable. This is because aggregate productivity is skewed by a handful of firms engaged in transfer pricing, whereby it “appears” that a given level of economic activity takes place in Ireland when, in actual fact, it is an accounting exercise for tax avoidance purposes.

National income, therefore, is a better measure to capture the contextual specificity of economic activity in the Irish case, whereas GDP might be perfectly reasonable in most other advanced economies of Europe.

Getting as close as possible to the production of data is always preferable, as it allows you to engage in a dialogue with the researchers and scientific community who produce the data. You can contact researchers, ask questions, get clarifications, and find out why they measure things the way they do.

How would you assess the validity and reliability of secondary data?

Survey data/interview data

Surveys are the most popular way to collect data. The obvious problem with survey data is that you have to assume that respondents mean what they say when answering questions (income declarations) and that the question is properly understood (scales: strongly agree, agree, somewhat agree, neither agree nor disagree).

Most case-study researchers in political science will generate data via elite interviews or surveys, together with secondary sources: archives, research reports, focus groups, official reports, newspaper articles. Reliability often becomes the problem.

All of this is useful but does not get away from the measurement problem: making sure your indicators adequately reflect the concept you are trying to operationalize.

Figure 1 illustrates a four-step process to operationalize a contested concept such as “structural reform”, which I have defined as “cost competitiveness”, then measured using “unit labour costs” as a comparative indicator, with cases scored from 1-100.

The overall objective of such a measure, in policy terms, is to improve economic growth.

[Figure 1: From concept to indicator to scores (adapted from Adcock & Collier 2001)]
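To illustrate the final scoring step of the figure, here is a hypothetical sketch that converts raw unit labour cost growth figures (all numbers invented) into a 1-100 “cost competitiveness” score via min-max normalisation.

```python
# Raw indicator values: annual unit labour cost (ULC) growth, in per
# cent (all numbers invented).
ulc_growth = {"Ireland": 2.1, "Germany": 0.8, "Spain": 3.0, "France": 1.5}

lo, hi = min(ulc_growth.values()), max(ulc_growth.values())

# Min-max normalisation onto a 1-100 scale: lower ULC growth is scored
# as higher "cost competitiveness".
scores = {country: round(1 + 99 * (hi - v) / (hi - lo))
          for country, v in ulc_growth.items()}

print(scores)  # Germany scores 100 (lowest ULC growth), Spain scores 1
```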

 

Think about the problems with this! Measurement validity is specifically concerned with whether the operationalization reflects the concept the researcher seeks to measure. This measure of structural reform is defined as cost competitiveness and constructed around a replicable set of indicators that all researchers can critique and engage with.

But does it really measure what it is supposed to measure? Whilst it might be externally valid, is it internally valid?

Are ULCs really a good measure and/or indicator of competitiveness? What are the policy implications of using ULCs as opposed to, say, public infrastructure as a measure?

Conclusion 

Downward and upward movement in Figure 1 (adapted from Adcock & Collier 2001) can be understood as a set of research tasks.

Measurement is valid if the scores derived from a given indicator can be meaningfully interpreted in terms of the systematized concept that the indicator seeks to operationalize.

Systematic error arises when the links between concept, indicator, and scores are poorly developed. This happens more often than you might think in social research.

For example, is counting newspaper articles a good measure of media bias? Is taxation as a percentage of GDP a good measure of whether a country is a low-tax or high-tax country? Probably not. Therefore, better measures need to be developed.

Discuss: is there a trade-off between precision and validity? Is there a trade-off between internal validity and external validity?

What should we be most concerned about?

It is important to note that the same score on an indicator may have different meanings in different contexts. Context matters in comparative politics, and the only way to avoid error is to engage in reflexive, careful reasoning at each stage of operationalization.

 

Seminar 3: Concept Formation in the Social Sciences

“Because we are prisoners of the words we choose, we better pick them carefully” – Giovanni Sartori (1970).

Introduction 

I take my point of departure from Ludwig Wittgenstein, the Austrian-British philosopher who dealt primarily with the philosophy of language, logic and mathematics, and who was arguably the most influential philosopher of the 20th century.

In his magnum opus, the Tractatus, Wittgenstein concluded that the essence of language is its logical form. The logical structure of language sets the limit to what can be meaningfully said. In the posthumously published Philosophical Investigations, he came to the opposite conclusion, namely that the essence of language is its use.

For some social scientists and philosophers, the pragmatic-linguistic turn instigated by the Philosophical Investigations has meant there can be no escaping the hermeneutic circle. There can be no objective reality in social science. All that is left is interpretation.

A large part of the development of the social and cultural sciences occurs through conflict over words, terms, concepts and definitions. This suggests that we need some sort of criteria for what makes a “good concept”, or what makes a proposition coherent.

We need criteria to reach shared agreement over the concepts we use. If we leave it open for endless interpretation, all we are left with is a perpetual sophist debate.

For example, take the popular use of the term “structural reform” within European policymaking circles. This is supposedly the core proposition to generate growth in a period of compressed domestic demand (austerity). But what does it mean? Is it a concept that applies across all economies, regardless of their domestic political differences?

Concept stretching 

Methodology is the logical structure and public procedure of scientific inquiry. It must be distinguished from technique. Giovanni Sartori, in the famous 1970 article that we are discussing this week, “Concept Misformation in Comparative Politics”, argues that the over-conscious technician is someone who refuses to discuss heat without a thermometer.

As the social world that we study expands, the more we require concepts that are able to travel, in order to compare and contrast, and to make sense of that world. But this leads to the problem of conceptual stretching. In the study of comparative politics, broadening the meaning of our concepts to include more cases – and thereby extending their range of application – risks making our concepts meaningless.

‘Democracy’, ‘globalisation’, ‘populism’, ‘ideology’ and ‘capitalism’ are concepts that have been subject to conceptual stretching in the social sciences. They are concepts used to cover a lot of variation in the political world. If we assume that one of the defining characteristics of social scientific discourse is precision, then this becomes problematic.

In ordinary language use, it is less of a problem. At the same time, social scientists also seek to make “generalizable” claims, and therefore they must use generalisable concepts (with minimal attributes) that can travel to as many cases as possible.

So the question is: how do we construct generalisable concepts without concept stretching?

Concepts stand prior to quantification 

For Sartori what we gain in extensional coverage, we lose in precision, which is a defining characteristic of “scientific discourse”.

Simultaneously, universal concepts, if they are to be universal at all, must be empirical. That is, we must know them when we see them. They must have real observable attributes.

But what about those abstract concepts with limited empirical referents in the world, such as justice or morality? These have non-observable attributes. But even here we surely know justice when we see it (or its absence). It may not be directly observable in the world, but it is an abstract concept with real utility.

The problem of concept formation in the study of comparative politics often emerges from the distinction between differences in kind and differences in degree.

The latter lends itself to measurement (on interval scales) and quantification. The former lends itself to typologies (and nominal or ordinal scales). For many quantitative research problems, ambiguity is generally cleared up with better measurements, whereas in qualitative research, ambiguity is often cleared up with taxonomies and typologies.

Many political science scholars tend to forget, however, that concept formation, even for the data technician, stands prior to quantification.

As Sartori points out, there can be no quantification without conceptualisation.

The process of thinking begins with natural language, and natural language is a fuzzy and messy affair. The logic of either/or cannot simply be replaced with more or less. This is much like the difference between linear algebra and Boolean logic.

Criteria for concept formation 

We need clear criteria for what makes for a good concept, particularly if that concept shapes the measures and informs the variables we use for data analysis.

The classical approach to concept formation focused on classifications and taxonomies (starting with Aristotle, and directly observable in other sciences such as biology, chemistry and zoology). Classifications remain a central condition for any scientific discourse (think about the periodic table in chemistry, published by Dmitri Mendeleev in 1869).

But it is important to remember that a concept is something conceived, and leads to a proposition that corresponds to some class of entities in the world.

In the classical tradition we can make a concept more general by lessening its properties or attributes, thereby increasing its extension. Conversely, we make a concept more specific by adding attributes, thereby increasing its intension. In this tradition, as we reduce the intension of a concept (its attributes) in order to apply the concept to more cases, we move up the ladder of generality.

This can create obvious problems.

For example, we can use the concept ‘capitalism’ as a general term to refer to a certain type of economic regime, and in ordinary language use most people will understand what we are talking about. But in political economy we “add adjectives” to specify what type of market economies we are referring to: liberal, coordinated, social democratic, statist, authoritarian. In turn, these concepts can be used to create a typology, whereby countries and sectors are classified under each of these different types of comparative capitalism.

Concepts are intimately related to the theories we use.

Concepts are, as Gerring (1999) states, “handmaidens to good theories”. Think about the use of the concept “social class”. This concept is used less and less in social science. Why? Is it because we don’t live in a world of social class divisions? Not really. But because these concepts are associated with a certain “theory” that is less and less popular in the social sciences (Marxism), they tend to fall out of discourse, with the theory.

Should we therefore replace the concepts “social class” with indicators that are easy to operationalise such as education level, income and occupation?

Table 1: Conceptualization and ladders of abstraction 

Levels of abstraction | Comparative scope and purpose | Logical and empirical properties
HL: High-level categories (universal conceptualisations) | Cross-area comparisons among heterogeneous contexts (global theory) | Maximal extension; minimal intension; definition by negation
ML: Mid-level categories (general conceptualisations and taxonomies) | Intra-area comparisons among relatively homogeneous contexts (middle-range theory) | Balanced extension and intension; definition by analysis
LL: Low-level categories (configurative conceptualisations) | Case-by-case analysis (narrow-range theory) | Maximal intension; minimal extension; contextual definition

Extension and intension 

Sartori was one of the first comparativists to propose a framework for good concept formation in political science. As he states, “to compare is to control”.

The comparative method is one of the most powerful tools in political science. But it requires coherent and externally differentiated concepts.

He encouraged scholars to be attentive to context without abandoning broad comparisons (i.e. the capacity to generalize). Table 1 suggests that as we move up the ladder of abstraction we increase the extension (the set of entities in the world to which the concept refers) and reduce the intension (the set of meanings or attributes that define the concept).

There is an inverse relationship between extension and intension.

Think about Weber’s famous typology on “legitimate domination”: traditional, charismatic, legal-rational. Think about the sub-attributes of each type of authority.

He wrote extensively about patrimonial authority, which he classified under traditional authority. If he had only used patrimonial authority, it would extend to fewer cases in the world. ‘Traditional’ authority, however, subsumes patrimonial authority and extends to more cases.

Rethinking the classical approach 

David Collier and James Mahon wrote a very influential article in 1993 as an attempt to re-engage Sartori’s debate on “concept misformation”. They note that Wittgenstein’s family resemblance approach to concept formation suggests that members of a category can have a lot in common, yet may not have one single attribute that they all share.

Think about the concept ‘mother’. Does this concept require that the mother be the birth-mother? Different attributes can be used as the defining properties of the same category.

They think about the problem of conceptual stretching in terms of primary and secondary categories, rather than subordinate and superordinate taxonomies. ‘Mother’, ‘capitalism’ and ‘democracy’ are primary categories; birth mother, electoral democracy and liberal market economy are secondary categories.

Unlike in the classical approach, the differences are contained within the primary radial category. Conceptual stretching is avoided by adding adjectives.

In classical forms of taxonomical categorization, the problem of conceptual stretching is resolved by dropping an adjective (authoritarianism) whereas in more contemporary usage in political science it is resolved by adding an adjective (bureaucratic authoritarianism).

Extension is gained by adding a secondary category.

A long-standing debate in comparative politics is the question of which attributes of democracy we should use to differentiate a democratic regime from a non-democratic regime. Elections, party choice, contestation, participation, accountability, protection of civil rights, equal opportunities, rule by the people, and social equity are all contested concepts, and each has specific attributes.

But does every ‘democracy’ have them all?

Clearly, not every democracy has all these attributes but they share a family resemblance.

A researcher may have seven cases of democratic regime, where each case has five out of six shared attributes, with each case missing a different attribute.

But they all radiate from a core meaning of “rule by the people”.
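Here is a toy sketch of that seven-case example: each hypothetical case has five of six democratic attributes, with a different one missing each time, so no single attribute is shared by all. Everything in it is invented for illustration.

```python
# Six democratic attributes; seven hypothetical "democracies", each
# missing a different attribute (cycling through the six).
attributes = ["elections", "party_choice", "contestation",
              "participation", "accountability", "civil_rights"]

cases = {f"case_{i}": {a for j, a in enumerate(attributes) if j != i % 6}
         for i in range(7)}

# Every pair of cases overlaps heavily, yet no single attribute is
# shared by all seven cases: a family resemblance, not a common essence.
shared_by_all = set.intersection(*cases.values())
print(shared_by_all)  # set() - empty
```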

Defining and conceptualizing democracy is not a simple case of operationalising variables and properties. Conceptualization is a highly contextual process. You have to know your cases. Gerring suggests that all concept formation in the social sciences can be understood as an attempt to mediate between eight criteria. Let’s briefly discuss each one.

Eight criteria 

But how does all this apply to the day-to-day practice of research design, and to ordinary language use? For Gerring (1999), concept formation is an ongoing interpretative battle that involves a set of trade-offs (as opposed to rules) between eight different criteria:

  1. Coherence: how internally coherent and externally differentiated are the attributes of the concept?
  2. Operationalisation: how do we know it when we see it?
  3. Validity: are we measuring what we purport to be measuring?
  4. Field utility: how useful is the concept within a field of closely related terms?
  5. Resonance: how resonant is the concept in ordinary language/specialised discourse?
  6. Contextual range: how far can the concept travel?
  7. Parsimony: how many attributes does it have?
  8. Analytic/empirical utility: how useful is it in your research design?

Concept formation ultimately refers to a) the phenomena to be defined, b) the properties or attributes that define them, and c) the label covering both.

He points out that our research is heavily shaped by the concepts we use. For example, the terms “neoliberal” and “ordoliberal” have very different connotations, and attributes, despite the fact that both concepts point to a similar economic philosophy. The same can be said of “globalisation” and “internationalisation”.

The concept ‘ideology’ is said to have 35 different attributes. This means it can effectively have hundreds of different definitions. Should we therefore abandon the concept in favour of a more specific term such as “political belief system”?

It matters a great deal how we define our terms and how we use them in our scientific discourses. Humans are bipedal and featherless (an observable fact with a clear empirical referent). But this is not what we mean by the term ‘human’.

Definitions, shared meanings, and clear indicators of what we are talking about are crucial in social science discourse (and in ordinary language use). Concepts perform a referential function. But this is not their only purpose. They also serve to differentiate, define and explicate. The colour blue takes off where the colours green and brown end.

 

Familiarity is important. But if a common concept or term serves only to confuse a plurality of ideas, then the creation of a new concept is often necessary. This is not, however, an invitation to produce a Derridaesque set of neologisms (yes, I did just create a new term!).

As Gerring points out, on a practical level, effective phrase-making (la casta in recent Spanish politics) can no more be separated from the task of concept formation than good writing can be separated from good research.

Concepts, for good or ill, also aspire to power, which is usually captured by their resonance. Good concepts stick because they resonate.

Good concepts do not have endless definitions. Abbreviation shortens discourse and increases understanding. Mathematical and logical language is an obvious example of this, but so is the Chinese language: it is parsimonious.

Parsimonious concepts reduce ambiguity and therefore it is easier for a theory to “grow legs” if it resonates and is parsimonious (i.e. does not have many different attributes).

Coherence suggests that the internal attributes which define the concept belong together. This is arguably the most important criterion for concept formation.

It differentiates blue from green, liberal from conservative. Coherence in the core meaning can make for a very sticky concept, which travels through time and space. Democracy as “rule by the people” is arguably the highest level of coherence in defining democracy.

Internal coherence is inseparable from external differentiation.

But differentiation is always a matter of degree. If you don’t know it when you see it, then you can’t tell it (the concept) from other things. For example, what is the difference between power, force, authority and violence? Good concepts need operationalization.

How can we operationalise a concept such as “efficient administrative reform” in the public sector? Geddes uses the concept “meritocratic recruitment”. This concept is coherent, differentiated and quite parsimonious.

But how theoretically useful is it? Is it the most important aspect of administrative reform?

Typologies 

Finally, allow me to say a few words about the central importance of typologies in the process of concept formation. Typologies (think Weber’s typology of authority, or typologies of liberal democracy) serve multiple functions: forming concepts, refining measurement, exploring dimensionality, and organizing explanatory claims.

Conceptual typologies and categorical variables explicate the meaning of a concept by mapping out its dimensions. How should we conceptualize the process of involving civil society in policymaking? Outside Westminster majoritarian systems, this question has generated an important set of typologies on corporatism, concertation and pluralism.

How should we conceptualise the diversity of capitalisms that exist in market societies?

Discussion – are there typologies that are directly relevant for organising the cases in your research project? What are the core contested concepts that shape your study?

 

Seminar 2: How to Use Comparative Case Studies for Causal Inference.

Introduction 

There are two dominant approaches to research design in the political and social sciences: statistical-oriented and case-oriented. The former is an extremely useful tool for identifying correlations, and the latter for teasing out causal mechanisms.

Imagine you were interested in trying to assess the effect of changes in diet on general health, and you had access to an enormous dataset collected by the World Health Organization. What methodological approach would you use?

Statistics are a powerful tool for population oriented studies.

You can slice and dice all the relevant information into variables and capture what is relevant with a few indicators. You could do this with a series of long interviews, but how much more relevant information would you receive?

Statistics are most useful when researchers are trying to explain marginal changes and marginal effects. You can find out how a change in lifestyle (more exercise and less alcohol) might effect a change in a given outcome (longevity).

Causes of effects 

Now imagine you want to explain why some countries revert to authoritarian regimes after a period of stable democracy. There are not very many cases of this in history, but Weimar Germany is the obvious example worth considering.

The USA today might become such a case in the future.

Small N case studies contribute to social science because they can explain the complex configuration of factors that interact to produce a given outcome. This is what is described as a “causes of effects” approach. Population studies are interested in “effects of causes”.

In the case of Germany, many different factors have been identified: the First World War, the Versailles treaty, unemployment, inflation, conflict between communists and social democrats, a polarizing class structure, a culture of anti-Semitism, and the rise of the Nazi party.

The complexity and uniqueness of case studies is what makes them so interesting for researchers. There are multiple pathways for a democracy to collapse (equifinality), and the integrity of the configuration of factors that make up the case is crucial.

Case studies, therefore, are not very useful for generalizations.

However, if you want to explain why mass genocides occur, then the number of cases (N) increases – Rwanda, the Balkans. You can then subtract the specific conditions that apply to Germany and identify more general trends and characteristics that all of these cases share.

Statistical and configurational case studies differ along 4 key lines:

  • the type of explanation (marginal versus holistic),
  • the unit of observation (variables versus cases),
  • the method of answering (additive versus conditions),
  • and the type of answers they provide (general versus specific).

Sometimes both methods can be combined: identifying a strong statistical relationship or correlation via regression analysis, and then using paired case studies to tease out the causal mechanism. But they cannot always be combined.

This course is about qualitative methods in political science, so we are more interested in the type of causal arguments that the case study approach can provide.

What is it a case of?

There is one fundamental rule or criterion that case study researchers tend to follow in the process of research design: ensuring that the case sheds light on a broader question, or universe of cases. To put it another way, always ask yourself – what is this a case of?

For example, if you are interested in examining why Mumbai has the largest red light district in Asia, frame and justify the case selection of Mumbai against a wider universe of cases such that it speaks to a broader theoretical literature on prostitution.

It is also crucial to consider the dimensions of your case.

If your research question can be captured in one dimension, then two well-chosen cases are enough. If it requires two dimensions, you can use four cases.

Political scientists usually capture these in a 2 x 2 table. For example, see Table 1.

Table 1: Modes of Wage-Setting and Political Economy

                Export    Sheltered
Centralized     France    Ireland
De-centralized  Germany   UK
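
For those who like to think in code, here is a minimal sketch (my own illustration in Python, not part of the seminar readings) of the case-selection logic behind Table 1: each cell of the 2 x 2 design is covered by exactly one case.

    # Illustrative sketch: encoding Table 1's two-dimensional design
    # as a mapping from (wage-setting, economy type) to the case
    # chosen to cover that cell.
    cases = {
        ("centralized", "export"): "France",
        ("centralized", "sheltered"): "Ireland",
        ("decentralized", "export"): "Germany",
        ("decentralized", "sheltered"): "UK",
    }

    # Two dimensions with two values each require four cases.
    assert len(cases) == 2 * 2
    print(cases[("decentralized", "sheltered")])  # -> UK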

Cases do not necessarily have to be countries or nation-states. They can be economic sectors, time periods, institutions, regions or even individuals.

Time and history 

Another distinguishing feature of causal inference in the case study method is sensitivity to time. As Paul Pierson (1996) argues, all politics takes place in time.

Some statistical analyses are timeless, in that they attempt to identify the marginal effects of an individual variable across time and space.

Pierson outlines four core points to consider when thinking about causality in time:

  • Sequence (does it matter if property rights are constitutionally enshrined before the state privatizes public assets?)
  • Timing (does it matter when a country democratizes? For example, southern Europe in the 1970s and central Europe in the 1990s)
  • Asymmetry (if social democrats created the welfare state, does that imply that a decline in social democracy leads to retrenchment of the welfare state, or does the politics of retrenchment have its own causal dynamic?)
  • Change (is change in form the same thing as change in function, and how do we distinguish inconsequential change from critical change?)

Causal inference 

Statistical and case-study oriented research designs have different understandings of causation, sometimes complementary but not always.

Goldthorpe (2001) identifies three different understandings of causation in the social sciences (robust dependence, consequential manipulation and generative processes).

“Correlation is not causation”. This states an obvious fact, namely that an association between the variables X and Y does not imply that X caused Y.

Causation as robust dependence attempts to solve this problem through the use of various statistical inference techniques to detect spurious causation. X is a cause of Y to the extent that the dependence of Y on X can be shown to be robust.

That is, it cannot be eliminated through introducing other variables into the analysis.
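
A minimal simulated sketch (my own Python illustration, not from Goldthorpe) makes the idea concrete: a confounder Z drives both X and Y, so the apparent dependence of Y on X is not robust once Z is introduced into the model.

    # Illustrative sketch of "robust dependence": does the association
    # between X and Y survive the introduction of a confounder Z?
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(42)
    n = 1000

    z = rng.normal(size=n)            # confounder, e.g. parental background
    x = 0.8 * z + rng.normal(size=n)  # e.g. education, partly driven by Z
    y = 0.5 * z + rng.normal(size=n)  # e.g. income, driven by Z, not by X

    # Naive model: Y appears to depend on X...
    naive = sm.OLS(y, sm.add_constant(x)).fit()

    # ...but the dependence vanishes once Z is controlled for.
    controlled = sm.OLS(y, sm.add_constant(np.column_stack([x, z]))).fit()

    print(naive.params)       # spuriously positive coefficient on X
    print(controlled.params)  # coefficient on X shrinks toward zero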

A lot of political science has critiqued this approach by arguing that such techniques can show relations among variables but not necessarily that these relations are actually produced. They can forecast but they cannot explain.

Statistical inference is not causal inference.

Income is dependent on educational levels, and this dependence is robust. But why? How does this dependence come about? Is it about the supply of and demand for skills? Establishing a causal link requires specifying the relationship within a theory.

Causation as consequential manipulation attempts to establish causal inference through experimental methods. Causes must serve as treatments that are manipulable. If X is manipulated, then it must have a consequential response or effect on Y.

Only a randomized experiment can verify if Y is a consequential effect of X. How useful is this understanding of causation when explaining political and social phenomena?
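
As a stylized sketch of this logic (my own Python illustration, with invented numbers), random assignment of the treatment X means that a simple difference in means identifies its consequential effect on Y.

    # Illustrative sketch of causation as consequential manipulation:
    # randomized assignment lets the difference in means recover the
    # true treatment effect.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10000

    treated = rng.random(n) < 0.5     # randomized assignment of X
    baseline = rng.normal(50, 10, n)  # outcome in the absence of treatment
    true_effect = 2.0                 # invented treatment effect
    y = baseline + true_effect * treated

    ate = y[treated].mean() - y[~treated].mean()
    print(f"Estimated average treatment effect: {ate:.2f}")  # close to 2.0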

Case study researchers are usually interested in explaining why X has causal significance for Y, and that the association must be generated through some sort of causal mechanism, even if it is not directly observable. They are interested in the causes of effects.

Case studies and causal mechanisms 

Causation as a generative process usually takes place via three steps in research design: establishing the phenomena to be explained, hypothesizing a generative process or causal mechanism, and testing the hypotheses.

For example, in the study of political economy, supply-side factors such as declining interest rates and financial liberalization are associated with housing-asset price booms (the explanandum). But does this explain the wide variation in house prices across the OECD?

An alternative explanation might be that income growth and wage-setting institutions (the mechanism) explain housing inflation. This might lead to a particular hypothesis on the extent to which sectoral-level interests shape asset-inflationary outcomes (the hypothesis).

Necessary and sufficient conditions 

In conclusion, political science researchers conceptualize causation in contrasting ways when they pursue explanation in particular cases versus large populations.

Mathematically, both research traditions stem from different understandings of causality: one originates in the study of linear algebra, the other in Boolean logic.

But is there a contradiction in conceptualizing causation as a configurational process that generates particular outcomes within specific cases versus causation as a statistical probability that exists across all populations? Can there be a unified theory of causality?

For example, what causes democracy?

The conditions that caused democracy in India (an all-encompassing mass party) should, probabilistically, decrease the likelihood of democracy in general. So what causes democracy in this case?

The all-encompassing party turns out to be a necessary causal variable in India.

Conclusion 

Causal inference in case studies is grounded in a philosophy of logic rather than the logic of probability. Logic identifies necessary and sufficient causes for an outcome.

For example, in political science, Barrington Moore (1966) famously quipped “no bourgeoisie, no democracy”. The presence of a middle class is considered a necessary cause for democracy. But the presence of a middle class is not a sufficient cause in itself.

Think about this in terms of formal logic (see James Mahoney 2008):

Y1 = X1 & (A1 v B1)

Y1 = democratic pathway; X1 = strong bourgeoisie; A1 = alliance between bourgeoisie and aristocracy; B1 = weak aristocracy.

A1 and B1 are neither individually necessary nor sufficient. Instead, each combines with X1 to form one of two combinations, (X1 & A1) or (X1 & B1), each of which is sufficient to produce Y1.
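
For those who want to check the logic mechanically, here is a small Python sketch (my own illustration, not from Mahoney) that enumerates the truth table for Y1 = X1 & (A1 v B1) and verifies which conditions are necessary and which combinations are sufficient.

    # Illustrative sketch: enumerate the truth table for
    # Y1 = X1 & (A1 v B1) and verify the necessary/sufficient claims.
    from itertools import product

    def outcome(x1, a1, b1):
        # Democratic pathway occurs iff a strong bourgeoisie (X1) combines
        # with either an alliance (A1) or a weak aristocracy (B1).
        return x1 and (a1 or b1)

    rows = [(x1, a1, b1, outcome(x1, a1, b1))
            for x1, a1, b1 in product([False, True], repeat=3)]

    # X1 is necessary: Y1 never occurs when X1 is absent.
    assert all(not y1 for x1, a1, b1, y1 in rows if not x1)

    # Neither A1 nor B1 is individually necessary: Y1 can occur without either.
    assert any(y1 and not a1 for x1, a1, b1, y1 in rows)
    assert any(y1 and not b1 for x1, a1, b1, y1 in rows)

    # Each combination is sufficient: X1 & A1, and X1 & B1, always yield Y1.
    assert all(y1 for x1, a1, b1, y1 in rows if x1 and a1)
    assert all(y1 for x1, a1, b1, y1 in rows if x1 and b1)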


Seminar 1: How to Think About Research Design

Introduction

Research design is about how to pose questions and fashion scholarly inquiry to make valid descriptive and causal inferences about the social world.

In political science, researchers adopt diverse methodological tools, quantitative and qualitative, but they share a common standard of evaluation.

No statistical technique can substitute for good research design and subject matter knowledge.

Qualitative research strategies usually combine a small number of cases with complex arguments, with the implication that there are more variables than observations: think of the collapse of East Germany in 1989.

Case study research is empirical work carefully tailored to the subject.

Neither quantitative nor qualitative research is superior to the other, and deciding which strategy to pursue ought to be conditioned by the research question.

Both approaches to social science, however, must pay attention to the rules of scientific inference and adopt shared standards and procedures of social inquiry.

Intelligent commentary is not research.

Research is a scientific process of inquiry that occurs within a stable structure of rules and procedures. In academia, it’s not your opinion that matters, it’s what you can demonstrate!


Causal inference 

For King, Keohane and Verba (1994) all political scientific research, regardless of method, shares the following characteristics:

  • The goal is inference (to infer beyond the data to make a meaningful claim about the world that cannot be directly observed)
  • The procedures are public (the data can be reliably assessed by others to determine its validity)
  • The conclusions are uncertain (to construct arguments that can be falsified)
  • The content is the method (the material of inquiry is endless, the unity consists in the methods of inquiry)

For analytical purposes political scientists usually break the process of research design into four interactive components: the research question, the theory, the generation of data and the use of data.  I will go through these in more detail in the coming weeks.

In the quantitative-statistical template, many of the problems of research design are defined away with more and better statistical controls. This is why qualitative researchers are often accused of not abiding by the same rigorous rules of scientific inquiry.

But all researchers who rely on observational data need case studies. Knowledge of cases and context contributes to achieving valid inferences about the political/social world.

Critical thinking is more than technical wizardry. Analytic rigour in all research is difficult. The appearance of methodological rigour can be highly deceptive.

This module argues that research is an iterative process. The steps of research design are constantly being constructed by the researcher. But it also adopts the political science assumption that causal inference is the objective of social science.

The process of turning interesting ideas into arguments usually involves:

  • Identifying a puzzle (explaining the success/failure of economic adjustment strategies)
  • Formulating a research question to address the puzzle (does austerity explain Ireland’s export-led recovery?)
  • Presentation of a debate in the literature (troika policy choices versus long-term institutional effects)
  • Proposing a different hypothesis/argument/theory (the presence/absence of US service sector firms)
  • Gathering empirical material to address the question (sectoral composition of economic growth and exports)
  • Drawing conclusions and inferences (export-led recovery more associated with the US business cycle than with Europe).

Social science as a debate 

All interesting ideas have to be constructed as a specific empirical research question linked to a theoretical debate with real world significance.

Intelligent research design is about constructing better arguments, and using the procedures of research design to make better and more valid descriptive and causal inferences about the political and social world.

A central step in designing a masters or doctoral thesis is identifying a gap or problem in a clearly defined body of academic literature, and then formulating a specific research question to address this.

A thesis is about depth not breadth.

Early researchers often start out with a ‘grand theory’ that they want to ‘prove’ or ‘disprove’ such as ‘the European Union is a neoliberal project’. They then amass as much data as possible to prove this is true. This is not a good strategy to adopt.

A masters or doctoral thesis makes a contribution to social science when it develops a mid-level theory, is specific, and tests causal relationships that challenge theoretical assumptions.

The political world that social science tries to understand is highly unpredictable and very uncertain. Just think about Trump and Brexit!

There are competing social scientific visions of how the social and political world is constituted (ontology); what we can know about that world (epistemology); and how we can develop empirical knowledge of that world (methodology).

Unsurprisingly, therefore, the basic architecture of political science often takes the form of an academic debate.

But it is not a sophistic debate. It is a scientific debate aimed at solving puzzles: explaining the origins of democracy, economic development, the causes of war, policy responses to crisis and the success of international trade agreements.

New facts rarely settle theoretical debates 

Puzzles almost always relate back to a theory or an argument that is under fire.

For example, what causes unemployment?

A lot of economic theory suggests that institutions which keep labour markets from self-clearing are a cause of unemployment. If this is true, what would be the observable implications of this theory? High unemployment in countries with strong welfare states?

A theory never entirely disappears, no matter how often it has been falsified (i.e. self-clearing labour markets); it usually finds reincarnation in a new set of falsifiable predictions.

New facts rarely, if ever, settle long standing theoretical debates. This is the lesson we have learnt from Imre Lakatos. It requires a new theory.

We discuss this further next week.

Research question 

All research starts with a question. Half the battle of writing a masters or doctoral thesis is constructing that question. It’s iterative. The question will evolve.

Research design gives you the tools to adequately answer the question. The process of linking empirical observations to theories (arguments) = research design.

When someone asks you what your thesis is about, always try to answer by stating: “I am asking the question…” or “I am trying to understand….”

A social scientific question should be stated in such a way that it could be wrong. Don’t ask a question that implicitly contains the answer. Frame it empirically.

The literature review is your construction of the competing academic explanations that attempt to answer that question.

When you propose and construct your explanation (hypothesis/argument/claim) the difficult part begins: finding the observable implications of your theory.

A large part of research, therefore, is taken up with systematically collecting data on as many observable implications of the suggested theory as possible.

For example, in the study of inequality, one of the major contributions of Thomas Piketty’s book ‘Capital in the 21st Century’ is the production of a longitudinal time-series dataset to assess his theory that r > g (the return on capital exceeds the rate of economic growth). The theory is contested, but the data is not.
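
To see the arithmetic behind r > g, here is a stylized sketch (my own illustration with invented rates, not Piketty’s estimates): wealth compounding at r outpaces income growing at g, so the wealth-to-income ratio rises over time.

    # Illustrative sketch of the r > g logic: if the return on capital (r)
    # exceeds the growth rate of the economy (g), wealth grows faster than
    # income and the wealth-to-income ratio rises.
    r, g = 0.05, 0.015          # invented stylized rates
    wealth, income = 100.0, 100.0

    for year in range(50):
        wealth *= 1 + r
        income *= 1 + g

    print(f"After 50 years: wealth = {wealth:.0f}, income = {income:.0f}")
    print(f"Wealth/income ratio rose from 1.00 to {wealth / income:.2f}")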

Ideas can emerge from anywhere. But what makes for a good research question? Bob Hancké (2009) outlines the following criteria:

  1. Relevance to real world problems
  2. Pre-research and engagement with empirical material
  3. Engaging an existing theoretical debate
  4. Balance between concreteness and abstractness
  5. Falsifiability (i.e. not a statement that contains the answer)
  6. Researchability (i.e. don’t ask questions about the future)

Next week we will discuss causal inference in the social and political sciences.