One of the critical initial steps in research is defining the terms and scope of a project. Consider the question posed on the first page of this website: Why is there war? One of the first steps toward approaching that question was defining exactly what we meant by “war.” Adcock and Collier (2001) suggest that it can be useful to work from a general background concept toward a more specific systematized concept that involves an explicit definition. However, simply defining a concept is not sufficient for undertaking a research project. We also need to identify indicators for our concepts. How, for instance, do we know that a war has occurred? What measures will we use (such as the number of battlefield deaths) to signify that one event is a war while another is not? Finally, how will we score cases (e.g., by the number of deaths, or as “total war” vs. “limited war” in a more qualitative scheme)?
Concepts
1. Concepts are often contested, and identifying and defining our key concepts can be a challenging task. Consider John Gerring’s (2001) effort to define a common social science concept: ideology. In the process of defining that concept, Gerring identified more than 30 distinct definitional traits used in the literature. Reading that literature, we may be unsure whether ideology should be defined in terms of opinions, beliefs, symbols, or attitudes.
2. Smith and Medin (1981) suggest three general approaches to understanding the nature of concepts. The classical approach holds that “all instances of a concept shared common properties, and that these common properties were necessary and sufficient to define the concept.” The probabilistic or prototype view assumes that instances of a concept can vary in the degree to which they share certain properties. Finally, the exemplar view holds that “there is no single representation of an entire class or concept, but only specific representations of the class’s exemplars.” Many social science concepts likely fit under the probabilistic or exemplar views. Consider democracy. So many different forms of democracy exist in the world that we could not easily apply a classical approach. The construction of many of the popular indexes for measuring democracy is in part a recognition that democratic nations vary in the characteristics they have in common (the probabilistic view). The existence of rival indexes for measuring democracy suggests we are talking about a class of political phenomena that may be impossible to capture with a single description (the exemplar view).
3. As Gerring (2001) suggests, we might identify and define a concept by adopting a definition others have used; considering what explains the concept or what the concept itself explains; exploring the intellectual history of a concept; or grouping together the “specific definitional attributes” that other definitions and uses of the word provide.
Measurement
1. After we have determined precisely what our concept is, we still need a way to indicate and measure its presence in the real world.
2. We want our measurements to be sensitive, valid, and reliable.
a. Sensitivity is about the level of precision in your measures. In general, you want to be as sensitive as possible, but you should keep in mind the limits of your measurement method. Some measurement methods, such as survey rating scales, become less reliable when you try to increase their sensitivity.
i. One aspect of precision is the level of measurement. Nominal measures capture variation in kind or type; marital status is an example. Ordinal measures capture variation in degree along a continuum, such as a rank ordering of preferences. Interval measures also vary along a continuum, but the distances between positions on the continuum are measurable and meaningful; calendar time is an example. Finally, ratio measures are like interval measures except that zero is also meaningful, so it is possible to say that one case has “twice the value” of another; a person’s age is an example. (The first code sketch after this list illustrates these four levels.)
b. Validity is the extent to which what you measure is what you say you measure.
i. Face Validity: the measure is plausible “on its face.”
ii. Content Validity: the extent to which the indicator captures all components of the systematized concept; matching the measure against a list of the concept’s attributes.
iii. Criterion-related Validity: the extent to which the indicator corresponds to established criteria; includes predictive and concurrent forms.
iv. Construct Validity: the extent to which what you measure behaves as it should within a system of related concepts; an attribute of the measure or indicator.
c. Reliability is the extent to which a measure is free from random error. A reliable measure is repeatable, consistent, and dependable. (The second code sketch after this list shows one simple reliability check.)
3. It is useful to clearly state your definition and measurement process for key variables in your research, especially when dealing with concepts for which there may be rival definitions.
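To make the levels of measurement concrete, here is a minimal sketch, assuming Python with pandas, of how each level might be encoded as a data type and which operations remain meaningful at each level. The marital-status and age examples echo the list above; the preference and election-year values are invented purely for illustration.

```python
# A hypothetical illustration of the four levels of measurement (not code from
# the lecture). Variable names and values are invented for demonstration.
import pandas as pd

# Nominal: variation in kind or type only -- categories with no inherent order.
marital_status = pd.Series(["single", "married", "divorced", "married"],
                           dtype="category")
print(marital_status.value_counts())       # counting categories is meaningful
# Ranking them is not: unordered categories have no meaningful min or max.

# Ordinal: ordered categories -- ranking is meaningful, distances are not.
preference = pd.Series(pd.Categorical(["low", "high", "medium", "high"],
                                      categories=["low", "medium", "high"],
                                      ordered=True))
print(preference.max())                    # ordering is meaningful

# Interval: numeric with meaningful distances but an arbitrary zero point.
# Gaps between years make sense; "year 2000 is twice year 1000" does not.
election_year = pd.Series([1992, 1996, 2000, 2004])
print(election_year.diff())                # differences are meaningful

# Ratio: numeric with a true zero, so ratios are meaningful.
age = pd.Series([20, 40, 60, 80])
print(age / age.min())                     # "twice as old" is a sensible claim
```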
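Reliability can also be assessed empirically. The sketch below, assuming Python with NumPy, computes Cronbach’s alpha, one conventional statistic for the internal consistency of a multi-item survey scale. Cronbach’s alpha is not discussed in the notes above, and the response data are fabricated; this is simply one illustration of checking whether a measure is repeatable, consistent, and dependable.

```python
# A hypothetical illustration of one reliability statistic (Cronbach's alpha);
# the formula is standard, but the data below are fabricated for demonstration.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: a (respondents x items) matrix of scores on a single scale."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)       # variance of each item
    total_variance = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Five respondents answering a three-item survey scale (invented scores).
responses = np.array([
    [4, 5, 4],
    [2, 2, 3],
    [5, 5, 5],
    [3, 3, 2],
    [4, 4, 5],
])
print(f"Cronbach's alpha = {cronbach_alpha(responses):.2f}")
```

A value near 1 suggests the items move together (little random error); a low value suggests the scale is noisy or that the items tap different things.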
Lecture Slides
Professor Nelson, Fall 2010, on “Concepts and Measurement.”
Resources
Books
- Collier, David, and John Gerring. 2009. Concepts and method in social science: the tradition of Giovanni Sartori. Routledge.
- Gerring, John. 2001. Social science methodology: a criterial framework. Cambridge University Press.
- Goertz, Gary. 2006. Social science concepts: a user’s guide. Princeton, NJ: Princeton University Press.
- King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing social inquiry: scientific inference in qualitative research. Princeton, NJ: Princeton University Press.
- Smith, Edward E., and Douglas L. Medin. 1981. Categories and concepts. Cambridge, MA: Harvard University Press.
- Sullivan, John Lawrence, and Stanley Feldman. 1979. Multiple indicators: an introduction. Beverly Hills: SAGE.
Articles
- Adcock, Robert, and David Collier. 2001. “Measurement Validity: A Shared Standard for Qualitative and Quantitative Research.” The American Political Science Review 95(3): 529-546.
Abstract: Scholars routinely make claims that presuppose the validity of the observations and measurements that operationalize their concepts. Yet, despite recent advances in political science methods, surprisingly little attention has been devoted to measurement validity. We address this gap by exploring four themes. First, we seek to establish a shared framework that allows quantitative and qualitative scholars to assess more effectively, and communicate about, issues of valid measurement. Second, we underscore the need to draw a clear distinction between measurement issues and disputes about concepts. Third, we discuss the contextual specificity of measurement claims, exploring a variety of measurement strategies that seek to combine generality and validity by devoting greater attention to context. Fourth, we address the proliferation of terms for alternative measurement validation procedures and offer an account of the three main types of validation most relevant to political scientists.
- Collier, David, and Robert Adcock. 1999. “Democracy and Dichotomies: A Pragmatic Approach to Choices about Concepts.” Annual Review of Political Science 2(1): 537-565.
Abstract: Prominent scholars engaged in comparative research on democratic regimes are in sharp disagreement over the choice between a dichotomous or graded approach to the distinction between democracy and nondemocracy. This choice is substantively important because it affects the findings of empirical research. It is methodologically important because it raises basic issues, faced by both qualitative and quantitative analysts, concerning appropriate standards for justifying choices about concepts. In our view, generic claims that the concept of democracy should inherently be treated as dichotomous or graded are incomplete. The burden of demonstration should instead rest on more specific arguments linked to the goals of research. We thus take the pragmatic position that how scholars understand and operationalize a concept can and should depend in part on what they are going to do with it. We consider justifications focused on the conceptualization of democratization as an event, the conceptual requirements for analyzing subtypes of democracy, the empirical distribution of cases, normative evaluation, the idea of regimes as bounded wholes, and the goal of achieving sharper analytic differentiation.
- Collier, David, and Steven Levitsky. 2009. “Democracy: Conceptual hierarchies in comparative research.” In Concepts and method in social science: the tradition of Giovanni Sartori, eds. David Collier and John Gerring. Routledge.
- Gerring, John. 1999. “What Makes a Concept Good? A Criterial Framework for Understanding Concept Formation in the Social Sciences.” Polity 31(3): 357-93.
- Sartori, Giovanni. 1970. “Concept Misformation in Comparative Politics.” The American Political Science Review 64(4): 1033-1053.
updated July 16, 2017 – MN