By: Shaaban Fundi
The article entitled "The Qualitative Method of Impact Analysis" by Mohr (1999) seeks to establish qualitative study designs as rigorous and explicit methods for impact analysis (impact evaluation). In the article, Mohr discusses the problems qualitative methods face when they are used to study impact. He asserts that impact is fundamentally a question of causation, and that causal questions of this kind are conventionally evaluated using quantitative methodology. Mohr argues that the main issue lies squarely in the definition of causality. The most widely accepted definition of causation is the counterfactual definition: if the cause X had not occurred, the effect Y would not have occurred. This aligns perfectly with the quantitative methodology of impact evaluation. According to Mohr (1999), a more defensible version of the counterfactual definition is called factual causation, which states that "X caused Y if and only if X and Y both occurred and, in the circumstances, if X had not occurred, then neither would Y" (Mohr, 1999, p. 71). On this definition, causation is best established through comparison: causality is inferred by comparing results from the experimental group with those from the control group. Without this basis of putting two sets of observations together to determine the variance attributable to the treatment variable, statistical analysis of impact would not be possible.
Under the counterfactual definition of causality, it seems impossible to use qualitative methodology to evaluate impact. To determine impact, qualitative methods must rely on something other than counterfactual evidence to establish causal inferences, because a qualitative methodology cannot show the co-occurrence of X and Y using the treatment and control groups that are standard in quantitative designs. However, Scriven (1976, as cited in Mohr, 1999) offers an approach called the "modus operandi" method that bypasses the counterfactual definition of causality. The modus operandi method is a process of elimination: to demonstrate that treatment T caused Y to occur, other possible causes of Y, such as U, V, and W, must be ruled out as contenders. The modus operandi method is commonly used in the daily work of professionals such as doctors, police officers, educators, and investigators. It does not meet the counterfactual definition of causality used in quantitative study designs. Nevertheless, because of the modus operandi method, qualitative study designs can be used to determine the impact of programs, using elimination to establish causal inferences. Thus, no comparison groups are needed to establish causation in qualitative designs, because physical causality, rather than factual causality, produces compelling evidence that T caused Y once all the other contenders have been eliminated. In this way, causal reasoning can be reliably used in qualitative designs to draw causal inferences in program and impact analysis.
I enjoyed reading this article because it offered very practical and useful insights for conceptualizing causal inference. I have learned that the debate on causation between quantitative and qualitative researchers centers largely on the definition of causation. For supporters of quantitative design, causation is defined mainly by the counterfactual definition of causality: causation is determined by comparing two sets of observations (control and experimental values). Proponents of qualitative design, on the other hand, hold that causation can be established through a process of elimination, and they argue that elimination is commonly used in our daily lives without comparison groups. I can relate this to my own research. There are several similarities between my research design and the process of elimination described in this article. My research follows the quantitative tradition but does not involve a control group. Thus, the causal inferences I can draw from my single-participant research design rest largely on tight control of internal threats to validity rather than on comparing results from a control group and an experimental group, because no control group exists. As a researcher, I therefore plan to incorporate the practical insights and steps for determining causal inference discussed in this article into my own research, especially during the design phase (to eliminate all other possible causes of any increase in student scores) and during data interpretation.
Mohr, L. B. (1999). The qualitative method of impact analysis. American Journal of Evaluation, 20(1), 69-84.