The article entitled "The Qualitative Method of Impact Analysis" by Mohr (1999) attempts to establish qualitative study design as a rigorous and explicit method for impact analysis (impact evaluation). In this article, Mohr discusses the problems qualitative methods face when they are used to study impact. He asserts that impact is fundamentally a problem of causation, and that causation is conventionally thought to be better evaluated with a quantitative methodology. Mohr argues that the main issue rests on the definition of causality. The most widely accepted definition of causation is the counterfactual definition: if the cause X had not occurred, then the effect Y would not have occurred. This aligns closely with the quantitative methodology of impact evaluation. According to Mohr (1999), a more defensible version of the counterfactual definition is called factual causation, which states that "Y was caused by X if and only if X and Y both occurred and, in the circumstances, if X had not occurred, then neither would Y" (Mohr, 1999, p. 71). On this view, causation is best established when groups are compared: causality is inferred by comparing results from the experimental group with those from the control group. Without this comparison of observations it would be impossible to estimate the variation attributable to the treatment variable, and statistical analysis would not be possible.
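As a rough formalization of this definition (my own shorthand, not notation that appears in Mohr's article), factual causation can be written as

\[
X \text{ caused } Y \;\iff\; (X \wedge Y) \;\wedge\; (\neg X \rightarrow \neg Y),
\]

where the second conjunct is read as a counterfactual conditional ("had X not occurred, Y would not have occurred") rather than as simple material implication.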
Under the counterfactual definition of causality, it appears impossible to use a qualitative methodology to evaluate impact. To determine impact, qualitative methods must rely on something other than counterfactual evidence to establish causal inferences, because a qualitative design cannot demonstrate the dependence of Y on X without the treatment group and control group that are central to quantitative designs. However, Scriven (1976, as cited in Mohr, 1999) offers an approach called the "modus operandi" method, which bypasses the counterfactual definition of causality. The modus operandi method is a process of elimination: to demonstrate that treatment T caused Y to occur, other possible causes of Y, such as U, V, and W, must be ruled out as contenders for having produced Y. The modus operandi approach is commonly used in the daily work of professionals such as physicians, police officers, and investigators. It does not satisfy the counterfactual definition of causality used in quantitative study designs. Nevertheless, with the modus operandi method, qualitative study designs can be used to assess a program's impact by using elimination to support causal inferences. No comparison group is needed to establish causation in qualitative designs, because physical causality, rather than factual causality, can produce compelling evidence that T produced Y once all the other contenders have been eliminated. Thus, causal reasoning can be reliably used in qualitative designs to draw causal inferences in program and impact analysis.
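To make the elimination logic concrete, here is a minimal sketch (my own illustration, not code or an example from Mohr or Scriven) of how modus operandi reasoning proceeds: each rival cause carries a characteristic "signature" of traces it would leave if it had produced the outcome, and a candidate is ruled out when its signature is absent from the observed evidence.

```python
# Minimal sketch of modus operandi (elimination) reasoning.
# The candidate causes and their "signatures" are purely illustrative.
candidate_causes = {
    "T": {"trace_t1", "trace_t2"},   # the treatment/program under evaluation
    "U": {"trace_u"},                # rival explanations
    "V": {"trace_v1", "trace_v2"},
    "W": {"trace_w"},
}

# Traces actually observed alongside the outcome Y.
observed_traces = {"trace_t1", "trace_t2", "other_noise"}

def surviving_causes(candidates, observed):
    """Keep only candidates whose full signature appears in the evidence."""
    return [name for name, signature in candidates.items()
            if signature <= observed]

remaining = surviving_causes(candidate_causes, observed_traces)
print(remaining)  # ['T']: U, V, and W are eliminated, leaving T as the cause
```

This mirrors how a physician or investigator rules out rival explanations one by one, rather than comparing a treatment group with a control group.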
I enjoyed reading this article because it offered practical and useful insights into conceptualizing causal inferences. I have learned that the causation debate between researchers who favor quantitative designs and those who favor qualitative designs rests on the definition of causation. For supporters of quantitative design, causation is defined by the counterfactual definition of causality, so it is determined by comparing results from two groups (control and experimental). Proponents of qualitative design, on the other hand, argue that causation can be established through a process of elimination, a process we commonly use in our daily lives without comparison groups or statistical variables. I can relate this to my own research. There are several similarities between my research design and the process of elimination described in this article. My research follows the quantitative tradition, but it does not involve a control group. The causal inferences I can draw from my single-participant research design come largely from better control of threats to internal validity rather than from comparing results of a control group with those of an experimental group. Because there is no control group in my proposed experimental design, I plan to incorporate the practical insights and steps for determining causal inferences discussed in this article.
Reference
Mohr, L. B. (1999). The qualitative method of impact analysis. American Journal of Evaluation, 20(1), 69-84.