Course Evaluation


Hello everyone. It was great to have you in my course this semester, and I hope you enjoyed the experience. In my quest to make the course more enjoyable for you, I would like your input. I also hope you will find a way to use what you learned in this course in the near future to make your lives better. As we approach the end of the semester, I would like you to share your opinion about the course by clicking this link. It is my hope that you will take this opportunity seriously and offer genuine suggestions to improve the course.

Here are three things I would like you to respond to:

1) What did you like about the course? (Think about pacing: too slow, too fast, or just about right; the information covered; field trips; out-of-class activities; in-class activities; and so forth.)

2) What did you not like?

3) What could I have done differently?

This is completely anonymous. Feel free to express your opinion to help me improve students’ experiences in the course.

Good luck, and Happy Summer, Y’all!!


Curriculum Evaluation Using Tyler’s Goal Attainment Model or Objectives-Centered Model


In this article, I will describe the Tyler model, emphasizing its evaluative component. I will use the DeKalb County science curriculum in my analysis. Specifically, I will use Dunwoody High School students’ outcome data on the End of Course Test (EOCT) for physical science and biology to evaluate the curriculum. Before I begin the evaluation, however, I will provide a brief overview of the Tyler model (What is it? What are its parts? What are the criticisms of the model?), and finally I will conclude.

The Qualitative Method of Impact Analysis


The article entitled “The Qualitative Method of Impact Analysis” by Mohr (1999) attempts to establish qualitative study design as a rigorous and explicit method for impact analysis (impact evaluation). In the article, Mohr discusses the problems facing qualitative methods when they are used to study impact. He asserts that impact is fundamentally a causation problem, and causal impact is better evaluated with a quantitative methodology. Mohr argues that the main issue is the definition of causality: the most widely accepted definition of causation rests solely on the counterfactual definition of causality. Therefore, if Y occurred, then X must have occurred. This aligns perfectly with the quantitative methodology of impact evaluation. According to Mohr (1999), a more defensible version of the counterfactual definition is called factual causation, which states that “X was a cause of Y if and only if X and Y both occurred and, in the circumstances, if X had not occurred, then neither would Y” (Mohr, 1999, p. 71). As a result, causation is better established when variables are compared: causality is derived from comparing results in the experimental group to those in the control group. Without this combination of observations, it would be impossible to determine the variance attributable to the treatment variable; hence, statistical analysis would not be possible.
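The counterfactual logic described above can be made concrete with a small sketch. The data and group sizes below are hypothetical, invented purely for illustration; the point is only that a quantitative impact estimate comes from comparing outcomes in an experimental group against a control group standing in for the counterfactual.

```python
# Illustrative sketch (hypothetical data, not from Mohr, 1999): quantitative
# impact evaluation infers causation by comparing an experimental group's
# outcomes against a control group's, which approximates the counterfactual
# ("what would have happened without the treatment").
from statistics import mean

treatment_outcomes = [78, 85, 82, 90, 88]  # hypothetical scores with the program
control_outcomes = [70, 72, 75, 71, 74]    # hypothetical scores without it

# Estimated impact: observed outcomes minus the counterfactual estimate.
impact = mean(treatment_outcomes) - mean(control_outcomes)
print(round(impact, 1))  # prints 12.2
```

Without the control group there is no second set of observations to compare against, which is exactly why, on this definition, statistical impact analysis becomes impossible.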

Based on the counterfactual definition of causality, it would be impossible to use a qualitative methodology to evaluate impact. To determine impact, qualitative methods must rely on something other than counterfactual evidence to establish causal inferences, because a qualitative methodology cannot show the concurrence of X and Y using the treatment and control groups that are prevalent in quantitative designs. However, Scriven (1976, as cited in Mohr, 1999) offers an approach called the “modus operandi” method, which can bypass the counterfactual definition of causality. The modus operandi method is an elimination process: to demonstrate that treatment T caused Y to occur, other possible causes of Y, such as U, V, and W, must be eliminated as contenders. The method is commonly used in the daily work of professionals such as doctors, police officers, and investigators. It does not meet the counterfactual definition of causality used in quantitative study designs; nevertheless, it allows qualitative study designs to determine a program’s impact by using the elimination process to draw causal inferences. No comparison variables are needed to establish causation in qualitative designs, because physical causality, rather than factual causality, produces compelling evidence that T caused Y once all the other contenders have been eliminated. Thus, causal reasoning can be reliably used in qualitative designs to draw causal inferences in program and impact analysis.
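The elimination process above can be sketched in a few lines. Everything in this sketch is a hypothetical assumption of mine, not content from Mohr (1999): the candidate causes, the “signature” traces each would leave, and the observed evidence are all invented. The idea is that each candidate cause has a characteristic trace, and candidates whose traces are absent from the evidence are eliminated.

```python
# Illustrative sketch (hypothetical, not from Mohr, 1999) of the modus
# operandi method as an elimination process: each candidate cause of Y is
# assumed to leave a characteristic "signature" of traces, and candidates
# whose traces are missing from the evidence are eliminated.

# Hypothetical candidate causes of outcome Y and the traces each would leave.
candidate_signatures = {
    "T (the treatment)": {"participants report new skills", "timing matches rollout"},
    "U (staff turnover)": {"experienced staff departed"},
    "V (policy change)": {"new policy documented"},
    "W (seasonal trend)": {"same change seen in prior years"},
}

# Traces actually found in the qualitative evidence (interviews, documents).
observed = {"participants report new skills", "timing matches rollout"}

def eliminate(signatures, observed_traces):
    """Keep only candidates whose full signature appears in the evidence."""
    return [cause for cause, traces in signatures.items()
            if traces <= observed_traces]

remaining = eliminate(candidate_signatures, observed)
print(remaining)  # only "T (the treatment)" survives elimination
```

No control group appears anywhere in this sketch: the causal inference rests on ruling out U, V, and W, which is the sense in which the method bypasses the counterfactual definition.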

I enjoyed reading this article because it offered me practical and useful insights into conceptualizing causal inference. I learned that the causation debate between quantitative and qualitative researchers rests on the definition of causation. For supporters of quantitative design, causation is defined by the counterfactual definition of causality; thus, causation is determined by comparing two sets of values (control and experimental). Proponents of qualitative design, on the other hand, argue that causation can be established through the process of elimination, a process we commonly use in our daily lives without comparisons or variables. I can relate this to my research, as there are several similarities between my research design and the elimination process described in this article. My research follows the quantitative tradition, but it does not involve a control group. The causal inferences I can draw from my design (a single-participant research design) result largely from better control of threats to internal validity rather than from comparing the results of a control group to those of an experimental group. Thus, as a researcher, I plan to incorporate the useful, practical insights and steps for determining causal inferences discussed in this article.
Reference
Mohr, L. B. (1999). The qualitative method of impact analysis. American Journal of Evaluation, 20(1), 69-84.