Formative evaluation developed relatively late in the course of evaluation's emergence as a discipline, out of growing frustration with an exclusive emphasis on outcome evaluation as the sole purpose of evaluation activity. Outcome evaluation looks at the intended or unintended, positive or negative consequences of a program, policy, or organization. While outcome evaluation is useful where it can be done, it is not always the best type of evaluation to undertake. In many cases it is difficult or even impossible to undertake an outcome evaluation because of feasibility or cost, and even where outcome evaluation is feasible and affordable, it may be a number of years before its results become available. Attention has therefore turned to using evaluation techniques to maximise the chances that a program will be successful, instead of waiting until a program's final results are available to assess its usefulness. Formative evaluation thus complements outcome evaluation rather than being an alternative to it.
Formative evaluation is done with a small group of people to "test run" various aspects of instructional materials. For example, you might ask a friend to look over your web pages to see whether they are graphically pleasing, whether there are errors you've missed, or whether there are navigational problems. It's like having someone look over your shoulder during the development phase to help you catch things that you miss but a fresh set of eyes will not. At times, you might need this help from a member of the target audience: if you're designing learning materials for third graders, for example, you should include a third grader in your formative evaluation.
Formative evaluation has also recently become the recommended method of evaluation in U.S. education. In this context, an educator analyzes a student's performance during the teaching/intervention process and compares this data to the baseline data. Four visual criteria can be applied (Kazdin, 1982): 1) change in mean, 2) change in level or discontinuity of performance, 3) change in trend or rate of change, and 4) latency of change (Thomas & Grimes, 2008).
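The four visual criteria above can be given simple numeric operationalizations. The sketch below is illustrative and not drawn from the cited sources: the function names, the use of a least-squares slope for "trend," and the baseline-mean threshold for "latency" are all assumptions chosen for demonstration.

```python
def slope(ys):
    """Least-squares slope of ys against equally spaced time points 0..n-1."""
    n = len(ys)
    mean_x = (n - 1) / 2
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in enumerate(ys))
    den = sum((x - mean_x) ** 2 for x in range(n))
    return num / den

def visual_criteria(baseline, intervention):
    """Compute the four visual criteria for a baseline/intervention comparison.

    Returns (change_in_mean, change_in_level, change_in_trend, latency).
    All operationalizations here are illustrative assumptions.
    """
    # 1) Change in mean: difference of phase averages.
    change_in_mean = sum(intervention) / len(intervention) - sum(baseline) / len(baseline)
    # 2) Change in level: discontinuity at the phase change
    #    (first intervention point minus last baseline point).
    change_in_level = intervention[0] - baseline[-1]
    # 3) Change in trend: difference of within-phase slopes.
    change_in_trend = slope(intervention) - slope(baseline)
    # 4) Latency of change: index of the first intervention point exceeding
    #    the baseline mean (one simple operationalization); None if never.
    base_mean = sum(baseline) / len(baseline)
    latency = next((i for i, y in enumerate(intervention) if y > base_mean), None)
    return change_in_mean, change_in_level, change_in_trend, latency

# Hypothetical weekly scores for one student.
baseline = [10, 11, 10, 12]
intervention = [13, 15, 16, 18, 19]
```

A rapid change (small latency, large level change) suggests the intervention itself, rather than outside factors, is driving the improvement.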
Another method of monitoring progress in formative evaluation is the number-point rule. In this method, if a certain pre-specified number of data points collected during the intervention are above the goal, the educator should consider raising the goal or discontinuing the intervention. If the data points vary widely, educators can discuss how to motivate the student to achieve more consistently (Thomas & Grimes, 2008).
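The number-point rule reduces to a simple count-and-threshold check. The sketch below is illustrative rather than a prescribed procedure from the sources; the function name, the default threshold of three points, and the sample scores are all assumptions.

```python
def number_point_rule(data_points, goal, required_points=3):
    """Return True if at least `required_points` observations exceed `goal`.

    A True result signals that the educator should consider raising the
    goal or discontinuing the intervention. The threshold of 3 is an
    illustrative assumption; in practice it is pre-specified per plan.
    """
    above = sum(1 for y in data_points if y > goal)
    return above >= required_points

# Hypothetical intervention scores against a goal of 80:
scores = [72, 78, 81, 85, 79, 88]
flag = number_point_rule(scores, goal=80)  # three points exceed 80
```

Because the rule depends only on a count, it is easy to apply by hand from a progress-monitoring chart; the computation simply makes the pre-specified decision criterion explicit.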
Kazdin, A. E. (1982). Single-case research designs: Methods for clinical and applied settings. New York: Oxford University Press.
Thomas, T., & Grimes, J. (2008). Best practices in school psychology V. Bethesda, MD: National Association of School Psychologists (NASP). Vol. 2, p. 218.