Making Data-Based Decisions

By
John H. Correll, Ed.D., NAPSA Trustee

As pupil service administrators, we are typically responsible for an array of student programs, both legislated and locally determined, that are well meaning in their attempts to assist students. Too often, however, we are caught between staff who are emotionally invested in particular services and School Boards and other administrators who want evidence that these expensive, staff-intensive programs "work." PPS administrators are increasingly called upon to verify that programs are effective for their intended purposes, and many grants now require strict summative outcome data for refunding. Accountability thus comes in many forms for administrators responsible for programs established to enhance student growth and learning. Ethically, it also seems incumbent on us to monitor our programs continually for their efficacy: special programs that take a great deal of staff and pupil time should be shown to work. Otherwise, time that could be spent on interventions that do work is wasted.

Program evaluation can be a highly collaborative effort among those actually involved in implementing a particular set of services, as well as those responsible for outcomes at the system level. For this to happen, program evaluation should ideally be built in as the program is being planned, so that data-gathering steps are in place from the outset of the project. Program evaluation is thus an essential component of program development and should be considered in the early planning stages.

The first step in any planning process is clear agreement on the goals of the program. These agreed-upon goals should be tied to the district mission statement and learner outcomes in order to be relevant and important to everyone involved. If a program does not address issues that fall somewhere on the district's landscape of priorities and intents, it is likely to be considered ancillary and irrelevant to the goals of the district. Reviewing the district's priorities can help the planning committee understand district goals regarding student growth. For example, is an academic support program intended to increase student test scores, improve attendance, increase credits earned, enhance progress on an IEP, or perhaps several of these? It is critical to achieve clarity and agreement on the intent and focus of a program before implementation.

Second, the activities of the program must be aligned with its goals. As self-evident as this may sound, a time analysis of a particular intervention program too often reveals that much of the intervention time is devoted to activities not central to the program's goals. For example, a special program designed to acquaint eighth graders with high school registration and course selection should stay clearly focused on those goals, and time should not be spent unnecessarily on tangential issues.

Third, program evaluation data should ideally be collected at three levels.

The first level of evaluation data is purely descriptive (e.g., How many students were involved, at what grade level, of which gender?). It captures the basic demographics of the participants. These data may prove vital to interpreting a program's outcomes and may heighten awareness among stakeholder groups that a major concern exists.
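Where participation records are already kept electronically, even a short script can produce this first level of descriptive data. The sketch below is illustrative only; it assumes a hypothetical file named participants.csv with columns grade_level and gender, names chosen for the example rather than drawn from any particular student information system.

    # Level 1 (descriptive) summary: counts of participants by grade level and gender.
    # Assumes a hypothetical file "participants.csv" with columns: student_id, grade_level, gender.
    import csv
    from collections import Counter

    def summarize_participants(path="participants.csv"):
        grade_counts, gender_counts, total = Counter(), Counter(), 0
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                total += 1
                grade_counts[row["grade_level"]] += 1
                gender_counts[row["gender"]] += 1
        print(f"Total participants: {total}")
        for grade, n in sorted(grade_counts.items()):
            print(f"  Grade {grade}: {n} ({n / total:.0%})")
        for gender, n in sorted(gender_counts.items()):
            print(f"  {gender}: {n} ({n / total:.0%})")

    if __name__ == "__main__":
        summarize_participants()

The same counts can, of course, be produced in a spreadsheet; the point is simply that Level 1 data are inexpensive to gather and report.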

The next level of evaluation involves the subjective impressions of stakeholders regarding the program. The survey we complete to evaluate a conference presentation is an example of this. Through a survey or interview, students involved in a program can provide valuable information about a particular service or program (e.g., Is the guidance program helpful, and in what ways?). Surveys should be carefully constructed and as specific as possible.

The third level of program evaluation involves collecting direct outcome data on the progress of participants based on the interventions of the program (e.g., Does that special reading program actually improve student reading?). Pre/post testing can be used effectively here, as can additional data drawn from classroom performance. Collecting data at this level must be built into the program design from the outset, as it is too late to pretest months into an intervention. This area of program evaluation can involve some of the intricacies of field research, such as the use of control or comparison groups and statistical techniques for judging the significance of the program's impact, but it can also be fun. Several statistical analysis packages run on an ordinary PC and allow for user-friendly data analysis.
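As one illustration of the kind of user-friendly analysis mentioned above, the sketch below compares pre- and post-test scores for the same group of students with a paired t-test, using Python and the SciPy package (one of many packages that run on a PC). The scores and the 0.05 significance cutoff are hypothetical examples for illustration, not data from any actual program.

    # Level 3 (outcome) sketch: paired pre/post comparison for one group of students.
    # The scores below are hypothetical; in practice they would come from the program's own records.
    from scipy import stats

    pre_scores  = [62, 58, 71, 65, 54, 69, 60, 73, 57, 66]   # before the intervention
    post_scores = [68, 63, 74, 70, 59, 75, 64, 78, 61, 72]   # same students, after

    result = stats.ttest_rel(post_scores, pre_scores)
    mean_gain = sum(b - a for a, b in zip(pre_scores, post_scores)) / len(pre_scores)

    print(f"Mean gain: {mean_gain:.1f} points")
    print(f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")

    # A conventional (and here purely illustrative) cutoff: treat p < 0.05 as evidence
    # that the observed gain is unlikely to be due to chance alone.
    if result.pvalue < 0.05:
        print("The pre/post difference is statistically significant.")
    else:
        print("No statistically significant difference was detected.")

A comparison or control group, where one is available, would strengthen this design, since maturation alone can raise scores over the course of a school year.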

When evaluating a program that is already underway, some modification of these procedures may be needed. Given that our work takes place in schools with distinct time intervals (school year, semester, etc.), however, the evaluation can be timed to begin at a naturally occurring break in the cycle. The process may not be as pure, but decisions can still be positively influenced by effective program evaluation.

PPS administrators are called upon to make data-driven decisions regarding programs, and some of the basic principles of program evaluation can help in those decision-making efforts. Additional information and references on effective program evaluation can be obtained by contacting John Correll (P.O. Box 234, Brodheadsville, PA 18322; phone 570-656-4286).