Program evaluation is the process of systematically collecting, analyzing, and using data to review the effectiveness and efficiency of programs. In educational contexts, program evaluations are used to: identify methods of improving the quality of higher education; provide feedback to students, faculty, and administrators; and ensure that programs, policies, curricula, departments, and/or institutions are functioning as intended and producing desirable outcomes. The Poorvu Center is available to provide program evaluation support to Yale faculty and administrators.
Common Types of Educational Program Evaluations
1. Needs assessment – This type of evaluation identifies whether there is a difference between the performance of a program and its desired objectives or outcomes. The purpose is to identify the existing ‘needs’ within the targeted audience that can be addressed with supplemental instruction or programming.
2. Curriculum mapping – This type of evaluation identifies when and how various skills, content, and objectives are addressed across multiple courses. A curriculum map helps instructors and administrators determine how to modify instruction or program requirements to ensure that the curriculum has the appropriate breadth and depth.
3. Program review – This type of evaluation occurs on a regular schedule (often every five to seven years). The evaluation allows faculty and administrators to examine how the program has changed over time and periodically (re)assess programmatic goals.
Program Evaluation Process
Several different models of program evaluation exist that differ slightly in both purpose and process. These models include the Kirkpatrick model, the CIPP model, and the SEP. Each of these models has unique terminology and may be more applicable in certain contexts than others. Despite these differences, each model follows a similar general evaluation process.
1. Planning
- Identify purpose – Determine why an evaluation is needed and how the information gathered during the evaluation process will be used.
- Identify stakeholders – This includes identifying key personnel, participants, and audiences of the program. During this process, determine who should be involved in the evaluation. Think about whose perspective would improve the evaluation and whose voices would be absent if not included in the process.
- Identify resources of the program – Determine all of the components that contribute to the functioning of the program (e.g., time, money, human resources, facilities).
2. Understanding Program Design
- Describe the goals and outcomes of the program – Write objectives and classify goals as short term, medium term, or long term.
- Identify programmatic activities – This includes all of the tasks that staff need to complete, as well as all of the courses, assignments, co-curriculars, mandatory meetings, etc. that participants of the program will be asked to complete.
- Connect the goals of the program with the activities and then the outcomes – This will help identify any gaps (i.e., goals and/or outcomes that are not associated with an activity) and ensure that all activities align with their intended purpose.
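As a minimal sketch, the goal–activity connections described above can be treated as a simple mapping and checked programmatically for gaps. The goal and activity names below are hypothetical, purely for illustration:

```python
# Hypothetical program goals mapped to the activities intended to address them.
# An empty list signals a gap: a goal with no supporting activity.
goal_to_activities = {
    "Improve scientific writing": ["Lab report assignments", "Peer review workshop"],
    "Build statistical literacy": ["Intro statistics course"],
    "Develop mentoring skills": [],  # no associated activity -- a gap to address
}

def find_gaps(mapping):
    """Return the goals that are not associated with any activity."""
    return [goal for goal, activities in mapping.items() if not activities]

print(find_gaps(goal_to_activities))  # -> ['Develop mentoring skills']
```

The same check can be run in the other direction (activities mapped to goals) to flag activities that do not align with any intended purpose.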
3. Design the Evaluation Plan
- Determine the scope of the evaluation – Identify what goals, outcomes, and activities will be included in the evaluation. Some outcomes may not be realized until long after participants complete the program and may require an ongoing evaluation.
- Find or develop measures to collect data - Perform a literature search to identify if any measures of your constructs exist and are currently being implemented in other programs. If not, you can systematically develop your own measures to address your initial questions about your program.
- Write an evaluation plan – The plan describes the data collection, data analyses, and reporting processes and outlines the responsibilities of each member of the evaluation team. Often an evaluation plan includes an examination of fidelity of implementation, which evaluates whether the program was implemented as designed, so that results can be based on what actually occurred in the program rather than on assumptions about how program components took place.
4. Conduct the evaluation
- Gather the data - Collect data according to the plan you developed.
- Analyze the data - Depending on the type of data you collected, select appropriate statistical analyses that allow you to understand what the data are demonstrating.
- Report the results to program stakeholders - Identify the audience and write an evaluation report. Include a description of the program as well as the results.
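As a simple illustration of the analysis step, a common starting point for quantitative program data is summarizing per-participant change between pre- and post-program measures. The scores below are hypothetical; real analyses should be chosen to match the data and questions in the evaluation plan:

```python
import statistics

# Hypothetical pre- and post-program scores for the same five participants.
pre  = [62, 70, 58, 75, 66]
post = [71, 78, 64, 80, 70]

# Per-participant gains, then a summary of central tendency and spread.
gains = [after - before for before, after in zip(pre, post)]
print(f"mean gain: {statistics.mean(gains):.1f}")          # mean gain: 6.4
print(f"std dev of gains: {statistics.stdev(gains):.1f}")  # std dev of gains: 2.1
```

Descriptive summaries like this are only a first pass; inferential tests, qualitative coding, or other analyses may be appropriate depending on the evaluation questions.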
5. Revise the program and/or evaluation plan for continuous improvement
- Evaluating a program is not a one-time process. When done well, evaluation establishes a continuous cycle of measurement and improvement within the program.
Internal vs. External Evaluations
Depending on the reasons why an evaluation is being considered, a program may pursue an internal or external evaluation. Internal evaluations are conducted by individuals within the program or institution, often as part of a formative effort of quality monitoring or continuous improvement. External evaluations, while often similar in methodology, are conducted by individuals outside of the institution and are often focused on summative assessment. Internal or external reviews can help determine where resources should be allocated for program improvement and provide a comprehensive picture of the services offered.
One benefit of internal self-evaluation is that it can be directed at specific goals or intentionally aligned with program outcomes. Internal colleagues may also be familiar with the culture and context of the program being evaluated and may be less intimidating to those involved. Internal evaluation can promote greater collegiality and collaboration among units, and because the evaluator remains close to the program, it is often feasible to oversee how results are implemented. A drawback of internal evaluation is the potential for subjectivity or bias in the evaluation.
A benefit of using external evaluators is that the evaluation can often be broader in scope and design. Additionally, external review can provide greater credibility for the results, with increased accountability for those being evaluated. Some grant funding agencies require an external evaluator for these reasons.
Relationship to Educational Research
Evaluators in educational settings use similar data collection methods and encounter many of the same methodological challenges as education researchers. However, the purpose underlying the work differs. Educational researchers are often trying to understand and explain how learning takes place for the purpose of increasing the wider knowledge base. Since researchers usually try to identify phenomena that may occur or be applicable across different contexts, there is often a greater emphasis on sampling methods to ensure that results are not limited to a specific group of participants, lab, classroom, or environment. Evaluations, by contrast, are usually context specific, and there is less of an attempt to generalize results beyond a particular context. The audience for an evaluation is usually the stakeholders in a program, with the aim of identifying ways to improve that program; depending on the scope of the evaluation, however, the results may be informative to other, similar programs.
Carroll, C., Patterson, M., Wood, S., Booth, A., Rick, J., & Balain, S. (2007). A conceptual framework for implementation fidelity. Implementation Science, 2, 40.
Goldstein, I. L., & Ford, J. K. (2002). Training in organizations: Needs assessment, development, and evaluation (4th ed.). Belmont, CA: Wadsworth/Thomson Learning.