Program Evaluation

By Michelle Lindeman

Graduate Student, Grand Valley State University


Definition

Program evaluation is the assessment of the efficiency, effectiveness, and accountability of a department, program, or agency. There are three key definitions of program evaluation. The first defines it as systematic measures and comparisons that provide specific information on program results to senior officials for use in policy or management decisions (Milakovich and Gordon 2001). The second defines it as the use of scientific methods to measure the implementation and outcomes of programs for decision-making purposes (Rutman and Mowbray 1983). The third describes it as the application of systematic research methods to the assessment of program design, implementation, and effectiveness (Chelimsky 1989). All three definitions treat program evaluation as systematic and grounded in measurement.


Historic Roots

Program evaluation is rooted in government and politics. It began in Europe in the 1870s with the age of reform in England, when educational achievement was evaluated across Europe (Chelimsky 1989). In the United States, program evaluation began during the industrial revolution and Woodrow Wilson's presidency. Increased bureaucracy called for accountability, which added even more bureaucracy and led to program evaluation.

According to Chelimsky, program evaluation "developed slowly and incrementally under various guises and disguises" (Chelimsky 1989, 2). However, the period 1948-1963 marked an important movement for program evaluation, as several different evaluation methods began to merge.

In the 1950s, the push for rational government spending drove the development of program evaluation. The Department of Defense began using a Planning-Programming-Budgeting System (PPBS) (Chelimsky 1989), and the General Accounting Office (GAO) was also involved in performance evaluation. The GAO served as the "investigating arm of congress that helped Congress oversee federal program and operations to assure accountability to the American people" (Milakovich and Gordon 2001, 412).

The social reform programs of the 1960s influenced program evaluation and brought PPBS and other performance budgeting approaches even further into the spotlight. During the period 1966-1969, the push for political rationality began; the art of muddling through was no longer the approach of choice, and the scientific method took hold in program evaluation (Chelimsky 1989).

In the 1970s, program evaluation continued to be of interest and was seen as "more systematic instead of intuitive" (Milakovich and Gordon 2001, 406). In 1974, Title VII of the Congressional Budget and Impoundment Control Act came to fruition and helped generate continued interest in program evaluation.

In 1993, program evaluation continued to push forward and gain momentum when the Government Performance and Results Act (GPRA) required managers to plan and measure performance in new ways (Milakovich and Gordon 2001). The National Performance Review movement also moved forward at this time.

By 2000, program evaluation had become even more important and more closely controlled. Grant makers began requiring organizations applying for grants to include in their proposals a plan to evaluate the effectiveness and efficiency of their programs. In addition, grant makers began providing funds as part of the grant to ensure program evaluation was completed, and some grants were awarded specifically to fund program evaluation.


Importance

The purpose of program evaluation is to determine whether a program is efficient (using resources wisely to perform the needed work), effective (meeting the performance measures or objectives set), and implemented as stated. Evidence about a program's efficiency and effectiveness can inform decisions, address accountability problems, and aid in planning. It can also improve operations, the reallocation of resources, and contract monitoring (Lane 1999). Understanding this importance requires knowledge of the components of program evaluation.

Several steps come before the evaluation itself. There must be a commitment to the evaluation process, clear communication to staff, decisions on a budget, a determination of whether the evaluation will be done internally or externally (using a professional evaluator is preferred), and a definition of staff roles in relation to the evaluation. The next step is to build a logic model (Hatry 1999) or program model (Rutman and Mowbray 1983). The model provides an in-depth understanding of the issues, what can be measured, how to measure it, and what analysis should be used in the evaluation.

To build the model, first list all inputs or resources. Then list the activities or workload, which are the services of the department, program, or agency. Next, list the outputs, which are products: the number of pamphlets distributed, hours served, and so on. Finally, list the outcomes, which occur as a result of the inputs, activities, and outputs. There can be intermediate outcomes, where a change has occurred; end outcomes, where a goal has been reached; and ultimate outcomes or impacts that will affect a broader audience for future generations. After the lists are completed, connect them with arrows showing which inputs lead to which activities and which activities lead to specific outcomes, showing how the categories depend on one another; this is a logic model (Hatry 1999).
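As a rough illustration of these steps, the sketch below represents a logic model as a simple data structure. The tutoring program and every input, activity, output, outcome, and link in it are hypothetical examples invented for illustration; they do not come from Hatry or any other source cited here.

    # A minimal sketch of a logic model as a data structure (Python).
    # The tutoring program and all of its elements below are hypothetical.
    logic_model = {
        "inputs": ["grant funding", "volunteer tutors", "classroom space"],
        "activities": ["recruit and train tutors", "run weekly tutoring sessions"],
        "outputs": ["tutors trained", "tutoring hours delivered"],
        "outcomes": {
            "intermediate": "students complete more homework",
            "end": "students' reading scores improve",
            "ultimate": "higher graduation rates in the community",
        },
        # Arrows connecting the categories: which inputs lead to which
        # activities, and which activities lead to specific outcomes.
        "links": [
            ("volunteer tutors", "run weekly tutoring sessions"),
            ("run weekly tutoring sessions", "tutoring hours delivered"),
            ("tutoring hours delivered", "students' reading scores improve"),
        ],
    }

    # Print each arrow so the chain of reasoning can be reviewed with staff.
    for source, target in logic_model["links"]:
        print(f"{source} -> {target}")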

Having completed the logic model, a decision must be made about the analysis to use for the program evaluation. To make that decision, one must do an evaluability assessment (Rutman and Mowbray 1983). This entails knowing the types of measures the organization is able to collect, the time frame, and the budget. When deciding what type of analysis is feasible for the evaluation, it is important to pay close attention to the demographics of the program, the internal and external threats that might affect the evaluation, and how the data are collected, such as through observation, internal data, and surveys (Rutman and Mowbray 1983).

After all decisions are made, the evaluator ensures the indicators are clear and concise and that everyone involved in the program evaluation understands what is being measured, how it is being measured, and how the data will be analyzed. The implementation of the program evaluation is very important because, if it is not done correctly, the results will be skewed. Attribution (whether the program produced the measured results) and whether the results can be generalized to other programs are also very important (Rutman and Mowbray 1983). The evaluator must ensure that the data are as objective, reliable, and valid as possible.

Program evaluation is often used to assist lawmakers in government, and decision makers in organizations, in making policy decisions. Policy makers often have many choices when selecting a particular policy. In order to choose, they need information about the effectiveness of those policies, and program evaluation can provide the information necessary to make informed decisions.

Political executives, legislators, and organizations use program evaluation to provide information on the efficiency and effectiveness of programs and policies. It affects decisions about which programs and policies to fund, whether to continue funding them in the same way, or whether to fund them at all. It assists in future decision making, policy formulation, and policy revision, and it can be used for monitoring resources and monitoring the implementation of programs (Milakovich and Gordon 2001).

Program evaluation is used in many different disciplines. The most common are psychology, sociology, economics, political science, applied statistics, and anthropology. Citizens use program evaluation to decide on product or service purchases, or whether to donate money or volunteer for an organization. Foundations use program evaluation to determine their new and continued funding choices.


Ties to the Philanthropic Sector

Many foundations now require grant seekers to have a program evaluation process in place. The evaluation may be conducted internally or by an outside provider. Foundations often include funds in the grant to pay for outside program evaluations. Stipulations may state that programs not meeting their goals are unable to receive additional funds, since such programs may be viewed as inefficient, ineffective, and unaccountable.


Key Related Ideas

Impact Analysis, Performance Measurement, and Systems Analysis are several key ideas or tools that have emerged as a result of program evaluation. Because program evaluation assesses the effectiveness of an organization, impact analysis is the first tool: it examines the relationship between X (the program) and Y (the outcomes of the program) (Mohr 1995).
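As a rough, hypothetical illustration of this X-to-Y relationship, the sketch below compares mean outcomes for a program group and a comparison group. The numbers are invented, and the simple difference in means shown is only one basic way to estimate an impact, not Mohr's full method.

    # A minimal sketch of impact analysis: relating X (participation in a
    # hypothetical program) to Y (an outcome score). All numbers are invented.
    participants = [72, 75, 80, 78, 74]  # outcome scores, program group
    comparison = [70, 68, 73, 71, 69]    # outcome scores, comparison group

    def mean(values):
        """Average of a list of numbers."""
        return sum(values) / len(values)

    # A simple difference in mean outcomes between the two groups; a real
    # impact analysis would also address attribution through an experimental
    # or quasi-experimental design.
    estimated_impact = mean(participants) - mean(comparison)
    print(f"Estimated program impact on the outcome: {estimated_impact:.1f} points")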

Performance Measurement develops indicators "to measure the outcomes and efficiency of services provided" (Hatry 1999, 3). Employers implement pay-for-performance systems with sets of indicators to measure employee performance for meeting expectations, merit raises, or other work incentives.
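To make the idea of an indicator concrete, the short sketch below computes two simple measures, cost per unit of output and an outcome rate, for a hypothetical tutoring program. The figures and indicator names are illustrative assumptions, not indicators prescribed by Hatry.

    # A minimal sketch of performance measurement indicators for a
    # hypothetical tutoring program. All figures are invented.
    total_cost = 25000.00    # dollars spent on the program
    tutoring_hours = 1250    # output: hours of tutoring delivered
    students_served = 100    # students who participated
    students_improved = 62   # outcome: students whose reading scores rose

    # Efficiency indicator: cost per unit of output.
    cost_per_hour = total_cost / tutoring_hours

    # Outcome indicator: share of participants achieving the intended result.
    improvement_rate = students_improved / students_served

    print(f"Cost per tutoring hour: ${cost_per_hour:.2f}")
    print(f"Share of students whose reading improved: {improvement_rate:.0%}")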

Systems Analysis, or Cost-Effectiveness Analysis, is a term included in many budgeting policies for decision-making purposes regarding the extent of funding for a program. Performance-based budgeting measures the efficiency, effectiveness, and accountability of a program. Zero-based budgeting (ZBB) reappraises a program as needed; it is a rational approach to budgeting and employs a ranking system to review and prioritize the budget of each unit.

Total Quality Management (TQM) is a management tool that involves employees and empowers them to make decisions to improve quality for customers. TQM is utilized in the private sector as well as in nonprofit entities.

Continuous Quality Improvement (CQI) is a management tool that helps organizations become learning organizations through data collection. Examining the data collected allows continued improvement in the quality of services provided and in the organization as a whole.


Important People Related to the Topic

  • Malcolm Baldrige: The Malcolm Baldrige National Quality Award, created by Public Law 100-107 and signed into law on August 20, 1987, is named in his honor. He was U.S. Secretary of Commerce from 1981 until his death in 1987. Through his managerial skill, he contributed to the study of program evaluation and improved the efficiency and effectiveness of the government (National Institute of Standards and Technology).

  • Donald T. Campbell: Campbell contributed to program evaluation and impact analysis for over fifty years. He received his Ph.D. from Berkeley in 1947. He introduced the language of quasi-experimental design and was a leading authority on cross-cultural psychology. He taught at many prestigious universities, received the Distinguished Scientific Contribution Award from the American Psychological Association, and also served as that organization's president. Dr. Campbell also received an award for his contribution to research in education (Segall 1996).

  • Eleanor Chelimsky: Chelimsky authored Program Evaluation: Patterns and Directions. She was an economic analyst for the United States mission to NATO and was a Fulbright scholar in Paris. In 1980, she directed the program evaluation and methodology division of the U.S. General Accounting Office (GAO); and in 1988, she was named the assistant comptroller general for program evaluation and methodology.

  • Harry P. Hatry: Hatry is an influential contributor to the field of performance measurement and evaluation. He is currently a research associate at the Urban Institute in Washington, D.C. Mr. Hatry is a former director of the Institute's Public Management Program and has been influencing the field since the 1970s. In 1995, he received the Elmer B. Staats Award for Excellence in Program Evaluation and the National Public Service Award, sponsored by the American Society for Public Administration. In 1999, he received a lifetime achievement award for performance measurement. In 2000, he received the 50th Anniversary Einhor-Gray Award for his commitment to a more accountable government. He has been an associate of the U.S. Office of Management and Budget's performance measurement advisory council and served on the task force for the United Way of America outcome measurement panel (Center for the Business of Government).

  • Charles Hitch: Hitch was an economist who contributed to the concepts of PPBS budgeting in the 1950s. He created such analytical tools as cost-effectiveness analysis, cost-benefit analysis, and systems analysis, all of which are beneficial in program evaluation. He was part of Robert McNamara's Office of Systems Analysis at the Pentagon.

  • Lawrence B. Mohr: Mohr has contributed greatly to program evaluation and impact analysis as the author of Impact Analysis for Program Evaluation, and he has written several books on this topic as well as on organizational behavior. He is a professor at the University of Michigan and has worked for the U.S. Public Health Service (University of Michigan).

  • Leonard Rutman: Rutman is co-author of Understanding Program Evaluation and has written or co-authored several books in the area of program evaluation. He has taught at Carleton University and the University of Winnipeg.

  • Carol Weiss: Weiss is a professor at the Harvard Graduate School of Education and has been a key figure in program evaluation. Her expertise lies in the areas of affirmative action, policy analysis and evaluation, and research methods. She has a Ph.D. from Columbia University and has written more than eleven books on evaluation and related topics. Her current research is in the area of decision making and the influence of state and federal policies, as well as studies of evaluation. She has received several awards, including the Myrdal Award for Science from the Evaluation Research Society in 1980 and, most recently, a fellowship for advanced study in the behavioral sciences in 1993 (Graff and Christou 2001).

  • Joseph Wholey: Wholey is one of the leading individuals to explore evaluability assessment and is an authority on performance measurement. He is Senior Advisor for Evaluation Methodology at the U.S. General Accounting Office. Previously, he worked for the U.S. Department of Health and Human Services. Mr. Wholey is a past president of the Evaluation Research Society and has held leadership roles at the Urban Institute. He earned his Ph.D. from Harvard.


Related Nonprofit Organizations

  • Center for Excellence in Nonprofits is a learning organization with materials on community-based nonprofits, continuous improvement, and leadership development (http://www.cen.org/site/cen/).

  • The Center for What Works improves the effectiveness of social policy by assisting public- and non-profit-sector organizations in systematically identifying and replicating best practices. Its emphasis is benchmarking (http://www.whatworks.org).

  • Indiana Center for Evaluation is linked with the School of Education at Indiana University. Its purpose is to promote and support systematic program evaluation for non-profit organizations (http://ceep.indiana.edu/).

  • The National Results Council is an organization geared toward vocational rehabilitation programs. It tracks the results and performance of employment programs (http://www.nationalresultscouncil.org).

  • The United Way Outcome Measurement Resource Network provides outcome measurement resources and instructions (http://www.unitedway.org).


Related Web Sites

The Answer Center Program Evaluation Web site, at http://www.delawarenonprofit.org/ProgEvalFaq.html, provides links to many program evaluation tools and answers frequently asked questions. The Delaware Association of Nonprofit Agencies maintains the site.

The Bureau of Justice Assistance (BJA) Evaluation Web site, at http://permanent.access.gpo.gov/lps9890/lps9890/www.bja.evaluationwebsite.org/index.htm, is geared toward the evaluation of criminal justice programs and includes a glossary, reports, resources, and an evaluation "roadmap."

The CDC Evaluation Working Group Web site, at http://www.cdc.gov/eval/resources.htm, is used for learning about CDC evaluation and offers many resources for general organizations regarding standards of excellence, performance improvement, ethics, and standards.

The Evaluation Societies Web site, at http://www.policy-evaluation.org, provides links to evaluation societies around the world that give updates on new developments in the field. It is geared toward community evaluation.

The Learning Institute Web site has a Nonprofit Organizational Assessment Tool, at http://www.uwex.edu/li/learner/assessment.htm, which helps non-profit organizations discuss and work through many of the issues they may encounter.

The Management Assistance Program for Nonprofits (MAPNP) Web site, at http://www.mapfornonprofits.org, provides documents that can guide both non-profit and private organizations in planning and implementing a program evaluation.

The Nonprofitexpert.com Web site, at http://nonprofitexpert.com/evaluation.htm, offers many evaluation links for nonprofits, including links for specific types of nonprofits.

The Online Evaluation Resource Library Web site, at http://oerl.sri.com, is run by the National Science Foundation. It has a glossary of evaluation terminology, best practices criteria, and case studies.

The W.K. Kellogg Foundation Web site offers an Evaluation Handbook, at http://www.wkkf.org/Pubs/Tools/Evaluation/Pub770.pdf, which gives organizations a blueprint for conducting evaluations.


Bibliography and Internet Sources

Center for the Business of Government. "Award Winners." http://www.businessofgovernment.org/main/winners/bios/harry_hatry_bio.asp.

Chelimsky, Eleanor. Program Evaluation: Patterns and Directions. Washington, D.C.: The American Society for Public Administration, 1989. ISBN: 0-936678-12-7.

Graff, Fiona, and Miranda Christou. "In Evidence Lies Change: The Research of Whiting Professor Carol Weiss." Harvard Graduate School of Education, HGSE News, September 2001. http://www.gse.harvard.edu/news/features/weiss09102001.html.

Hatry, Harry P. Performance Measurement: Getting Results. Washington, D.C.: The Urban Institute Press, 1999. ISBN: 0-87766-692-X.

Lane, Frederick S. Current Issues in Public Administration. Boston/New York: Bedford/St. Martin's, 1999. ISBN: 0-312-15249-3.

Milakovich, Michael E., and George J. Gordon. Public Administration in America. Boston: Bedford/St. Martin's, 2001. ISBN: 0-312-24972-1.

Mohr, Lawrence B. Impact Analysis for Program Evaluation. Thousand Oaks/London/New Delhi: Sage Publications, 1995. ISBN: 0-8039-5935-4 (alk. paper); ISBN: 0-8039-5936-2 (pbk.: alk. paper).

National Institute of Standards and Technology. "Baldrige National Quality Program." The Malcolm Baldrige National Quality Improvement Act of 1987, Public Law 100-107. http://www.quality.nist.gov/Improvement_Act.htm.

Rutman, Leonard, and George Mowbray. Understanding Program Evaluation. Beverly Hills/London/New Delhi: Sage Publications, 1983. ISBN: 0-8039-2093-8.

Segall, Marshall. "On the Life of Donald Campbell." New York, USA, 1996. http://www.iaccp.org/bulletin/V30.2_1996/Campbell.html.

University of Michigan. "Gerald R. Ford School of Public Policy." http://www.fordschool.umich.edu/.