Evaluation research in the 21st century: an increasingly relevant tool for educational and social development

Authors

  • Tomás Escudero, Universidad de Zaragoza

DOI:

https://doi.org/10.7203/relieve.22.1.8164

Keywords:

Evaluation research, Social development, Transdisciplinary discipline, Diverse methodologies, Participative strategies, Utility and use of evaluation, Ethical-scientific rules, Meta-evaluation

Abstract

Following a broad review of recent publications on the subject, this article analyses the current state of evaluation research as a strategic tool for decision-making aimed at the development and improvement of society and of citizens' quality of life in sectors as varied as education, health, the economy, culture, social protection and public policy. The scientific identity of evaluation research is described and substantiated, stressing its transdisciplinary character, the rise of the evaluation of organizations and institutions, the use of diverse methodologies and the importance of participative strategies. The article also outlines the utility and appropriate use of evaluations as a priority object of this kind of research, resting always on ethical principles and standards of scientific quality and on the corresponding meta-evaluative studies.

Author Biography

Tomás Escudero, Universidad de Zaragoza

Chair of Educational Research at the University of Zaragoza. He has authored a large number of works and publications in the field of educational evaluation, particularly on institutional evaluation.

References

Abelson, J., Forest, P-G., Eyles, J., Smith, P., Martin, E., & Gauvin, F. P. (2003). Deliberations about deliberative methods: issues in the design and evaluation of public participation processes. Social Science & Medicine, 57, 239-251. doi: http://dx.doi.org/10.1016/S0277-9536(02)00343-X

Abma, T. A. (2000). Stakeholder conflict: a case study. Evaluation and Program Planning, 23, 199-210. doi: http://dx.doi.org/10.1016/S0149-7189(00)00006-9

Aguilar, M. (2001). La evaluación institucional de las universidades. Tendencias y desafíos. Revista de Ciencias Sociales (Cr), II-III, 93-92, 23-34.

American Evaluation Association (2008). Guiding Principles for Evaluators. American Journal of Evaluation, 29(4), 397-398.

Askew, K., Green Beverly, M. & Jay, M. L. (2012). Aligning collaborative and culturally responsive evaluation approaches. Evaluation and Program Planning, 35, 552-557. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.011

Azzam, T. & Levine, B. (2015). Politics in evaluation: Politically responsive evaluation in high stakes environments. Evaluation and Program Planning, 53, 44-56. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.07.002

Betzner, A., Lawrenz, F. P. & Thao, M. (2016). Examining mixing methods in an evaluation of a smoking cessation program. Evaluation and Program Planning, 54, 94-101. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.06.004

Brandon, P. R. & Fukunaga, L. L. (2014). The state of the empirical research literature on stakeholder involvement in program evaluation. American Journal of Evaluation, 35(1), 26-44. doi: http://dx.doi.org/10.1177/1098214013503699

Bredo, E. (2006). Philosophies of Educational Research. In J. L. Green, G. Camilli & P. B. Elmore (Eds.), Handbook of Complementary Methods in Education Research (pp. 3-31). London: Lawrence Erlbaum Associates/AERA.

Calderon, A. J. (2004). Institutional Research at RMIT. A case study. Paper presented at the 26th EAIR Forum, Barcelona, 5-8 September.

Chelimsky, E. (2008). A Clash of Cultures: Improving the “Fit” Between Evaluative Independence and the Political Requirements of a Democratic Society. American Journal of Evaluation, 29(4), 400-415. doi: http://dx.doi.org/10.1177/1098214008324465

Chouinard, J. A. & Milley, P. (2016). Mapping the spatial dimensions of participatory practice: A discussion of context in evaluation. Evaluation and Program Planning, 54, 1-10. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.09.003

Christie, C. A. (2003). The practice-theory relationship in evaluation. New Directions for Evaluation, 97. San Francisco, CA: Jossey-Bass.

Christie, C. A. (2007). Reported influence of evaluation data on decision makers' actions: An empirical examination. American Journal of Evaluation, 28(1), 8-25. doi: http://dx.doi.org/10.1177/1098214006298065

Christie, C. A. & Fleischer, D. N. (2010). Insight Into Evaluation Practice: A Content Analysis of Designs and Methods Used in Evaluation Studies Published in North American Evaluation-Focused Journals. American Journal of Evaluation, 31(3), 326-346. doi: http://dx.doi.org/10.1177/1098214010369170

Christie, C. A., Ross, R. M. & Klein, B. M. (2004). Moving toward collaboration by creating a participatory internal-external evaluation team: a case study. Studies in Educational Evaluation, 30(2), 107-117. doi: http://dx.doi.org/10.1016/j.stueduc.2004.06.002

Claverie, J., Gonzalez, G. & Perez, L. (2008). El Sistema de Evaluación de la Calidad de la Educación Superior en la Argentina: El Modelo de la CONEAU. Alcances y Límites para Pensar la Mejora. Revista Iberoamericana de Evaluación Educativa, 1(2), 149-164.

Cook, J. R. (2015). Using Evaluation to Effect Social Change: Looking Through a Community Psychology Lens. American Journal of Evaluation, 36(1), 107-117. doi: http://dx.doi.org/10.1177/1098214014558504

Cousins, J. B. (2004). Commentary: Minimizing evaluation misuse as principled practice. American Journal of Evaluation, 25(3), 391-397. doi: http://dx.doi.org/10.1177/109821400402500311

Cousins, J. B., Goh, S. C., Elliot, C. J. & Bourgeois, I. (2014). Framing the Capacity to Do and Use Evaluation. New Directions for Evaluation, 141, 7-23. doi: http://dx.doi.org/10.1002/ev.20076

Daigneault, P. (2014). Taking stock of four decades of quantitative research on stakeholder participation and evaluation use: A systematic map. Evaluation and Program Planning, 45, 171–181. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.04.003

Donaldson, S. I. (2007). Program theory-driven evaluation science. Mahwah, NJ: Erlbaum.

Donaldson, S. I. & Gooler, L. E. (2003). Theory-driven evaluation in action: lessons from a $20 million statewide Work and Health Initiative. Evaluation and Program Planning, 26, 355-366. doi: http://dx.doi.org/10.1016/S0149-7189(03)00052-1

Donaldson, S. I. & Lipsey, M. W. (2006). Roles for theory in contemporary evaluation practice: Developing practical knowledge. In I. Shaw, J. C. Greene & M. M. Mark (Eds.), The Handbook of Evaluation: Policies, Programs, and Practices (pp. 56-75). London: Sage.

Donaldson, S. I. & Scriven, M. (2003). Diverse visions for evaluation in the new millennium. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 3-16). Mahwah, NJ: Erlbaum.

Escudero, T. (2000). Evaluación de centros e instituciones educativas: las perspectivas del evaluador. In D. González, E. Hidalgo & J. Gutiérrez (Coords.), Innovación en la escuela y mejora de la calidad educativa (pp. 57-76). Granada: Grupo Editorial Universitario.

Escudero, T. (2002). Evaluación institucional: algunos fundamentos y razones. In V. Álvarez & A. Lázaro, Calidad de las Universidades y Orientación Universitaria (pp. 103-138). Málaga: Ediciones Aljibe.

Escudero, T. (2003). Desde los tests hasta la investigación evaluativa actual. Un siglo, el XX, de intenso desarrollo de la evaluación en educación. [English version: From tests to current evaluative research. One century, the XXth, of intense development of evaluation in education]. RELIEVE, 9(1). Retrieved from http://www.uv.es/RELIEVE/v9n1/RELIEVEv9n1_1.htm

Escudero, T. (2005-2006). Claves identificativas de la investigación evaluativa: análisis desde la práctica. Contextos Educativos. Revista de Educación, 8-9, 179-199.

Escudero, T. (2006). Evaluación y mejora de la calidad en educación. In T. Escudero & A. D. Correa, Investigación en innovación educativa: algunos ámbitos relevantes (pp. 269-325). Madrid: La Muralla, S. A.

Escudero, T. (2007). Evaluación institucional de la calidad universitaria en España: Breve pero interesante historia. Anuario de Pedagogía, 9, 103-115.

Escudero, T. (2009). Some relevant topics in educational evaluation research. In M. Asorey, J. V. García Esteve, M. Rañada & J. Sesma (Eds.), Mathematical Physics and Field Theory. Julio Abad, in Memoriam (pp. 223-230). Zaragoza: Prensas Universitarias de Zaragoza.

Escudero, T., Pino, J. L. & Rodríguez, C. (2010). Evaluación del profesorado universitario para incentivos individuales: revisión metaevaluativa. Revista de Educación, 351, 513-537.

Escudero, T. (2011). La construcción de la investigación evaluativa. El aporte desde la educación. Prensas Universitarias-Universidad de Zaragoza.

Escudero, T. (2013). Utilidad y uso de las evaluaciones. Un asunto relevante. Revista de evaluación educativa, 2(1). Retrieved from http://revalue.mx/revista/index.php/revalue/issue/current

European Commission/EACEA/EURYDICE (2015). Assuring Quality in Education: Policy and Approaches to School Evaluation in Europe. Eurydice Report. Luxembourg: Publications Office of the European Union. Retrieved from http://eacea.ec.europa.eu/education/eurydice/

Exposito, J., Olmedo, E. & Fernandez-Cano, A. (2004). Patrones metodológicos en la investigación española sobre evaluación de programas educativos. RELIEVE, 10(2). Retrieved from http://www.uv.es/RELIEVE/v10n2/RELIEVEv10n2_2.htm

Ferrandez, R. (2008). Programas de Auditoría Institucional Universitaria. Comparación de la Propuesta Española con el Sistema Británico. Revista Iberoamericana de Evaluación Educativa, 1(1), 156-170.

Fetterman, D. M. (2001a). The Transformation of Evaluation into a Collaboration: A Vision of Evaluation in the 21st Century. American Journal of Evaluation, 22(3), 381-385. doi: http://dx.doi.org/10.1177/109821400102200315

Fetterman, D. M. (2001b). Foundations of empowerment evaluation. Thousand Oaks, CA: Sage.

Fetterman, D. M., Kaftarian, S. J. & Wandersman, A. (Eds.) (2015). Empowerment Evaluation: Knowledge and Tools for Self-Assessment, Evaluation Capacity Building, and Accountability (2nd ed.). Thousand Oaks, CA: Sage.

Fitzpatrick, J. L. (2012). Commentary: Collaborative evaluation within the larger evaluation context. Evaluation and Program Planning, 35, 558-563. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.012

Geist, M. R. (2010). Using the Delphi method to engage stakeholders: A comparison of two studies. Evaluation and Program Planning, 33, 147-154. doi: http://dx.doi.org/10.1016/j.evalprogplan.2009.06.006

Henry, G. T. (2003). Influential evaluations. American Journal of Evaluation, 24(4), 515-524. doi: http://dx.doi.org/10.1177/109821400302400409

Henry, G. T. & Mark, M. M. (2003). Beyond use: Understanding evaluation's influence on attitudes and actions. American Journal of Evaluation, 24(3), 293-314. doi: http://dx.doi.org/10.1177/109821400302400302

House, E. R. (2008). Blowback. Consequences of Evaluation for Evaluation. American Journal of Evaluation, 29(4), 416-426. doi: http://dx.doi.org/10.1177/1098214008322640

Jacob, S. (2008). Cross-Disciplinarization: A New Talisman for Evaluation? American Journal of Evaluation, 29(2), 175-194. doi: http://dx.doi.org/10.1177/1098214008316655

Johnson, K. (2009). Research on Evaluation Use: A Review of the Empirical Literature From 1986 to 2005. American Journal of Evaluation, 30(3), 377-410. doi: http://dx.doi.org/10.1177/1098214009341660

Joint Committee on Standards for Educational Evaluation (2003). The student evaluation standards. Thousand Oaks, CA: Corwin Press.

Kirkhart, K. (2000). Reconceptualizing evaluation use: An integrated theory of influence. In V. Caracelli & H. Preskill (Eds.), The expanding scope of evaluation use. New Directions for Evaluation, 88. San Francisco, CA: Jossey-Bass.

La Velle, J. M. & Donaldson, S. I. (2010). University-Based Evaluation Training Programs in the United States 1980-2008: An Empirical Examination. American Journal of Evaluation, 31(1), 9-23. doi: http://dx.doi.org/10.1177/1098214009356022

Ledermann, S. (2012). Exploring the Necessary Conditions for Evaluation Use in Program Change. American Journal of Evaluation, 33(2), 159-178. doi: http://dx.doi.org/10.1177/1098214011411573

Leviton, L. C. (2003). Evaluation use: Advances, challenges and applications. American Journal of Evaluation, 24(4), 525-535.

Makrakis, V. & Kostoulas-Makrakis, N. (2016). Bridging the qualitative-quantitative divide: Experiences from conducting a mixed methods evaluation in the RUCAS programme. Evaluation and Program Planning, 54, 144-151. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.07.008

Mark, M. M. (2003). Toward an integrative view of the theory and practice of program and policy evaluation. In S. I. Donaldson & M. Scriven (Eds.), Evaluating social programs and problems: Visions for the new millennium (pp. 183-204). Mahwah, NJ: Erlbaum.

Maxcy, S. J. (2003). Pragmatic threads in mixed methods research in social sciences: An emerging theory in support of practice. In A. Tashakkori & C. Teddlie (Eds.), Handbook of mixed methods in social and behavioral research (pp. 51-89). Thousand Oaks, CA: Sage.

May, H. (2004). Making statistics more meaningful for policy research and program evaluation. American Journal of Evaluation, 25(4), 525-540. doi: http://dx.doi.org/10.1177/109821400402500408

McClintock, C. (2003). Commentary: The evaluator as scholar/practitioner/change agent. American Journal of Evaluation, 24(1), 91-96. doi: http://dx.doi.org/10.1177/109821400302400110

Mira, G. E., Meneses, R. M. & Rincón, D. A. (2012). La Investigación Evaluativa y su perspectiva en la Acreditación y Evaluación de Programas e Instituciones en Educación Superior. XIII Asamblea General de la Asociación Latinoamericana de Facultades y Escuelas de Contaduría y Administración (ALAFEC) (pp. 1-26). Buenos Aires, Argentina.

Muñoz, A., Perez Zabaleta, A., Muñoz, A. & Sanchez, C. (2013). La evaluación de políticas públicas: una creciente necesidad en la Unión Europea. Revista de Evaluación de Programas y Políticas Públicas, 1, 1-30.

Neuman, A., Shahor, N., Shina, I., Sarid, A. & Saar, Z. (2013). Evaluation utilization research: Developing a theory and putting it to use. Evaluation and Program Planning, 36, 64-70. doi: http://dx.doi.org/10.1016/j.evalprogplan.2012.06.001

Nicoletti, J. A. (2013). La evaluación de la calidad educativa. Investigación de base evaluativa en centros de educación superior. Revista Argentina de Educación Superior, 6, 189-202.

Nitsch, M., Waldherr, K., Denk, E., Griebler, U., Marent, B. & Forster, R. (2013). Participation by different stakeholders in participatory evaluation of health promotion: A literature review. Evaluation and Program Planning, 40(1), 42-54. doi: http://dx.doi.org/10.1016/j.evalprogplan.2013.04.006

Paricio, J. (2015). Análisis de los modelos de calidad de la educación superior. Diseño de una metodología de análisis multidimensional. Doctoral dissertation, Universidad de Zaragoza.

Patel, M. (2002a). A meta-evaluation, or quality assessment, of the evaluations in this issue, based on the African Evaluation Guidelines: 2002. Evaluation and Program Planning, 25, 329-332. doi: http://dx.doi.org/10.1016/S0149-7189(02)00043-5

Patel, M. (2002b). The African Evaluation Guidelines: 2002. A checklist to assist in planning evaluations, negotiating clear contracts, reviewing progress and ensuring adequate completion of an evaluation. Evaluation and Program Planning, 25, 481–492.

Patton, M. Q. (2012). A utilization-focused approach to contribution analysis. Evaluation, 18, 364–377. doi: http://dx.doi.org/10.1177/1356389012449523

Perassi, Z. (2009). Evaluar un Programa Educativo: Una Experiencia Formativa Compleja. Revista Iberoamericana de Evaluación Educativa, 2(2), 172-195.

Perez Juste, R. (2002). La evaluación de programas en el marco de la educación de calidad. XXI Revista de Educación, 4, 43-76.

Perrin, B. (2001). Commentary: Making yourself - and evaluation - useful. American Journal of Evaluation, 22(2), 252-259. doi: http://dx.doi.org/10.1177/109821400102200209

Pinkerton, S. D., Johnson-Massoti, A. P., Derse, A. & Layde, P. M. (2002). Ethical issues in cost-effectiveness analysis. Evaluation and Program Planning, 25, 71-83. doi: http://dx.doi.org/10.1016/S0149-7189(01)00050-7

Preskill, H., & Boyle, S. (2008). A multidisciplinary model of evaluation capacity building. American Journal of Evaluation, 29, 443–459. doi: http://dx.doi.org/10.1177/1098214008324182

Renger, R. & Hurley, C. (2006). From theory to practice: Lessons learned in the application of the ATM approach to developing logic models. Evaluation and Program Planning, 29, 106-119. doi: http://dx.doi.org/10.1016/j.evalprogplan.2006.01.004

Rodríguez-Campos, L. (2012a). Stakeholder involvement in evaluation: Three decades of the American Journal of Evaluation. Journal of Multidisciplinary Evaluation, 17, 57-79.

Rodríguez-Campos, L. (2012b). Advances in collaborative evaluation. Evaluation and Program Planning, 35, 523-528. doi: http://dx.doi.org/10.1016/j.evalprogplan.2011.12.006

Roseland, D., Lawrenz, F. & Thao, M. (2015). The relationship between involvement in and use of evaluation in multi-site evaluations. Evaluation and Program Planning, 48, 75-82. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.10.003

Ryan, K. E. (2004). Serving public interests in educational accountability: Alternative approaches to democratic evaluation. American Journal of Evaluation, 25(4), 443-460. doi: http://dx.doi.org/10.1177/109821400402500403

Scheerens, J. (2004). The evaluation culture. Studies in Educational Evaluation, 30(2), 105-124. doi: http://dx.doi.org/10.1016/j.stueduc.2004.06.001

Schwandt, T. A. (2002). Evaluation Practice Reconsidered. New York, NY: Peter Lang.

Schwartz, R. & Mayne, J. (2005). Assuring the quality of evaluative information: theory and practice. Evaluation and Program Planning, 28(1), 1-14. doi: http://dx.doi.org/10.1016/j.evalprogplan.2004.10.001

Schweigert, F. J. (2007). The priority of justice: A framework approach to ethics in program evaluation. Evaluation and Program Planning, 30, 394-399. doi: http://dx.doi.org/10.1016/j.evalprogplan.2007.06.007

Scriven, M. (2000). The logic and methodology of checklists. Retrieved from www.wmich.edu/evalctr/checklists/

Scriven, M. (2003). Evaluation in the new millennium: The transdisciplinary vision. In S. I. Donaldson & M. Scriven (Eds.), Evaluating Social Programs and Problems: Visions for the New Millennium (pp. 19-42). Mahwah, NJ: Lawrence Erlbaum Associates.

Smith, M. J. (2010). Handbook of Program Evaluation for Social Work and Health Professionals. New York: Oxford University Press.

Sondergeld, T. & Koskey, K. (2011). Evaluating the impact of an urban comprehensive school reform: An illustration of the need for mixed methods. Studies in Educational Evaluation, 37, 94-107. doi: http://dx.doi.org/10.1016/j.stueduc.2011.08.001

Stake, R. (2006). Evaluación comprensiva y evaluación basada en estándares. Barcelona: Graó.

Stufflebeam, D. L. (2000). Guidelines for developing evaluation checklists. Retrieved from www.wmich.edu/evalctr/checklists/

Stufflebeam, D. L. (2001a). Interdisciplinary Ph.D. Programming in Evaluation. American Journal of Evaluation, 22(3), 445-455. doi: http://dx.doi.org/10.1177/109821400102200323

Stufflebeam, D. L. (2001b). The metaevaluation imperative. American Journal of Evaluation, 22(2), 183-209. doi: http://dx.doi.org/10.1177/109821400102200204

Stufflebeam, D. L. (2004). A note on the purposes, development, and applicability of the Joint Committee Evaluation Standards. American Journal of Evaluation, 25(1), 99-102. doi: http://dx.doi.org/10.1177/109821400402500107

Taut, S. (2008). What have we learned about stakeholder involvement in program evaluation. Studies in Educational Evaluation, 34, 224-230. doi: http://dx.doi.org/10.1016/j.stueduc.2008.10.007

Thomas, V. G. & Madison, A. (2010). Integration of Social Justice Into the Teaching of Evaluation. American Journal of Evaluation, 31(4), 570-583. doi: http://dx.doi.org/10.1177/1098214010368426

Urban, J. B., Hargraves, M. & Trochim, W. M. (2014). Evolutionary Evaluation: Implications for evaluators, researchers, practitioners, funders and the evidence-based program mandate. Evaluation and Program Planning, 45, 127-139. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.03.011

Urban, J. B. & Trochim, W. M. (2009). The role of evaluation in research-practice integration: Working toward the “Golden spike”. American Journal of Evaluation, 30(4), 538-553. doi: http://dx.doi.org/10.1177/1098214009348327

Vanhoof, J. & Van Petegem, P. (2010). Evaluating the quality of self-evaluations: The (mis)match between internal and external meta-evaluation. Studies in Educational Evaluation, 36, 20–26. doi: http://dx.doi.org/10.1016/j.stueduc.2010.10.001

Walton, M. (2014). Applying complexity theory: A review to inform evaluation design. Evaluation and Program Planning, 45, 119-126. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.04.002

Wasserman, D. L. (2010). Using a systems orientation and foundational theory to enhance theory-driven human service program evaluations. Evaluation and Program Planning, 33, 67-80. doi: http://dx.doi.org/10.1016/j.evalprogplan.2009.06.005

Weiss, C. H. (2004). On theory-based evaluation: Winning friends and influencing people. The Evaluation Exchange, 9(4), 1-5.

White, H. (2013). The Use of Mixed Methods in Randomized Control Trials. New Directions for Evaluation, 138, 61-73. doi: http://dx.doi.org/10.1002/ev.20058

Youker, B. W., Ingraham, A. & Bayer, N. (2014). An assessment of goal-free evaluation: Case studies of four goal-free evaluations. Evaluation and Program Planning, 46, 10-16. doi: http://dx.doi.org/10.1016/j.evalprogplan.2014.05.002

Yusa, A., Hynie, M. & Mitchell, S. (2016). Utilization of internal evaluation results by community mental health organizations: Credibility in different forms. Evaluation and Program Planning, 54, 11-18. doi: http://dx.doi.org/10.1016/j.evalprogplan.2015.09.006

Published

2016-04-14

Issue

Vol. 22, No. 1 (2016)

Section

Research Articles