A proposal for a new questionnaire for the evaluation of teachers at the University of the Basque Country. Dimensional, differential and psychometric study
DOI: https://doi.org/10.7203/relieve.23.2.10436
Keywords: Students’ Evaluation of Teaching, SET, Dimensionality, Questionnaire, Student Evaluation of Teacher Performance, Teacher Competencies
Abstract
The aim of this paper is to analyze the new questionnaire designed by the University of the Basque Country (UPV/EHU) to evaluate its teaching staff (SET). To this end, the responses of a sample of 941 students were analyzed, and the following aspects of the questionnaire were studied: reliability, dimensionality, and construct and criterion validity. The study concludes with a differential analysis considering variables such as gender, disciplinary field, perceived level of difficulty and interest in the subject. The results suggest high internal consistency and an essential unidimensionality that fits the theoretical dimensions, enabling a formative use of the information.
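As an illustration of the kind of psychometric analysis described above (internal consistency and factor-retention criteria for dimensionality), the following minimal Python sketch computes Cronbach's alpha and runs Horn's parallel analysis with a 95th-percentile criterion on simulated Likert-type responses. It is a hypothetical example under assumed data (941 respondents, 15 items on a 1-5 scale), not the authors' actual instrument, data, or analysis code.

```python
# Illustrative sketch only: hypothetical data, not the authors' instrument or analysis.
import numpy as np

rng = np.random.default_rng(0)

# Assumed data: 941 respondents answering 15 items on a 1-5 Likert scale,
# generated from a single latent trait plus noise.
n_respondents, n_items = 941, 15
latent = rng.normal(size=(n_respondents, 1))
raw = latent + rng.normal(scale=0.8, size=(n_respondents, n_items))
X = np.clip(np.round(2.5 + raw), 1, 5)

# Cronbach's alpha: classical internal-consistency estimate of reliability.
item_variances = X.var(axis=0, ddof=1)
total_variance = X.sum(axis=1).var(ddof=1)
alpha = (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Horn's parallel analysis (95th-percentile criterion): retain components whose
# observed eigenvalues exceed those obtained from random data of the same shape.
observed = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))[::-1]
random_eigs = np.array([
    np.linalg.eigvalsh(np.corrcoef(rng.normal(size=X.shape), rowvar=False))[::-1]
    for _ in range(200)
])
threshold = np.percentile(random_eigs, 95, axis=0)
n_retained = int((observed > threshold).sum())

print(f"Cronbach's alpha = {alpha:.2f}, components retained = {n_retained}")
```

In a study of this type, the number of components retained would then be compared with the questionnaire's theoretical dimensions; the 95th-percentile variant of parallel analysis is used here because it is more conservative than comparing against mean random eigenvalues.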
License
The authors grant RELIEVE non-exclusive rights to exploit the published works and consent to their distribution under the Creative Commons Attribution-NonCommercial 4.0 International License (CC BY-NC 4.0), which allows third parties to use the published material provided that the authorship of the work and the source of publication are acknowledged and the material is used for non-commercial purposes.
The authors may enter into additional, independent contractual agreements for the non-exclusive distribution of the version of the work published in this journal (for example, depositing it in an institutional repository or publishing it in a book), provided that it is clearly stated that the original source of publication is this journal.
Authors are encouraged to disseminate their work online after publication (for example, in institutional repositories or on their own websites), as this can lead to productive exchanges and increase citations of the published work.
Submitting a paper to RELIEVE implies acceptance of these conditions.