Abstract
EVALUATION OF LEARNING OUTCOMES ACHIEVED BY MIDWIFERY STUDENTS IN PART-TIME SECOND-CYCLE STUDIES
Aim. To analyse the consistency of the assessment of students’ achievements in examination subjects included in the curriculum of the part-time midwifery programme at the Medical University of Warsaw (MUW).
Materials and methods. Examination records of 231 part-time second-cycle midwifery students at MUW from the years 2007-2012 were used. Students’ achievements and learning outcomes in eight subjects completed with an examination were analysed retrospectively.
The non-parametric Kruskal-Wallis rank ANOVA was used to determine trends in student assessment in consecutive years. The consistency of assessment across the examination subjects was evaluated with Kendall’s coefficient. The internal validity of the educational measurement was assessed by inter-correlation analysis and multiple regression analysis.
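For readers who wish to reproduce a comparable workflow, the sketch below illustrates the three analytical steps named above on synthetic data. It is not the authors’ code: the data frame, column names, grade scale (2-5) and the reading of “Kendall’s coefficient” as Kendall’s coefficient of concordance (W) are assumptions made for illustration only.

import numpy as np
import pandas as pd
from scipy import stats
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical example data: 231 students, 8 examination subjects, cohorts 2007-2012.
grades = pd.DataFrame(
    rng.integers(2, 6, size=(231, 8)).astype(float),
    columns=[f"subject_{i+1}" for i in range(8)],
)
grades["year"] = rng.integers(2007, 2013, size=231)

# 1. Kruskal-Wallis rank ANOVA: do grade distributions for one subject
#    differ between consecutive years (cohorts)?
groups = [g["subject_1"].values for _, g in grades.groupby("year")]
H, p = stats.kruskal(*groups)
print(f"Kruskal-Wallis for subject_1 across years: H={H:.2f}, p={p:.4f}")

# 2. Kendall's coefficient of concordance (W), assumed interpretation:
#    agreement of the eight subject assessments in ranking the same students.
ranks = grades[[f"subject_{i+1}" for i in range(8)]].rank(axis=0)
n, k = ranks.shape                      # n students ranked by k subjects
S = ((ranks.sum(axis=1) - k * (n + 1) / 2) ** 2).sum()
W = 12 * S / (k ** 2 * (n ** 3 - n))    # tie correction omitted for brevity
print(f"Kendall's W = {W:.2f}")

# 3. Inter-correlation and multiple regression: relationships between results
#    in individual subjects (here subject_1 regressed on the other seven).
print(grades.iloc[:, :8].corr(method="spearman").round(2))
X = sm.add_constant(grades[[f"subject_{i+1}" for i in range(1, 8)]])
model = sm.OLS(grades["subject_1"], X).fit()
print(model.summary())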
Results. The analysis of the internal consistency of student assessment in consecutive years showed that, for the vast majority of the studied subjects, assessment was not consistent (Kruskal-Wallis rank ANOVA, p < 0.005). For successive areas of education, the level of concordance of the educational measurement was insufficient (Kendall’s coefficient = 0.11). The inter-correlation analysis of validity showed positive relationships between students’ results in the individual areas of education for all eight studied subjects. These relationships were also confirmed by the regression analysis.
Conclusions. The low consistency of student assessment in consecutive years indicates insufficient cohesion of the system used to measure learning outcomes. Prognostic analysis including dependent variables related to graduates’ professional futures should become an important element of the evaluation of the education system.