Comparative analysis of opinions of Warsaw Medical University students on electronic examinations and final tests by their participation in this form of knowledge assessment – preliminary report

Authors

  • Joanna Gotlib, Division of Teaching and Outcomes of Education, Faculty of Health Science, Medical University of Warsaw, Poland
  • Mariusz Panczyk, Division of Teaching and Outcomes of Education, Faculty of Health Science, Medical University of Warsaw, Poland
  • Piotr Gębski, Central Examination Office of the Faculty of Medicine, Medical University of Łódź, Poland
  • Aleksander Zarzeka, Division of Teaching and Outcomes of Education, Faculty of Health Science, Medical University of Warsaw, Poland
  • Lucyna Iwanow, Student Research Society of Medical Law, Faculty of Health Science, Medical University of Warsaw, Poland
  • Filip Dąbrowski, First Department of Gynecology and Obstetrics, 1st Faculty of Medicine, Medical University of Warsaw, Poland
  • Grażyna Dykowska, Division of Public Health, Faculty of Health Science, Medical University of Warsaw, Poland
  • Marcin Malczyk, University Examination Board of Warsaw Medical University, Poland

DOI:

https://doi.org/10.1515/pjph-2015-0044

Keywords:

electronic examinations, assessment quality, modern technologies, students of health sciences, attitudes

Abstract

Introduction. Alongside the increasing popularity of modern information technologies and the development of e-learning methods used for teaching medicine and health sciences, there has been growing interest in using modern computer techniques to assess students’ knowledge.

Aim. The aim of the study was to compare the opinions of students of the Medical University of Warsaw on examinations and final tests conducted on the e-exam ASK Systems platform, by their participation in this form of knowledge assessment.

Material and methods. The study included 148 students: group 1 comprised students who had participated in an e-exam (59 persons), and group 2 comprised students who had not (89 persons). A voluntary, anonymous, electronic questionnaire was used, with 58 statements rated on a Likert scale. Questionnaire reliability was assessed through an analysis of internal consistency with Cronbach’s alpha coefficient (α>0.70). Statistical analysis: STATISTICA 12.0 (licensed to WMU), Mann-Whitney U test.
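For readers unfamiliar with the internal-consistency measure mentioned above, Cronbach’s alpha for a k-item scale is α = k/(k−1) · (1 − Σ s²ᵢ / s²ₜ), where s²ᵢ are the per-item score variances and s²ₜ is the variance of respondents’ total scores. The sketch below is an illustrative computation on invented toy Likert data; it is not the authors’ analysis, which was performed in STATISTICA, and the scores shown are hypothetical.

```python
from statistics import variance


def cronbach_alpha(items):
    """Cronbach's alpha for a scale.

    items: one inner list per questionnaire item,
    each holding one score per respondent.
    """
    k = len(items)
    # Total score per respondent (rows of the transposed matrix).
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(variance(col) for col in items)   # sum of per-item variances
    total_var = variance(totals)                      # variance of total scores
    return k / (k - 1) * (1 - item_var / total_var)


# Toy data: 4 Likert-scale statements (1-5), 5 respondents.
scores = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
    [3, 4, 3, 4, 2],
]
print(round(cronbach_alpha(scores), 2))  # → 0.91
```

A value above the conventional 0.70 threshold, as reported for the questionnaire in this study, indicates acceptable internal consistency of the scale.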

Results. Cronbach’s alpha coefficient for the scale amounted to 0.70. Members of group 1 were more likely to agree that students need to put extra effort into participating in an e-exam (p<0.001) and that test results might be worse than in the case of a regular exam (p<0.050). Group 1 significantly more often reported that participation in an e-exam can cause additional examination stress (p<0.002) and makes cheating during exams more probable (p<0.003).

Conclusions. 1. An analysis of the questionnaire demonstrated that this tool is reliable and can be used in further studies. 2. Participation in an e-exam only slightly influenced the opinions of students on this form of knowledge assessment, which may mean that the students’ expectations concerning e-exams were consistent with the actual course of the exam; therefore, students do not need any special procedure to prepare for e-exams. 3. This was a pilot study, and it should be continued among the same group of students before and after the e-exam.

References

1. Cantillon P, Irish B, Sales D. Using computers for assessment in medicine. BMJ. 2004;329:606-9.

2. Conole G, Warburton B. A review of computer-assisted assessment. ALT-J Res Learn Tech. 2005;13(1):17-31.

3. Dennick R, Wilkinson S, Purcell N. Online e-Assessment: AMEE Guide No. 39. Med Teach. 2009;31:192-206.

4. Hewson C. Can online course-based assessment methods be fair and equitable? Relationships between students’ preferences and performance within online and offline assessments. J Comput Assist Learn. 2012;28:488-98.

5. Mooney GA, Bligh JG, Leinster SJ. Some techniques for computer-based assessment in medical education. Med Teach. 1998;20(6):560-6.

6. Feldt LS. A test of the hypothesis that Cronbach’s alpha or Kuder-Richardson coefficient twenty is the same for two tests. Psychometrika. 1969;34(3):363.

7. Nunnally JC, Bernstein IH. Psychometric theory. New York: McGraw-Hill; 1967.

8. Jawaid M, Moosa FA, Jaleel F, Ashraf J. Computer Based Assessment (CBA): Perception of residents at Dow University of Health Sciences. Pak J Med Sci. 2014;30(4):688-91.

9. Hassanien MA, Al-Hayani A, Abu-Kamer R, Almazrooa A. A six step approach for developing computer based assessment in medical education. Med Teach. 2013;35:15-9.

10. Rudland JR, Schwartz P, Ali A. Moving a formative test from a paper-based to a computer-based format. A student viewpoint. Med Teach. 2011;33:738-43.

11. Dermo J. e-Assessment and the student learning experience: A survey of student perceptions of e-assessment. BJET. 2009;40(2):203-14.

12. Hochlehnert A, Brass K, Moeltner A, Juenger J. Does medical students’ preference of test format (computer-based vs. paper-based) have an influence on performance? BMC Med Educ. 2011;11(89):1-6.

13. Gotlib J, Zarzeka A, Panczyk M, Malczyk M. Zaliczenie testowe z przedmiotu „Prawo w medycynie” dla studentów Wydziału Nauki o Zdrowiu na platformie egzaminów elektronicznych ASK Systems – doświadczenia własne. Med Dydak Wychow. 2015;1:28-30.

14. Gotlib J, Panczyk M, Gębski P, et al. Analiza opinii studentów Warszawskiego Uniwersytetu Medycznego na temat udziału w zaliczeniach i egzaminach elektronicznych – doniesienie wstępne. Zdr Publ. 2015. (in print).


Published

2015-12-14