
A Competency-Based Approach to Pass/Fail Decisions: An Observational Study

Abstract

Any high-stakes assessment that informs an important decision requires careful consideration of how pass/fail status is determined. Despite the many standard-setting methods implemented in clinical examinations, concerns remain about the reliability of pass/fail decisions in high-stakes assessments, especially clinical assessments. This observational study proposes a defensible pass/fail decision rule based on the number of failed competencies. The study was conducted in Erbil, Iraq, in June 2018, using the results of 150 medical students on their final objective structured clinical examination (OSCE). Cutoff scores and pass/fail decisions were calculated using the modified Angoff, borderline, borderline regression, and holistic methods. The results were compared with each other and with a new competency method using Cohen’s kappa, and Rasch analysis was used to assess the consistency of the competency data with Rasch model estimates. The competency method failed 40 (26.7%) students, compared with 76 (50.6%), 37 (24.6%), 35 (23.3%), and 13 (8%) for the modified Angoff, borderline, borderline regression, and holistic methods, respectively. The competency method demonstrated a sufficient degree of fit to the Rasch model (mean outfit and infit statistics of 0.961 and 0.960, respectively). In conclusion, the competency method was more stringent in determining pass/fail status than the other standard-setting methods, except for the modified Angoff method. The fit of the competency data to the Rasch model provides evidence for the validity and reliability of the resulting pass/fail decisions.
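To make the methods named above concrete, the following sketch works through two of them on hypothetical data: the borderline regression method (regressing checklist scores on examiners' global ratings and taking the predicted score at the "borderline" rating as the cutoff) and Cohen's kappa as a measure of agreement between two sets of pass/fail decisions. The simulated scores, the five-point rating scale, and the simplified competency-style rule are all illustrative assumptions; they are not the authors' dataset or their competency method.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical single-station data for 150 examinees: a checklist score and
    # an examiner's global rating (1 = clear fail, 2 = borderline, 3-5 = pass grades).
    n = 150
    global_rating = rng.integers(1, 6, size=n)
    checklist = 40 + 10 * global_rating + rng.normal(0, 5, size=n)

    # Borderline regression method: regress checklist scores on global ratings
    # and take the predicted checklist score at the "borderline" rating as the cutoff.
    slope, intercept = np.polyfit(global_rating, checklist, 1)
    cutoff = slope * 2 + intercept          # rating 2 = borderline
    pass_brm = checklist >= cutoff

    # Illustrative competency-style rule (NOT the authors' method): fail anyone
    # below the cutoff who also received a failing or borderline global rating.
    pass_comp = ~((checklist < cutoff) & (global_rating <= 2))

    def cohens_kappa(a, b):
        """Cohen's kappa between two binary pass/fail decision vectors."""
        a = np.asarray(a, dtype=bool)
        b = np.asarray(b, dtype=bool)
        p_obs = np.mean(a == b)                                        # observed agreement
        p_exp = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
        return (p_obs - p_exp) / (1 - p_exp)

    print(f"borderline regression cutoff: {cutoff:.1f}")
    print(f"failures (BRM): {np.sum(~pass_brm)}, failures (competency rule): {np.sum(~pass_comp)}")
    print(f"Cohen's kappa between the two decisions: {cohens_kappa(pass_brm, pass_comp):.2f}")

In the study itself, kappa was computed between each pair of standard-setting methods across the whole examination; the single-station setup here only illustrates the mechanics of the two calculations.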

Issue: Vol 59, No 7 (2021)
Section: Articles
DOI: https://doi.org/10.18502/acta.v59i7.7022
Keywords: Pass/fail decision; Competence-based; Standard-setting; Rasch model

Rights and permissions
This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
How to Cite
Alkhateeb N, Al-Dabbagh A, Mohammed Y, Ibrahim M. A Competency-Based Approach to Pass/Fail Decisions: An Observational Study. Acta Med Iran. 2021;59(7):421-429.