Year : 2014  |  Volume : 2  |  Issue : 3  |  Page : 126-127

A long case in perfect health

Department of Medical Education and Postgraduate Studies, Saudi Commission for Health Specialties, Riyadh, Saudi Arabia

Date of Web Publication31-Jul-2014

Correspondence Address:
James Ware
Department of Medical Education and Postgraduate Studies, Saudi Commission for Health Specialties, Riyadh
Saudi Arabia


How to cite this article:
Ware J, Siddiqui I. A long case in perfect health. J Health Spec 2014;2:126-7


The reports of my death have been greatly exaggerated.

Mark Twain, 1835-1910

Today, most doctors given the responsibility for making high-stakes decisions about clinical competence agree that oral examinations following an unobserved long-case clinical encounter are fraught with problems. Almost fifty years ago, [1],[2] Harden and his Glasgow colleagues, having shown the lack of precision of examining with their finals' long cases, set about producing another method of examining, which they called the objective structured clinical examination (OSCE). [3] Similar observations were being made elsewhere. [4]

Let us be clear: the OSCE may be objective, fair and ostensibly reliable, but it is not real life. Whereas OSCEs are an effective way to monitor the acquisition of clinical skills by undergraduate medical students, postgraduate trainees need another metric with better face validity. The Saudi Med Framework cites 166 clinical presentations, and it is clear that no graduating student will perform uniformly across all of them, yet their right to graduate and practice medicine may be based on the examination of just one. Then there are other factors that may distort the measurement: examiner unreliability or inconsistency, patient variability and, not least, the candidate-patient relationship, around which the stories are almost mythical.

However, as Geoff Norman points out about medical education, few self-evident truths are true. [5] Along came the examiners at St Thomas' Hospital, London, who, observing 214 candidates taking their 'finals', compared two long cases, observed and marked with a structured mark sheet, against a twenty-station OSCE rated with checklists only. [6] The reliability of the two long cases turned out to be 0.84, versus 0.74 for the OSCE. The conclusions were obvious. So, what is wrong with the long case? First and foremost, we must observe the candidate-patient interaction; secondly, a structured and systematic checklist, as well as some form of global rating, must be used to grade each performance; and lastly, the examiners must be prepared for their task with a short orientation. But doesn't that sound familiar today?

Almost a whole industry has grown up around workplace-based assessment, which essentially uses the candidate-patient interaction for the judgements made about the candidate's (developing) clinical competence. [7] Other variants have been reported, such as the direct observation clinical encounter examination (DOCEE), the objective structured long examination record (OSLER), direct observation of procedural skills (DOPS), the mini-clinical evaluation exercise (mini-CEX) and more, but all of them are predicated on a real-life case or patient-candidate interaction. So, back to the long case and its long-supposed death. We would all agree that the real thing is better than a simulation, although there are certain clinical competences that can only be observed in a simulation or a simulated setting. Yet all actually take place where clinical medicine is practiced. Thus, when a postgraduate supervisor observes and evaluates a resident performing a task, such as the emergency admission of a patient, and then completes an in-training evaluation report (ITER) form, he is essentially conducting a long-case exam, with one important difference: the feedback given makes it both an assessment and a teaching opportunity. [8] What must now happen is that every resident be assessed with at least nine such cases, evaluated by 3 - 4 different supervisors during any 12-month period. The results will satisfy the most eloquent critics of the long case.

The Saudi Commission recognises the importance and validity of workplace-based examining when determining a resident's clinical competence and is stressing the importance of in-course evaluations. An OSCE will always be a surrogate for the real thing, and it will still contribute useful information about the skills of the candidates observed; however, it is not a replacement for the long case in postgraduate training.

References

1. Harden RM, Lever R, Wilson GM. Two systems of marking objective examination questions. Lancet 1969;1:40-2.
2. Wilson GM, Lever R, Harden RM, Robertson JI. Examination of clinical examiners. Lancet 1969;1:37-40.
3. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ 1975;1:447-51.
4. Meskauskas JA. Studies of the oral examination: The examinations of the Subspecialty Board of Cardiovascular Disease of the American Board of Internal Medicine. In: Lloyd JS, Langsley DG, editors. Evaluating the Skills of Medical Specialists. Chicago, IL: American Board of Medical Specialties; 1983.
5. Norman G. The long case versus the objective structured clinical exam. BMJ 2002;324:748-9.
6. Wass V, Jones R, van der Vleuten CP. Standardized or real patients to test clinical competence? Med Educ 1985;19:321-5.
7. Norcini J, Burch V. Workplace-based assessment as an educational tool: AMEE Guide No. 31. Med Teach 2007;29:855-71.
8. Chou S, Cole G, McLaughlin K, Lockyer J. CanMEDS evaluation in Canadian postgraduate training programmes: Tools used and programme director satisfaction. Med Educ 2008;42:879-86.


