
Year : 2013  |  Volume : 1  |  Issue : 2  |  Page : 71-77

Developing the science of selection into the healthcare professions and speciality training within Saudi Arabia and the Gulf region

1 Charles Perkins Centre and Sydney Medical School, University of Sydney, Sydney, Australia
2 Department of Paediatrics, College of Medicine, King Saud bin Abdulaziz University for Health Sciences, Riyadh, Kingdom of Saudi Arabia
3 Department of Medical Education, School of Medicine, Flinders University, Adelaide, Australia
4 Department of Health Professional Education, College of Medicine, University of Illinois, Chicago, Illinois, USA

Date of Web Publication: 5-Jul-2013

Correspondence Address:
Chris Roberts
Associate Academic Director (Education), Charles Perkins Centre and Sydney Medical School, University of Sydney, Rm K4.01, Level 4, The Quadrangle A14, University Place

DOI: 10.4103/1658-600X.114684


Research about the selection of students into healthcare practitioner programs, principally medical programs, and into specialty training has become a worldwide phenomenon. Set against a rapid expansion in healthcare professional student numbers, there have been calls for national policy consideration and for guidance towards an evidence-based selection process. This article considers the implications of the international research base underpinning current selection processes, and makes recommendations for policy makers, health educators and institutional leaders to consider. We recommend that selection procedures into health professional education and specialty training become part of an international conversation that takes account of the complexities of local context, the evidence base on what works and what does not, and the efficient and effective use of resources.

Keywords: Admissions, assessment, Saudi Arabia, selection

How to cite this article:
Roberts C, Al Alwan I, Prideaux D, Tekian A. Developing the science of selection into the healthcare professions and speciality training within Saudi Arabia and the Gulf region. J Health Spec 2013;1:71-7


Background

Research about the selection of students into healthcare practitioner programs, principally medical programs, and into specialty training has become a worldwide phenomenon. [1],[2] This is set against a background of international expansion in health professional student numbers in both the public and private sectors, as well as increasing quality assurance in the regulation and accreditation of professional training. This expansion is framed within documented inequities in workforce distribution at global, regional and national levels. In Saudi Arabia, where there are severe shortages of trained healthcare professionals in many geographical areas and within disciplines, all health professional programs have seen expansion. For example, the number of medical schools increased from 13 (12 public and one private) in 2005 to 31 (24 public and 7 private) in 2010, [3] with a corresponding increase in the medical student intake to around 2,500. [4] This has challenged colleges to compete for, and select from, a pool of students sufficiently able to graduate as competent doctors. In addition to anxieties about the impact that this rapid growth might have on quality, concerns have also been raised about the high attrition rate. Consequently, there is a need for national policy consideration and for guidance towards an evidence-based selection process. [4],[5]

Selection procedures are an important and integral part of health professions' basic education and postgraduate vocational training. They provide a high-stakes assessment process governing entry into programs such as medicine from an elite group of high achievers, many of whom have been extensively prepared as part of a career "pipeline" extending from high school. The selection process typically serves two distinct purposes: The first is to reduce the large number of otherwise qualified and capable applicants to the number of places available, and the second is to enrol students who will most likely succeed in a rigorous program of education and training and subsequently become effective members of their chosen profession. [6] Concerns have been raised that, in some international settings where there have been insufficient numbers of candidates, the quality criteria for enrolment have been compromised. [7]

Although health professional schools use differing criteria for selection, such criteria are generally based on a combination of prior academic achievement, written tests of aptitude for future education, and an interview to determine values and commitment.

To be defensible, any individual selection method must be reliable, within and across successive cohorts of applicants. It must select on the basis of what it claims to test (that is, have construct validity), and it should predict the eventual performance of the healthcare professionals selected, i.e. have predictive validity. [6]
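The notion of predictive validity can be made concrete with a small calculation: a predictive validity coefficient is typically reported as the correlation between a selection score and a later performance measure. The sketch below, in Python, computes a Pearson correlation for a hypothetical cohort; all the scores are invented for illustration and are not drawn from any study cited here.

```python
# Sketch: a predictive validity coefficient as the Pearson correlation
# between a selection score and later in-course performance.
# The cohort data below are entirely invented for illustration.

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical cohort: selection scores vs. end-of-first-year exam marks.
selection = [61, 72, 55, 80, 68, 90, 74, 63]
year1 = [58, 70, 60, 78, 65, 85, 80, 61]

r = pearson_r(selection, year1)
print(f"predictive validity coefficient r = {r:.2f}")
```

In real predictive validity studies the coefficient is usually far more modest than in this toy example, and is attenuated by range restriction, since only selected candidates generate outcome data.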

Within Saudi Arabia, health colleges typically use a range of possible selection tools including the final high school mark, the Saudi National Achievement Exam (Tahsili), the Saudi National Aptitude Exam (Qudraat)  [5] and a college-based interview, either a traditional interview or the multiple mini-interview (MMI). [8] The degree to which each college uses these tools is highly variable across the Kingdom. [4]

For high school marks, students take a national examination designed by the Ministry of Education. Students from the sciences division, taking chemistry, biochemistry, mathematics, biology and physics together with Arabic and English languages and religious studies, are considered potential candidates by health sciences colleges. [5],[8]

The Saudi National Achievement Exam is held once a year, following the final high school exam, and tests English language, biology, chemistry, physics and mathematics in multiple choice question (MCQ) format.

All health sciences colleges use the Saudi National Aptitude Exam as part of their admissions criteria; it is administered in the final year of high school, although it is not specific to health sciences colleges alone. This test comprises linguistic and mathematics sections in MCQ format. The National Centre for Assessment, Ministry of Higher Education, prepares both the Achievement and Aptitude exams. [8]

Research internationally has largely focussed on establishing the utility of the various selection formats.

  • Tests or composites of prior achievement
  • Written tests of aptitude, including high school achievement tests and undergraduate- or graduate-entry medicine tests, e.g., the Medical College Admission Test (MCAT)
  • Single interviews and the MMI

We do not aim to be exhaustive in describing all the possible methods, but rather to illustrate our argument with research from the most well-known of these formats internationally.

Composites of Prior Achievement

The most commonly used composite in selection procedures is the grade point average (GPA). The GPA is a long-established system of academic grading attributed to William Farish (1759 - 1837), a British tutor in chemistry and natural philosophy at the University of Cambridge. [9] However, nearly two centuries later, there is no single preferred definition within the research literature or a common means of calculation and, to add further complexity, GPA calculations vary by nation and by educational institution. Furthermore, there is a paucity of readily available public information on the reliability of school-leaver assessments, and there is little published research on the reliability of GPA calculations, making comparison between nations, institutions or programs of research very difficult. [2] Probably the best-known research on the predictive validity of the GPA explores the predictive power of undergraduate (i.e. Bachelor's degree) GPAs. For example, Julian [10] found that performance in early medical school assessments was predicted best by GPA from the entrants' prior degrees, combined with written aptitude tests.
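The lack of a common means of calculation is easy to demonstrate. In the hypothetical transcript below (course names, grades and credit values are all invented), the same set of grades yields different GPAs depending on whether the convention is a simple mean of grade points or a credit-hour-weighted mean:

```python
# Sketch: one transcript, two GPA conventions, two different GPAs.
# All courses, grades and credit values are invented for illustration.

transcript = [
    # (course, grade points on a 4-point scale, credit hours)
    ("Biology", 4.0, 2),
    ("Chemistry", 2.0, 6),
    ("Physics", 4.0, 2),
    ("English", 2.0, 6),
]

# Convention 1: simple (unweighted) mean of grade points.
unweighted = sum(g for _, g, _ in transcript) / len(transcript)

# Convention 2: credit-hour-weighted mean.
weighted = (sum(g * c for _, g, c in transcript)
            / sum(c for _, _, c in transcript))

print(f"unweighted GPA: {unweighted:.2f}")  # 3.00
print(f"weighted GPA:   {weighted:.2f}")    # 2.50
```

A half-point difference of this kind is enough to reorder candidates near a selection cut-off, which is why cross-institutional GPA comparisons are so difficult without a shared convention.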

Written Aptitude Tests

In the undergraduate setting, internationally, the trend has been to avoid testing knowledge and instead focus on testing reasoning skills. For example, in Australia, a consortium of medical schools in collaboration with the Australian Council for Educational Research produced the Undergraduate Medicine and Health Sciences Admission Test (UMAT) for entry into undergraduate medical, dental and health professional programs (umat.acer.edu.au). [11],[12] It is designed to assess abilities in a range of general skills acquired over a period of time, including the ability to reason, make logical deductions and form judgments. However, the ability of UMAT to predict outcomes in major assessments within medical programmes has been found to be relatively minor in comparison with that of the admission GPA, [12] derived from scores achieved in a common first year university degree. Another example of a test of general skills is the UK Clinical Aptitude Test (UKCAT), which scores student ability in four distinct domains: Quantitative Reasoning, Verbal Reasoning, Abstract Reasoning and Decision Analysis. So far, the evidence on its utility is equivocal. For example, in one cohort of an English medical school, UKCAT scores at admission did not independently predict subsequent performance on the course. [13] In the same vein, in a study of two Scottish medical schools, [14] UKCAT did not predict performance in the first year of medical school.

In graduate entry settings, much of the research is based on the North American experience and, consequently, the best known predictive validity studies on aptitude tests for future clinical training focus on the MCAT used to test graduate level entrants to North American medical schools and several schools internationally including Australia. It contains four main sections: Physical Sciences, Verbal Reasoning, a Writing Sample and Biological Sciences. Donnon et al.,[15] conducted a meta-analysis of 23 studies investigating the predictive validity of MCAT as it related to performance in medical schools and step 1 of the United States Medical Licensing Examination (USMLE). They found a predictive validity coefficient of 0.39 for MCAT for performance in the pre-clinical years of the medical course and 0.6 for performance in the USMLE step 1. The biological sciences subtest was the best predictor on both measures. Of interest is that, in response to much constructive criticism, the MCAT has been re-designed. [16] The assessment will now include four sections: Biological and Biochemical Foundations of Living Systems; Chemical and Physical Foundations of Biological Systems; Psychological, Social, and Biological Foundations of Behaviour; and Critical Analysis and Reasoning Skills. A new research agenda to determine reliability and validity of the new test will be initiated.
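One common way to read such coefficients is as shared variance: squaring the correlation gives the proportion of variance in the outcome accounted for by the predictor. A short calculation using the coefficients reported above:

```python
# Interpreting predictive validity coefficients as shared variance (r^2).
# The r values are those reported for the MCAT in the meta-analysis above.

coefficients = {
    "MCAT vs pre-clinical performance": 0.39,
    "MCAT vs USMLE Step 1": 0.60,
}

for label, r in coefficients.items():
    # r^2 is the fraction of outcome variance accounted for by the predictor.
    print(f"{label}: r = {r:.2f}, variance explained = {r * r:.0%}")
```

On this reading, the MCAT accounts for roughly 15% of the variance in pre-clinical performance and about 36% of the variance in USMLE Step 1 scores, leaving most of the variance to factors the test does not capture.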

In Australia, the Graduate Australian Medical School Admission Test (GAMSAT) has been used since 1996, and lately also for selection into some UK medical schools. It comprises three sections: Reasoning in the Humanities and Social Sciences, Written Communication and Reasoning in Biological and Physical Sciences. The GAMSAT research evidence is mixed. Coates [17] found that GAMSAT and GPA scores were the best predictors of first-year medical school performance.

Multiple Mini-Interviews

Some reliability studies have focused on the traditional interview, which consists of face-to-face contact with single interviewers or panels, with varying degrees of structure. Despite its widespread use for selecting for personal qualities, Kreiter et al.,[18] reported that the existing literature did not provide sufficient evidence regarding interview reliability. In their study, interview scores derived from a standardized interview displayed low to moderate levels of reliability and did not possess the level of precision found with other measures commonly used to facilitate admissions decisions. Albanese et al.,[19] reached similar conclusions in their review of the assessment of personal qualities for selection.

The MMI [20],[21] is a relatively new assessment tool that addresses concerns about interview reliability. It uses the Objective Structured Clinical Examination (OSCE) format and thus avoids the issues of the long interview (cf. the long case in clinical competence assessment), where much of a candidate's observed mark reflects biases arising from the limited interview content and the interviewer panel. MMIs have been used to assess the values and commitment of entry-level students. Several medical schools now include the MMI as part of their selection process, for example, in Canada, [20],[22] Australia [21],[23] and Saudi Arabia. [8] The MMI format shows greater reliability and content validity for medical school admission processes than traditional interviews [20],[23],[24],[25],[26],[27] and is more cost-effective. [28] There is evidence of predictive validity through moderate correlations with later clinical assessments such as the OSCE. [29],[30]
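One way to see why aggregating many short, independently rated stations improves reliability is the Spearman-Brown prophecy formula, which predicts the reliability of a score averaged over n parallel measurements. The sketch below uses an illustrative single-station reliability of 0.25; this figure is an assumption for the example, not a value drawn from the cited MMI studies.

```python
# Sketch: why averaging over many short, independently rated MMI stations
# improves reliability. The Spearman-Brown prophecy formula predicts the
# reliability of the mean of n parallel measurements.
# The single-station reliability of 0.25 is an illustrative assumption.

def spearman_brown(r_single, n_stations):
    """Predicted reliability of the mean score over n parallel stations."""
    return n_stations * r_single / (1 + (n_stations - 1) * r_single)

r1 = 0.25  # assumed reliability of one station (one rater, one question)
for n in (1, 5, 10):
    print(f"{n:2d} stations -> predicted reliability {spearman_brown(r1, n):.2f}")
```

Under this assumption, moving from a single encounter to ten independent stations raises predicted reliability from 0.25 to about 0.77, which is the core psychometric rationale for the OSCE-style MMI format over a single long interview.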

Postgraduate Selection

Postgraduate selection depends on many factors, not just the psychometric robustness of each moment of assessment along the health professional's journey from high school to a career post. For example, there are major workforce issues to be addressed, with much national and disciplinary variation in health practitioner supply and demand. It has been observed internationally that cities are oversubscribed with practitioners of all disciplines, whereas rural areas are undersubscribed. Such shortages are amplified in certain disciplines, such as rural mental health. In practical terms, there is a concern that under-qualified practitioners are entering disciplines and locations on the basis of supply and demand, rather than on the assurance that they are suitably qualified and can practise safely. [7]

Internationally, there is a move for governments to require an overarching body to assure standards of medical and health professional education, including accreditation criteria for selection procedures. For example, in Saudi Arabia, in addition to implementing training programs for health professionals, the Saudi Commission for Health Specialties is responsible for assessing and accrediting trainees to work in the Kingdom and for setting the rules and standards for the practice of healthcare professionals ( http://arabic.scfhs.org.sa/new/pages/index.html ). In the postgraduate arena, selection aims to predict trainability, i.e. to identify, prior to commencement, individuals who will successfully complete training. [31] Internationally, there are a number of examples where professional training colleges have developed selection-focused assessment procedures to determine trainability. Such assessments have used a range of formats, both written and observed, including personal statements, [32] situational judgment tests, [33] the clinical problem-solving test, [34] both low- and high-fidelity simulations [35] and the MMI. [30],[36] Where candidates are required to attend a venue at which more than one assessment is undertaken, the term 'assessment centre' [34] is commonly used. The determination of which combination of formats best predicts trainability is the focus of ongoing investigation. [33],[35],[36]

The Continuum of Assessment

By the time applicants enter a health professions degree program, they have already been assessed in high school, undergone written aptitude testing and then been assessed through the selection processes into university or college, for example, with an interview. They also undergo several assessments within the health professional school, including summative assessments, and in specific settings, for example, medical training in the UK, they are subjected to selection procedures into internship and specialty training programs.

Considering the burden of assessment along the whole career trajectory, the question arises as to the balance between spending resources on selecting candidates into a program and on assessing them as they graduate from that program. However, there is very little published research on the program of assessments that most health professionals go through during their career trajectory. Thus, for example, considering a newly qualified doctor, which is the better predictor of performance in a specialist training scheme: Their final summative university assessment, or their marks on the selection procedure into that training program?

In many countries, concerns have been raised about the quality of medical program summative assessments, [37] and thus there are debates about whether national examinations would overcome this problem. Accordingly, in Saudi Arabia, as in many other countries outside North America (where medical licensing examinations have been in place for many years), there has been a call [38] for a national medical licensing examination. It would consist of two parts: Part I (Written), which would test basic science and clinical knowledge, and Part II (OSCE), which would test clinical skills and professional behaviours. This has been proposed as a mandated licensure requirement for practising medicine in Saudi Arabia, [39] for both local and international doctors. In any quality framework that integrates assessment and selection processes, it is vital to consider the integration between summative selection from medical school and selection into future training. If, for example, it could be assumed that all candidates for specialty training in Saudi Arabia who have passed the equivalent of the Saudi licensing exam are knowledgeable in scientific and clinical knowledge and skills, then the selection procedures could focus on non-cognitive aspects such as commitment and values.

One way forward to provide data over time to research these types of issues is to consider national approaches to tracking students into the workforce. Elsewhere, this approach has been found to have great potential in, for example, understanding predictive factors that lead to career choices. Examples include the Association of American Medical Colleges longitudinal data of US medical students [40] and the Medical Deans of Australia and New Zealand, Medical Schools Outcomes Database, which was established in 2005. [41]

Some Challenges for the Theory of Selection

The type of selection research that we have so far outlined has led to important advances in selection-focused assessment. It has provided good evidence about the strengths and weaknesses of the various approaches, as well as understanding of the relationships between them. However, the research says little about how to view selection within a complex health delivery system and within a complex regulatory and accreditation framework.

Thus, most local selection committees have developed their policies for selection procedures in an environment where they acknowledge a complex interplay of factors. These include national, regional and local priorities; competition with similar institutions in attracting high academic achievers; widening access to students of different backgrounds; workforce issues, including areas of need; challenges in establishing the equivalence in competence of internationally trained students and healthcare professionals; availability of training places; and financial constraints on universities.

Within this pragmatic approach to developing policy, the theoretical frameworks underpinning much of the existing selection research can be seen as relatively simplistic. For example, a commonly used distinction is between "cognitive" elements that consist of prior academic achievement and written aptitude tests and "non-cognitive" measures that are aimed at assessing the values and personal characteristics of applicants, including by interview and personal statements. [20],[21]

Contrasting a complex health and health education policy context with a research-based reductionist view of the selection process illustrates the substantive challenge to be resolved in taking the debate around selection forward. More complex theoretical models that take account of the broader issues in selection need to be developed. To be fair, psychometricians are among the fiercest critics of the current status quo in theoretical development, which is blocking the translation of research evidence into policy change. A typical argument from the psychometric point of view is that predictive studies would ideally have some "gold standard" outcome variable with which to compare the assessment under scrutiny. Currently, it is claimed, there are no credible holistic measures of "good" or "bad" characteristics of students or, for that matter, of "good" or "bad" health professionals. The critique continues that there is, conveniently, much in-school assessment data against which to regress selection data, leading to a plethora of studies interpreting relatively modest correlation coefficients, which fail to advance new understandings. [1] A related concern with predictive studies is common method variance: The notion of tests merely predicting tests. [11]

In addressing this conundrum of real-world complexity and over-simplicity in current research approaches, some researchers have been consulting a broader range of stakeholders than is traditional in much selection research. Such stakeholders are likely to have a legitimate interest in requiring more useful outcomes from selection research.

Alternative Frameworks for Selection Policy

Two imperatives for universities in the international health professional research literature further illustrate the limitations of the current narrow psychometric research approach to selection. These are the social accountability and widening participation agendas. Universities need to be socially accountable, defined by the World Health Organisation (WHO) as the obligation to direct their education, research and service activities towards addressing the priority health concerns of the community, region and/or nation they have a mandate to serve. [42],[43] To date, there is very little theory development or empirical research evidence around the impact this has on selection policy formation, although there has been guidance as to how to approach it. Medical schools can only meet their obligations to WHO standards of social accountability by demonstrating the creation of a capable, inclusive and tolerant medical school population, where students can work towards diverse future careers for the public good, regardless of economic or ethnic background and religious or political beliefs. [43] There has been a suggestion that, within the context of Saudi Arabia, more external drivers, for example, around World Federation for Medical Education (WFME) standards, could drive Saudi medical schools towards embracing the social accountability agenda in health professions education, but the challenges are recognized. [44],[45]

Conclusion

Selection policy in practice is a relatively new area of research. However, a number of points are becoming clear and will be of interest to stakeholders in Saudi Arabia who wish to continue developing a quality assurance framework for selection. At its most basic, selection has to be approached with the rigour of any high-quality assessment. The way an institution at a local or national level sets out its selection policies says much about that institution's values and its commitment to understanding the complex array of factors that affect selection of the right professionals to take forward the agenda of improving the health and well-being of all. Pragmatically, selection committees need to develop integrated selection processes that fit with their institutions' educational, scientific, clinical and service-oriented goals. [16] They should adopt the principles of good assessment by defining the purpose of selection, blueprinting assessable domains and selecting appropriate formats, including a transparent standard-setting and decision-making process and an evaluation cycle. [37],[46] Institutional leaders need to articulate those institutional goals through a process of inclusive consultation and planning, and to ensure that they are not diverted by narrow and immediate goals.

At a national level, selection needs to be part of the continuum of assessment, where an integrative approach applies the principles of good assessment along the progression hurdles on the education and training pathway within health professional degree, prevocational practice and basic and advanced speciality training. Researching such an approach may be supported by a nationally co-ordinated student-tracking database. [40],[41]

There needs to be a pragmatic balance between selection into future education and training programs and accreditation from past programs that takes account of current workforce needs.

Health science education researchers need to develop interdisciplinary theoretical frameworks that will underpin development of both policy and the research agenda.

We recommend that selection procedures into health professional education and specialty training become part of an international conversation that takes account of the complexities of local context, [2] the evidence base on what works and what does not, as well as the efficient and effective use of resources.

References

1. Roberts C, Prideaux D. Selection for medical schools: Re-imaging as an international discourse. Med Educ 2010;44:1054-6.
2. Prideaux D, Roberts C, Eva K, Centeno A, McCrorie P, McManus C, et al. Assessment for selection for the health care professions and specialty training: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach 2011;33:215-23.
3. Tekian A, Almazrooa AA. Does Saudi Arabia need an Abraham Flexner? Med Teach 2011;33:72-3.
4. Telmesani A, Zaini RG, Ghazi HO. Medical education in Saudi Arabia: A review of recent developments and future challenges. East Mediterr Health J 2011;17:703-7.
5. Albishri JA, Aly SM, Alnemary Y. Admission criteria to Saudi medical schools. Which is the best predictor for successful achievement? Saudi Med J 2012;33:1222-6.
6. Al-Alwan IA. Association between scores in high school, aptitude and achievement exams and early performance in health science college. Saudi J Kidney Dis Transpl 2009;20:448-53.
7. Amin Z, Burdick WP, Supe A, Singh T. Relevance of the Flexner Report to contemporary medical education in South Asia. Acad Med 2010;85:333-9.
8. Al Alwan I, Al Kushi M, Tamim H, Magzoub M, Elzubeir M. Health sciences and medical college preadmission criteria and prediction of in-course academic performance: A longitudinal cohort study. Adv Health Sci Educ Theory Pract 2012.
9. Soh KC. Grade point average: What's wrong and what's the alternative? J High Educ Policy Manage 2010;33:27-36.
10. Julian ER. Validity of the Medical College Admission Test for predicting medical school performance. Acad Med 2005;80:910-7.
11. Wilson IG, Roberts C, Flynn EM, Griffin B. Only the best: Medical student selection in Australia. Med J Aust 2012;196:357.
12. Poole P, Shulruf B, Rudland J, Wilkinson T. Comparison of UMAT scores and GPA in prediction of performance in medical school: A national study. Med Educ 2012;46:163-71.
13. Yates J, James D. The UK clinical aptitude test and clinical course performance at Nottingham: A prospective cohort study. BMC Med Educ 2013;13:32.
14. Lynch B, Mackenzie R, Dowell J, Cleland J, Prescott G. Does the UKCAT predict year 1 performance in medical school? Med Educ 2009;43:1203-9.
15. Donnon T, Paolucci EO, Violato C. The predictive validity of the MCAT for medical school performance and medical board licensing examinations: A meta-analysis of the published research. Acad Med 2007;82:100-6.
16. Schwartzstein RM, Rosenfeld GC, Hilborn R, Oyewole SH, Mitchell K. Redesigning the MCAT Exam: Balancing multiple perspectives. Acad Med 2013;88:560-7.
17. Coates H. Establishing the criterion validity of the Graduate Medical School Admissions Test (GAMSAT). Med Educ 2008;42:999-1006.
18. Kreiter CD, Yin P, Solow C, Brennan RL. Investigating the reliability of the medical school admissions interview. Adv Health Sci Educ Theory Pract 2004;9:147-59.
19. Albanese MA, Snow MH, Skochelak SE, Huggett KN, Farrell PM. Assessing personal qualities in medical school admissions. Acad Med 2003;78:313-21.
20. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: The multiple mini-interview. Med Educ 2004;38:314-26.
21. Roberts C, Walton M, Rothnie I, Crossley J, Lyon P, Kumar K, et al. Factors affecting the utility of the multiple mini-interview in selecting candidates for graduate-entry medical school. Med Educ 2008;42:396-404.
22. Brownell K, Lockyer J, Collin T, Lemay JF. Introduction of the multiple mini interview into the admissions process at the University of Calgary: Acceptability and feasibility. Med Teach 2007;29:394-6.
23. Harris S, Owen C. Discerning quality: Using the multiple mini-interview in student selection for the Australian National University Medical School. Med Educ 2007;41:234-41.
24. O'Brien A, Harvey J, Shannon M, Lewis K, Valencia O. A comparison of multiple mini-interviews and structured interviews in a UK setting. Med Teach 2011;33:397-402.
25. Uijtdehaage S, Doyle L, Parker N. Enhancing the reliability of the multiple mini-interview for selecting prospective health care leaders. Acad Med 2011;86:1032-9.
26. Roberts C, Zoanetti N, Rothnie I. Validating a multiple mini-interview question bank assessing entry-level reasoning skills in candidates for graduate-entry medicine and dentistry programmes. Med Educ 2009;43:350-9.
27. Dore KL, Hanson M, Reiter HI, Blanchard M, Deeth K, Eva KW. Medical school admissions: Enhancing the reliability and validity of an autobiographical screening tool. Acad Med 2006;81:S70-3.
28. Rosenfeld JM, Reiter HI, Trinh K, Eva KW. A cost efficiency comparison between the multiple mini-interview and traditional admissions interviews. Adv Health Sci Educ Theory Pract 2008;13:43-58.
29. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ 2007;41:378-84.
30. Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ 2009;43:767-75.
31. Patterson F, Ferguson E, Norfolk T, Lane P. A new selection system to recruit general practice registrars: Preliminary findings from a validation study. BMJ 2005;330:711-4.
32. Provan JL, Cuttress L. Preferences of program directors for evaluation of candidates for postgraduate training. CMAJ 1995;153:919-23.
33. Patterson F, Ashworth V, Zibarras L, Coan P, Kerrin M, O'Neill P. Evaluations of situational judgement tests to assess non-academic attributes in selection. Med Educ 2012;46:850-68.
34. Ahmed H, Rhydderch M, Matthews P. Can knowledge tests and situational judgement tests predict selection centre performance? Med Educ 2012;46:777-84.
35. Lievens F, Patterson F. The validity and incremental validity of knowledge tests, low-fidelity simulations, and high-fidelity simulations for predicting job performance in advanced-level high-stakes selection. J Appl Psychol 2011;96:927-40.
36. Dore KL, Kreuger S, Ladhani M, Rolfson D, Kurtz D, Kulasegaram K, et al. The reliability and acceptability of the multiple mini-interview as a selection instrument for postgraduate admissions. Acad Med 2010;85(10 Suppl):S60-3.
37. Roberts C, Newble D, Jolly B, Reed M, Hampton K. Assuring the quality of high-stakes undergraduate assessments of clinical competence. Med Teach 2006;28:535-43.
38. Schuwirth L. The need for national licensing examinations. Med Educ 2007;41:1022-3.
39. Bajammal S, Zaini R, Abuznadah W, Al-Rukban M, Aly SM, Boker A, et al. The need for national medical licensing examination in Saudi Arabia. BMC Med Educ 2008;8:53.
40. Andriole DA, Jeffe DB, Hageman HL, Ephgrave K, Lypson ML, Mavis B, et al. Variables associated with full-time faculty appointment among contemporary U.S. medical school graduates: Implications for academic medicine workforce diversity. Acad Med 2010;85:1250-7.
41.Humphreys JS, Prideaux D, Beilby JJ, Glasgow NJ. From medical school to medical practice: A national tracking system to underpin planning for a sustainable medical workforce in Australasia. Med J Aust 2009;191:244-5.  Back to cited text no. 41
42.Boelen C, Woollard B. Social accountability and accreditation: A new frontier for educational institutions. Med Educ 2009;43:887-94.  Back to cited text no. 42
43.Murray RB, Larkins S, Russell H, Ewen S, Prideaux D. Medical schools as agents of change: socially accountable medical education. Med J Aust 2012;196:653.  Back to cited text no. 43
44.Al-Subait R, Elzubeir M. Evaluating a masters of medical education program: Attaining minimum quality standards? Med Teach 2012;34 Suppl 1:S67-74.  Back to cited text no. 44
45.Al-Shehri AM, Al-Alwan I. Accreditation and culture of quality in medical schools in Saudi Arabia. Med Teach 2013;35:S8-14.  Back to cited text no. 45
46.Crossley J, Humphris G, Jolly B. Assessing health professionals. Med Educ 2002;36:800-4.  Back to cited text no. 46

This article has been cited by
1. Burgess A, Roberts C, Sureshkumar P, Mossman K. Multiple mini interview (MMI) for general practice training selection in Australia: Interviewers' motivation. BMC Medical Education. 2018;18(1).
2. Burgess A, Roberts C, Clark T, Mossman K. The social validity of a national assessment centre for selection into general practice training. BMC Medical Education. 2014;14(1).

