Our analysis shows that candidates trained at different UK medical schools perform differently in the MRCP(UK) examination. In 2003–2005, 91%, 76% and 67% of graduates from Oxford, Cambridge and Newcastle passed Part 1 at their first attempt, compared with 32%, 38%, 37% and 41% of Liverpool, Dundee, Belfast and Aberdeen graduates; roughly twice as many Newcastle graduates as Liverpool graduates therefore pass the examination first time (odds ratio = 4.3).
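The quoted odds ratio follows directly from the two first-attempt pass rates; a minimal sketch of the calculation (using the Newcastle and Liverpool figures of 67% and 32% given above):

```python
def odds_ratio(p1, p2):
    """Odds ratio for two pass rates expressed as proportions.

    The odds of passing are p / (1 - p); the odds ratio is the
    ratio of the two groups' odds.
    """
    return (p1 / (1 - p1)) / (p2 / (1 - p2))

# Newcastle (67%) vs Liverpool (32%) first-attempt pass rates at Part 1
print(round(odds_ratio(0.67, 0.32), 1))  # → 4.3
```

Note that although the pass rates differ by about a factor of two (67% vs 32%), the odds ratio is larger (4.3), because the odds scale also reflects the complementary failure rates (33% vs 68%).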
At the medical school level, performance at Part 1 correlates almost perfectly with performance at Part 2 (both are multiple-choice examinations). Performance at PACES, a clinical examination, also correlates highly with Parts 1 and 2, although there are some small changes in rank order, the most notable being that London graduates perform worse than average at PACES but not at Parts 1 and 2.
School-leaving examinations are known at the individual level to predict performance in undergraduate medical examinations and in postgraduate careers [23, 24]. Although pre-admission academic qualifications correlate significantly with MRCP(UK) Part 1 performance at the medical school level (r = 0.779), that correlation is substantially smaller than the correlation between Part 1 and Part 2 of the examination (r = 0.992). Taking the Part 1–Part 2 correlation as the ceiling on explainable school-level variance, pre-admission qualifications account for about 62% of the accountable variance, leaving about 38% of the school-level variance dependent on other, unknown, factors. It should be emphasized that because sex and ethnic origin were entered into the multilevel model at the individual level, there can be no differences at the medical school level attributable to ethnicity or sex.
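The 62%/38% split can be checked from the two correlations quoted above, assuming it is computed as the ratio of squared correlations (pre-admission r against the Part 1–Part 2 ceiling):

```python
r_preadmission = 0.779   # school-level correlation: pre-admission quals vs Part 1
r_part1_part2  = 0.992   # school-level correlation: Part 1 vs Part 2 (ceiling)

# Squaring a correlation gives the proportion of variance explained; the
# share of the "accountable" variance attributable to pre-admission
# qualifications is the ratio of the two squared correlations.
share = (r_preadmission ** 2) / (r_part1_part2 ** 2)
print(round(share * 100))        # → 62
print(round((1 - share) * 100))  # → 38
```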
There are at least three broad types of explanation for the differences we have found: differences in those entering the schools (selection effects); differences in education or training at the school (training effects); or differences owing to students from different schools preferring different postgraduate careers (career preference effects).
Selection effects would predict that better-qualified students enter schools such as Oxford, Cambridge and Newcastle-upon-Tyne (Oxford and Cambridge, in particular, have traditionally demanded very high A-levels), so that the better-qualified entrants to those schools would also be likely to perform better in postgraduate examinations. At the individual level it is known that A-level results correlate with performance in MRCP(UK) Part 1, and there are also clear differences in the average pre-admission qualifications of applicants receiving offers at different medical schools (see Figure 2). Our analysis of compositional variables leaves little doubt that half or more of the variance between schools can be explained by differences in intake, and that is supported by the correlations found with the data reported in the Guardian tables, which are compiled from a range of official statistics (Table 2). However, even at Part 1 the correlation leaves at least one-third of the variance unexplained. In particular, MRCP(UK) performance is about one SD higher than predicted from pre-admission qualifications alone for Leicester, Oxford, Birmingham, Newcastle-upon-Tyne and London, and about one SD lower than expected for Southampton, Dundee, Aberdeen, Liverpool and Belfast. Neither can differences in pre-admission qualifications explain the relative underperformance of London graduates at PACES compared with Parts 1 and 2. Pre-admission qualifications are therefore part of the story, but not the entire explanation of medical school differences; the remaining variance is most likely related either to other differences in the intake of schools or to differences in the education provided by those schools.
Career preference effects would occur if the differential performance of graduates on MRCP(UK) reflects a form of self-selection into different specialities (Parkhouse reported, for instance, that amongst those qualifying between 1974 and 1983, hospital medicine was particularly popular with Oxford, London and Wales graduates, and particularly unpopular with Aberdeen, Dundee and Leicester graduates). If popularity also equated to status and kudos, then the most academically gifted students at one school might prefer to enter one particular speciality, whereas at another school they might prefer a different speciality. Candidates would then perform better if they came from schools where a higher proportion of graduates took the MRCP(UK). However, our data show that not to be the case: the correlation between performance and the proportion taking the examination was non-significant once pre-admission qualifications were taken into account.
Career preference effects also predict that if training at all schools is, on aggregate, equivalent, then schools performing better at one particular postgraduate examination, because their better students prefer to take it, should perform less well at other examinations taken by their less gifted graduates. Overall there would then be a negative correlation in the ordering of schools across any pair of postgraduate examinations. In a study of performance at MRCGP in the early 1990s, graduates of Oxford, Cambridge and Newcastle-upon-Tyne ranked 1st, 5th and 7th, compared with Belfast, Aberdeen, Dundee and Liverpool graduates, who ranked 16th, 23rd, 24th and 26th of the 27 UK medical schools; the overall correlation of effect sizes was positive, r = 0.480 (p = 0.038, n = 19). More recent MRCGP data from 2003–2006 show a similar and somewhat stronger trend (see Table 1). Such positive correlations, if confirmed in other examinations, would make the career selection explanation unlikely.
Institutions can differ in the amount of 'value' that they add, an effect well known in secondary education. Training effects would predict that teaching and training in general medicine at some schools is better preparation for MRCP(UK) than at others, perhaps because of differences in course emphasis or focus, so that candidates subsequently perform better at the MRCP(UK). If career preferences and pre-admission qualifications cannot explain all of the differences between medical schools, then a reasonable conclusion is that medical schools also differ in the quality of their training in general medicine. Some schools may therefore be adding more value to their students than others, in relation to taking the MRCP(UK), even taking into account differences in pre-admission qualifications. However, it is of interest that none of the teaching-related measures in the Guardian compilations correlates with MRCP(UK) performance.
The MRCP(UK) examinations are typically taken early in the career. The impact of university teaching on performance is supported by our finding that recency of graduation predicts performance in all three parts of the examination. The coefficient of variation for medical school differences was largest for Part 1 and smallest for PACES, suggesting that postgraduate education dilutes the effects of undergraduate training as time passes. Understanding the mechanisms by which medical school teaching might affect postgraduate examination performance requires more background information than is available to us. It is interesting that where a university's students are more likely to report that the teaching of medicine is 'very interesting', graduates subsequently perform better at MRCP(UK); however, that effect does seem to be secondary to pre-admission qualifications, with students from schools with higher pre-admission qualifications also reporting the teaching of medicine to be more interesting. Teaching can be affected not only by the activities of teachers and students, but also by the environment and institutions in which teaching occurs. A case of particular interest is London, the only university showing a specific underperformance of graduates at PACES, the clinical examination of MRCP(UK); London's medical schools have undergone repeated reorganizations over the past two decades, which might in part explain effects on clinical teaching. As the data are aggregated across all London schools, this is difficult to explore further here. An additional confounding issue for all medical schools is the constant change in curricula.
However, our additional analysis of Part 1 data going back to candidates taking the examination in 1989 (who would have entered medical school in about 1982) shows that the broad pattern of results we have found is long-standing, and therefore can only partly be explained by the changes in medical education initiated by the GMC in Tomorrow's Doctors in 1993. A detailed examination of individual medical schools (see Figures S11a–S11e in additional file 1) shows that for many schools there has been little variation in relative performance between 1989 and 2005. Problem-based learning, introduced in Glasgow, Liverpool and Manchester, has had little obvious impact in the latter two schools, although performance did increase in Glasgow. Despite the many, much-criticised reorganizations in London, performance there has improved overall. Oxford and Cambridge both showed sudden increases in performance in the late 1990s, as did Wales. Other schools showed fluctuations, but the overwhelming impression is of constancy rather than change, suggesting that curricular and other changes have had little impact on the relative performance of schools.
The MRCP(UK) consists of both written and clinical examinations, and detailed analyses of its rationale and behaviour have been presented elsewhere [3–8]. Of course, the examination does not assess the entire range of knowledge, skills and attitudes necessary to be a successful physician, although it covers diagnosis and management within internal medicine comprehensively, and the PACES examination assesses a wide range of practical skills, including physical examination, recognition of signs, management of patients, history-taking, communication with patients and relatives, and handling difficult ethical situations. Current work suggests that PACES, in particular, assesses all of the competencies that Modernising Medical Careers recognizes should be assessed in such an examination, and it is an important, coherent and central part of the assessment of those competencies that the GMC and PMETB recognize as needing to be assessed in the UK. However, MRCP(UK) cannot assess all of the necessary competencies, and it is possible that some of those not assessed are also inculcated better by some medical schools than others; assessment of that possibility must await further evidence from other sources.