A pilot Tuning Project-based national study on recently graduated medical students’ self-assessment of competences

We have validated a questionnaire that can serve as a SWOT analysis tool for medical
education at a national level. The TEST study provides pioneering data on
Portuguese medical graduates’ self-assessed clinical competence against a European
reference framework. For the first time, the effectiveness of Portuguese medical schools in
delivering core competences and exposing students to core clinical settings is explored,
and areas where curricula may benefit from improvement are highlighted.

How did graduates self-assess their acquisition of core competences?

The median values of the Clinical Practice and Knowledge factors correspond to a level of self-assessed competence between sufficient and
good, and the majority of factors showed a good level of competence. Important differences
were found among CP factors, whose scores ranged from 1.3 (insufficient) to 4.0 (very good)
on a 0 to 5 Likert scale. Among K factors, variation was less pronounced (from 2.0
to 3.0). Furthermore, CP factors also contained the highest and the lowest scored items
in the questionnaire. Taken together, these findings show that dispersion was greater
among CP competences.
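As a quick arithmetic check, the dispersion contrast described above follows directly from the reported extremes. The sketch below uses only the minima and maxima quoted in this study; the range is taken as a crude dispersion measure:

```python
# Extremes of factor scores reported in the study (0-5 Likert scale)
cp_min, cp_max = 1.3, 4.0   # Clinical Practice (CP) factors
k_min, k_max = 2.0, 3.0     # Knowledge (K) factors

# Range as a crude dispersion measure: CP spreads 2.7 points, K only 1.0
cp_range = cp_max - cp_min
k_range = k_max - k_min
print(round(cp_range, 1), round(k_range, 1))
```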

Important aspects of medical practice were self-assessed above the median CP value
in all medical schools: “Consultation with a patient and management plan”, “Ethical
principles”, “Psychological and social aspects of illness”, “Evidence-based medicine
and scientific principles”, as well as “Population health and healthcare system”.
Knowledge on “Public Health” was also scored above the median K value in all schools.

On the other hand, the CP factors “Medical emergencies”, “Practical procedures”, “Prescribe
drugs” and “Legal principles” scored below the median CP value in all medical schools.
These results are consistent with other studies on self-assessed competence in recent
graduates or junior doctors [28, 29].

Poor grouped self-assessments in “Medical emergencies” might be explained by graduates’
feelings of incapacity to deal with emergent clinical scenarios, fear of making mistakes
and limited opportunities for practice in emergency settings during the medical course.
Regarding “Practical procedures”, low self-assessment scores may reflect the same
need for improvement in medical curricula, namely more opportunities for practice
with simulated and real patients.

Regarding the prescription of drugs, the Clinical Practice (Prescribe drugs) and Knowledge (Drugs and prescription) factors were coherent, which suggests internal consistency
of the questionnaire and emphasizes low self-assessed competence in this domain. These
results may reflect the lack of practical or case-based teaching approaches to therapeutics,
which could prepare graduates better for the transition to postgraduate training than
the theoretical approaches frequently undertaken.

Competence in “Legal principles” in medical practice was very poorly scored by recent
graduates. Both the Clinical Practice (CP8) factor and the equivalent Knowledge (K6) factor in this domain showed that graduates are not familiar with relevant legislation
or with medical paperwork and administrative tasks.

All the above-mentioned domains of clinical competence are required of Portuguese
medical graduates in order to provide quality patient care, even though they work
under supervision and integrated in medical teams. Follow-up of recent graduates
through the ‘Common Year’ and residency would clarify whether these self-assessed
deficits in the development of clinical competence persist or improve, provide
more information for undergraduate program evaluation and might reveal latent insufficiencies
in clinical competence. This longitudinal approach has been followed in the United
Kingdom, where research used quantitative and qualitative methods such as interviews
of graduates, their colleagues, senior doctors and other healthcare professionals
[28].

Did graduates experience contact with patients in core clinical settings?

The vast majority of graduates experienced contact with patients in most of the core
clinical settings, including internal medicine and surgery admission units, primary
care, care of elderly patients, children, pregnant women and psychiatric patients.

Nevertheless, around one quarter of graduates did not have contact with acutely ill
patients in emergency or intensive care units, which may be linked to the above-mentioned
results of lower self-assessed competence regarding “Medical emergencies”. This is
also consistent with studies on first-year residents [20]. Percentages of graduates who had contact with patients in other settings may also
unveil deficiencies in undergraduate education: only 40 % of graduates experienced
contact in palliative and anesthetic care. In fact, the item “providing care of the
dying and their families” (2.08) was the lowest scored of its factor, which shows
consistency in the study’s findings.

Clinical Settings with the lowest percentages might be underused for learning purposes during undergraduate
education, which may affect graduates’ preparation and confidence to deal with
specific types of patients and healthcare needs.

However, we point out that the questionnaire did not evaluate the quality or quantity
of the learning experiences in clinical settings. Further research may show other
settings in which learning experiences were not frequent and/or did not have positive
educational value, thus contributing to lower clinical competence among medical graduates.

Are there differences between graduates from different medical schools?

We found that the highest and lowest scored factors were common among cohorts of graduates
from different Portuguese medical schools. In fact, no single school or group of
schools showed consistently high or consistently low results across the various parts
of this study. Differences among schools were smaller than differences among
clinical competences, knowledge domains and clinical settings. This conclusion was
also reached in previous studies on the effectiveness of undergraduate medical programs
in the United Kingdom [28]. This may indicate that the school effect is less important than the effect of high-quality
clinical experiences in specific disciplines or of active learning behaviours.

Nevertheless, significant differences among medical schools were found in some CP
and K factors: school 6 scored the highest in 4 out of 6 K factors. Importantly, differences
among schools with regard to percentages of contact with patients in some Clinical Settings are sometimes substantial, namely in the settings with the lowest percentages, such
as emergency and intensive care units, palliative care, anesthetic care, rehabilitation
medicine and specialized surgical and medical conditions: for example, while 89.4 %
of graduates from school 7 had contact with patients in rehabilitation units, only
22.5 % of graduates from school 3 had the same learning opportunities (in fact, school
3 showed the lowest percentages in 4 out of 14 Clinical Settings).

Differences among schools may be explained by an analysis of differences in their
medical curricula, teaching-learning strategies or even assessment methods. Regarding
Clinical Settings, differences may reveal that only some medical schools have acknowledged the importance
of all the Tuning core clinical settings in undergraduate education. Marked differences
among schools can also be influenced by the available healthcare units and the collaboration
with teaching hospitals.

We found that Portuguese medical schools may not differ considerably in their
effectiveness in delivering core competences, but this requires further research.
Indeed, comparative research may lead to substantial progress in medical education
[30]. Outcome-based program evaluations might stimulate faculty development, guide recently
established medical schools [21] and strengthen schools’ accountability as elements of larger healthcare systems.
Collaborative efforts among medical schools for program evaluation and detection of
areas needing improvement in undergraduate education have already been developed and
have included recent graduates’ self-assessments [28].

Limitations

The TEST study analyzed recent medical graduates’ grouped self-assessment of core
competences in order to draw inferences about actual clinical competence and, consequently,
about the effectiveness of undergraduate programs. This emphasizes the need to interpret
results with care, keeping several caveats in mind.

We consider our sample size representative of the study population. Almost
one out of four Portuguese medical graduates answered the survey – which fulfilled
our aim of five participants per item for the purpose of factor analysis – and the
sample closely resembles the population in terms of gender, age and admission contingent.
Also, medical schools are represented in the sample in accordance with their number
of admissions, and cohorts of graduates from different schools did not differ
in terms of gender, age or modality of admission.

Questionnaires were administered during preparation lectures for the national exam
for access to residency, which might have induced a selection bias favoring more interested
students or, conversely, students who felt they needed improvement. Moreover, the possibility
of more than one response to the survey per person needs to be considered, since paper
and electronic versions of the questionnaire were distributed. However, explicit instructions
were given that only one version of the questionnaire should be completed, and we have no
reason to believe that selection bias had a significant impact on the study.

The final year of the medical course (which graduates had completed in July) might
have had considerable impact on graduates’ self-assessments. Hence, the study’s findings
may be more reflective of the later stages of medical curricula, especially clinical
experiences in the final year. This suggests that the lowest scored factors highlight
areas of clinical competence and knowledge that might be improved by the educational
development of the transition period between undergraduate and postgraduate training.
In fact, the final year of the medical course in Portugal shows some of the problems
pointed out in the literature [31].

Regarding the study instrument, Tuning core outcomes are reassuring in terms of content
validity and the exploratory factor analysis yielded meaningful factors that explained
a great proportion of the variance of answers. Nevertheless, concerns regarding the
questionnaire’s face validity may be raised. We used a 6-point Likert scale of levels
of competence (from non-existent to excellent competence) in which graduates had to
define what “sufficient” competence meant for each item of the questionnaire. The concept
of “sufficient competence” may be difficult to define and may be interpreted differently
among graduates. Also, they may interpret the concept differently depending
on their medical school, since learning outcomes, curricula, learning and assessment
experiences might have influenced their expectations and standards. These limitations
may harm the validity of comparisons among schools. A non-differential bias may also
explain why only one of the factors (CP8) was self-assessed below a sufficient level
of competence; in order to obtain a clearer view on the highest and lowest areas of
self-assessed competence, we interpreted factor scores considering their absolute
scores and their position in relation to the median value of all Clinical Practice or Knowledge factors. Narrative descriptions might improve the questionnaire’s face validity by
associating each level of competence in each item to specific clinical scenarios.
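The interpretation rule described above can be sketched as follows. The factor names and scores below are hypothetical and the helper function is ours, serving only to illustrate classifying each factor by its position relative to the median of all factors in its group:

```python
from statistics import median

def classify_factors(scores):
    """Flag each factor as above or at/below the median of its group.

    `scores` maps factor name -> grouped self-assessed score (0-5 scale).
    """
    med = median(scores.values())
    return {name: ("above median" if s > med else "at or below median")
            for name, s in scores.items()}

# Hypothetical Clinical Practice factor scores, for illustration only
cp = {"CP1": 3.5, "CP2": 2.9, "CP4": 4.0, "CP5": 3.0, "CP8": 1.3}
print(classify_factors(cp))
```

A factor such as the hypothetical CP8 here would be flagged both by its low absolute score and by its position below the group median, mirroring the two criteria used in the study.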

Importantly, competence in practical procedures seems to be more accurately self-assessed
than knowledge [32]. Also, the focus on medical knowledge is reduced at later stages of the medical course
in Portugal. Graduates may therefore have more difficulty self-assessing their knowledge
than their competence in procedural skills, which leads us to consider that this study’s
findings regarding Clinical Practice factors correlate better with Portuguese graduates’ actual competence and
prospective difficulties in the ‘Common Year’.

Grouped self-assessments are particularly important for program evaluation in the
Portuguese context, where it is difficult to define an objective measure of clinical
competence which can be considered the gold-standard at a national level. Also, research
based on grouped self-assessments is inexpensive and stimulates graduates’ self-reflection
and engagement in medical education. In fact, the purpose of this study is not to
obtain a precise measurement of individual or even group competence, but to provide
important data for the purpose of evaluating program effectiveness and driving outcome-based
curricular improvement. We believe that results regarding core clinical competences
with the lowest self-assessed scores are the most relevant for program evaluation
purposes and deserve more attention from medical schools. Further research may refine
this pilot initiative, emphasizing domains of clinical competence which are more prone
to deliver valid grouped self-assessment data, and improving the survey’s implementation
in collaboration with all Portuguese medical schools.