A simple eye model for objectively assessing the competency of direct ophthalmoscopy
Summary of main findings and comparison with existing literature
In this study, we developed a robust and simple eye model and found it effective for assessing the competency of direct ophthalmoscopy compared with the traditional checklist. The subjective checklist score was 9.25 ± 0.47, with a discrimination index of 0.11; the model-assessment score was 4.24 ± 3.10, with a discrimination index of 0.79. Notably, two-thirds of the residents agreed or strongly agreed that the model-assessment could reflect the ability to visualize the fundus.
In usual practice, ophthalmoscopic skill is assessed by trainers who give scores according to a checklist covering mainly the operating procedure. In our study, the assessment checklist could be divided into 3 parts: (a) preparation of the patient, the device and the environment; (b) the operating procedure, including manipulating the light, adjusting the dioptre and controlling the working distance; and (c) overall proficiency. The checklist-assessment showed a high level of performance, with a mean score reaching 9.25, indicating that it was very easy for the residents to perform direct ophthalmoscopy from a memorized operating procedure. On the other hand, the checklist of operational steps appeared to be a separate measurement tool from the model-assessment, which reflected the ability to visualize the letters on the fundus. It has been reported that few young ophthalmologists were able to use an ophthalmoscope effectively even though they had memorized the operational procedure [1, 5].
Competency in ophthalmoscopic skills has been a concern in medical education. Eye models with fundus images inside have increasingly been used as adjunctive tools for task-based skill assessment [2, 8]. Although task-based assessment could reflect competency more objectively and accurately, it may be technically challenging and unsuitable for students, or even residents, with limited (if any) clinical experience, since the capability to make a disease diagnosis is required. Moreover, the fundus images, located mainly centrally, could hardly be used to assess the competency of inspecting the peripheral retina. Thus, a new approach is needed for a more targeted assessment of the ability to visualize the fundus.
Paul Bradley ingeniously used a table tennis ball as an eye model, with five sets of text in the fundus, for objective assessment of direct ophthalmoscopy in 803 undergraduate medical students at the University of Liverpool, UK. In Bradley’s study, the mean score for visualizing the text was 4.4 (out of a total score of 5; difficulty index: 0.88), with a 95% confidence interval of (4.3, 4.5) and an estimated coefficient of variation (CV) of 32%. By contrast, the model-assessment in our study was more difficult (difficulty index = 0.42), and the score distribution was more dispersed (CV = 73%).
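The two indices quoted above follow directly from the reported summary statistics. A minimal sketch of the arithmetic (assuming, as a difficulty index of 0.42 implies, a maximum model-assessment score of 10):

```python
# Difficulty index = mean score / maximum attainable score.
# Coefficient of variation (CV) = standard deviation / mean.
# The numbers below are the summary statistics reported in the text;
# the maximum score of 10 for our model-assessment is an assumption.

def difficulty_index(mean_score, max_score):
    return mean_score / max_score

def coefficient_of_variation(sd, mean_score):
    return sd / mean_score

# Bradley's study: mean 4.4 out of 5
print(round(difficulty_index(4.4, 5), 2))              # 0.88

# Our study: mean 4.24, SD 3.10, assumed maximum of 10
print(round(difficulty_index(4.24, 10), 2))            # 0.42
print(round(coefficient_of_variation(3.10, 4.24), 2))  # 0.73
```

The recovered values (0.88 and 0.42 for difficulty; 0.73 for CV) match the figures reported in the text, which supports the assumed maximum score.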
Several factors may account for the difference in difficulty. First, there was no time limit in Bradley’s assessment, whereas the residents in our study were required to finish direct ophthalmoscopy within 5 min, which increased the degree of difficulty. Second, we used a brown plastic ball to form a black box, simulating the structural features of the choroid; Bradley’s report did not mention how light entering through the wall of the pale table tennis ball was avoided, which might have reduced the difficulty of manipulating the light beam. Third, reading text was much easier than reading randomized letters of the same size, since the text message could be guessed by association. Fourth, the eye model in our study was 26 mm in diameter, closer to the diameter of a human eyeball and smaller than Bradley’s table-tennis-ball model (approximately 40 mm). Fifth, unlike Bradley’s model, which lacked a refractive component, we added a convex lens with an appropriate focal length to simulate an emmetropic eye, so the residents could not visualize the fundus without an ophthalmoscope.
In addition, we measured the difficulty index and discrimination index, which are frequently used in evaluating the quality of a test. The higher the difficulty index, the lower the difficulty: tests with a difficulty index above 0.90 are very easy and probably not worth administering, whereas tests with a very low difficulty index are too difficult and inappropriate for students or residents. The discrimination index describes how effectively a test differentiates between high-ability and low-ability students, so a high discrimination index is desirable in skill assessment. The relationship between these two indices has been recognized: Si-Mui Sim et al. found that the maximum discrimination index occurred at a difficulty index between 40 and 74%. Either a very high or a very low difficulty index would lower the discrimination index, and the test would then fail to differentiate between weak and competent students. The model-assessment in our study was therefore moderate in difficulty (difficulty index = 0.42) and acceptable for the residents. Moreover, using our eye model to assess ophthalmoscopic competency differentiated well between poor and competent performers (discrimination index = 0.79), compared with Bradley’s model (for which CV serves as a reference in the absence of a reported discrimination index) and the checklist (discrimination index = 0.11).
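The discrimination index described above is conventionally computed with the upper/lower-group method, contrasting the top and bottom fraction (often 27%) of examinees ranked by overall score. The sketch below illustrates that standard formula; it is not necessarily the exact variant used in our study or Bradley’s, and the example data are hypothetical:

```python
def discrimination_index(total_scores, item_scores, fraction=0.27):
    """Upper-lower group discrimination index for one test item.

    total_scores: overall test score per examinee.
    item_scores:  score on the item of interest per examinee (e.g. 0/1).
    Returns p(upper group) - p(lower group), ranging from -1 to 1.
    """
    n = max(1, round(len(total_scores) * fraction))
    # Rank examinees by overall score, best first.
    order = sorted(range(len(total_scores)),
                   key=lambda i: total_scores[i], reverse=True)
    upper = order[:n]    # top performers
    lower = order[-n:]   # bottom performers
    p_upper = sum(item_scores[i] for i in upper) / n
    p_lower = sum(item_scores[i] for i in lower) / n
    return p_upper - p_lower

# Hypothetical example: an item answered correctly only by the stronger half
# discriminates perfectly (index = 1.0).
totals = [9, 8, 7, 6, 5, 4, 3, 2]
item   = [1, 1, 1, 1, 0, 0, 0, 0]
print(discrimination_index(totals, item))  # 1.0
```

An item that every examinee answers identically, by contrast, yields an index of 0, which is why very easy or very hard items discriminate poorly.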
Inter-model comparison suggested no significant difference among the models, indicating good reproducibility of the models designed in our study. It is noteworthy that the residents seemed to obtain higher scores when examining the right eye model, which presumably resulted from ocular dominance and right-handedness. Further study to confirm the relationship between ophthalmoscopy performance and the dominant eye and hand is warranted.
Feedback from the residents showed their belief that this simple eye model was able to simulate the eyeball for fundus visualization and that the model-assessment could reflect their ophthalmoscopic ability. Most residents agreed that the model was valuable as a tool for ophthalmoscopic practice and expected it to become widely used in medical education. In addition, practicing ophthalmoscopy on our model avoids close contact with simulated patients and helps to prevent infection via respiratory droplets, especially in the era of COVID-19.
Also, most residents found it easy to assemble the eye model after the demonstration and subsequent practice. The plastic ball was low-cost and easy to buy online. Unlike Bradley’s model, in which the table tennis balls had to be cut into halves, our hemispheres came factory-made in the original design; no cutting was needed, and painting and gluing on the inner surface could be done directly and easily.
Strengths, limitations and further research
The strengths of this study included the design of a simple eye model with high fidelity, the objectivity of the assessment of ophthalmoscopic skill and the novel use of specific indices to evaluate the effectiveness of the assessment.
There are also limitations to this study. First, visualization of the fundus varied in difficulty with the anatomical location, being easier for the central than for the peripheral retina. Second, visualizing and locating the letters was not easy, as quite a few residents recorded letters in adjacent positions. To assess ophthalmoscopic skill more accurately, further improvements, including weighting of scores and correction for mislocated letters, are necessary. Finally, the present study was limited by variation in the difficulty of distinguishing the randomly selected letters, although inter-model analysis showed no significant difference.
It should be noted that this was a cross-sectional study with a single training and assessment session, and we focused only on skill assessment rather than skill training. Further study with a longitudinal randomized controlled design will therefore be needed to confirm the effectiveness of the model in skill training. To explore its applications, a series of eye models with various pupil sizes, dioptres and severities of refractive media opacity will be provided to simulate various clinical situations.
In conclusion, we have developed a simple and reproducible eye model that meets the need for objective assessment of ophthalmoscopic skills. The model-assessment accurately reflected competency in ophthalmoscopic skill, with the discriminatory power to differentiate performers at different skill levels. Moreover, the model was low-cost and easy to assemble, and it is potentially useful in ophthalmologic education.