There is increasing evidence that patients benefit when their rheumatologists perform musculoskeletal ultrasonography (MUS) (1–7). This imaging technique is a valuable complementary clinical tool that enables clinicians to improve the accuracy of their diagnoses and management decisions, both of which are essential for maintaining the highest standards of patient care (8). It is also of considerable utility to researchers, improving understanding of the pathophysiology of rheumatic diseases and serving as an objective outcome measure for clinical research studies (9–11). These advantages have encouraged an increasing number of rheumatologists to purchase ultrasound machines and carry out MUS examinations as part of their routine practice (12). This has a number of important educational implications, particularly with regard to training and competency assessment, and the current situation is a challenge that the rheumatology and radiology communities need to address (13, 14).
Although rheumatologic ultrasonography is an expanding area, published information on training is limited (15, 16). A review of the training and assessment regimens of current MUS practitioners confirms a wide variety of approaches to training and minimal exposure to competency assessment (17). Courses in MUS are popular, but these aim to introduce concepts and aid understanding rather than provide formal training. Guidelines are available on image acquisition, equipment, and practice standards (18, 19), but there is little information and no published agreement on any other fundamental issue with regard to education in this area. For example, no guidelines direct a rheumatologist as to the knowledge and skills required to perform an adequate MUS assessment, or even as to the indications or anatomic areas for which such an examination is appropriate. Similarly, there are no published requirements for training, nor any established standards or outcomes that rheumatologists must achieve to be deemed proficient in this technique. In addition, there is no objective assessment process to ensure that competency standards are achieved and maintained, and there are no requirements for life-long learning, continuous assessment, or revalidation. Overall, there is currently insufficient basic information available with which to make recommendations for training, define suitable standards, or determine the nature of assessment. Answers to these basic questions are essential to advance this important area and enable the development of a structured rheumatology MUS curriculum and a valid criterion-referenced system of competency assessment. Appropriately trained clinicians are essential for competent patient diagnosis and management.
Without this, patients will not receive the maximum benefit from this imaging modality, and their safety may be compromised due to inappropriate examination or misdiagnosis because of inadequately trained operators.
We have used consensus-defining methodology to obtain a wide range of information, insights, and opinions from expert practitioners in the field of MUS, with the aim of establishing an informed consensus of appropriate indications, anatomic areas, and knowledge and skills required by rheumatologists who perform MUS. This important information will enable us to develop and evaluate competency standards and devise educational outcomes, which will form the basis for the development of a specific training curriculum and assessment process for rheumatologists in MUS.
There are currently insufficient published data to make recommendations for the training and assessment of rheumatologists performing MUS or to guide a rheumatologist as to what is appropriate MUS practice. This is the first study to provide information in a number of these fundamental areas; the results will facilitate much-needed informed educational development in this field and will direct future rheumatology MUS practice. We have produced expert-derived consensus guidelines of appropriate indications, anatomic areas, and knowledge and skills required by rheumatologists who perform MUS. This is a substantial step forward in this important and developing area of rheumatologic imaging.
We identified 57 international experts in MUS comprising 20 rheumatologists and 37 radiologists, all of whom have a track record of teaching, research, and active MUS practice over a number of years. The response rate was particularly good among the rheumatologists and there was excellent retention of all respondents between rounds, implying a high level of motivation and interest among expert practitioners.
Extensive preliminary research enabled us to develop a focused questionnaire containing 37 categories of possible areas of importance for rheumatologists undertaking MUS. This was divided into 4 sections comprising indications, anatomic areas, knowledge and skills, and free text. Using our strict criteria of group and consensus agreement, we were able to classify these into 30 categories that satisfied our criteria as being appropriate for rheumatologists and 7 that were considered inappropriate (Tables 5 and 6).
Table 6. Categories defined by expert group consensus agreement as being inappropriate for musculoskeletal ultrasound by rheumatologists
| Indications | Anatomic areas | Knowledge and skills |
| --- | --- | --- |
| Degenerative arthritis | Groin | |
| Muscle injury | Soft tissue | |
| Ligament injury | | |
| Soft tissue mass | | |
| Nerve lesions | | |
A number of interesting observations can be made from these data. Two distinct groups of conditions were identified within the possible indications list. The first group had total cumulative agreement scores ranging from 81% to 89%, easily above our threshold value of 70% to signify group agreement. The net change in cumulative agreement between rounds for all of these categories satisfied our criteria for group consensus, with the scores becoming more positive, indicating an increase in agreement within the expert group that these indications were indeed appropriate. The second group of indications all had cumulative agreement scores ranging from 46% to 57%, well below the cutoff value of 70% to signify group agreement. Although the net change in cumulative agreement was relatively small and satisfied our criteria for consensus agreement, the change was in a negative direction, implying that the group became more definite in its opinion that these categories should indeed be rejected. The inappropriate indications for rheumatologist MUS comprise degenerative arthritis, muscle and ligament injury, soft tissue masses, and nerve lesions. In our opinion, although MUS may be an appropriate first-line investigation for some of these indications (although one could argue at present that a radiograph may be a more appropriate investigation for degenerative arthritis), correlation of MUS findings with those of other imaging modalities, e.g., magnetic resonance imaging, is likely to be required. Because this additional imaging would be expected to be interpreted by a radiologist, radiologists may be the more appropriate specialists to undertake any initial MUS examination for these indications.
The category of tendon pathology had a slightly different pattern of scores. This category did satisfy our criteria to be included as an appropriate indication, with a total cumulative agreement score of 75%, although this value was lower than those of the other accepted indications, and the net change between rounds of −3% suggests some uncertainty within the group. This observation may be linked to the scores for the shoulder category in the anatomic areas section, which scored 72% total cumulative agreement, just above the limit for group agreement, with a net change of +2%. These 2 categories have among the largest differences in total cumulative agreement scores between radiologists and rheumatologists (50% versus 100% for tendon pathology; 50% versus 95% for the shoulder). One of the most common indications for MUS in traditional orthopedic radiology practice is examination of the rotator cuff tendons in the shoulder. The shoulder is a recognized controversial area in MUS because it is one of the most difficult areas to scan proficiently, with possibly the greatest learning curve. For this reason, and because correlation of MUS findings with those of other imaging modalities may be required, some believe that it is inappropriate for anyone other than a radiologist to perform a MUS examination of the shoulder. Although one may speculate regarding a possible association between the categories of tendon pathology and shoulder, further work is ongoing to formally establish any relationship between the indication and anatomy categories and to determine linkage with pathology. This will enable us to determine, for example, the anatomic areas in which tendon examination is appropriate and the pathologic processes that rheumatologists using MUS should be able to identify.
In the anatomic area section, agreement was greatest in the hand and wrist categories, with total cumulative agreement scores of 81% and net changes of +6%, implying increasingly positive consensus in these areas. The issue of the shoulder has been dealt with above, but it is perhaps a little surprising that the other anatomic areas, although satisfying the criteria for group consensus agreement, did not score somewhat higher. The relatively low scores given by the radiologists seem to account for this, with total cumulative agreement of only 50% in the knee, the ankle and heel, and the forefoot, with no change between rounds in the first 2 categories and even a reduction in net agreement in the forefoot. This is in stark contrast to the opinions given by the rheumatologists, who scored 100% total cumulative agreement in all of these categories.
Group consensus was established in all categories of knowledge and skills, with all criteria being comfortably satisfied. All net total cumulative agreement scores remained unchanged or increased, implying positive consensus agreement. No statistically significant differences were seen in the responses given by each specialist group for any of the knowledge and skill categories, with high levels of total and specialty cumulative agreement (83–100%). This implies that our experts were unanimous in their opinion regarding these attributes and so group consensus in this section was readily established.
Overall, there was relatively little change in individual or total cumulative agreement scores between Delphi rounds, despite experts being presented with the group results and offered time to reflect and change their original answers. This implies that the experts were confident in their own opinions, which may reflect the fact that they are experienced practitioners who have developed firm views that are unlikely to be changed by the collective opinion of the group. The small overall change in scores between rounds implies stable and reliable individual and group opinion and corroborates the accuracy of these data. Regardless of the total cumulative agreement score, all categories fulfilled our criteria for group consensus because the net total cumulative agreement scores were within the defined limits of ±10%. This implies that the Delphi process had been successful in obtaining a reliable group consensus after 2 rounds and that further questioning was unnecessary.
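The two decision rules described above (at least 70% total cumulative agreement for group agreement; a net change between rounds within ±10% for group consensus) can be expressed as a small decision procedure. The sketch below is illustrative only, not the authors' analysis code; the function and threshold names, and the example scores, are assumptions chosen to fall within the ranges reported in the text.

```python
# Illustrative sketch of the consensus criteria described in the text.
# Not the study's actual analysis code; names and example values are assumed.

AGREEMENT_THRESHOLD = 70.0  # % total cumulative agreement needed for group agreement
CONSENSUS_LIMIT = 10.0      # max |net change| (%) between rounds for group consensus

def classify_category(round1_pct: float, round2_pct: float) -> dict:
    """Classify one questionnaire category from its two Delphi-round scores."""
    net_change = round2_pct - round1_pct
    return {
        "group_agreement": round2_pct >= AGREEMENT_THRESHOLD,
        "group_consensus": abs(net_change) <= CONSENSUS_LIMIT,
        "net_change": net_change,
    }

# Hypothetical scores in the ranges reported above:
accepted = classify_category(79.0, 81.0)  # accepted: agreement and consensus both met
rejected = classify_category(52.0, 46.0)  # below 70% agreement, but still a consensus
```

Note that the second example shows the pattern described for the rejected indications: the category fails the 70% agreement threshold, yet the small negative net change still satisfies the consensus criterion, so the group is read as having reliably agreed to reject it.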
The differences in scores between the 2 specialties of rheumatology and radiology are interesting. The radiologists have total cumulative agreement scores that are consistently below those of the rheumatologists, with statistically significant differences in all indication and anatomic area categories. This may reflect both the enthusiasm of the rheumatologists to perform a MUS examination for all indications and in all anatomic areas and a more cautious approach by the radiologists, exercising an initial degree of control on how much MUS is appropriate for a rheumatologist to undertake. However, even though the radiologists' scores are lower, the trend and relative change in their results are similar to those of the rheumatologists. For example, among the 7 categories that were identified as being inappropriate, the mean total cumulative agreement is much lower than in the categories that were accepted (rheumatologists 79% [range 67–89%] versus 99% [83–100%]; radiologists 25% [16–33%] versus 77% [50–100%]). In addition, in the categories where consensus agreement was satisfied, the mean overall net change in cumulative agreement score was +1.7%, implying positive consensus, whereas in the categories defined as inappropriate for rheumatologists it was −1.8%, implying a more negative consensus. When divided by specialty, the trend is similar, with a mean net total cumulative agreement score in the accepted categories of +2.2% among rheumatologists compared with +1.6% for radiologists. In the rejected categories, the mean net change was −6.2% among rheumatologists and +1.0% for radiologists. These data therefore suggest relative agreement between the 2 specialties.
The Delphi consensus-defining methodology was chosen because it represents a recognized method of obtaining considered opinions from knowledgeable, informed professionals and is particularly suited to providing insights into areas in which there are currently limited published data, such as rheumatologist-performed MUS. It has proved to be an effective technique that has allowed us to determine expert consensus agreement relating to future rheumatologist MUS practice. Explicit criteria were applied to the selection of experts to ensure that they were representative of the wider MUS specialist community. Likewise, strict definitions of group agreement and consensus were adopted. A wide variety of resources was used to construct the initial questionnaire to eliminate any potential bias from the authors. The 2 iterative phases allowed the experts to interact with the questionnaire, reflect on their initial judgments, gather any required information, and alter their responses based on feedback from their peers. Opportunities for repeated consideration also provided data on the degree of cooperation and acceptance among panelists regarding the role of rheumatologists in MUS. The excellent retention of respondents between rounds implies a high level of motivation and ownership among our expert panelists, which increases the likelihood of acceptance, dissemination, and implementation of our findings. This rigorous approach was necessary to maximize the validity of this process and ensure its relevance, credibility, applicability, and transferability to rheumatology MUS practice. Repeat testing of this study's findings against observed practice in the future will help to further reinforce validity and reliability. Although there are potential disadvantages to the Delphi method, the alternative would be an anecdotal or subjective approach, which clearly would have been far less satisfactory.
We have obtained the first interdisciplinary consensus agreement among expert practitioners of recommendations for best practice among rheumatologists performing MUS. This important information will not only direct future rheumatology MUS practice and research, but will also facilitate informed educational development in this rapidly evolving field. This data will be used to develop precise learning outcomes and competency standards that will enable the introduction of a specific training curriculum and assessment process to ensure competent rheumatologist ultrasonographers.