The effect of clinical information on radiology reporting: A systematic review

Abstract

Introduction: The aim of this study was to investigate the effects of clinical information on the accuracy, timeliness, reporting confidence and clinical relevance of the radiology report.

Methods: A systematic review of studies that investigated a link between primary communication of clinical information to the radiologist and the resultant report was conducted. Relevant studies were identified by a comprehensive search of electronic databases (PubMed, Scopus and EMBASE). Studies were screened using pre-defined criteria. Methodological quality was assessed using the Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies. Synthesis of findings was narrative. Results were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines.

Results: There were 21 studies which met the inclusion criteria, of which 20 were included in our review following quality assessment. Sixteen studies investigated the effect of clinical information on reporting accuracy, three studies investigated the effect of clinical information on reporting confidence, three studies explored the impact of clinical information on clinical relevance, and two studies investigated the impact of clinical information on reporting timeliness. Some studies explored multiple outcomes. Studies concluded that clinical information improved interpretation accuracy, clinical relevance and reporting confidence; however, reporting time was not substantially affected by the addition of clinical information.

Conclusion: The findings of this review suggest clinical information has a positive impact on the radiology report. It is in the best interests of radiologists to communicate the importance of clinical information to reporting via the creation of criteria standards to guide the requesting practices of medical imaging referrers. Further work is recommended to establish these criteria standards.


Introduction
It is common practice for radiologists to interpret imaging examinations and formulate a report using clinical information communicated to assist with this process. Clinical information refers to all information detailing the patient's clinical situation and can include the current problem, co-existing and past medical history, current medications, allergies, fasting status, suspected diagnosis and clinical question to be answered. 1 It is used to provide the radiologist with a greater understanding of the clinical context. For all medical imaging examinations in Australia to be performed, a request must be completed by a referrer. 2,3 The request must list the patient's identifying details and indicate the type of examination requested. 2,3 It is also essential that the referrer provides adequate clinical information describing the reason for the examination. 1 The request must be signed and dated by the referrer. 2 This allows compliance with radiation safety regulations and maximum workflow efficiency.
When the patient presents to the referrer, they are medically assessed and a request for imaging is completed using information about the patient's medical history and current presentation. This request can take one of two paths from the referrer to the radiologist: via the radiographer, who completes the imaging before sending it, along with the request, to the radiologist; or directly to the radiologist, who reviews the clinical information and selects the imaging protocol to be performed before transferring the request to the radiographer. The radiologist is also able to review the clinical information in the request when interpreting imaging and formulating their report.
Loy & Irwig's 4 2004 review established that radiology reporting with clinical information improved interpretation accuracy. Since this review, there have been technological advances such as the increased use of cross-sectional imaging and widespread adoption of electronic health records (EHR). These developments may have reduced the referring clinician's perception of the importance of clinical information to radiology reporting, as it may be assumed that this clinical information is readily available and easily accessed by all clinicians and medical imaging staff. 5 The aim of this study was to investigate the effects of clinical information communicated to the radiologist on the accuracy, timeliness, reporting confidence and clinical relevance of the radiology report.

Search strategy
This review followed the methods described in a published protocol in the PROSPERO register (CRD42019138509). 6 To identify relevant articles, the PubMed, Scopus and EMBASE databases were searched using relevant keywords for request, clinical information, diagnostic imaging and radiology report. The syntax used to search the PubMed database is detailed in Table 1. No limits were placed on publication date. Searches were conducted in June 2019.

Inclusion and exclusion criteria
Studies were included if they: (1) were primary studies published in peer-reviewed journals, (2) related to diagnostic imaging for any population of human patients and (3) investigated a relationship between primary communication of clinical information to the radiologist and the resultant radiology report. This review defined primary communication as any method of communication given directly to the radiologist, such as clinical information accompanying imaging (within the medical imaging request or additional information provided at the time of imaging), clinical information received in patient charts, or verbal communication between referrer and radiologist. Studies published in languages other than English were excluded. Conference proceedings, reviews, case reports, study protocols, commentaries and letters to the editor were also excluded.

Selection process
After duplicates were removed, titles and abstracts of studies were screened by two reviewers (CC and TS) to determine eligibility for inclusion. Screening of full text of publications was performed if the abstract provided insufficient information to judge eligibility. Disagreement or uncertainty of study eligibility was resolved by consensus discussion. The reference lists of all included studies were interrogated and subjected to the same screening process.

Data extraction and quality assessment
The full text of included studies was read by two reviewers (CC and LC). Data were extracted on study characteristics (year, diagnostic test/s, indications or disease, reference standard, number of studies, number of reviewers, methodology), interobserver agreement, outcome measures and results summary related to the research question. Data extraction was performed by one reviewer (CC), with validation by a second reviewer (LC). Disagreements were resolved through discussion. The Joanna Briggs Institute (JBI) Critical Appraisal Checklist for Quasi-Experimental Studies 7 was used to assess the quality of each study by examining the extent to which a study addressed the possibility of bias in its design, conduct and analysis. The JBI quality score was a value out of nine points, with higher scores indicating higher quality studies. This checklist included nine questions which assessed internal validity, similarity of participants of compared groups, reliability of outcomes measured and appropriateness of statistical analysis. The quality and risk of bias assessment was conducted independently by two reviewers (CC and LC); disputes were resolved by consensus discussion. A cut-off score of three was used to exclude low-quality studies from synthesis.

Analysis
Whilst some included studies shared commonalities in design, heterogeneity of methodologies, interventions and statistical analyses rendered them difficult to compare statistically. Therefore, a narrative synthesis was conducted to contextualise findings relevant to the review question, namely reporting accuracy, confidence, timeliness and clinical relevance.
The data extraction process allowed us to categorise study characteristics into consistent fields across included studies. The data extraction and categorisation facilitated narrative synthesis by allowing us to examine the context of each study. All authors met regularly during the process and, using the extracted data, discussed and subsequently refined the narrative. Results were reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. 9

Results
We identified 21 studies that met our inclusion criteria, and after quality assessment, 20 studies were included in our review. The excluded study 8 was deemed to lack clarity regarding cause and effect and to have measured outcomes in an unreliable way. The results for each stage of the search are demonstrated in the PRISMA flow diagram 9 (Fig. 1).

Study characteristics
Sixteen studies 14-29 investigated the effect of clinical information on report accuracy, three studies 16,25,30 investigated the effect of clinical information on reporting confidence, three studies 32-34 explored the impact of clinical information on clinical relevance, and two studies 24,31 investigated the impact of clinical information on reporting time. We found three studies 16,24,25 which investigated the effect of clinical information on more than one outcome. One study 16 investigated effects on reporting accuracy, confidence and timeliness. Another study 24 evaluated effects on both reporting accuracy and timeliness, and a third explored the effects on both reporting accuracy and confidence. 25 X-ray examinations were the diagnostic test in 12 (57%) of the included studies. 8,16,20-26,28,29,32 Five studies 18,19,27,30,31 (24%) focused on computed tomography (CT) and one 33 (5%) on magnetic resonance imaging (MRI). The remaining three (14%) studies 14,15,17 included two modalities. The X-ray studies were published between 1963 and 2014. Six of the 12 studies 8,21,24,26,28,29 focused on chest X-ray examinations; the remaining six involved chest and abdomen, 20 abdomen, 32 extremity 16,22,25 or a combination of X-ray examinations. 23 Three of these studies involved paediatric cohorts only. 20,21,32 Of the five studies 18,19,27,30,31 on CT examinations, two 19,27 focused on CT head, one 30 on CT abdomen/pelvis, one 31 on CT temporal bones and one 18 on various CT scans. These studies were published between 1983 and 2017.
The size of data sets and the number and consistency of reviewers varied between studies. Data set sizes ranged from seven 28 to 561 17 cases. The number of reviewers ranged from one 32 to 11. 29 Some studies featured consistency of readers before and after intervention, whilst others utilised radiologists on duty at the time of reporting and did not disclose the exact number of assessors.
A total of 16 of the 20 studies used a similar method involving a sample set of images assessed twice by a group of reviewers, 8,15,16,18-20,22-29 with each read conducted with different amounts or quality of clinical information. Three studies 14,17,32 asked radiologists to subjectively rate the impact of available clinical information on reporting, and one study 31 evaluated the impact of clinical information in two samples, pre- and post-intervention. This study was one of two which featured departmental guidelines to classify clinical information in requests as either adequate or inadequate. One study 17 evaluated the impact of clinical indications of stroke in CT head and MRI brain requests on final discharge diagnosis. The other CT and MRI study 15 investigated the usefulness of supplemental sources of clinical information, namely the EHR. A further study 14 involving X-ray and ultrasound evaluated the impact of additional clinical information contributed by imaging technologists on the quality of the report. This study instructed imaging technologists to contribute clinical information on patient symptoms, including duration and onset.
Additional information available to readers varied significantly between studies. Whilst many included in the second read all clinical information available to referrers at the time of reporting, others attempted to demonstrate the effect of a specific intervention on reporting.
These interventions included patient questionnaires, 30 inclusion of a clinical question, 33 additional information from imaging technologists 14 and a graphic indicating site of pain. 16 The results of the data extraction from the included studies are shown in Table 2.

Study quality
The JBI quality score ranged from 2 to 7 out of a possible 9 points, with a median score of 4 (Fig. 2). The highest scoring study was the only study 31 to include a control group. Lower scores were attributable to the use of multiple different assessors rather than a single consistent group, the use of only one assessor, or failure to conduct appropriate statistical analysis.

Interpretation accuracy
Sixteen studies investigated the effect of clinical information on the accuracy of reporting. Of these, three studies 16,17,28 reported sensitivity and specificity. All three reported that the addition of clinical information improved sensitivity. Reported changes in sensitivity were 38% to 84%, 28 67% to 73% 16 and 38% to 52%. 17 Six studies used area under the receiver operating characteristic (ROC) curve to quantify the average difference in improvement in accuracy. Results ranged from minimal improvement 20 to significant improvement. 21,23,25-27 Overall, these studies demonstrated that clinical information improved diagnostic accuracy in various conditions.
Three studies described an impact on overall accuracy of reporting. 17,22,29 Rickett, Finlay and Jagger 22 found an increase from 72% to 80% in diagnostic accuracy, Schreiber 29 reported an improvement in accuracy without numerical data, and Mullins 17 found an overall improvement in diagnostic accuracy from 47% to 59%.
Three studies described accuracy in terms of whether clinical information influenced a change to the original radiologist report. 14 Two studies 19,24 found the addition of clinical information did not change reporting accuracy. The results relevant to the accuracy outcome measure are further summarised in Table 3.

Reporting confidence
Three studies 16,25,30 investigated the effect of clinical information on the confidence of reporting, each in a different way. These results are summarised in Table 4.

Clinical relevance of reports
The importance of the inclusion of a specific clinical question in the imaging request was investigated in three of the included studies. 32-34 These results are summarised in Table 5.

Reporting time
The impact of clinical information on radiologist reporting time was investigated in two studies. 16,24 These results are summarised in Table 4.

Discussion
The majority of included studies support the notion that clinical information has a positive effect on the reporting process. Studies demonstrated improved interpretation accuracy, clinical relevance and reporting confidence. The addition of clinical information was found not to substantially affect reporting time. These findings were based on studies of moderate quality, with a median quality and risk of bias assessment score of 4 out of 9. 7 Studies deemed to be of lower quality failed to perform appropriate statistical analysis to demonstrate a statistically significant effect.
One of the studies investigated the impact of the timing of the introduction of clinical information. Berbaum et al 20 found that the provision of clinical information at the time of interpretation had a positive effect on radiologist perception, whilst providing this information after interpretation was of no benefit. This study supports the notion that educating referrers to provide quality clinical information to radiologists would yield a greater benefit to reporting outcomes than radiologists correlating findings with patient notes.
Other studies, which were outside the scope of this review, have investigated the effect of prevalence expectation on the diagnostic performance of radiologists. Littlefair et al's 34 study demonstrated that prior expectations can impact diagnostic efficacy, whereby increased prevalence expectations influence radiologists to assign a false-positive outcome to a normal image. Although this finding highlights that provision of clinical information can lead to overcalling, the variables tested were extreme and not necessarily reflective of clinical practice. Littlefair et al 34 recommended referral criteria for requesting clinicians, which is also an outcome of our review.
Another study by Littlefair et al 35 also discusses the topic of overcalling. Whilst this study focused on the influence of expectation of abnormality and prior knowledge of the outcome, it also indicates that highly specific clinical information can significantly improve location sensitivity. In other words, when specific clinical information is provided to the radiologist prior to image interpretation, the accuracy and clinical relevance of their report can be enhanced.
Our study was limited by the number of eligible studies specific to the research question. Whilst 21 articles were deemed eligible for inclusion, not all of these studies solely focused on the effect of clinical information on the radiology report. Similarly, the broad range of publication dates of included studies may be perceived as a limitation. This was difficult to restrict, as there was no existing review on the effects of clinical information on all aspects of reporting. However, the broad range of publication dates may demonstrate that the issue of inadequate clinical information communicated to radiologists has persisted over several decades.
The rationale of three of the most recently published included studies 14,15,30 may highlight an issue with the quality of clinical information currently being received by radiologists. Doshi et al's 30 utilisation of patient questionnaires to evaluate the effect on the completeness of clinical information suggests there is a lack of useful clinical information in requests to enable confident reporting. The fact that information provided by patients on the day of their CT scan increased radiologists' confidence in their findings indicates that useful clinical information was missing from requests. Lacson et al 15 recognised the limitations of requests but investigated the usefulness of other supplemental sources of information, namely the EHR. Maizlin & Somers 14 sought to address the shortfall in a different way again, by demonstrating that extra clinical information added by radiographers had a positive impact on the resultant report. These three examples could be described as workarounds, defined as solutions which health professionals (and others) use to avoid hindrances to efficiency and achieve improvements in workflow. 36 The interventions implemented in these studies suggest the communication between referrer and radiologist needs improvement.
Whilst many of the included studies shared similar elements of design, it was clear there was no gold standard or standardisation of requirements for clinical information. This made results difficult to compare, as many studies relied on the expert opinion of radiologists to determine whether clinical information was deemed important or useful when reporting. This measurement of usefulness of clinical information varied across studies, as radiologists taking part in studies would have had different training, skills and specialisations.
In contrast, both Cooperstein et al 24 and Qureishi et al 31 specified the type of clinical information required from the requesting clinician. Cooperstein et al's 24 criteria for clinical information were generalised and could be used for any examination, and the results of the study demonstrated no significant effect on reporting. However, Qureishi et al's 31 departmental guidelines for clinical information required in requests were specific to CT temporal bone examinations. The guidelines identified key information to be provided in requests and were found to have a positive impact on clinical relevance and confidence in reporting. As there are more than two decades between the publications, it is possible that technological advancements in CT and its increased utility 37 have prompted further investigation into the topic of clinical information to assist with reporting. This idea is supported by Leslie et al, 18 who found that the importance of clinical information increased with the complexity of imaging, due to the greater volume of images produced and the greater list of differential diagnoses; subsequently, the role clinical information plays is accentuated. It is possible that a lack of clinical information would be a risk factor for missed diagnoses and reduced confidence in incidental findings. In such cases, adequate clinical information may assist radiologists to contextualise incidental findings and subsequently add value to the report.
Given the findings of this review regarding clinical information and its effect on the accuracy, confidence, clinical relevance and timeliness of reporting, Qureishi et al's 31 study provided evidence for a novel intervention for improving the clinical information provided, in the form of departmental guidelines. The guidelines served as a criteria standard, as they outlined recommendations for specific elements of clinical information useful for reporting a particular examination. Criteria standards have previously been used to educate referrers and change their requesting behaviour by Gunderman et al, 38 who sought to educate referrers on Health Care Financing Administration regulations to improve billing efficiency. This intervention improved compliance with the regulations and subsequently decreased the frequency of inadequate clinical information on requests by approximately two-thirds.
It is clear that the lack of clinical information in requests is an issue affecting reporting quality. One possible cause may be a lack of awareness or education among referring clinicians about what constitutes relevant clinical information. It may be in the best interests of radiologists to educate referrers on the effect of clinical information on diagnostic performance, including the rationale behind providing high-quality clinical information. 38 This need for further education is reflected in a recent study by Glenn-Cox et al, 39 who identified that Australian junior doctors do not feel confident in requesting medical imaging tests accurately. With 66% of Australian junior doctors surveyed claiming to request imaging once a day or more frequently, 39 the development of criteria standards for clinical information when requesting medical imaging would be expected to be advantageous in improving the quality of the radiology report.

Conclusion
The findings of this review indicate that clinical information communicated to the radiologist has a positive impact on the radiology report. These results are relevant to the main consumers of medical imaging, those being referrers and by extension their patients. These results are also relevant to radiologists, as they demonstrate the potential improvement that the communication of clinical information can have on the quality of reporting. It is in the best interests of radiologists to communicate the importance of clinical information for reporting via the creation of criteria standards to guide the requesting practices of medical imaging referrers.