Keywords:

  • validity;
  • International Classification of Diseases;
  • administrative data;
  • sensitivity;
  • specificity;
  • positive predictive value

ABSTRACT

Purpose

To provide an overview of the methods used in the Mini-Sentinel systematic reviews of studies validating algorithms that identify health outcomes in administrative and claims data, and to describe lessons learned in developing the search strategies, including their ability to identify articles found by previous systematic reviews that used different search strategies.

Methods

Literature searches were conducted using PubMed and the citation database of the Iowa Drug Information Service; Embase was searched for some outcomes. The searches were based on a strategy developed by researchers from the Observational Medical Outcomes Partnership (OMOP). All citations were reviewed by two investigators. Exclusion criteria were applied at the abstract and full-text review stages to ultimately identify algorithm validation studies that used data sources from the USA or Canada, as the results of these studies were considered most likely to generalize to Mini-Sentinel data. Nonvalidated algorithms were reviewed if fewer than five algorithm validation studies were identified.

Results

The results of this project are described in the separate articles and reports written on algorithms to identify each outcome of interest.

Conclusions

The Mini-Sentinel systematic reviews of algorithms to identify health outcomes in administrative and claims data are expected to be relatively complete, despite some limitations. Algorithm validation studies are inconsistently indexed in PubMed, creating challenges in conducting systematic reviews of these studies. Google Scholar searches, which can perform text word searches of electronically available articles, are suggested as a strategy to identify studies that are not captured through searches of standard citation databases. Copyright © 2012 John Wiley & Sons, Ltd.


BACKGROUND

The Mini-Sentinel pilot program of the Food and Drug Administration (FDA) aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. To perform this work, the program needed to identify algorithms used to detect various health outcomes of interest (HOIs) using administrative and claims data sources and identify the performance characteristics of these algorithms as measured in the studies in which they were used.

The data sources of interest were limited to those from the USA or Canada to increase their relevance to the Mini-Sentinel data sources, which are all from the USA. The Mini-Sentinel Protocol Core team developed a preliminary list of approximately 140 potential HOIs based on several criteria: (i) previous validation studies had been identified in a textbook chapter reviewing the validity of drug and diagnosis data used in pharmacoepidemiological studies,[1] (ii) the outcome appeared on a list of designated medical events from a proposed FDA rule on the safety reporting requirements for human drug and biological products,[2] or (iii) the Observational Medical Outcomes Partnership (OMOP) had commissioned reports on algorithms used to identify the health outcome using administrative and claims data.[3]

From the original list of 140 HOIs, the FDA selected 20 for which reviews of algorithms were completed. HOIs for which OMOP had already commissioned reports were purposefully excluded to avoid duplication of effort. Table 1 lists the HOIs selected for review. The two HOIs related to orthopedic implants were combined into one report because of substantial overlap in the search terms and the ultimate lack of studies examining algorithms for implant removal.

Table 1. Health outcomes selected for systematic reviews
Acute respiratory failure
Anaphylaxis, including anaphylactic shock and angioneurotic edema
Atrial fibrillation
Ventricular arrhythmias
Cerebrovascular accident or transient ischemic attack
Congestive heart failure
Depression
Erythema multiforme, Stevens-Johnson syndrome, or toxic epidermal necrolysis
Explantation of orthopedic device/removal of implanted device
Hypersensitivity reactions other than anaphylaxis (fever, rash, and lymphadenopathy)
Infection associated with blood products or tissue grafts
Lymphoma
Pancreatitis
Pulmonary fibrosis, interstitial lung disease
Seizure, convulsion, epilepsy
Suicide, including completed suicide, attempted suicide, and suicidal ideation
Surgical revision of implantable orthopedic devices
Transfusion-related ABO incompatibility reaction
Transfusion-related septicemia and sepsis
Venous thromboembolism

The purpose of this article is to describe the methods common to these systematic reviews, reducing redundancy across this Mini-Sentinel supplement to Pharmacoepidemiology and Drug Safety, which contains a large number of articles describing the reviews.[4-23] A secondary purpose is to describe lessons learned in developing search strategies to identify articles that report the performance characteristics of algorithms for identifying HOIs in administrative and claims data. This includes a description of an exploratory search to determine the ability of the PubMed search strategy to capture articles included in two previously conducted systematic reviews of validated algorithms to identify acute renal failure. This was considered a test case to evaluate the sensitivity of the search strategy.

METHODS

Search strategy

The general search strategy was based on prior work by OMOP and its contractors, modified slightly for these reports. Originally, OMOP contracted with two organizations to perform reviews of 10 HOIs. Because the search strategies used by the two organizations resulted in very different sets of articles, OMOP investigators reviewed the PubMed indexing of the articles deemed useful in the final reports and developed a strategy that would identify the majority of those citations while keeping the number of abstracts to be reviewed manageable. Mini-Sentinel investigators made minor changes to this strategy to identify more citations and confirmed empirically that the majority of relevant articles from one set of OMOP reports (angioedema)[23, 24] would be identified using this approach. A second, more thorough evaluation was conducted using the acute renal failure reports[25, 26] and is described in the Discussion section of this article. This evaluation was conducted before several terms were added to identify additional administrative and claims databases, most of them from Canadian provinces.

The base search strategy was combined with PubMed terms representing the HOIs. Medical subject heading (MeSH) terms were generally preferred as HOI search terms because of their likely specificity. Text word searches were sometimes used, particularly when the MeSH search resulted in a small number of citations for review. The details of the PubMed terms used to represent the HOIs can be found in the individual articles. The workgroup also searched the database of the Iowa Drug Information Service (IDIS/Web) using a similar strategy to identify relevant articles not found in the PubMed search, and Embase searches were conducted for a limited number of outcomes for which very few citations were identified from the PubMed and IDIS/Web searches. Search results were restricted to articles published on or after 1 January 1990.

Table 2 illustrates the search strategy for PubMed. The IDIS/Web searches varied somewhat in structure depending on the HOI, so a general search strategy is not presented here; the individual searches can be found in the reports available on the Mini-Sentinel Web site (http://mini-sentinel.org/foundational_activities/related_projects/default.aspx). A number of searches were amended with additional terms to identify administrative and claims database studies after the original searches had been conducted and documented, so the individual reports sometimes describe a second search. The search terms illustrated here capture the content of those two-part searches.

Table 2. PubMed search strategy
1. Terms to identify drug adverse event studies and other studies thought likely to contain validation of an outcome measure
(“Pharmaceutical preparations/adverse effects” [Mesh] OR “Pharmaceutical preparations/contraindications” [Mesh] OR “Pharmaceutical preparations/poisoning” [Mesh] OR “Pharmaceutical preparations/therapeutic use” [Mesh] OR “Pharmaceutical preparations/toxicity” [Mesh] OR “Pharmaceutical preparations/therapy” [Mesh] OR “Pharmaceutical preparations/analysis” [Mesh] OR “Chemical actions and uses/adverse effects” [Mesh] OR “Chemical actions and uses/contraindications” [Mesh] OR “Chemical actions and uses/poisoning” [Mesh] OR “Chemical actions and uses/therapeutic use” [Mesh] OR “Chemical actions and uses/toxicity” [Mesh] OR “Chemical actions and uses/therapy” [Mesh] OR “Chemical actions and uses/analysis” [Mesh] OR “Chemical actions and uses/epidemiology” [Mesh] OR “Drug toxicity” [Mesh] OR “Diseases Category/chemically induced” [Mesh] OR “Diseases Category/drug therapy” [Mesh] OR “Diseases Category/epidemiology” [Mesh] OR “Validation Studies” [pt] OR “Validation Studies as Topic” [Mesh] OR “Sensitivity and Specificity” [Mesh] OR “Predictive Value of Tests” [Mesh] OR “Reproducibility of Results” [Mesh] OR “Predictive Value” [tw]). Limits: humans, English, publication date from 1 January 1990 to 1 January 2011.
2. Terms to identify administrative and claims database studies from the USA or Canada
(“Premier” [All] OR “Solucient” [All] OR “Cerner” [All] OR “Ingenix” [All] OR “LabRx” [All] OR “IHCIS” [All] OR “marketscan” [All] OR “market scan” [All] OR “Medstat” [All] OR “Thomson” [All] OR “pharmetrics” [All] OR “healthcore” [All] OR “united healthcare” [All] OR “UnitedHealthcare” [All] OR “UHC” [All] OR “Research Database” [All] OR “Group Health” [All] OR “HCUP” [All] OR (“Healthcare Cost” [All] AND “Utilization Project” [All]) OR (“Health Care Cost” [All] AND “Utilization Project” [All]) OR “MEPS” [All] OR “Medical Expenditure Panel Survey” [All] OR “NAMCS” [All] OR “National Hospital Ambulatory Medical Care Survey” [All] OR “National Ambulatory Medical Care Survey” [All] OR “NHIS” [All] OR “National Health Interview Survey” [All] OR “Kaiser” [All] OR “HMO Research” [All] OR “Health Maintenance Organization” [All] OR “HMO” [All] OR “Cleveland Clinic” [All] OR “Lovelace” [All] OR “Department of Defense” [All] OR “Henry Ford” [All] OR “i3 Drug Safety” [All] OR “i3” [All] OR “Aetna” [All] OR “Humana” [All] OR “Wellpoint” [All] OR “IMS” [All] OR “Intercontinental Marketing Services” [All] OR “IMS Health” [All] OR “Geisinger” [All] OR “GE Healthcare” [All] OR “MQIC” [All] OR “PHARMO” [All] OR “Institute for Drug Outcome Research” [All] OR “Pilgrim” [All] OR “Puget Sound” [All] OR “Regenstrief” [All] OR “Saskatchewan” [All] OR “Tayside” [All] OR “MEMO” [All] OR “Veterans Affairs” [All] OR “Partners Healthcare” [All] OR “Mayo Clinic” [All] OR “Rochester Epidemiology” [All] OR “Indiana Health Information Exchange” [All] OR “Indiana Health” [All] OR “Intermountain” [All] OR “blue cross” [All] OR “health partners” [All] OR “health plan” [All] OR “health services” [All] OR “Nationwide Inpatient Sample” [All] OR “National Inpatient Sample” [All] OR “medicaid” [All] OR “medicare” [All] OR “MediPlus” [All] OR “Outcome Assessment” [All] OR “insurance database” [All] OR “insurance databases” [All] OR “Data Warehouse” [All] OR “ICD-9” [All] OR “international statistical classification” [All] OR “international classification of diseases” [All] OR “ICD-10” [All] OR “Database Management Systems” [Mesh] OR “Medical Records Systems, Computerized” [Mesh] OR “CPT” [All] OR “Current procedural terminology” [All] OR “drug surveillance” [All] OR (“claims” [tw] AND “administrative” [tw]) OR (“data” [tw] AND “administrative” [tw]) OR “Databases, Factual” [Mesh] OR “Databases as topic” [Mesh] OR “Medical Record Linkage” [Mesh] OR “ICD-9-CM” [All Fields] OR “ICD-10-CM” [All Fields] OR (TennCare [tiab]) OR (RAMQ [tiab]) OR (Cigna [tiab]) OR ((british columbia [tiab]) AND ((health [tiab]) OR (data [tiab]) OR (database [tiab]) OR (population [tiab]))) OR (CIHI [All Fields]) OR ((manitoba [tiab]) AND ((center for health policy [all fields]) OR (population [tiab]) OR (health insurance [tiab]))) OR ((ontario [tiab]) AND ((population [tiab]) OR (OHIP [tiab]) OR (registered persons database [tiab]) OR (health insurance [tiab]) OR (ICES [All Fields]) OR (Institute for Clinical Evaluative Sciences [All Fields]))) OR ((Alberta [tiab]) AND ((health [tiab]) OR (data [tiab]) OR (database [tiab]) OR (population [tiab]) OR (Alberta Health and Wellness [All Fields]))). Limits: humans, English, publication date from 1 January 1990 to 1 January 2011.
3. Terms to exclude studies not likely to utilize administrative and claims data
(“Editorial” [pt] OR “Letter” [pt] OR “Meta-Analysis” [pt] OR “Randomized Controlled Trial” [pt] OR “Clinical Trial, Phase I” [pt] OR “Clinical Trial, Phase II” [pt] OR “Clinical Trial, Phase III” [pt] OR “Clinical Trial, Phase IV” [pt] OR “Comment” [pt] OR “Controlled Clinical Trial” [pt] OR “case reports” [pt] OR “Clinical Trials as Topic” [Mesh] OR “double-blind” [All] OR “placebo-controlled” [All] OR “pilot study” [All] OR “pilot projects” [Mesh] OR “Review” [pt] OR “Prospective Studies” [Mesh]). Limits: humans, English, publication date from 1 January 1990 to 1 January 2011.
4. Combining Searches 1 and 2, excluding Search 3
#1 AND #2 NOT #3
5. Health outcome of interest search terms (specific to outcome)
Details provided in individual articles and reports
6. Combining base search with health outcome of interest search terms
#4 AND #5
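
For readers who want to adapt the strategy programmatically, the sketch below shows one way to combine the base search (Steps 1-4) with HOI-specific terms (Step 5) and submit the result to PubMed. This is a minimal illustration rather than the workgroup's actual procedure: it assumes the Biopython Entrez client, abbreviates each search component to a few representative terms from Table 2, and uses an acute renal failure MeSH term purely as an example.

    # Minimal sketch (assumes Biopython is installed); each search component
    # is truncated to representative terms from Table 2 for brevity.
    from Bio import Entrez

    Entrez.email = "reviewer@example.org"  # hypothetical; NCBI asks for a contact

    s1 = '"Validation Studies"[pt] OR "Sensitivity and Specificity"[Mesh]'
    s2 = '"medicare"[All] OR "international classification of diseases"[All]'
    s3 = '"Randomized Controlled Trial"[pt] OR "Review"[pt]'
    s5 = '"kidney failure, acute"[Mesh]'  # HOI-specific terms (Step 5)

    query = (f"(({s1}) AND ({s2}) NOT ({s3})) AND ({s5}) "
             'AND English[la] AND humans[mh] '
             'AND ("1990/01/01"[PDAT] : "2011/01/01"[PDAT])')

    handle = Entrez.esearch(db="pubmed", term=query, retmax=500)
    record = Entrez.read(handle)
    print(record["Count"], "citations; first PMIDs:", record["IdList"][:5])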

University of Iowa investigators compiled the search results from different databases and eliminated duplicate results using a citation manager program. The results were then output into two sets of files, one containing the abstracts for review and the other documenting abstract review results.
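
The deduplication step can be approximated in a few lines. The sketch below mimics what a citation manager does, keying on PMIDs where available and falling back to normalized titles; the record fields are illustrative assumptions, not the actual export format.

    # Sketch of citation deduplication across database exports;
    # the field names ("pmid", "title") are assumptions for illustration.
    def dedupe(citations):
        seen, unique = set(), []
        for c in citations:
            # Prefer a stable identifier; fall back to the normalized title
            key = c.get("pmid") or " ".join(c["title"].casefold().split())
            if key not in seen:
                seen.add(key)
                unique.append(c)
        return unique

    records = [
        {"pmid": "12345", "title": "Validity of ICD-9 codes for heart failure"},  # PubMed
        {"pmid": "12345", "title": "Validity of ICD-9 codes for heart failure"},  # IDIS/Web
        {"title": "An abstract with no PMID"},
    ]
    print(len(dedupe(records)))  # prints 2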

Abstract review

Each abstract was reviewed independently by two investigators to determine whether the full-text article should be reviewed. Exclusion criteria (listed below) were documented sequentially (i.e., if Exclusion Criterion 1 was met, the other criteria were not documented). If the reviewers disagreed on whether the full text should be reviewed, it was selected for review. Interrater agreement on whether to include or exclude an abstract was calculated using Cohen's kappa statistic. The goal was to review any administrative or claims database study that used data from the USA or Canada and studied the HOI. Validation components of studies are not necessarily described in the abstract, and other relevant citations might be identified from the references of such studies; therefore, evidence that validation was conducted was not required for a study to be designated for full-text review.

Abstract exclusion criteria
  1. Did not study the HOI
  2. Not an administrative or claims database study. Eligible sources included insurance claims databases and other secondary databases that identify health outcomes using billing codes.
  3. Data source not from the USA or Canada
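
The interrater agreement statistic mentioned above can be computed directly from the 2x2 table of screening decisions. The sketch below is a minimal illustration with hypothetical counts, not data from the project.

    # Cohen's kappa for two reviewers' include/exclude decisions.
    def cohens_kappa(both_inc, r1_only, r2_only, both_exc):
        n = both_inc + r1_only + r2_only + both_exc
        p_obs = (both_inc + both_exc) / n          # observed agreement
        r1 = (both_inc + r1_only) / n              # reviewer 1 include rate
        r2 = (both_inc + r2_only) / n              # reviewer 2 include rate
        p_exp = r1 * r2 + (1 - r1) * (1 - r2)      # agreement expected by chance
        return (p_obs - p_exp) / (1 - p_exp)

    # Hypothetical screen of 200 abstracts:
    print(round(cohens_kappa(30, 10, 6, 154), 2))  # 0.74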

Full-text review

Full-text articles were reviewed independently by two investigators, with the goal of identifying algorithm performance statistics described in the article itself or possible algorithm validation studies identified from the article's reference section. Citations from the references were selected for full-text review if they were cited as a source for the HOI algorithm or were otherwise deemed likely to be relevant. Full-text review exclusion criteria (see the next section) were applied sequentially. If fewer than five validation studies were identified, up to 10 of the articles excluded under the second criterion were incorporated into the final report. If studies using nonvalidated algorithms were reviewed, authors were instructed to select those that were most recent and provided unique information; in most such cases, authors elected to include all the nonvalidated studies. If there was disagreement on whether a study should be included, the two reviewers attempted to reach consensus through discussion. If they could not agree, a third investigator was consulted to make the final decision. Initial agreement between reviewers on whether to include an article was quantified using Cohen's kappa statistic.

Full-text exclusion criteria
  1. Poorly described HOI identification algorithm that would be difficult to operationalize
  2. No validation of outcome definition or reporting of validity statistics
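
Because the criteria were applied sequentially, each excluded article was tagged only with the first criterion it met. A trivial sketch of this bookkeeping, with illustrative labels and record fields:

    # First-match exclusion tagging; later criteria are not evaluated.
    def first_exclusion(article, criteria):
        for label, test in criteria:
            if test(article):
                return label
        return None  # article is retained for the evidence table

    criteria = [
        ("1: algorithm not operationalizable", lambda a: not a.get("algorithm_clear")),
        ("2: no validity statistics", lambda a: not a.get("validated")),
    ]
    print(first_exclusion({"algorithm_clear": True, "validated": False}, criteria))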

Mini-Sentinel collaborator input

Mini-Sentinel collaborators were asked to provide information on any published or unpublished studies that validated an algorithm to identify an HOI in administrative and claims data. Studies that would not be excluded by one of the aforementioned criteria were included in the final reports.

Evidence table creation

Evidence tables were created focusing on the following information: study populations, inclusion and exclusion criteria, outcomes studied, algorithms to identify the outcomes, outcome validation methods, and algorithm validation results. The positive predictive value was the most common statistic reported in algorithm validation studies because it can be determined by reviewing the medical records of potential cases identified through administrative and claims data. Determining sensitivity generally requires that a study start with a set of confirmed cases and examine claims to determine whether each case meets the algorithm criteria. Because most algorithm validation studies start with administrative and claims data and then request medical records, sensitivity was only occasionally reported. A single investigator abstracted each study for the final evidence table, and a second investigator confirmed the accuracy of the data included in the table.
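
The asymmetry between the two statistics can be made concrete with a small sketch. In the example below (hypothetical counts), the positive predictive value needs only a chart review of the records the algorithm flags, whereas sensitivity needs the complete set of confirmed cases, which claims-based studies rarely have.

    # Validation statistics from a 2x2 table of algorithm flag vs. chart review.
    def ppv(true_pos, false_pos):
        # Of records the algorithm flags, the share confirmed on chart review
        return true_pos / (true_pos + false_pos)

    def sensitivity(true_pos, false_neg):
        # Of confirmed cases, the share the algorithm flags; requires a
        # confirmed-case cohort assembled independently of the claims data
        return true_pos / (true_pos + false_neg)

    print(f"PPV = {ppv(90, 30):.2f}")           # 90 of 120 flagged records confirmed
    print(f"Sens = {sensitivity(90, 20):.2f}")  # 90 of 110 true cases flagged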

Clinician or topic-expert consultation

A clinician or topic expert was consulted to review the results of the evidence table and discuss how they compare and contrast with the results of the diagnostic methods currently used in clinical practice. This included whether certain diagnostic codes used in clinical practice were missing from the algorithms and the appropriateness of the validation definitions compared with diagnostic criteria currently used in clinical practice. A summary of this consultation was included in each report.

RESULTS

The results of individual HOI reports are summarized in the accompanying articles,[5-23] and the reports themselves can be found at http://mini-sentinel.org/foundational_activities/related_projects/default.aspx.

DISCUSSION

The main purpose of this discussion is to describe decision-making processes and lessons learned during this project, primarily related to the search strategies for the reports. For a number of HOIs, MeSH terms were used exclusively to identify the HOI in PubMed, despite the awareness that text word searches might identify additional relevant citations. Text word searches in PubMed search the title and abstract for a specific character string. The decision to use or not use text words was a matter of trade-offs among the resources available, the timeline for completion, and the desire to be thorough. The increase in citations identified when text word searches were used, although variable, could be substantial: for some HOIs, including text word searches led to more than 1000 additional citations, many of which were expected to be irrelevant. Although no specific threshold was used, text word searches were generally not included in the final search when they would have led to a substantial additional workload. If the additional workload was on the scale of several hundred abstracts or less, or the total number of citations was relatively small compared with other HOIs, text word searches were generally included. The omission of text words from a number of searches is a limitation of the reports.

To examine the sensitivity of the search strategy, a test search was conducted and compared with the findings of prior reports commissioned by OMOP to review validated algorithms to identify acute renal failure.[25, 26] The combined OMOP reports identified 13 studies that examined the performance of algorithms. The Mini-Sentinel PubMed search strategy, combined with MeSH terms representing acute and chronic renal failure, identified seven of those studies, and three additional studies were cited in the references of studies identified in the search. The addition of chronic kidney disease MeSH terms led to a much larger number of results (1078 when “kidney failure, chronic,” “kidney insufficiency, chronic,” “renal failure, chronic,” and “renal insufficiency, chronic” were added vs. 111 with the “kidney failure, acute” and “renal insufficiency, acute” MeSH terms only). The broader search appeared appropriate, however, because several studies included in the OMOP reports focused on chronic kidney disease. Of the three studies identified only through the references of articles obtained in the search, one would have been identified if text word searches for acute renal or kidney failure had been included, one was improperly indexed as a randomized controlled trial and thus excluded from the search, and one was an anomaly that appeared to meet all the criteria for the search but was not identified for reasons that could not be determined. Of the three studies that were identified neither in the search nor in the citations of identified articles, all would have been identified by including text word search terms for the HOI. Adding the following text words to the HOI search terms for this outcome led to the identification of 1525 citations: “acute renal failure,” “acute renal insufficiency,” “acute kidney failure,” “acute kidney insufficiency,” “chronic renal failure,” “chronic renal insufficiency,” “chronic kidney failure,” “chronic kidney insufficiency,” and “chronic kidney disease.”

As described earlier, some limitations relate to the indexing of citations. Some articles are indexed in PubMed as randomized controlled trials when in fact they are not. Other articles discuss a health outcome but are not indexed with a corresponding MeSH term; occasionally this is because an article discusses multiple health outcomes, and in other cases the rationale is less clear. It is important to recognize that citation indexing is limited by the potential for human error and by subjective determinations of the most important topics to index within an article. Furthermore, articles such as those evaluating comorbidity indices may perform validation studies on a large number of health outcomes; in this type of study, the HOI may not be listed in the abstract or indexing terms, making it nearly impossible to identify the study in a PubMed search focused on that HOI. Another limitation is that there seems to be no standard convention for indexing studies that validate algorithms for identifying HOIs. In reviewing studies missed by several HOI searches but identified through other means, such as the references of reviewed articles, the indexing varied widely. This issue is compounded by the fact that some studies of interest focus specifically on validation of algorithms, whereas others include an algorithm validation study within the context of a study examining risk factors for an HOI. Thus, for a systematic review focused on algorithm validity that relies on citation indexing databases, there seems to be no perfect search strategy for ensuring the identification of all studies of interest.

How could these limitations be overcome? One possibility is to increase the number of databases searched. For these reviews, IDIS/Web searches were conducted. Only articles relating to drug therapy in humans are indexed in the IDIS database, which draws on a set of 200 high-impact journals, so these searches identified far fewer citations than the PubMed searches. No formal examination of the value gained by this additional search was conducted for this project; however, relevant citations were obtained in the IDIS/Web searches, some overlapping with the PubMed results and many unique to the IDIS/Web search. Embase searches may identify additional citations, and these were conducted for a number of HOIs for which PubMed searches yielded relatively few citations. Embase searches often return a much larger number of citations, so there is an additional trade-off in efficiency: Embase indexes each section of an article, rather than focusing on the title and abstract as PubMed does. The changes in efficiency that would result from adding Embase were also not specifically examined for this project and would presumably vary by HOI. One limitation of using Embase is that it is a proprietary database with substantial cost, in contrast to PubMed, which is freely available. A final search method that may have particular value is text mining of full articles, as can be conducted with Google Scholar.

Google Scholar was found to be particularly valuable when few or no relevant studies were identified for an HOI. Its strength lies in its ability to conduct text word searches of entire articles, as long as they are available on the Internet. This allows very specific searches for relevant terms such as “predictive value” and “International Classification of Diseases” or “ICD.” If a particular code is highly likely to appear in algorithms relevant to an HOI, that code should be included in the Google Scholar search; doing so greatly enhances the specificity of such searches, which is helpful because they can otherwise produce many thousands of results. For example, at the time of this writing, a Google Scholar search for ‘respiratory failure “predictive value” icd’ identified over 6000 results. Although combining terms in quotes is helpful, “predictive value” is not necessarily specific because it might also describe screening tools, algorithms to predict outcomes, or diagnostic tests. The acute respiratory failure HOI report identified no studies that examined the performance characteristics of algorithms to identify the HOI, but it did find two studies that used International Classification of Diseases, 9th revision (ICD-9) code 518.8 to identify the HOI, so this code was added to the search string. When “518.8” was added, only 11 results were returned. One of the identified articles cited three studies that examined the performance characteristics of algorithms to identify acute respiratory distress syndrome, two of which were meeting abstracts, which are not indexed in PubMed. So although the systematic review suggested that no studies had been conducted, it was possible to incorporate the evidence from these studies into a review of evidence gaps to inform future research. Google Scholar searches were also conducted for several other HOIs for which no relevant studies were identified, such as ABO incompatibility reactions. Future research might compare the efficiency and the number of studies identified in citation indexing databases such as PubMed with various approaches to searching Google Scholar. It is possible that the efficiency of searches and the screening process could be enhanced through the use of Google Scholar. If nothing else, it might be used as a final check to help ensure that easily identifiable studies are not missed due to the limitations of searching PubMed and other citation indexing databases.
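
For illustration, the focused query described above can be assembled as a URL. The sketch below is a minimal example; the "q" parameter is an assumption based on the public Google Scholar search page, and the query can of course simply be typed into the page itself.

    # Build the focused Google Scholar query described in the text.
    from urllib.parse import urlencode

    terms = ['respiratory failure', '"predictive value"', 'icd', '"518.8"']
    print("https://scholar.google.com/scholar?" + urlencode({"q": " ".join(terms)}))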

Another recommendation for future systematic reviews of this kind is to obtain input from someone with expertise in each HOI before finalizing the HOI-specific search terms, to ensure that no relevant terms are missed. Many searches for this project received topic expert review, but in some cases topic experts were identified later in the review process and so did not help select the search terms. Although it is not yet clear that any important search terms were missed as a result, including a topic expert early in the process is preferred. Consultation with an experienced medical librarian is also highly recommended.

CONCLUSION

The ideal methods for conducting systematic reviews of validated algorithms for the identification of HOIs are yet to be determined. Although PubMed searches are the basic standard for conducting systematic reviews, any citation indexing database has limitations, and there does not appear to be any consistent method for indexing the types of studies relevant to these reviews. Google Scholar searches may help to overcome some of these challenges, but optimal search strategies need further exploration. Future work might explore the ideal methods for searching Google Scholar and the performance and efficiency of various focused Google Scholar searches compared with searches of citation indexing databases. Finally, it is recommended that topic experts be consulted early in the search process. Overall, the systematic review process employed here appears to have generally resulted in informative reports, although limitations remain, and not all reports will contain every study that might have been relevant.

CONFLICT OF INTEREST

The authors declare no conflict of interest.

KEY POINTS

  • Searches to identify studies that examine the validity of algorithms to identify health outcomes using administrative and claims data are challenging due to citation indexing limitations.
  • The search strategies used for these reviews involved trade-offs between search sensitivity and resources available for conducting reviews but appeared to result in fairly thorough reviews.
  • Google Scholar appears to be useful for identifying algorithm validation studies not captured by citation indexing databases because it has the unique ability to conduct text word searches of articles. Further refinement of Google Scholar search strategies may be warranted due to the unique nature of this kind of search.
  • It is important to include clinical topic experts throughout the entire systematic review process to ensure that all relevant data are identified, particularly when searching for data on outcomes that represent a range of specific disorders.

ACKNOWLEDGEMENTS

Mini-Sentinel is funded by the FDA through Department of Health and Human Services (HHS) Contract Number HHSF223200910006I. The views expressed in this document do not necessarily reflect the official policies of the Department of Health and Human Services, nor does mention of trade names, commercial practices, or organizations imply endorsement by the US government. This project would not have been possible without the valuable input and work of many people. We would particularly like to acknowledge the following individuals. Ronald Herman, PhD, of the University of Iowa College of Pharmacy, Division of Drug Information Service, and Jonathan Koffel of the University of Iowa libraries provided advice and worked on designing and conducting searches, managing citations, and producing the abstract review documents. Patrick Ryan, PhD, provided helpful insights into the integrated search strategy developed by OMOP, on which our searches were built. Carol Mita of the Harvard library conducted a number of Embase searches that provided insight on the potential value of this database. Swati Sharma provided essential project management assistance. Elizabeth Chrischilles, PhD; Sean Hennessy, PhD; Darren Toh, PhD; Kimberly Lane, MPH; and Judy Racoosin, MD, MPH, provided important input on many aspects of the project through their work with the Mini-Sentinel Protocol Core. Richard Platt, MD, MSc, provided valuable advice whenever it was requested. Finally, the project would not have been possible without the hard work and input of the HOI report authors and the reviewers who generously donated their time to improve the reports and articles.

REFERENCES
