We propose that the toxicology community move toward an expectation that raw toxicity data (i.e., descriptions of the pertinent experimental design, the relevant measured data, and details of data handling) be routinely provided upon publication of manuscripts. We believe that this will create at least 3 benefits, as explained below.

Increased data accessibility

Many toxicologists routinely carry out exposure–response experiments but never report them. For example, researchers commonly determine sublethal exposure levels before carrying out further experiments that are the focus of a research article. However, the dose–response data from these initial toxicity tests are rarely shown; rather, researchers typically write something along the lines of “The exposure concentrations were selected based on preliminary lethality assays (data not shown)…” (Leung et al. 2010). When such data are not critical to the article, this may be scientifically justifiable, but an opportunity is missed. The data may be of great use to regulators, the regulated community, and various stakeholders, especially when we consider the volume of largely inaccessible data.

Access to test replicate data, from both the initial acute testing and subsequent testing, would prove useful in an applied context for numerous reasons. In risk assessments, evaluation of uncertainty regarding toxicity reference values (TRVs) may be overly general because the variability present in the raw data cannot be sufficiently evaluated. However, access to the full data set used to develop TRVs would allow an uncertainty discussion based on the range of effects observed at specific doses. Review of specific toxicity results from several tests would also allow evaluation of the variability that may arise from differing test protocols (e.g., the method or chemical form of dose administration). Additionally, if raw toxicity results were available, then alternative toxicity endpoints, beyond those statistically estimated and presented in the published research, could be developed.
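
As a purely illustrative example of the kind of re-analysis that published raw replicates would permit, the sketch below fits a three-parameter log-logistic model to hypothetical survival data and derives alternative endpoints (EC10 and EC50). The data, starting values, and choice of model form are all assumptions for illustration, not a prescribed method.

```python
# A minimal sketch (hypothetical data, illustrative model choice): fit a
# three-parameter log-logistic curve to raw replicate survival data and
# derive endpoints beyond those reported in a summary table.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical raw replicates: proportion surviving at each exposure
# concentration (mg/L) -- the sort of data usually left as "not shown".
conc = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 3.2, 3.2, 3.2,
                 10.0, 10.0, 10.0, 32.0, 32.0, 32.0])
surv = np.array([1.00, 0.95, 1.00, 0.95, 0.90, 1.00, 0.85, 0.80, 0.90,
                 0.55, 0.60, 0.50, 0.15, 0.10, 0.20])

def log_logistic(x, b, d, e):
    """Three-parameter log-logistic: d = control response, e = EC50, b = slope."""
    return d / (1.0 + (x / e) ** b)

# Fit to the individual replicates (not treatment means) so that the
# parameter covariance reflects between-replicate variability.
popt, pcov = curve_fit(log_logistic, conc, surv, p0=[1.0, 1.0, 5.0],
                       bounds=(0.0, np.inf))
b, d, e = popt

def ec(p):
    """Concentration producing a p% reduction relative to the control response."""
    return e * (p / (100.0 - p)) ** (1.0 / b)

print(f"EC50 = {ec(50):.2f} mg/L, EC10 = {ec(10):.2f} mg/L")
print("Approximate parameter standard errors:", np.sqrt(np.diag(pcov)))
```

With the full replicate set in hand, a reader could equally bootstrap confidence intervals, refit with a different model form, or recompute endpoints at other effect levels.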

Development of alternative endpoints along the dose–response curve is critical when incorporating toxicity into population-level effect models; specific dose–responses in individuals are necessary to translate toxic effects on vital rates (e.g., fertility or survival) into changes in growth rates and, ultimately, population-model effect estimates. Toxicity endpoints other than no-observed-effect-concentrations (NOECs) and lowest-observed-effect-concentrations (LOECs) are also fundamental to injury assessment in natural resource damage (NRD) cases. Although risk assessments may focus on prediction of potential risk (e.g., potential risk from exposure between the NOEC and LOEC), injury assessments in NRD cases must focus on quantifiable effects on ecological services that may not manifest in the field even at doses greater than the LOEC (e.g., low-level fertility reductions resulting from chemical exposure may be overshadowed by population dynamics or other stressors).
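
To make the link to population-level models concrete, the following sketch (with entirely hypothetical vital rates and a hypothetical 20% fertility reduction read off a fitted dose–response curve) propagates an individual-level effect into a population growth rate via a simple two-stage projection matrix.

```python
# A minimal sketch (hypothetical vital rates): propagate a dose-specific
# fertility reduction into the population growth rate (lambda), the
# dominant eigenvalue of a simple two-stage projection matrix.
import numpy as np

def growth_rate(fertility, juvenile_survival, adult_survival):
    """Dominant eigenvalue (lambda) of a two-stage Leslie-type matrix."""
    A = np.array([[0.0,               fertility],
                  [juvenile_survival, adult_survival]])
    return max(np.real(np.linalg.eigvals(A)))

# Hypothetical control vital rates.
lam_control = growth_rate(fertility=2.0, juvenile_survival=0.4, adult_survival=0.7)

# Hypothetical exposed population: a 20% fertility reduction taken from a
# fitted dose-response curve at an environmentally relevant dose.
lam_exposed = growth_rate(fertility=2.0 * 0.8, juvenile_survival=0.4, adult_survival=0.7)

print(f"lambda (control) = {lam_control:.3f}")  # ~1.31 with these inputs
print(f"lambda (exposed) = {lam_exposed:.3f}")  # ~1.22 with these inputs
# The difference between the two lambdas is the population-level effect
# estimate; a value below 1.0 would indicate projected decline.
```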

A resolution to the NOEC debate

The debate over the (mis)use of the NOEC (Fox et al. 2012) and of related data analysis approaches, including the null hypothesis significance test (Newman 2012), is important yet complex. The limitations of such approaches under most circumstances are clearly outlined in these and related communications. Yet a blanket, one-size-fits-all response, such as banning all use of NOECs, seems inadvisable (again, as addressed via Learned Discourses and other means). Why not allow researchers to present their data in articles and reports as they see fit (subject to peer review and editorial discretion), but require them to provide the raw data so that other researchers can carry out their own analyses?
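
As one concrete illustration of what "carry out their own analyses" could mean, the sketch below derives a NOEC and LOEC from hypothetical replicate data by comparing each treatment with the control using Dunnett's test (available as scipy.stats.dunnett in SciPy 1.11 or later); the same raw data could just as easily be refit for a regression-based ECx instead. All concentrations and responses are invented for illustration.

```python
# A minimal sketch (hypothetical replicate data): derive a NOEC/LOEC by
# comparing each treatment with the control (Dunnett's test, one-sided),
# the kind of re-analysis a reader could perform -- or replace with a
# regression-based ECx -- if raw replicates were published.
# Requires SciPy >= 1.11 for scipy.stats.dunnett.
from scipy.stats import dunnett

# Hypothetical growth responses (e.g., dry weight, mg) per replicate.
control = [102, 98, 105, 101]
treatments = {
    0.5: [100, 97, 103, 99],
    1.0: [95, 92, 98, 96],
    2.0: [88, 85, 90, 87],
    4.0: [70, 68, 74, 72],
}

res = dunnett(*treatments.values(), control=control, alternative="less")
pvals = dict(zip(treatments, res.pvalue))
for c in sorted(treatments):
    print(f"{c} mg/L: p = {pvals[c]:.4f}")

# LOEC = lowest concentration with a significant reduction; NOEC = highest
# tested concentration below the LOEC (None if even the lowest is significant).
concs = sorted(treatments)
loec = next((c for c in concs if pvals[c] < 0.05), None)
below = [c for c in concs if loec is None or c < loec]
noec = max(below) if below else None
print(f"LOEC = {loec} mg/L, NOEC = {noec} mg/L")
```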

Increased transparency

Finally, an obvious benefit of such an approach is increased transparency regarding the quality of reported data. Data quality is harder to evaluate when only mathematically summarized results are presented and when no information on environmental conditions specific to dose replicates is provided (e.g., whether dissolved O2 levels dropped in one replicate). An analogy can be found in the realm of -omics data. Microarray data are not publishable unless the raw data, along with experimental metadata, are uploaded to the Gene Expression Omnibus (GEO) Web site (http://www.ncbi.nlm.nih.gov/geo/), in compliance with the Minimum Information About a Microarray Experiment (MIAME) standards (http://www.ncbi.nlm.nih.gov/geo/info/MIAME.html).

The existence of such data, fully accessible to anyone, permits the identification of fraud or error outside of the peer-review process, as in the notorious example of data manipulation by a cancer researcher (Coombes et al. 2007). An additional benefit of increased transparency would be greater confidence in regulatory criteria and guidance (e.g., US Environmental Protection Agency [USEPA] National Recommended Water Quality Criteria and Regional Screening Levels). When regulatory criteria or guidance are developed from toxicity test results, it is reasonable to expect that specifics regarding test conditions and results be available for review. This allows stakeholders to evaluate toxicity and quality assurance data and to assess whether the selected endpoint is appropriate or an artifact of laboratory testing (e.g., inconsistent environmental variables) or statistical handling (e.g., treatment of outliers). Such a detailed evaluation may not be feasible during the publication peer-review process because of time or resource constraints. Barron and Wharton (2005) detail the methods used to derive various regulatory screening values and the types of uncertainty inherent in those methods. The level of conservatism associated with these screening values would be better understood, and uncertainty evaluations in risk assessments greatly improved, if the underlying data were readily available.

Is it feasible to report our data more fully? At least from the perspective of peer-reviewed publications, absolutely. Most journals now offer the option of online supplementary data files, which can be extensive. For example, in an article on the toxicity of Ag nanoparticles, we showed full concentration–response curves in the supplemental files while reporting only EC50 values in the text (Yang et al. 2012). It would have been a simple matter to also include the full data set. Maintenance of a database comparable to GEO for toxicity data would be more financially and logistically challenging, but also more rewarding in terms of ease of access. The regulated community is already accustomed to submitting the full scope of data collected and tests conducted. Data collected for environmental site investigations are typically submitted to online repositories (e.g., the State of California's GeoTracker [http://geotracker.waterboards.ca.gov/]) or published in publicly available documents.

There are important questions regarding reporting expectations that we will not try to answer here: for example, what format should be used (Excel files or text files), and what content should be shown (all raw data? what level of experimental detail?). However, even in the absence of agreed-on standards, authors, reviewers, and editors can begin moving toward full reporting of toxicity data.
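
There is not yet a toxicity-data counterpart to MIAME, but even a simple tidy table (one row per replicate per observation, with the test conditions needed for quality evaluation carried as columns) would go a long way. The sketch below writes one such layout as CSV; the column names and all values are illustrative assumptions, not a proposed standard.

```python
# A minimal sketch of one possible tidy layout for raw toxicity data:
# one row per replicate per observation, with test conditions alongside.
# Column names and all values are illustrative, not a proposed standard.
import csv
import io

fieldnames = ["test_id", "chemical", "species", "endpoint",
              "concentration_mg_L", "replicate", "response",
              "temperature_C", "dissolved_oxygen_mg_L", "notes"]

rows = [
    {"test_id": "T001", "chemical": "example chemical", "species": "Daphnia magna",
     "endpoint": "48-h survival", "concentration_mg_L": 0.0, "replicate": 1,
     "response": 1.00, "temperature_C": 20.1, "dissolved_oxygen_mg_L": 8.2,
     "notes": ""},
    {"test_id": "T001", "chemical": "example chemical", "species": "Daphnia magna",
     "endpoint": "48-h survival", "concentration_mg_L": 10.0, "replicate": 2,
     "response": 0.55, "temperature_C": 20.4, "dissolved_oxygen_mg_L": 6.8,
     "notes": "dissolved O2 dropped overnight"},
]

# Write to an in-memory buffer; in practice this would be a supplementary
# CSV file or a record submitted to a shared repository.
buffer = io.StringIO()
writer = csv.DictWriter(buffer, fieldnames=fieldnames)
writer.writeheader()
writer.writerows(rows)
print(buffer.getvalue())
```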

REFERENCES
