Vol. 17, Issue 12

Editor-in-Chief: Lorna Stimson, Deputy Editor: Lucie Kalvodova

Impact Factor: 4.041

ISI Journal Citation Reports © Ranking: 2016: 15/77 (BIOCHEMICAL RESEARCH METHODS); 79/286 (Biochemistry & Molecular Biology)

Online ISSN: 1615-9861

Associated Title(s): PROTEOMICS - Clinical Applications

Comments on the Viewpoint article

How specific is my SRM?: The issue of precursor and product ion redundancy
Jamie Sherman, Matthew J. McKay, Keith Ashman, Mark P. Molloy

Published Online: Feb 27 2009
DOI: 10.1002/pmic.200800577

Comment from: Rastislav Sramek
Short affiliation: Institute for Theoretical Computer Science, ETH Zurich    Date: 4 October 2010

Having stumbled upon this paper only now, I'd like to point out related work we did earlier that might be of some interest:
While quite technical in nature, it attempts to solve the problem of overlapping mass spectra from different peptides in MRM by measuring more than just the necessary and most prominent transitions.

Given a set of peptides to be quantified and a set of peptides that may possibly be present in the sample, we want to find the transitions that should be measured in order to maximize the quantification accuracy.
While this problem appears hard to solve in the general case, in practice one can progress far through simple algebraic manipulations of the database of transitions for the available peptides.
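The selection idea described above can be sketched as follows. This is a minimal illustration, not the authors' actual algorithm: the peptide names, m/z values and the unit-resolution tolerance are all invented, and transitions are simply ranked by how many background peptides could interfere within the Q1/Q3 windows.

```python
# Hypothetical sketch: for each target peptide, keep the candidate
# transitions with the fewest interferences from background peptides.
# All names, m/z values and the tolerance below are illustrative.

TOL = 0.5  # assumed unit-resolution Q1/Q3 window (m/z)

# (precursor m/z, product m/z) candidate transitions per target peptide
candidates = {
    "target_A": [(523.3, 625.4), (523.3, 738.5), (523.3, 851.6)],
    "target_B": [(612.8, 704.4), (612.8, 817.5)],
}
# transitions that other, possibly present peptides could produce
background = [
    (523.5, 625.2),  # falls in the same windows as target_A's first transition
    (900.1, 300.2),
]

def interferes(t, b, tol=TOL):
    """True if transitions t and b fall in the same Q1 and Q3 windows."""
    return abs(t[0] - b[0]) <= tol and abs(t[1] - b[1]) <= tol

def pick_transitions(candidates, background, n=2):
    """For each target, keep the n transitions with the fewest interferences."""
    chosen = {}
    for pep, trans in candidates.items():
        scored = sorted(trans,
                        key=lambda t: sum(interferes(t, b) for b in background))
        chosen[pep] = scored[:n]
    return chosen
```

In this toy data, target_A's first transition collides with a background transition and is ranked last, so the two interference-free transitions are selected instead.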

Comment from: Dr. Metodi V. Metodiev
Short affiliation: Proteomics Unit, University of Essex, UK    Date: 20 May 2009

The two Viewpoint papers on peptide SRM by Sherman et al. and Duncan et al. are timely and raise very important questions. Simply put, the gravest problems seem to stem from the fact that peptide/protein quantitation by SRM relies on the conventional form of the technique, established and used for small organic molecules, not proteolytic peptides. With proteome samples the so-called analytical space is much greater. Hence there is a significant probability that an SRM transition used to measure a peptide with a given sequence would also be generated by a peptide with a different sequence. This is clearly illustrated by the occurrence of multiple peaks in the SRM traces when analysing peptides. The argument that retention time provides an additional dimension, thus increasing specificity, is also somewhat shaky, because peptides with the same amino acid composition but a different sequence (scrambled peptides that might still give rise to the same SRM transitions) would be expected to have very similar retention times. It is therefore a quite serious analytical problem that may require a radical approach to solve.

In my lab we are trying to overcome this problem by introducing MS3 SRM. We believe that peptides require this extra step for specificity. We use ion traps to do this, which will certainly prompt hard-core triple-quad folks to say that this cannot be quantitative enough. However, with the fast-scanning ion trap instruments available today, such a statement is not necessarily true. The more serious limitation of MS3 SRM is the inevitable loss of signal intensity. This obviously translates into a loss of sensitivity, but it may be a price we should pay if we want ultimate specificity in our peptide quantitation.

There might be an additional advantage to using an extra step in SRM. Since MS3 SRM is much more specific (only single peaks are seen in the SRM trace), there is no need to use labelled internal standards, and very high reproducibility of the retention times is not essential. A further rationale for MS3 is the complicated case of peptide phosphorylation. In fact, we never attempt MS2 SRM on peptides phosphorylated on serine or threonine: the yield of useful fragments is very low due to the neutral-loss phenomenon. This is now a well-established fact, and data-dependent MS3 is routinely used to map phosphorylation sites in an automatic way.

Comment from: Dr. Jie Li
Short affiliation: Institute of Genetics and Developmental Biology, Beijing, China    Date: 12 May 2009

Each protein quantification method has its inherent pros and cons. With the widening application of LC-MS/MS in peptide quantification, the method's pros and cons need to be known and considered when analyzing the results. The articles by Sherman et al. and Duncan et al. describe the various aspects of SRM.

Among the method's restrictions, the high complexity of the sample may be the biggest: it creates every possibility of producing the same m/z value from different precursors, leading to inaccuracy and imprecision in the result. Any method that decreases the sample complexity reduces the chance of obtaining the same m/z value from different precursors. Shallower elution gradients, an added separation step such as multi-dimensional chromatography, etc. should improve the reliability of the result at the cost of time efficiency.

Comment from: Richard Kay
Short affiliation: Quotient Bioresearch Ltd, Cambridgeshire, UK    Date: 08 May 2009

I work in an analytical chemistry laboratory where we develop and validate LC-MS/MS and SRM-based methods for peptide and small molecule analytes to GLP standards. I found the articles by Sherman and Duncan extremely interesting; they pose some important problems that researchers looking to validate SRM-based protein quantitative methods need to address. I believe these questions can be partly answered by adopting selected procedures from existing bioanalytical method validation processes, such as those outlined by Viswanathan et al. (Pharmaceutical Research, Vol. 24, No. 10, October 2007). Furthermore, validation of an LC-MS/MS and SRM-based method for a tryptic peptide surrogate should also include evidence that the peptide is derived from the intact protein in question (selectivity for the intact protein).

Demonstration of selectivity.
In small molecule analyses, selectivity is typically demonstrated by showing that any signal in a blank sample is less than 20% of that seen at the LLOQ. However, in protein biomarker applications the compound is endogenous, so the analyst cannot demonstrate specificity simply by analysing blank matrix. In the case of a digested protein, a standard-addition-based approach to quantitation may therefore be utilised. This would involve spiking standard curves with the target protein and generating Quality Control (QC) samples with a pure form of the intact protein (ideally traceable to a WHO standard). This would show that the method is selective and also demonstrate a link from the digested peptide to the intact protein.
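The standard-addition calculation itself is simple: fit a line to response versus spiked amount and read the endogenous concentration off the x-axis intercept. A minimal sketch follows; the spike levels and peak areas are made-up numbers, not data from the methods discussed here.

```python
# Illustrative standard-addition quantitation for an endogenous analyte.
# Spike levels and responses below are invented for demonstration.

def linfit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Amounts of pure intact protein (ng/mL) spiked into aliquots of the sample
spikes = [0.0, 10.0, 20.0, 40.0]
# Measured SRM peak areas after digestion (arbitrary units)
responses = [150.0, 250.0, 350.0, 550.0]

slope, intercept = linfit(spikes, responses)
# Magnitude of the x-axis intercept = endogenous concentration (ng/mL)
endogenous = intercept / slope
```

With these numbers the fit gives a slope of 10 area units per ng/mL and an intercept of 150, so the endogenous level works out to 15 ng/mL.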

In the case where a peptide is common to multiple proteins, as highlighted by Duncan et al., the task is significantly more difficult. If a peptide unique to a given protein isoform cannot be identified, then an SRM-based bioanalytical method may not be the most appropriate approach to quantitation in that circumstance, unless specific sample pre-treatment is employed.

SRM specificity.
The correct retention time for a specific peptide can be identified using multiple SRMs for the same peptide or, better, using a stable-isotope-labelled peptide. However, the presence of multiple peaks within an SRM channel for the analyte or internal standard does not necessarily make it unusable. As long as any interfering peaks are always chromatographically resolved from the target peptide peak, the SRM is suitable for a quantitative method.

Confirming that a peak in an SRM chromatogram is specific for a given target peptide can be performed in a number of ways. Firstly, a full-scan product ion analysis can be performed to generate an MS/MS spectrum confirming that the correct peptide is present. Secondly, over-spiking increasing levels of the target peptide (or protein) should result in only one of the peaks increasing. This is the basis of typical bioanalytical method validations, where calibration lines are generated by spiking increasing quantities of analyte and measuring the instrument response. QC samples are analysed alongside the calibration line, and specific limits are assigned to the precision (%CV) and accuracy (%RE) of the back-calculated measurements (generally set at ±15%, or ±20% at the LLOQ). If these criteria are met, this demonstrates that the method, and therefore the SRM, is specific for the target analyte.
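The %CV/%RE acceptance check described above can be sketched in a few lines. The nominal concentration and replicate values are invented for illustration; the ±15% limit follows the convention mentioned in the comment.

```python
# Minimal sketch of a QC acceptance check on back-calculated replicates.
# Nominal value and replicates are invented; limit follows the ±15% convention.
from statistics import mean, stdev

def qc_check(nominal, replicates, limit=15.0):
    """Return (%CV, %RE, passed) for back-calculated QC replicates."""
    m = mean(replicates)
    cv = 100.0 * stdev(replicates) / m      # precision, %CV
    re = 100.0 * (m - nominal) / nominal    # accuracy, %RE (relative error)
    return cv, re, (cv <= limit and abs(re) <= limit)

# Four back-calculated QC replicates at a 50 ng/mL nominal level
cv, re, passed = qc_check(50.0, [48.1, 52.3, 49.7, 51.0])
```

Here the replicates give a %CV of about 3.6 and a %RE of about +0.6, comfortably inside the ±15% limits, so the QC level passes.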

Regarding the debate on the number of SRMs per peptide and peptides per protein, we have applied the typical bioanalytical approaches discussed above to protein quantitation using a single SRM for a single peptide in a protein. We published a paper in RCM (2007, 21:2585-2593) in which APOA1 was quantified in multiple human serum samples using a 5-minute, high-flow-rate UPLC-MS/MS analysis. Comparing the LC-MS/MS-derived concentrations to a clinical analyser gave an R2 of 0.97, and furthermore our method demonstrated almost identical precision and accuracy to the clinical analyser.

We have also used a similar single-SRM-based method to quantify IGF-I in human serum, which demonstrated similar characteristics to standard ELISA and clinical analyser approaches in the analysis of a large sample dataset with >200 serum samples (poster presented at HUPO 2008). Therefore, I believe that with careful assay design, validation and control, a single SRM for a single peptide from a target protein can generate a highly selective and robust LC-MS/MS method.

Comment from: Dr. Rachel Ogorzalek Loo
Short affiliation: Laboratory for Proteomics and Genomics, University of California, Los Angeles, USA    Date: 07 May 2009

The excellent articles by Sherman et al. and Duncan et al. raise important points. Additionally, sequence variants present in the population must be considered, and, for an individual, whether they are homozygous or heterozygous for a particular peptide. A look at Swiss-Prot's human albumin entry provides a daunting view of the variants characterized to date. As more and more data are obtained on human haplotypes (variants present in >1% of the population), their impact will have to be considered.

Comment from: Professor Stephen Barnes
Short affiliation: Targeted Metabolomics and Proteomics Laboratory, University of Alabama at Birmingham, Birmingham, AL, USA    Date: 06 May 2009

MRM is of great interest - we've been using this approach for small molecules, and now peptides, for several years. However, like everything else in analytical biochemistry, it's not a cure-all; as long as you're aware of its limitations, it shouldn't be a problem. The key is to use an instrument like a Sciex 5500 QTRAP, whose sensitivity and speed let you acquire a meaningful, confirmatory MS/MS spectrum during the capture of specific parent ion/daughter ion transitions.

One of the serious problems in proteomics is that investigators have been too intent on generating volumes of data without good plans to validate what they have found. Mounds of data are considered more important than high-quality data. Proteomics will drive itself into a cul-de-sac if volume continues to be considered more important than quality. In a talk given to the NIH PROTIG group in March, entitled "Moving beyond Cowboy Proteomics", the changing needs in mass spectrometry and proteomics were reviewed. The talk focused on the (unnecessary?) reliance on databases derived from genomic information - surely mass spectrometry is better than that - on the statistical aspects of biomarker discovery experiments (how to determine the economic cost of follow-up once you have a primary proteomics dataset), and on the value of the MRM approach in hypothesis-driven research. Proteomics is ready for a shake-out of the practices in place in many labs around the world.

Comment from: Dr. Yu-Jen Wu and Chih-Ming Lu
Short affiliation: Meiho Institute of Technology, Pingtung, Taiwan    Date: 06 May 2009
Comment: We mainly agree with the viewpoint presented by Sherman et al. Quantification of a target peptide from a complex sample using only the SRM technique is not sufficient. Redundancy in both precursor and product ions may lead to ambiguous quantification or even a false result. Our understanding of SRM is that a target standard (usually a synthetic peptide) is normally required to fix the liquid chromatography retention time, provide the fragmentation details, and generate a calibration curve. The redundancy in SRM experiments can be narrowed to a minimum, as the probability of encountering a redundant precursor with several identical product ions (instead of one single product ion) at the same retention time is very low. Nevertheless, we still cannot rule out possible redundancy. Therefore, to obtain convincing data in proteome research, we may need at least two independent analytical methods that validate each other. Alternative methods such as stable isotope labeling or a western blotting experiment may be required to confirm the result obtained by SRM.

Comment from: Dr. Richard Unwin
Short affiliation: Stem Cell & Leukaemia Proteomics Laboratory, University of Manchester, UK    Date: 05 May 2009

Comment: The articles by Sherman et al. and Duncan et al. raise valid points about the use of SRM for protein quantification, and both provide warnings of what might go wrong with this kind of analysis.

Sherman et al. are correct in their statement that a peptide SRM is, in fact, defined by three factors, namely peptide m/z, fragment m/z and retention time. Performing SRM-triggered MS/MS of all target peptides to ensure 'specificity' is a prerequisite, as are sufficiently high-quality sample preparation and pre-fractionation/chromatography to minimise potential problems.

However, many of these issues are solved by careful selection of peptides. Searching the database for tryptic peptides of similar mass and determining their fragment ion masses should, when coupled with the above, enable optimal design of SRM pairs.
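The first step of such a database check, flagging background tryptic peptides whose precursor m/z lands near the target's, can be sketched as below. This is a toy illustration, not a real search tool: it uses average residue masses, assumes doubly charged precursors, and the peptide sequences and tolerance are invented. A deliberate Leu/Ile swap shows the kind of collision the check is meant to catch.

```python
# Hypothetical sketch of a precursor m/z collision check against a peptide
# database. Average residue masses; sequences and tolerance are illustrative.

RESIDUE = {  # average amino acid residue masses (Da)
    'A': 71.08, 'C': 103.14, 'D': 115.09, 'E': 129.12, 'F': 147.18,
    'G': 57.05, 'H': 137.14, 'I': 113.16, 'K': 128.17, 'L': 113.16,
    'M': 131.19, 'N': 114.10, 'P': 97.12, 'Q': 128.13, 'R': 156.19,
    'S': 87.08, 'T': 101.10, 'V': 99.13, 'W': 186.21, 'Y': 163.18,
}
WATER, PROTON = 18.02, 1.007

def mz(seq, z=2):
    """Average m/z of a peptide at charge state z."""
    return (sum(RESIDUE[a] for a in seq) + WATER + z * PROTON) / z

def precursor_collisions(target, database, tol=0.7):
    """Peptides whose doubly charged precursor m/z lies within tol of the target's."""
    t = mz(target)
    return [p for p in database if p != target and abs(mz(p) - t) <= tol]
```

Because leucine and isoleucine are isobaric, "ELVISLK" and "ELVISIK" share a precursor m/z exactly, so the second would be flagged; a peptide of clearly different mass would not. Fragment ion masses would then be compared for the flagged candidates only.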

Duncan et al. do, however, raise a more pressing point regarding peptide post-translational modifications, especially as a given protein may exist in many forms, each with a different combination of PTMs. Realistically, there is not much the experimentalist can do about this, other than be aware of known PTMs on their target of interest and try to avoid them: avoiding all peptides which contain one of M, C, S, T or Y (even before considering deamidation during sample prep, glycosylation, etc.) is impractical. The number of peptides required to accurately assess the quantity of a protein is also important. Good practice dictates that using several peptides from a protein is the best way forward (although, to play devil's advocate, is using one peptide any worse than the current gold standard, namely an antibody which recognises a single epitope?). The use of several peptides does, however, allow either an 'average' to be determined or outliers (which may arise from PTMs) to be identified.

In summary, these articles provide a good assessment of the factors which require consideration when designing good SRM-based experiments, and the results of these considerations should be made available as supplementary data when such an analysis is published.

Comment from: Professor Ian Humphery-Smith
Short affiliation: Newcastle-upon-Tyne, UK    Date: 04 May 2009
Comment: I read both articles and thought the forum of expert opinions on evolving/emerging techniques was an EXCELLENT innovation. It would be nice to see more in the same vein.
