PROTEOMICS

Edited By: Michael J. Dunn

Impact Factor: 3.973

ISI Journal Citation Reports © Ranking: 2013: 14/78 (BIOCHEMICAL RESEARCH METHODS); 87/291 (Biochemistry & Molecular Biology)

Online ISSN: 1615-9861

Associated Title(s): PROTEOMICS - Clinical Applications

Comments on the Viewpoint article

Quantifying proteins by mass spectrometry: The selectivity of SRM is only part of the problem
Mark W. Duncan, Alfred L. Yergey, Scott D. Patterson

Published Online: Feb 27 2009
DOI: 10.1002/pmic.200800739


Comment from: Dr. Metodi V. Metodiev
Short affiliation: Proteomics Unit, University of Essex, UK
e-mail: mmetod@essex.ac.uk    Date: 20 May 2009
Comment:

The two Viewpoint papers on peptide SRM by Sherman et al. and Duncan et al. are timely and raise very important questions. Simply put, the gravest problems seem to stem from the fact that peptide/protein quantitation by SRM relies on the conventional form of the technique, which was established and used for small organic molecules, not proteolytic peptides. With proteome samples, the so-called analytical space is much greater. Hence, there is a significant probability that an SRM transition used to measure a peptide with a given sequence will also be generated by a peptide with a different sequence. This is clearly illustrated by the occurrence of multiple peaks in SRM traces when analysing peptides. The argument that retention time provides an additional dimension, thus increasing specificity, is also somewhat shaky: peptides with the same amino acid composition but different sequence (scrambled peptides that might still give rise to the same SRM transitions) would be expected to have very similar retention times. This is therefore a quite serious analytical problem that may need a radical approach to solve.

In my lab we are trying to overcome this problem by introducing MS3 SRM. We believe that peptides require this extra step for specificity. We use ion traps to do this, which will certainly prompt hard-core triple-quad folks to say that this cannot be quantitative enough. However, with the fast-scanning ion trap instruments available today, such a statement is not necessarily true. A more serious limitation of MS3 SRM is the inevitable loss of signal intensity. This obviously translates into a loss of sensitivity, but perhaps that is a price we should pay if we want ultimate specificity in our peptide quantitation.

There might be an additional advantage to using an extra step in SRM. Since MS3 SRM is much more specific (only single peaks are seen in the SRM trace), there is no need to use labelled internal standards, and very high reproducibility of retention times is not essential. An additional rationale for using MS3 is the complicated case of peptide phosphorylation. In fact, we never attempt MS2 SRM on peptides phosphorylated on serine or threonine: the yield of useful fragments is very low due to the neutral loss phenomenon. This is now a well-established fact, and data-dependent MS3 is routinely used to map phosphorylation sites automatically.
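
A minimal sketch of the scrambled-peptide issue described above: the residue masses are standard monoisotopic values, but the peptide pair is invented for illustration only.

```python
# Two peptides with the same amino acid composition but different sequences
# share the precursor m/z and most y-ion m/z values, so a single SRM
# transition cannot tell them apart.
RESIDUE = {
    'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276, 'V': 99.06841,
    'T': 101.04768, 'C': 103.00919, 'L': 113.08406, 'I': 113.08406,
    'N': 114.04293, 'D': 115.02694, 'Q': 128.05858, 'K': 128.09496,
    'E': 129.04259, 'M': 131.04049, 'H': 137.05891, 'F': 147.06841,
    'R': 156.10111, 'Y': 163.06333, 'W': 186.07931,
}
WATER, PROTON = 18.010565, 1.007276

def precursor_mz(seq, charge=2):
    """m/z of the intact peptide at the given charge state."""
    return (sum(RESIDUE[aa] for aa in seq) + WATER + charge * PROTON) / charge

def y_ion_mz(seq):
    """Singly charged y-ion m/z values, y1 .. y(n-1)."""
    return [sum(RESIDUE[aa] for aa in seq[-n:]) + WATER + PROTON
            for n in range(1, len(seq))]

# Hypothetical pair: same composition, first two residues swapped.
a, b = "ELVISK", "LEVISK"
print(precursor_mz(a), precursor_mz(b))            # identical precursor m/z
for n, (ya, yb) in enumerate(zip(y_ion_mz(a), y_ion_mz(b)), start=1):
    print(f"y{n}: {ya:.4f} vs {yb:.4f}  shared={abs(ya - yb) < 1e-6}")
# y1-y4 coincide exactly; only y5 separates the two sequences, so a single
# precursor -> y3 (or y4) transition would sum the signal of both peptides.
```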


Comment from: Dr. Peter Juhasz
Short affiliation: BG Medicine, Proteomics, Waltham, MA, USA
e-mail: pjuhasz@bg-medicine.com    Date: 19 May 2009
Comment:

Duncan, Yergey, and Patterson take the purists' perspective in highlighting potential shortcomings of protein quantification through MRM or SRM assays of "proteotypic" peptides. While practitioners should be aware of the caveats of these "validation methods" and should be held accountable for the proper qualification of these assays, it is necessary to juxtapose the limitations of MRM-based peptide quantification with those of its alternatives.

Immunoassays quantify available epitope(s) on the surface of target molecules: if you have an ELISA with two monoclonal antibodies, the analogy with quantifying a protein through two of its peptides might be tempting. Many of the limitations of sandwich assays are exactly those listed by the authors of this Viewpoint: the assay may not resolve post-translationally modified forms of the target and may not have the specificity to distinguish members of protein families with varying degrees of homology. The technical challenges of assay development abound: protein standards used for immunization or as assay standards - even if expressed in mammalian systems - commonly do not have the same primary structure as the target human protein. Post-translational processing, primarily by proteolysis, may affect the protein standard during the immunization process, introducing nearly unpredictable effects on the "absolute accuracy" of the assay.

Unlike MRM measurements on tryptic peptides, target recognition by antibodies is also affected by competition for the epitopes by endogenous interacting partners of the target protein, so different readouts may not translate into similarly different analyte concentrations.

All of these very complex dynamics can be, and are, characterized and controlled in successful IVD assays, but not in most RUO assays. The situation is even worse with multiplexed immunoassays, whether bead-based or array-based.

While acknowledging all the strings attached to the MRM validation approach for proteins emerging as biomarker candidates from a mass-spectrometry-based discovery study, their undeniable advantage over immunoassays is that they can (and should) target exactly the same peptides whose quantification in the discovery phase led to the conclusion that their source protein might be a marker. The final call on the utility of these MRM assays will be made by their ability to maintain or improve predictive/differentiating power throughout the validation phase. Once this happens, the enormous work of converting this marker information into a successful diagnostic assay can start.


Comment from: Dr. Jie Li
Short affiliation: Institute of Genetics and Developmental Biology, Beijing, China
e-mail: lijie@genetics.ac.cn    Date: 12 May 2009
Comment:

Each protein quantification method has its inherent pros and cons. With the widening application of LC-MS/MS in peptide quantification, the pros and cons of the method need to be known and considered when analyzing the results. The articles by Sherman et al. and Duncan et al. describe various aspects of SRM.

Among the limitations of the method, the high complexity of the sample may be the biggest: it creates a real possibility of producing the same m/z value from different precursors, which leads to inaccuracy and imprecision in the result. Any approach that decreases sample complexity therefore also decreases the chance of obtaining the same m/z value from different precursors. Using a shallower elution gradient, adding a separation step such as multidimensional chromatography, etc., should improve the reliability of the result at the cost of time efficiency.
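
A rough illustration of this complexity argument, using entirely simulated precursor m/z values and an assumed Q1 tolerance (the numbers are not from any real experiment):

```python
# The number of background peptides co-isolated with a target inside an
# assumed Q1 window drops as sample complexity drops (e.g. after fractionation).
import random

random.seed(0)

def count_coisolated(background_mz, target_mz, q1_tol=0.35):
    """Background precursors falling within +/- q1_tol of the target m/z."""
    return sum(abs(mz - target_mz) <= q1_tol for mz in background_mz)

whole_digest = [random.uniform(400, 1200) for _ in range(200_000)]  # complex sample
one_fraction = random.sample(whole_digest, 20_000)                  # 10x simpler

target = 785.84  # hypothetical doubly charged tryptic peptide
print("whole digest:", count_coisolated(whole_digest, target))
print("one fraction:", count_coisolated(one_fraction, target))
# Fewer co-isolated precursors means a lower chance that a different peptide
# contributes signal to the monitored transition.
```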


Comment from: Richard Kay
Short affiliation: Quotient Bioresearch Ltd, Cambridgeshire, UK
e-mail: richard.kay@quotientbioresearch.com   Date: 08 May 2009
Comment:

I work in an analytical chemistry laboratory where we develop and validate LC-MS/MS, SRM-based methods for peptide and small-molecule analytes to GLP standards. I found the articles by Sherman and Duncan extremely interesting; they pose some important problems that researchers looking to validate SRM-based protein quantification methods need to address. I believe these questions can be partly answered by adopting selected procedures from existing bioanalytical method validation processes, such as those outlined by Viswanathan et al. (Pharmaceutical Research, Vol. 24, No. 10, October 2007). Furthermore, validation of an LC-MS/MS, SRM-based method for a tryptic peptide surrogate should also include evidence that the peptide is derived from the intact protein in question (selectivity to the intact protein).

Demonstration of selectivity.
In small-molecule analyses, selectivity is typically demonstrated by showing that any signal in a blank sample is less than 20% of that seen at the LLOQ. However, in protein biomarker applications the compound is endogenous, and therefore the analyst cannot demonstrate specificity by simply analysing blank matrix. In the case of a digested protein, a standard-addition-based approach to quantitation may instead be utilised. This would involve spiking standard curves with the target protein and generating Quality Control (QC) samples with a pure form of the intact protein (ideally traceable to a WHO standard). This would show that the method is selective and would also demonstrate a link between the digested peptide and the intact protein.
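
A minimal sketch of the standard-addition calculation described above; the spike levels and responses are invented for illustration:

```python
# Known amounts of pure intact protein are added to the endogenous matrix,
# the digest is measured by SRM, and the endogenous level is read from the
# x-intercept of the response line.

def fit_line(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

spiked   = [0, 50, 100, 200, 400]             # added protein, fmol on column
response = [0.42, 0.63, 0.85, 1.27, 2.12]     # measured SRM response (arbitrary units)

slope, intercept = fit_line(spiked, response)
endogenous = intercept / slope                 # magnitude of the x-intercept
print(f"endogenous level ~ {endogenous:.0f} fmol on column")
```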

The case where a peptide is common to multiple proteins, as highlighted by Duncan et al., poses a significantly more difficult task. If a peptide unique to a given protein isoform cannot be identified, then an SRM-based bioanalytical method might not be the most appropriate approach to quantitation in this circumstance, unless specific sample pre-treatment is employed.

SRM specificity.
The identification of the correct retention time for a specific peptide can be performed using multiple SRMs for the same peptide or, better, using a stable-isotope-labelled peptide. However, the presence of multiple peaks within an SRM channel for the analyte or internal standard does not necessarily make it unusable. As long as any interfering peaks are always chromatographically resolved from the target peptide peak, the SRM is suitable for a quantitative method.

Confirming that a peak in an SRM chromatogram is specific for a given target peptide can be done in a number of ways. Firstly, a full-scan product ion analysis can be performed to generate an MS/MS spectrum confirming that the correct peptide is present. Secondly, over-spiking increasing levels of the target peptide (or protein) should result in only one of the peaks increasing. This is the basis of typical bioanalytical method validations, where calibration lines are generated by spiking increasing quantities of analyte and measuring the instrument response. QC samples are analysed alongside the calibration line, and specific limits are assigned to the precision (%CV) and accuracy (%RE) of the back-calculated measurements (these are generally set at +/-15%, or +/-20% at the LLOQ). If these criteria are met, this demonstrates that the method, and therefore the SRM, is specific for the target analyte.
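
A small sketch of the QC acceptance check described above; the nominal value and back-calculated concentrations are invented:

```python
# Precision (%CV) and accuracy (%RE) of back-calculated QC concentrations
# must fall within +/-15% (+/-20% at the LLOQ).
from statistics import mean, stdev

def qc_acceptance(nominal, back_calculated, limit=15.0):
    cv = 100 * stdev(back_calculated) / mean(back_calculated)    # precision
    re = 100 * (mean(back_calculated) - nominal) / nominal       # accuracy (bias)
    return cv, re, abs(cv) <= limit and abs(re) <= limit

# Hypothetical mid-level QC, nominal 50 ng/mL
cv, re, passed = qc_acceptance(50.0, [47.8, 52.1, 49.5, 53.0, 46.9, 51.2])
print(f"%CV = {cv:.1f}, %RE = {re:+.1f}, accept = {passed}")
```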

Regarding the debate on the number of SRMs per peptide and peptides per protein, we have applied typical bioanalytical approaches (discussed above) to protein quantitation using a single SRM for a single peptide in a protein. We published a paper in RCM (2007, 21:2585-2593) in which APOA1 was quantified in multiple human serum samples using a 5-minute, high-flow-rate UPLC-MS/MS analysis. Comparing the LC-MS/MS-derived concentrations to a clinical analyser gave an R2 of 0.97, and our method demonstrated almost identical precision and accuracy to the clinical analyser.
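
For illustration only, a minimal sketch of such a method comparison; the paired concentrations below are made up and are not the published data:

```python
# The coefficient of determination between the two methods is computed
# directly from paired measurements of the same samples.

def r_squared(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)

clinical = [118, 132, 145, 151, 160, 172, 185, 140, 129, 155]   # analyser, mg/dL
lcmsms   = [121, 130, 148, 149, 163, 168, 188, 143, 126, 152]   # LC-MS/MS, mg/dL
print(f"R2 = {r_squared(clinical, lcmsms):.2f}")                # high agreement expected
```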

We have also used a similar single-SRM method to quantify IGF-I in human serum, which demonstrated similar characteristics to standard ELISA and clinical analyser approaches in the analysis of a large dataset of >200 serum samples (poster presented at HUPO 2008). Therefore, I believe that with careful assay design, validation and control, a single SRM to a single peptide from a target protein can generate a highly selective and robust LC-MS/MS method.


Comment from: Dr. Rachel Ogorzalek Loo
Short affiliation: Laboratory for Proteomics and Genomics, University of California, Los Angeles, CA, USA
e-mail: rloo@mednet.ucla.edu    Date: 07 May 2009
Comment:

The excellent articles by Sherman et al. and Duncan et al. raise important points. Additionally, sequence variants present in the population must be considered, and, for an individual, whether they are homozygous or heterozygous for a particular peptide. A look at Swiss-Prot's human albumin entry provides a daunting view of the variants characterized to date. As more and more data are obtained on human haplotypes (variants present in >1% of the population), their impact will have to be considered.


Comment from: Professor Stephen Barnes
Short affiliation: Targeted Metabolomics and Proteomics Laboratory, University of Alabama at Birmingham, Birmingham, AL, USA
e-mail: sbarnes@uab.edu    Date: 06 May 2009
Comment:

MRM is of great interest - we've been using this approach for small molecules, and now peptides, for several years. However, like everything else in analytical biochemistry, it's not a cure-all, but as long as you're aware of its limitations, it shouldn't be a problem. The key is to use an instrument like a Sciex 5500 QTRAP, where, because of its sensitivity and speed, you can acquire a meaningful and confirmatory MS/MS spectrum during the capture of specific parent ion/daughter ion transitions.

One of the serious problems in proteomics is that investigators have been too intent on generating volumes of data without good plans to validate what they have found. Mounds of data are considered more important than high-quality data. Proteomics will drive itself into a cul-de-sac if volume continues to be considered more important than quality. In a talk given to the NIH PROTIG group in March entitled "Moving beyond Cowboy Proteomics", the changing needs in mass spectrometry and proteomics were reviewed. The talk focused on the (unnecessary?) reliance on databases derived from genomic information (surely mass spectrometry is better than that), on the statistical aspects of biomarker discovery experiments (how to determine the economic cost of follow-up once you have a primary proteomics dataset), and on the value of the MRM approach in hypothesis-driven research. Proteomics is ready for a shake-out of the practices in place in many labs around the world.


Comment from: Dr. Yu-Jen Wu and Chih-Ming Lu
Short affiliation: Meiho Institute of Technology, Pingtung, Taiwan
e-mail: wyr924@ms24.hinet.net    Date: 06 May 2009
Comment: Duncan et al. pointed out a common limitation of shotgun proteomic approaches: they are incapable of distinguishing protein isoforms. Quantification at the peptide level by SRM can give confusing results, since the signal of the target peptide may come from several protein isoforms bearing the same tryptic peptide. This is not a problem unique to SRM but a common one for most bottom-up proteome analyses. When using SRM to quantify protein expression, a complementary approach such as Western blotting is required to validate the result obtained by SRM.

Comment from: Dr. Richard Unwin
Short affiliation: Stem Cell & Leukaemia Proteomics Laboratory, University of Manchester, UK
e-mail: r.unwin@manchester.ac.uk    Date: 05 May 2009
Comment: The articles by Sherman et al. and Duncan et al. raise valid points about the use of SRM for protein quantification, and both provide warnings of what might go wrong with this kind of analysis.

Sherman et al. are correct in their statement that a peptide SRM is, in fact, defined by three factors, namely peptide m/z, fragment m/z and retention time. Performing SRM-triggered MS/MS of all target peptides to ensure 'specificity' is a prerequisite, as is sufficiently high-quality sample preparation and pre-fractionation/chromatography to minimise potential problems.

However, many of these issues are solved by careful selection of peptides. Database searching for tryptic peptides of similar mass and determination of their fragment ion masses should enable optimum design of SRM pairs, when coupled with the above.
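
One possible form of such an in-silico screen, sketched here with toy sequences and trypsin-like cleavage rules; it is not any specific tool, and the residue masses are standard monoisotopic values:

```python
# Digest background protein sequences in silico (cleave after K/R, not before
# P), then list peptides whose precursor m/z falls near the candidate's, so
# their fragment ions can be checked before committing to an SRM pair.
import re

MASS = {'G': 57.02146, 'A': 71.03711, 'S': 87.03203, 'P': 97.05276,
        'V': 99.06841, 'T': 101.04768, 'C': 103.00919, 'L': 113.08406,
        'I': 113.08406, 'N': 114.04293, 'D': 115.02694, 'Q': 128.05858,
        'K': 128.09496, 'E': 129.04259, 'M': 131.04049, 'H': 137.05891,
        'F': 147.06841, 'R': 156.10111, 'Y': 163.06333, 'W': 186.07931}
WATER, PROTON = 18.010565, 1.007276

def digest(protein):
    """In-silico tryptic digestion (no missed cleavages)."""
    return [p for p in re.sub(r'(?<=[KR])(?!P)', '\n', protein).split('\n') if p]

def mz(peptide, z=2):
    return (sum(MASS[aa] for aa in peptide) + WATER + z * PROTON) / z

candidate = "ELVISLIVESK"                                  # hypothetical target
background = ["MKWVTFISLLFLFSSAYSR", "AKLEVISLIVESKAAAR"]  # toy 'proteome'

cand_mz = mz(candidate)
for protein in background:
    for pep in digest(protein):
        if len(pep) >= 6 and abs(mz(pep) - cand_mz) < 1.0:
            print(f"check fragments of {pep} ({mz(pep):.3f}, near {cand_mz:.3f})")
# The scrambled background peptide LEVISLIVESK shares the candidate's precursor
# m/z, so its fragment ions would need checking before finalising the SRM pair.
```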

Duncan et al. do, however, raise a more pressing point regarding peptide post-translational modifications, especially as a given protein may exist in many forms, each with a different combination of PTMs. Realistically, there is not much the experimentalist can do about this, other than be aware of known PTMs on their target of interest and try to avoid them. Avoiding all peptides which contain one of M, C, S, T or Y (even before you begin to consider deamidation occurring during sample prep, glycosylation, etc.) is impractical. The number of peptides required to accurately assess the quantity of a protein is also important. Good practice dictates that using several peptides from a protein is the best way forward (although, to play devil's advocate, is using one peptide any worse than the current gold standard, namely using an antibody which recognises a single epitope?). The use of several peptides does allow either an 'average' to be determined or outliers (which may occur as a result of PTM) to be identified.
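
A minimal sketch of this multi-peptide consensus/outlier idea, with invented per-peptide fold changes; the median-absolute-deviation rule is one possible outlier criterion, not a prescribed one:

```python
# Each peptide yields its own fold change for the protein; a robust consensus
# is taken from the median, and peptides that disagree strongly (possibly
# carrying an unanticipated PTM) are flagged rather than averaged in blindly.
from statistics import median

def consensus_and_outliers(fold_changes, k=3.0):
    med = median(fold_changes)
    mad = median(abs(fc - med) for fc in fold_changes) or 1e-9
    outliers = [fc for fc in fold_changes if abs(fc - med) / mad > k]
    return med, outliers

# Hypothetical fold changes reported by four peptides of the same protein
peptides = {"LVNEVTEFAK": 1.9, "AEFAEVSK": 2.1, "QTALVELVK": 2.0, "YICENQDSISSK": 0.6}
consensus, flagged = consensus_and_outliers(list(peptides.values()))
print(f"consensus fold change ~ {consensus:.2f}")
print("flagged peptides:", [p for p, fc in peptides.items() if fc in flagged])
# The disagreeing peptide is flagged; it may carry a modification rather than
# reflecting the protein's true abundance.
```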

In summary, these articles provide a good assessment of the factors which require consideration when designing good SRM-based experiments, and the results of these considerations should be made available as supplementary data when such an analysis is published.

Comment from: Professor Ian Humphery-Smith
Short affiliation: Newcastle-upon-Tyne, UK
e-mail: ianhs@hotmail.com    Date: 04 May 2009
Comment: I read both articles and thought the forum of opinions by experts on evolving/emerging techniques was an EXCELLENT innovation. It would be nice to see more in the same vein.
