Tiebreaker: Certification and Multiple Credit Ratings





    • Bongaerts is with Finance Group, RSM Erasmus University Rotterdam, and Cremers and Goetzmann are with the International Center for Finance, Yale University. We would like to thank Patrick Behr; Michael Brennan; Mark Carlson; Erwin Charlier; Long Chen; Frank de Jong; Joost Driessen; Frank Fabozzi; Rik Frehen; Gary Gorton; Jean Helwege; Mark Huson; Ron Jongen; Pieter Klaassen; David Lesmond; Hamid Mehran; Catherine Nolan; Frank Packer; Ludovic Phalippou; Paolo Porchia; Jörg Rocholl; Joao Santos; Joel Shapiro; Chester Spatt; Walter Stortelder; Dan Swanson; Anjan Thakor; Laura Veldkamp; Evert de Vries; Jacqueline Yen; Weina Zhang; as well as conference participants at the Financial Crisis conference at Pompeu Fabra University, the European Finance Association annual meetings in Bergen (Norway, 2010), the Texas Finance Festival at UT Austin, the RMI conference at National University of Singapore, the NBER meeting on Credit Ratings in Cambridge, the Conference on Credit Rating Agencies at Humboldt University, the American Finance Association annual meetings in Denver (2011); and seminar participants at the University of Amsterdam, Rotterdam School of Management, and the Dutch National Bank for helpful comments and information. We especially thank Campbell Harvey (the Editor), an anonymous associate editor, and an anonymous referee for many helpful comments and advice.


This paper explores the economic role credit rating agencies play in the corporate bond market. We consider three existing theories about multiple ratings: information production, rating shopping, and regulatory certification. Using differences in rating composition, default prediction, and credit spread changes, our evidence only supports regulatory certification. Marginal, additional credit ratings are more likely to occur because of, and seem to matter primarily for, regulatory purposes. They do not seem to provide significant additional information related to credit quality.

Credit rating agencies (CRAs) report information about the credit risk of fixed income securities. The various ways the information is used by financial, legal, and regulatory entities may potentially influence the nature of the information production process. Bond ratings are used not only for risk assessment, but also for regulatory certification, that is, for classification of securities into investment grade (IG) and high yield (HY, or junk) status. These classifications in turn influence institutional demand and serve as bright-line triggers in corporate credit arrangements and regulatory oversight. Regulations may mandate that insurance companies and banks keep much higher reserve capital for HY issues than for IG corporate bonds. Other institutions such as pension funds and mutual funds are often restricted by their charters with respect to the amount of HY debt they can hold. Taken together, more than half of all corporate bonds are held by institutions that are subject to rating-based restrictions on their holdings of risky credit assets (Campbell and Taksler (2003)). Lower demand for HY bonds can significantly increase the cost of borrowing for those issuers and thus affects capital structure decisions (see Kisgen (2006, 2009), Kisgen and Strahan (2009), Ellul, Jotikasthira, and Lundblad (2010)). The institutional and regulatory importance of credit ratings to issuers and investors has therefore raised questions about whether the current system provides the proper incentives for issuers to fully disclose value-relevant information, and for investors to invest in research about credit risk.

Using a sample from 2000 to 2008, we document that almost all large, liquid U.S. corporate bond issues are rated by both S&P and Moody's. Fitch typically plays the role of a “third opinion” for large bond issues.1 During the sample period, the most prevalent institutional rule for classifying rated bonds was as follows: for issues with two ratings, only the lower rating is used to classify the issue (e.g., into IG or HY); for issues with three ratings, the middle rating should be used (see, e.g., the National Association of Insurance Commissioners (NAIC) guidelines or the Basel II Accord).2 Therefore, if S&P and Moody's ratings are on opposite sides of the HY–IG boundary, the Fitch rating (assuming it is the marginal, third rating) is the “tiebreaker” that decides into which class the issue falls. Notice that this rule directly implies that adding a third rating cannot worsen the regulatory rating classification, but may potentially lead to a higher rating. Consistent with this idea, we find that, in about 25% of Fitch rating additions, the addition leads to a regulatory rating improvement, that is, the resulting middle rating is higher than the lowest rating before the Fitch rating addition. Ex ante, such an improvement is likely to be particularly important when S&P and Moody's ratings are on opposite sides of the HY–IG boundary, as absent an improving third rating, the split between S&P and Moody's would result in an HY classification. Thus, the value of the Fitch rating is that it can push the issue up into the IG category, but it cannot pull it down into the HY category.3
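Since the classification rule described above is purely mechanical, its asymmetry can be made explicit in a short sketch. The numeric rating encoding and example ratings below are illustrative assumptions, not the NAIC's actual implementation:

```python
# Regulatory classification under the "worst of two, median of three" rule.
# Higher numbers denote better credit quality; the encoding is illustrative.
SCALE = {"BB+": 11, "BBB-": 12, "BBB": 13}
IG_CUTOFF = SCALE["BBB-"]  # BBB- and above = investment grade

def regulatory_rating(ratings):
    """Return the rating used for regulatory classification."""
    vals = sorted(SCALE[r] for r in ratings)
    if len(vals) == 2:
        return vals[0]        # worst of two
    if len(vals) == 3:
        return vals[1]        # median of three
    raise ValueError("expected two or three ratings")

def is_investment_grade(ratings):
    return regulatory_rating(ratings) >= IG_CUTOFF

# Split ratings at the boundary: the worst-of-two rule puts the issue in HY.
assert not is_investment_grade(["BBB-", "BB+"])
# A third (Fitch) rating at or above the boundary breaks the tie upward...
assert is_investment_grade(["BBB-", "BB+", "BBB-"])
# ...while a third HY rating leaves the HY classification unchanged:
assert not is_investment_grade(["BBB-", "BB+", "BB+"])
```

Because the median of three ratings can never fall below the worse of the original two, the third rating can only improve, never worsen, the regulatory classification.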

In this paper, we explore the nature of this tiebreaking role in the context of the broader question of why corporate bonds generally have multiple credit ratings. We consider three hypotheses that could lead to demand for multiple credit ratings, namely, an “information production” hypothesis, a “rating shopping” hypothesis, and a “regulatory certification” hypothesis. These hypotheses are not mutually exclusive but they have different empirical implications that we exploit to shed light on their relative importance.

Under the information production hypothesis, investors are averse to uncertainty, which is reduced by adding extra ratings (see, e.g., Güntay and Hackbarth (2010)). Under the rating shopping hypothesis, issuers “shop” for an additional rating in the hope of improving their rating (see, e.g., Poon and Firth (2005), Sangiorgi, Sokobin, and Spatt (2009), Skreta and Veldkamp (2009)). And, under the regulatory certification hypothesis (see, e.g., Brister, Kennedy, and Liu (1994)), market and regulatory forces can naturally arise from a need to credibly separate bond issues into two types: informationally sensitive and non–informationally sensitive (Gorton and Pennacchi (1990) and Boot and Thakor (1993)). These correspond to HY and IG ratings, respectively. If the regulatory certification role of CRAs dominates, only the weaker issuers may need a third rating. This leads us to investigate whether the option of a third rating leads to adverse selection effects. As mentioned before, these hypotheses are not mutually exclusive. For example, rating shopping could be more important and thus more prevalent around the HY–IG boundary.

We find the strongest evidence in favor of the regulatory certification hypothesis. First, we consider what happens if a Fitch rating is added for bond issues at the HY–IG boundary, when Fitch could be a tiebreaker and potentially move the bond issue into the IG class. The yield improves if Fitch rates the issue IG, but there is no change following an HY rating, with a 40 basis point difference between an IG and HY classification. This economically large difference suggests that the certification effect can significantly lower the issuer's cost of capital.

Second, Fitch rating additions or changes for issues that are not close to the HY–IG boundary do not seem to be related to changes in yields. This is the case not only for Fitch rating additions in a sample of bond issues already rated by both Moody's and S&P, but also for the full sample of all bonds using quarterly panel regressions of credit spread changes on rating updates made by Moody's, S&P, and Fitch. In contrast, credit rating changes (especially downgrades) made by both Moody's and S&P are significantly associated with credit spread changes across the whole rating spectrum.

Third, Cox proportional hazard model regressions indicate that getting a Fitch rating is positively associated with the potential to break the tie between Moody's and S&P ratings, but again only around the IG–HY boundary. Fourth and finally, comparing default predictions on a 1-year horizon across CRAs for corporate bonds rated by all three agencies over 2000–2008, we find that Moody's ratings perform best, followed closely by S&P and then by Fitch. Ratings by Moody's and S&P add significant forecasting power to those of Fitch, whereas the reverse is not the case. This result is consistent with Fitch providing limited additional valuation information relative to that contained in Moody's and S&P ratings.
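The paper's exact default-prediction methodology is not reproduced here, but the flavor of such a horse race can be sketched with the area under the ROC curve (the accuracy ratio commonly reported for ratings is 2 × AUC − 1). The agency labels, numeric scores, and default flags below are hypothetical:

```python
# Comparing rating-based default prediction with the area under the ROC
# curve (AUC). Higher scores denote riskier ratings; all data is made up.

def auc(scores, defaults):
    """Probability that a randomly chosen defaulter carries a higher risk
    score than a randomly chosen survivor, counting ties as 0.5."""
    pos = [s for s, d in zip(scores, defaults) if d]      # defaulters
    neg = [s for s, d in zip(scores, defaults) if not d]  # survivors
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical numeric ratings and 1-year default indicators.
defaults = [0, 0, 0, 1, 0, 1]
agency_a = [2, 3, 4, 6, 5, 6]   # ranks both defaulters at the top
agency_b = [2, 6, 4, 3, 5, 6]   # misranks one defaulter

assert auc(agency_a, defaults) > auc(agency_b, defaults)
```

An agency whose ratings rank defaulters above survivors more reliably earns a higher AUC, which is the sense in which one CRA's ratings can "perform best" at default prediction.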

In sum, the credit spread change regressions provide no support for the information production hypothesis, that is, that the additional Fitch rating provides significant information to investors. However, further tests provide some evidence in support of the rating shopping hypothesis around the HY–IG boundary for possible regulatory arbitrage, but not anywhere else. While the additional Fitch rating tends to be more optimistic than preexisting Moody's and S&P ratings, investors do not appear to incorporate the increased optimism by lowering credit spreads. This would seem to undermine any rationale to engage in rating shopping. However, we find that the additional Fitch rating is “extra” optimistic for issues rated just below IG or for issues where Fitch is the tiebreaker around the HY–IG boundary, that is, more so than elsewhere on the rating spectrum. These issues are exactly the issues for which we would expect rating shopping incentives to matter most. Specifically, if Moody's and S&P ratings are on opposite sides of the HY–IG boundary, the additional Fitch rating is more likely to lead to an improvement in the regulatory rating classification, which, in this particular case, means an improvement from HY to IG classification (using “the worst of two and median of three ratings applies” rule). This evidence is suggestive of rating shopping around the HY–IG boundary, or of the marginal rating being used for regulatory arbitrage.4

Endogeneity is a significant concern in our study, as we rely on controls for confounding variables for identification. We seek to mitigate major selection issues by focusing on credit spread changes and Fitch rating additions. We also directly estimate selection effects using the Cox proportional hazard model to explain the time to add the third rating.

Taken together, our results suggest that a major function of Fitch ratings could be to avoid adverse selection for intermediate quality corporate bonds (Gorton and Pennacchi (1990) and Boot and Thakor (1993)). Relatively uninformed investors may be reluctant to trade bond issues for which they may be at a considerable information disadvantage, that is, HY or junk issues. Investors specialized in producing information might find it too costly to do so for medium quality issues, unless they can generate a profit from trading at an informational advantage with uninformed investors. This could lead to a no-trade region for these intermediate quality bonds. An additional rating that gives a clear signal about whether research will yield relevant information (or whether relatively uninformed investors may be at a disadvantage) could mitigate the existence of such a no-trade region. The generally optimistic Fitch ratings may also be requested as a precautionary measure. For example, issues rated by Fitch are more likely to have subsequent Moody's and S&P rating transitions, suggesting that these issues have relatively less stable ratings.

In the long run, a necessary condition for any CRA's HY–IG classification to be viewed as credible is to use and produce value-relevant information about the firm. A rubber-stamp rating without research will not serve a certification role in the long run. If the regulatory emphasis on credit ratings is reduced or if regulations with respect to which rating should be used are tightened in the aftermath of the recent credit crisis, the certification effect documented in this paper may become less pronounced. In particular, our results suggest that fewer firms may opt for multiple ratings, unless the marginal CRA can convince the market that its ratings are useful not just for meeting regulatory requirements but also provide additional information about credit risk (and particularly about separating information-sensitive from information-insensitive securities). Indeed, less regulatory emphasis on ratings may spur increased competition among CRAs to improve their information production, especially around the HY–IG boundary.

The remainder of the paper is organized as follows. Section I contains the motivation for our empirical tests and develops our various hypotheses in light of the existing literature. In Section II, we discuss sample construction and methodology. Section III presents the empirical results on the three hypotheses. Section IV concludes.

I. Motivation

A. Credit Rating Agencies and Regulation

There are currently three major CRAs in the U.S. market: S&P, Moody's, and Fitch. In addition to these big three, seven smaller CRAs issue credit ratings that qualify for meeting regulatory standards. While the purpose of a credit rating is to reflect the creditworthiness of an issue or issuer, the rating agencies have some discretion in the philosophy underlying their rating system and are not required to make their rating methodology public.5

CRAs are licensed as Nationally Recognized Statistical Rating Organizations (NRSRO) by the Securities and Exchange Commission (SEC). This official designation has a number of effects. First, CRAs are exempt from Regulation FD, allowing corporations to share value-relevant information with the rating agency without disclosing it publicly. Second, the SEC designation allows credit ratings to be used to meet regulatory requirements that call for a minimum or an average rating value. For example, the SEC requires that money market mutual funds hold instruments with a credit rating in one of the two highest short-term credit rating categories.6 This effectively provides a “safe harbor” for money market mutual funds with respect to litigation over fund failures. Kisgen (2006) discusses the strong link between short- and long-term debt ratings and access to the commercial paper market. He concludes that, in order to have access to the commercial paper market, a rating of BBB is typically required.

U.S. insurance companies explicitly rely on NRSRO ratings in determining risk-based capital. In particular, bonds held by insurance companies are assigned capital charges based upon their credit ratings. For example, a U.S. life insurance company needs to hold over three times as much reserve capital for a BB-rated bond compared to a BBB-rated bond. At the time of writing, European insurance companies will soon be subject to comparable regulations with the implementation of “Solvency II.” Banking regulations enacted under the so-called Basel II Accords impose very similar risk-based capital requirements.7 Many pension funds and mutual funds are restricted from investing in HY corporate bonds by their charter. Although there is much discussion about treating bank and insurance assets in the context of their total portfolio, taking into account covariance rather than security-specific risk, as of mid-2010 a large portion of U.S. institutional portfolios is still subject to rules and regulations tied to ratings by a relatively small number of NRSROs. The impact of such rules and regulations on the functioning of the corporate bond market, and in particular in determining supply and demand, is almost certainly nontrivial since the vast majority of this market is dominated by institutions that are subject to rating-related restrictions, either through explicit rules and regulations or through restrictions stated in their charters.8
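The risk-based capital mechanics amount to a simple rating-indexed lookup. The sketch below uses illustrative charge factors, not the actual NAIC schedule, though they preserve the more-than-threefold BB-versus-BBB gap mentioned above:

```python
# Stylized rating-based capital charges (illustrative factors only; the
# actual NAIC schedule differs, but the IG-HY gap is of this order).
CHARGE = {"AAA": 0.004, "A": 0.004, "BBB": 0.013, "BB": 0.046, "B": 0.10}

def required_capital(face_value, rating):
    """Reserve capital an insurer must hold against a bond position."""
    return face_value * CHARGE[rating]

# Crossing the IG-HY boundary more than triples the reserve requirement:
bbb = required_capital(10_000_000, "BBB")  # about $130,000
bb = required_capital(10_000_000, "BB")    # about $460,000
assert bb / bbb > 3
```

A one-notch downgrade from BBB− to BB+ thus forces the holder to tie up several times as much capital against the same position, which is the demand discontinuity the certification hypothesis relies on.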

In June 2008, the SEC proposed eliminating language in many regulations pertaining to NRSROs, and instead allowing an alternative decision-making function, perhaps recognizing that reliance on credit rating agencies had the potential to distort the information-gathering and investment decision-making process. In addition, several other regulations were implemented to try to make rating agencies more accountable and to increase the transparency of the rating process. The motivation for these (proposed) changes stemmed from the subprime mortgage crisis that began in 2007, and from concerns that the top three CRAs may represent an oligopoly enabled by government regulation. Among other things, critics argue that this oligopoly might not be the optimal mechanism for revealing information related to the risk of fixed income securities, and instead might be used as an artificial safe-harbor to excuse investment managers from exercising business judgment. As such, it could allow CRAs to extract rents from corporations by virtue of serving as “gate-keepers” to the IG rating, especially as the CRAs are paid by the corporations whose bonds are rated. Moreover, competition among CRAs could lead to a so-called “race to the bottom,” with CRAs decreasing standards to attract more customers. This concern has been raised particularly about the structured finance market in the context of the (subprime) mortgage crisis.

While all of the aforementioned issues are of a regulatory nature, the wider financial industry has also grown increasingly dependent on CRAs. Financial institutions center self-regulation around credit ratings; for example, some mutual funds state in their charter that they can only invest in IG quality fixed income securities. Further, trading and internal risk management models often take credit ratings as primary inputs or use them for calibration. Many corporate credit arrangements, like collateral requirements and haircuts, are also driven by credit ratings. Moreover, ratings are an important factor in determining whether a bond qualifies for inclusion in prominent corporate bond indices like the Barclays Capital (formerly Lehman Brothers) U.S. Corporate IG Index.9 Inclusion in such an index may greatly improve the liquidity of an issue, since, for example, index tracking institutions will trade in them more. Several papers show that higher liquidity leads in turn to lower credit spreads (see, e.g., Chen, Lesmond, and Wei (2007)). Typically, these procedures incorporate all (multiple) rating information available, extending possible certification effects well beyond those resulting from financial regulation.

B. Why Multiple Ratings Matter

In this subsection, we consider three different explanations for why firms would solicit and pay for multiple ratings. We base these hypotheses on empirical evidence provided by previous literature, as summarized in the next subsection. Specifically, the three hypotheses we consider are (i) the “information production” hypothesis, (ii) the “rating shopping” hypothesis, and (iii) the “regulatory certification” (or clientele, or regulation) hypothesis. Below, we give a short description of each and discuss its testable empirical predictions. As these hypotheses are generally not mutually exclusive, we discuss how they are related as well as differences in empirical predictions that allow us to distinguish which hypothesis may dominate empirically. These empirical predictions are summarized in Table I.

Table I. Empirical Predictions
The various empirical predictions of the three hypotheses we consider are summarized in the table below, where “−” indicates that the implication is not supported and “+” means that it is supported.

                                                     Information   Rating      Regulatory
Reason for Multiple Ratings                          Production    Shopping    Certification
(i)   Additional agreeing rating lowers
      credit spreads                                     +            −             −
(ii)  Additional relatively optimistic rating
      lowers credit spreads (also away from
      HY–IG boundary)                                    +            +        Only at HY–IG boundary
(iii) Uncertainty uniformly increases # of
      ratings                                            +            +        possible
(iv)  Additional rating more likely if that
      could push the issue into IG
      classification                                  possible     possible         +
(v)   Additional rating more optimistic
      (especially around HY–IG boundary)                 −            +        Only for strategic CRA
(vi)  Additional rating associated with higher
      expected time variation in ratings                 +            −             +

First, a firm might apply for multiple ratings due to a need for increased information production. More ratings can reduce uncertainty about the credit quality of the rated bonds. In a setting in which each CRA relies on different kinds of information to rate bonds, multiple perspectives reduce uncertainty about default probability. CRAs may specialize in evaluating particular drivers of default and thus each may have some advantage that justifies its continued existence in the marketplace. Thus, one would expect issuers with greater ex ante uncertainty to be more likely to apply for extra ratings, since the potential reduction of uncertainty is largest for these issuers. Moreover, under the information production hypothesis, an extra rating in agreement with existing ratings would reduce credit quality uncertainty and thereby lower credit spreads.

Second, the rating shopping effect can arise in a setting in which CRAs do not perfectly agree or there is considerable uncertainty about credit quality, while issuers may have better information about their own credit quality. In this case, issuers may seek to maximize their average rating by soliciting multiple bids or following a stopping rule that chooses the first rating agency whose rating equals or exceeds the firm's own assessment of quality. Applying for private ratings and making these public only if favorable, or deciding which CRA to use based on advice from an investment bank that has knowledge about each CRA's precise rating algorithms (gaming), would lead to similar patterns. The rating shopping hypothesis thus predicts that issuers will apply for an additional rating only if they think it will be an improvement. Therefore, additional ratings are better on average. Further, if the issuer applies for an additional rating and this additional rating is an improvement, credit spreads should go down. This can be because the additional rating is actually closer to the firm's true credit quality or because it is not, but the market mistakenly takes the new rating at face value. In the latter case, if the market is not fooled, there would be no incentive to engage in rating shopping.

The third explanation for multiple ratings is regulatory certification. Financial regulation has traditionally relied heavily on credit ratings to determine the suitability and riskiness of fixed income investments. For instance, bond ratings are used to score the quality of bonds in the investment portfolios of insurance companies and banks, with regulatory capital reserve requirements determined by this score. Ratings are also important in the structured finance market, the commercial paper market, and the overnight repo market. They are further used to determine “haircuts” at the discount window of the central bank and to determine whether projects qualify for government assistance (see, e.g., the Basel Committee on Banking Supervision (2000)). They may also be the basis for financial contracting between private parties, as the world witnessed in the case of AIG's rating downgrade that triggered a need for increased collateral in its counterparty arrangements. This event underscores the enormous potential impact of certification.

The most prominent distinction made in financial regulation as it pertains to credit ratings is whether an issue, issuer, or structured product is rated IG or HY. In particular, the most prevalent institutional rule used in classifying rated bonds stipulates that, if an issue has two ratings, only the worse rating is used to classify the issue into IG or HY. However, if an issue has three ratings, the middle rating is used (see, e.g., the NAIC guidelines or the Basel II Accord).10,11 Therefore, if S&P and Moody's ratings are on opposite sides of the HY–IG boundary, the Fitch rating (assuming it is the third rater) will decide into which class the issue falls. This classification creates strong incentives for issuers trying to achieve an IG rating. Thus, the HY–IG boundary is associated with a clear discontinuity in institutional demand. Assuming a downward sloping demand curve, the lower demand for HY bonds significantly increases the cost of borrowing for those issuers (see Ellul et al. (2010) and Kisgen and Strahan (2009)).12

Under the regulatory certification hypothesis, the principal value of a CRA that systematically gives better ratings (i.e., in our data, Fitch) than the other CRAs (i.e., Moody's and S&P) is simply that it helps satisfy the bright-line requirements of financial regulation. A rating from this CRA could be requested by issuers for which the extra rating might make them qualify for an IG classification. In addition, issuers that consider themselves likely to experience a future downgrade from IG to HY could seek an extra rating due to precautionary motives. This could lead to adverse selection effects, as relatively weaker firms with higher credit spreads would then be more likely to apply for a Fitch rating. Therefore, under the regulatory certification hypothesis, split ratings at the HY–IG boundary by Moody's and S&P should give an issuer significant incentives to get an additional rating from Fitch. Moreover, an additional rating may provide a hedge against the regulatory and rule-based effects of possible future rating downgrades, while also increasing the probability of reaping regulatory benefits from upgrades. This effect should be more pronounced for issuers expecting to have more volatile ratings over time.

Gorton and Pennacchi (1990) and Boot and Thakor (1993) show that the information and regulatory certification hypotheses can be inherently related in a setting with two types of investors, in which issues with a lower credit quality carry more uncertainty. Type I investors have a time-varying natural demand for bonds and high research costs, and type II investors are without the natural demand but have low research costs.13 Since type I investors are at an informational disadvantage relative to type II investors, they will only invest in high credit quality securities for which the informational gain of type II investors is small, that is, in informationally insensitive assets, to avoid losses due to trading with informed investors (see Gorton and Pennacchi (1990)). Typically, type II investors provide liquidity to this market to accommodate aggregate demand shocks. On the other end of the credit quality spectrum, it is worthwhile for type II investors to generate the information needed.14 The region in the middle could suffer from a market breakdown if type II investors only make money if they profit from informed trading with type I investors (as in Boot and Thakor (1993)).15

The importance of regulatory certification could be in preventing a market breakdown for intermediate quality bonds. In this setting, credit ratings can restore trading by reducing the uncertainty about the value of information. Ratings will yield information not only about credit quality, but also about the profitability of research. If the conclusion is “no substantial information benefit,” then type I investors would invest and type II would not bother to research. If the conclusion is “significant information benefit,” then type I investors would not invest and type II investors would invest to hold the security. The HY–IG boundary is the prime candidate for the location on the credit quality spectrum where the unconditionally expected gains from informational trading offset the costs for acquiring information. This setting thus explains how a certification effect could arise in equilibrium.

The regulatory certification and rating shopping hypotheses also may have similar features. In particular, while a rating shopping effect could be observed across the entire rating scale, rating shopping incentives are likely strongest around the HY–IG boundary. Therefore, the distinction between these two hypotheses merits discussion. The central prediction of rating shopping is that additional ratings are, on average, optimistic relative to existing ratings. Thus, if rating shopping is most important around the HY–IG boundary, the positive bias of the marginal rating should also be largest at this boundary. In contrast, certification would give no reason to expect the additional CRA to be more positive at this boundary as compared to other parts of the rating scale. Specifically, certification predicts that, if Moody's and S&P ratings are on opposite sides of the HY–IG boundary, the issuer is significantly more likely to pay for the (assumed marginal) Fitch rating. However, in contrast to the rating shopping hypothesis, regulatory certification does not imply that this Fitch rating would be relatively more positive (compared to Moody's and S&P ratings) than Fitch ratings at other parts of the rating scale.

Second, the expectation of future rating changes decreases incentives for rating shopping but increases the importance of regulatory certification. Rating shopping is more worthwhile if investors expect that ratings will remain relatively stable, as, in that case, credit rating improvements are less likely to be undone or become redundant. Under the regulatory certification hypothesis, (future expected) rating volatility creates a strong precautionary motive, motivating issuers to get an additional rating to hedge against a possible future downgrade below IG.16 For this reason, additional ratings may be associated with adverse selection, as issuers expecting more negative credit news may be more likely to apply for such precautionary, additional ratings.

Each of the three explanations of multiple ratings (information production, rating shopping, and regulatory certification) thus has distinct empirical predictions, though different explanations can coexist. In particular, there are potential differences in whether we would expect (i) credit spread effects of agreeing ratings, (ii) credit spread effects of relatively optimistic ratings across the rating spectrum, (iii) more uncertainty leading to an increase in the number of ratings, (iv) extra ratings to be more likely when these could push an issue into the IG category, (v) greater optimism of the additional rating around the HY–IG boundary, and (vi) an association between additional ratings and the likelihood of future rating changes (see Table I for a summary).

Under information production, an additional rating that is in agreement with the prior ratings will reduce uncertainty and thereby lower credit spreads, while more uncertainty will make an additional rating more worthwhile and therefore lead to more ratings.

Under rating shopping, more uncertainty will again lead to more ratings since initial ratings will err more often. Additional ratings are likely to be better, and better ratings should lower the credit spread.17 However, time variation in ratings makes shopping less worthwhile since the preferred outcome will be less stable.

Under regulatory certification, a better extra rating will only lead to a lower spread at the boundary but unconditionally an additional rating could reflect adverse selection (only weaker issuers take an extra rating) and consequently lead to higher credit spreads. Higher time variation in ratings will give rise to a rating-hedging incentive and hence increase the probability of having an extra rating even for issues that are not at the boundary.

C. Related Research

As asset pricing relies fundamentally on the production and dissemination of information, and this process is endogenously determined, the related literature is vast. CRAs are only one type of research and information provider to the securities markets. Much of the academic literature on the role of research and information providers has focused on equity analysts rather than CRAs rating corporate debt. Studies on the equity markets address a broad range of questions about research providers, ranging from whether analysts' opinions convey value-relevant information to whether conflicts of interest and personal, strategic considerations influence the nature of the information provided. CRAs present a different institutional structure for analysis. While the same basic principles regarding information production apply, CRAs have become integral to regulation pertaining to the credit market (see also the discussion above).

Research on the role of CRAs is more limited. Theoretical work has asked what role CRAs play in the equilibrium pricing process. Boot, Milbourn, and Schmeits (2006) highlight CRAs as a valuable coordination device whereby CRAs provide little value-relevant information at the HY–IG boundary other than regulatory certification but provide useful valuation information about riskier issues. Carlson and Hale (2005) point out that, when each investor's optimal strategy is dependent on the strategy followed by other investors, the public rating provided by the rating agency can serve to coordinate investor actions. Bannier and Tyrell (2006) introduce reputation and competition among rating agencies. Under certain conditions, these features will stimulate investors to search for private information and thus will not only restore a unique equilibrium, but could even lead to a more efficient one.

Each of the three potential explanations for multiple ratings finds support in existing academic literature. On the subject of information production, a number of papers look at the effects of rating changes on asset prices. For example, Kliger and Sarig (2000) use a refinement in the Moody's ratings system to show that rating changes channel information to the market that changes the value of debt. However, their results also suggest that this information leaves the company's value intact and thus only influences the value of debt relative to the value of equity. Güntay and Hackbarth (2010) investigate the effect of analyst dispersion on credit spreads. They find that higher analyst dispersion is associated with higher credit spreads and conclude that this is probably due to cash flow uncertainty.

Jewell and Livingston (1999) investigate whether ratings differ systematically across rating agencies. They find that the average Fitch rating is much more positive than Moody's and S&P ratings, but that this effect disappears once they restrict their sample to bonds rated by all three CRAs. They also investigate whether rating shopping takes place, but find no supportive evidence. Covitz and Harrison (2003) look at the trade-off that rating agencies face between income resulting from giving out favorable ratings and expected future fees from customers resulting from reputation. They argue that reputation concerns dominate and prevent CRAs from being “bribed” by customers. Bannier, Behr, and Güttler (2010), like Poon (2003) and Poon and Firth (2005), investigate possible adverse selection and holdup in the context of CRA and issuer incentives when CRAs issue ratings on an unsolicited basis.18

Inspired by the financial crisis and the critiques aimed at CRAs, several recent theoretical papers put forward models to motivate rating shopping. Skreta and Veldkamp (2009) develop a model in which incentives for rating shopping increase as product complexity increases. Bolton, Freixas, and Shapiro (2009) show that naive investors in the market may give CRAs incentives to inflate their ratings and that, in a duopoly, this gives extra incentives for rating shopping, which in turn aggravates the problem. Sangiorgi et al. (2009) develop a theoretical model of rating shopping and explore biases in ratings conditional upon heterogeneity across issuers in the extent to which different raters agree.

In research most closely related to this paper, Cantor and Packer (1997) also look for evidence of the information effect, the shopping effect, and the certification effect. They use issuer-level ratings data for the year 1994 to understand the motivation for using a third rating agency, but do not use bond price and yield data to evaluate the market effects and price implications of the third rating. Like our paper, they find that the third CRA rating is systematically more optimistic. However, they fail to find evidence that the use of a third CRA is motivated by information, rating shopping, or certification effects. Since the time of their study, bond price data have become more widely available for research. This allows us to conduct more powerful tests of the market response to an additional rating, and to understand in greater detail how market participants interpret and use ratings.

Another closely related paper is Becker and Milbourn (2009), who consider the impact of the major growth in market share of Fitch since 1989. They find that more “competition leads to lower quality in the ratings market: the incumbent agencies produce more issuer-friendly and less informative ratings when competition is stronger.” They explain this result by applying the reputation model of Klein and Leffler (1981), who consider CRA incentives to invest in information production in order to improve their reputation. First, such incentives would be weaker if future rents are lower as a result of increased competition. Second, if demand is more elastic with greater competition, this may force CRAs to spend less on expensive information production or tempt them to be more responsive to issuer demands, potentially inducing rating shopping.

Brister et al. (1994) find evidence of a “superpremium” in yields of junk bonds due to regulation around the HY–IG boundary. Using only S&P rating data, they find that yields increase disproportionately from a BBB to a BB rating relative to the increase in default risk. Moreover, in a recent paper, Kisgen and Strahan (2009) find that credit spreads change in the direction of a Dominion bond rating after the accreditation of Dominion as an NRSRO. They also find that this effect is much stronger around the HY–IG boundary, indicating the importance of regulatory certification. Finally, Kisgen (2006, 2009) investigates whether discrete rating boundaries influence capital structure decisions before and after rating changes. Kisgen (2006) finds evidence of reduced debt issuance when ratings are close to an up- or downgrade, suggesting that credit ratings directly affect capital structure decisions in a way not captured by traditional capital structure theories. Moreover, this effect is especially pronounced around the HY–IG boundary. Kisgen (2009) finds that managers lower leverage after a rating downgrade, suggesting that managers target credit ratings rather than debt levels or leverage ratios. This effect is again more pronounced around the HY–IG boundary.

With respect to the nature of the certification effect that we find, our research relates to earlier work on security design. Gorton and Pennacchi (1990) consider a model in which uninformed investors are incentivized to transform risky assets into information-sensitive and information-insensitive parts, where for the latter category they can avoid losses due to trading with informed investors. Boot and Thakor (1993), on the other hand, develop a model in which security issuers lower funding costs by making informed trading more profitable. Our setup motivating the exploration of the regulatory certification hypothesis uses key insights of both papers. In particular, the nontrading region in our setup is a result of the absence of the uninformed investor, whereas the uninformed investor is needed to make trading profitable for the informed investor.

II. Sample Construction and Methodology

A. Main Measures and Controls

We measure uncertainty or opaqueness by analyst dispersion of the firm's earnings per share (EPS) or by the dispersion between Moody's and S&P ratings (like the HY–IG barrier dummies, we also require stability of the difference over at least one quarter).19 While rating dispersion is also a measure of regulatory relevance, analyst dispersion is not, which gives us the required identification. We consider two measures of rating dispersion: Notches of MSP Rating Dispersion, which is the absolute value of the rating difference between Moody's and S&P, and S&P and Moody's Disagree, which is a dummy equal to one if their ratings are not the same.
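Both rating-dispersion measures follow mechanically from the numerical rating scale described below (1 = AAA/Aaa); a minimal sketch, with function and variable names of our own choosing:

```python
def msp_dispersion(moodys: int, sp: int) -> tuple:
    """Return (Notches of MSP Rating Dispersion, S&P and Moody's Disagree dummy).

    Ratings are on the numerical scale where 1 = AAA/Aaa, 10 = BBB-/Baa3, etc.
    """
    notches = abs(moodys - sp)      # absolute rating difference in notches
    disagree = int(moodys != sp)    # 1 if the two ratings are not the same
    return notches, disagree

# Example: Moody's Baa3 (10) vs. S&P BBB+ (8) -> 2 notches apart, disagree = 1
assert msp_dispersion(10, 8) == (2, 1)
assert msp_dispersion(9, 9) == (0, 0)
```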

Our main measure of the importance of regulations pertaining to the HY–IG boundary is denoted by Fitch Could Push IG, which is a dummy variable equal to one if Moody's and S&P ratings are on opposite sides of this boundary such that a Fitch rating would be decisive for the HY–IG classification. In some regressions, this measure is interacted with the outcome from Fitch.
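On the same numerical scale, BBB−/Baa3 (score 10) is the last investment-grade notch, so the dummy can be sketched as follows (the cutoff constant and function names are ours):

```python
IG_CUTOFF = 10  # BBB-/Baa3 on the 1-16 numerical scale; 11 (BB+/Ba1) and worse is HY

def is_ig(rating: int) -> bool:
    """True if a numerical rating is investment grade."""
    return rating <= IG_CUTOFF

def fitch_could_push_ig(moodys: int, sp: int) -> int:
    """1 if Moody's and S&P straddle the HY-IG boundary, so Fitch breaks the tie."""
    return int(is_ig(moodys) != is_ig(sp))

assert fitch_could_push_ig(10, 11) == 1   # BBB- vs. BB+: Fitch is decisive
assert fitch_could_push_ig(9, 10) == 0    # both IG: no tie to break
```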

To avoid spurious results due to omitted variables in our regressions, we correct for several issue and issuer characteristics as well as for business cycle effects. At the issue level, we correct for callability (using a dummy), size (offering size), and term structure effects (duration and convexity). At the issuer level, we correct for credit risk (using the inputs of the Merton (1974) model, namely leverage and volatility), profitability (ROA), systematic risk (equity beta), and tangibility of assets (PPE/total book assets). Tangibility of assets is important since Moody's is the only CRA that incorporates expected recovery in their ratings. We also include R&D intensity (R&D expenditure over book assets) as an additional control. R&D intensity can be associated with several pricing mechanisms in the corporate bond markets. For example, high R&D industries may have higher growth opportunities and therefore lower credit spreads. On the other hand, R&D projects tend to be riskier than normal projects, which may increase credit risk. We control for the aggregate effect. In the credit spread changes regressions, we also include time fixed effects as controls for business cycle effects, since market-wide default probabilities, liquidity, and risk premia are likely to vary with the business cycle.
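For intuition on the Merton (1974) inputs, a toy distance-to-default calculation shows how leverage and volatility map into default risk. This is purely illustrative: the paper uses the inputs (leverage and volatility) as controls, not the model-implied probability, and all parameter values below are hypothetical.

```python
from math import log, sqrt, erf

def merton_default_prob(assets, debt, mu, sigma, horizon=1.0):
    """Probability that asset value falls below the debt face value at the horizon.

    Standard Merton (1974) setup with lognormal assets: default probability
    is N(-d2), computed here with the error function.
    """
    d2 = (log(assets / debt) + (mu - 0.5 * sigma**2) * horizon) / (sigma * sqrt(horizon))
    return 0.5 * (1.0 - erf(d2 / sqrt(2.0)))  # standard normal CDF of -d2

low = merton_default_prob(assets=100, debt=40, mu=0.05, sigma=0.25)
high = merton_default_prob(assets=100, debt=80, mu=0.05, sigma=0.25)
assert 0.0 < low < high < 1.0  # higher leverage -> higher default probability
```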

B. Data and Filters

For our main analysis, we use corporate bond pricing data from the TRACE database and merge them with bond characteristic and ratings data from the Mergent Fixed Income Securities Database (FISD), equity data from CRSP, financial data from Compustat Industrial Quarterly, and analyst data from I/B/E/S. Our time series ranges from July 1, 2002 to December 31, 2008.20 The TRACE data contain all trades in TRACE-eligible bonds by members of the National Association of Securities Dealers (NASD) that were disseminated to the public. Dissemination to the public happened in phases, resulting in an expanding universe of bonds. A more comprehensive description of the TRACE database as well as the dissemination process is given in Downing, Underwood, and Xing (2005).

We apply several filters to our data set to remove bonds with special features that we do not want to consider and to remove seemingly erroneous entries.21 Next, we use the FISD characteristics to match the trades to bond characteristics using CUSIPs.22 We only use senior unsecured notes and bonds. We discard all bonds that are exchangeable, putable, convertible, pay-in-kind; that have a nonfixed coupon; that are subordinated, secured, or guaranteed; or that are zero coupon bonds. Removing callable bonds would reduce our sample substantially, so we leave those in, but we correct for this feature in our regressions using a dummy variable.

To decrease the impact of remaining data errors, we average the prices of all trades in each bond by trading day. To reduce the effect of overrepresentation of very liquid bonds, we construct monthly observations by recording, for each bond, only the last available daily average credit spread of each month. We then construct quarterly observations by keeping only the last month of every quarter. To avoid issues with severe illiquidity and distressed debt, we remove all issues with an average rating (based on Moody's and S&P) worse than B− (B3). For all bond trades in our sample, we calculate yields and credit spreads. The benchmark rate used to construct credit spreads is based on an interpolation of the yields of the two on-the-run government bonds bracketing the corporate bond with respect to duration.
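The duration-bracketed interpolation of the benchmark rate can be sketched as follows (function names and the example numbers are hypothetical):

```python
def benchmark_yield(bond_duration, govt1, govt2):
    """Linearly interpolate the Treasury yield at the corporate bond's duration.

    govt1 and govt2 are (duration, yield) pairs for the two on-the-run
    government bonds bracketing the corporate bond by duration.
    """
    (d1, y1), (d2, y2) = sorted([govt1, govt2])
    w = (bond_duration - d1) / (d2 - d1)
    return y1 + w * (y2 - y1)

def credit_spread_bp(bond_yield, bond_duration, govt1, govt2):
    """Credit spread in basis points over the interpolated benchmark."""
    return (bond_yield - benchmark_yield(bond_duration, govt1, govt2)) * 1e4

# A 5.5-year corporate yielding 6.2%, bracketed by 5y at 4.0% and 7y at 4.4%
assert abs(benchmark_yield(5.5, (5.0, 0.040), (7.0, 0.044)) - 0.041) < 1e-12
assert abs(credit_spread_bp(0.062, 5.5, (5.0, 0.040), (7.0, 0.044)) - 210.0) < 1e-9
```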

Ratings data are obtained from FISD as provided by Mergent. The credit ratings data provider confirmed that, due to changes in their data collecting procedures, the ratings data before 2000 are incomplete. This is illustrated by Figure 1, which shows the number of rated bond issues each quarter by Moody's, S&P, and Fitch as well as the proportion of all bond issues in the sample rated by each of these CRAs in a given quarter from 1994 to 2008. While the number of rated bond issues is steadily increasing over time for all three CRAs, the sudden jump in the number of issues rated by S&P strongly suggests that too many bond issues before 2000 have missing S&P ratings (i.e., issues had S&P ratings, but these are missing from the database). Specifically, the percentage of all issues rated by S&P equals 58% at the end of 1999 and jumps to 94% in 2000, and remains above 85% until the end of the sample. Likewise, there is a significant, though smaller, jump in the percentage of bond issues rated by Fitch, from 29% at the end of 1999 to 39% in 2000. As a result, for the analyses that do not require pricing data, we use ratings data from the second quarter of 2000 onwards. For our credit spread regressions, the impact of these coverage patterns will be minor, as TRACE only starts in the middle of 2002 and is dominated by data from 2004 onwards (when the number of bond issues contained in TRACE is greatly expanded).23

Figure 1.

FISD database coverage by CRA. The figure plots the percentage of bonds in FISD covered by S&P, Moody's, and Fitch. While FISD starts earlier, we use ratings starting in 2000 because, as the figure suggests, S&P data appear to be incomplete prior to 2000.

We follow convention and use a numerical rating scale to convert ratings. Therefore, for Fitch and S&P ratings (with Moody's ratings in parentheses), the numerical scores corresponding to the rating notches are, respectively, 1 for AAA (Aaa), 2 for AA+ (Aa1), 3 for AA (Aa2), 4 for AA− (Aa3), 5 for A+ (A1), 6 for A (A2), 7 for A− (A3), 8 for BBB+ (Baa1), 9 for BBB (Baa2), 10 for BBB− (Baa3), 11 for BB+ (Ba1), 12 for BB (Ba2), 13 for BB− (Ba3), 14 for B+ (B1), 15 for B (B2), and 16 for B− (B3). However, we still refer to more optimistic ratings, that is, those implying lower bankruptcy likelihood, as “higher” or “better” ratings.
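The conversion can be expressed as a simple lookup table over the scale listed above:

```python
# Fitch/S&P notches in the order given in the text; Moody's equivalents in parallel.
FITCH_SP = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
            "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-", "B+", "B", "B-"]
MOODYS = ["Aaa", "Aa1", "Aa2", "Aa3", "A1", "A2", "A3",
          "Baa1", "Baa2", "Baa3", "Ba1", "Ba2", "Ba3", "B1", "B2", "B3"]

# Numerical score 1..16; a LOWER score is a BETTER (more optimistic) rating.
SCORE = {r: i + 1 for i, r in enumerate(FITCH_SP)}
SCORE.update({r: i + 1 for i, r in enumerate(MOODYS)})

assert SCORE["AAA"] == 1 and SCORE["Aaa"] == 1
assert SCORE["BBB-"] == 10 and SCORE["Baa3"] == 10   # last IG notch
assert SCORE["B-"] == 16
```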

Equity market data are obtained from CRSP. We calculate (rolling window) historical daily idiosyncratic volatility and betas to the CRSP value-weighted index based on half a year of historical trading data. An AR(1) filter is used to filter out bid-ask bounce in daily closing prices. For an observation to be included, we need at least 111 return observations in the previous half year.
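The rolling beta and idiosyncratic volatility estimates amount to a windowed market-model regression; a stripped-down sketch (omitting the AR(1) bid-ask-bounce filter and the 111-observation screen described above):

```python
from statistics import mean

def beta_and_idio_vol(stock, market):
    """Market beta and idiosyncratic volatility from a window of daily returns.

    Beta is the market-model slope (covariance over variance); idiosyncratic
    volatility is the standard deviation of the regression residuals.
    """
    mbar, sbar = mean(market), mean(stock)
    cov = mean([(m - mbar) * (s - sbar) for m, s in zip(market, stock)])
    var = mean([(m - mbar) ** 2 for m in market])
    beta = cov / var
    resid = [s - sbar - beta * (m - mbar) for m, s in zip(market, stock)]
    idio_vol = mean([e * e for e in resid]) ** 0.5
    return beta, idio_vol

# A stock that moves exactly 2x the market has beta 2 and no idiosyncratic risk
mkt = [0.01, -0.02, 0.015, 0.005, -0.01]
beta, ivol = beta_and_idio_vol([2 * r for r in mkt], mkt)
assert abs(beta - 2.0) < 1e-9 and ivol < 1e-9
```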

Company data are obtained from Compustat Quarterly. We download data on firm size (total book assets), debt (long- and short-term debt), profitability (earnings), tangibility of assets (PPE), R&D spending (obtained from Compustat Annual, since usually reported in the annual file only), and industry (SIC code). From these data, we construct variables for leverage (total debt over total book assets), tangibility of assets (PPE/total book assets), R&D spending (R&D expenses/total book assets and a dummy for missing values), and profitability (total earnings over total assets). We also construct an SIC division variable that is defined as the division that the two-digit SIC belongs to. Observations with SIC codes 9100 to 9999 (Public Administration) are excluded because of possible implicit government guarantees.
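The variable construction can be sketched as below; the input keys are hypothetical placeholders, not actual Compustat item names:

```python
def firm_controls(q):
    """Build firm-level controls from a quarterly Compustat-style record.

    `q` is a dict with illustrative keys; real Compustat mnemonics differ.
    """
    assets = q["total_assets"]
    return {
        "leverage": (q["long_term_debt"] + q["short_term_debt"]) / assets,
        "tangibility": q["ppe"] / assets,          # PPE / total book assets
        "roa": q["earnings"] / assets,             # profitability
        "rnd_intensity": (q.get("rnd") or 0.0) / assets,
        "rnd_missing": int(q.get("rnd") is None),  # dummy for missing R&D
    }

c = firm_controls({"total_assets": 200.0, "long_term_debt": 60.0,
                   "short_term_debt": 20.0, "ppe": 70.0, "earnings": 5.0,
                   "rnd": None})
assert c["leverage"] == 0.4 and c["tangibility"] == 0.35
assert c["rnd_missing"] == 1 and c["rnd_intensity"] == 0.0
```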

Analyst forecast data on annual EPS are obtained from I/B/E/S. We download summary data including number of analysts, standard deviation of forecasts, and minimum and maximum forecasts from the unadjusted file. Following Güntay and Hackbarth (2010), we divide forecast dispersion measured by analyst standard deviation by the share price to end up with dispersion per dollar invested.
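The normalization is a one-line transformation of the I/B/E/S summary statistics:

```python
def dispersion_per_dollar(forecast_std, share_price):
    """Analyst EPS forecast dispersion per dollar invested (std. dev. / price)."""
    return forecast_std / share_price

# A $0.50 forecast standard deviation on a $50 stock gives 0.01 per dollar
assert dispersion_per_dollar(0.5, 50.0) == 0.01
```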

We construct two samples with a quarterly frequency: a credit spread sample and a rating sample. Because the rating sample does not require trade observations, this is a more inclusive panel, especially for the less liquid bonds. Moreover, the period for which we have reliable data is also longer. Almost all bonds in our final sample are rated by both Moody's and S&P (see also Figure 1). Specifically, about 95% of all bonds in our database with at least two ratings are rated by both S&P and Moody's. This lack of cross-sectional variation in having an S&P or Moody's rating means that we can only study the implications for having Fitch as a third rating.

Accordingly, we remove from our sample all bond issues that do not have ratings from both S&P and Moody's. Using quarterly observations for 2000 to 2008, we find that about 68% of observations in the sample of bonds rated by both Moody's and S&P have a Fitch rating. As a result, the main focus of our paper is to consider the “marginal” role of Fitch ratings, while controlling for S&P and Moody's ratings. Table II presents summary statistics for the quarterly credit spread sample. For completeness, the Internet Appendix24 presents summary statistics for the quarterly ratings sample. Figure 2 presents the average credit spreads over our sample by rating category. There is substantial variation, especially starting in the second half of 2007.

Table II. Summary Statistics for Credit Spreads Sample
The table presents summary statistics and a brief description of the sample of bond issues that have both a Moody's and an S&P rating in the quarterly credit spreads sample for 2002 to 2008.
| Variable | N | Mean | Std. Dev. | Min | Max | Explanation |
| Fitch Could Break Tie | 44,366 | 0.032 | 0.18 | 0 | 1 | Moody's and S&P on opposite sides of the HY–IG boundary |
| Fitch Rated | 44,366 | 0.68 | 0.47 | 0 | 1 | Rated by Fitch |
| Fitch Rating | 44,366 | 5.04 | 4.17 | 0 | 16 | Fitch rating |
| Moody's Rating | 44,366 | 7.33 | 3.61 | 1 | 18 | Moody's rating |
| S&P Rating | 44,366 | 7.17 | 3.55 | 1 | 17 | S&P rating |
| Fitch Makes IG | 44,366 | 0.016 | 0.12 | 0 | 1 | Fitch pulls IG |
| Fitch Denies IG | 44,366 | 0.0066 | 0.081 | 0 | 1 | Fitch denies IG |
| MSP Rating Dispersion | 44,366 | 0.43 | 0.69 | 0 | 12 | Absolute value of MSP rating difference |
| Moody's Upgrade | 44,366 | 0.027 | 0.16 | 0 | 1 | Moody's upgrade (common sample) |
| Moody's Downgrade | 44,366 | 0.034 | 0.18 | 0 | 1 | Moody's downgrade (common sample) |
| S&P Upgrade | 44,366 | 0.029 | 0.17 | 0 | 1 | S&P upgrade (common sample) |
| S&P Downgrade | 44,366 | 0.038 | 0.19 | 0 | 1 | S&P downgrade (common sample) |
| Fitch Upgrade | 44,366 | 0.017 | 0.13 | 0 | 1 | Fitch upgrade (common sample) |
| Fitch Downgrade | 44,366 | 0.024 | 0.15 | 0 | 1 | Fitch downgrade (common sample) |
| Fitch Added, Better | 44,366 | 0.0039 | 0.062 | 0 | 1 | Fitch added and < MSP |
| Fitch Added, Equal | 44,366 | 0.005 | 0.071 | 0 | 1 | Fitch added and = MSP |
| Fitch Added, Worse | 44,366 | 0.0003 | 0.017 | 0 | 1 | Fitch added and > MSP |
| Credit Spread | 44,366 | 172.71 | 141.72 | 0.12 | 999.86 | Credit spread |
| Change in Credit Spread | 44,366 | 18.39 | 68.33 | −478.36 | 498.42 | Credit spread change |
| Log of Offering Amount | 44,366 | 12.03 | 1.88 | 0 | 15.42 | Log of offering amount |
| Idios. Vol. | 44,366 | 0.016 | 0.0084 | 0.0012 | 0.11 | Idiosyncratic stock volatility |
| Log of Total Assets | 44,366 | 10.34 | 1.55 | 5.34 | 13.65 | Log of total assets |
| PPE/Total Assets | 44,196 | 0.36 | 0.24 | 0 | 0.95 | PPE/Total assets |
| R&D/Total Assets | 44,366 | 0.012 | 0.023 | 0 | 0.23 | R&D/Total assets |
| R&D Missing | 44,366 | 0.43 | 0.5 | 0 | 1 | R&D missing |
| Analyst Dispersion | 43,923 | 0.0033 | 0.013 | 0 | 1.1 | Analyst dispersion |
| Beta | 44,366 | 0.95 | 0.39 | −0.25 | 4.21 | Equity beta |
| Turnover | 43,292 | 13.15 | 15.27 | 0.018 | 85.99 | Trading volume over offering amount, times 1K |
Figure 2.

Average credit spreads by rating category.

III. Empirical Results

A. Rating Differences and Rating Information

Consistent with Cantor and Packer (1997), we show that Fitch ratings are on average significantly more optimistic than both Moody's and S&P ratings for the same issue in the same quarter. We present the results in Table III and Figure 3. In general, S&P is also more optimistic than Moody's, but the difference is much smaller (both for the full sample and for the Fitch-rated sample alone).

Table III. Average Rating Differences
Average rating differences for issues simultaneously rated by multiple CRAs, measured in rating notches, and split up by rating categories. Rating categories are defined by average Moody's and S&P ratings. We follow convention and use the numerical rating scale to convert ratings. For Fitch and S&P (with Moody's rating in parentheses), the numerical scores corresponding to the rating notches are, respectively, 1 for AAA (Aaa), 2 for AA+ (Aa1), 3 for AA (Aa2), 4 for AA− (Aa3), 5 for A+ (A1), 6 for A (A2), 7 for A− (A3), 8 for BBB+ (Baa1), 9 for BBB (Baa2), 10 for BBB− (Baa3), 11 for BB+ (Ba1), 12 for BB (Ba2), 13 for BB− (Ba3), 14 for B+ (B1), 15 for B (B2), and 16 for B− (B3). Therefore, a negative number means that the first-mentioned rating agency gives on average a better rating than the other CRA in that comparison. Quarterly data for 2000 to 2008 are used. t-statistics based on robust standard errors clustered by issuer are in brackets. *, **, and *** indicate statistical significance at the 10%, 5%, and 1% level, respectively.
| | Fitch vs. Moody's | Fitch vs. S&P | Moody's vs. S&P (Fitch Rated Sample) | Moody's vs. S&P (Full Sample) |
| All Bonds | | | | |
| | [−5.57] | [−3.99] | [2.79] | [4.41] |
| N. Issuers | 450 | 449 | 452 | 818 |
| AAA to AA− | | | | |
| Difference | −0.111 | −0.118 | 0.0217 | 0.0415 |
| | [−1.53] | [−0.98] | [0.14] | [0.47] |
| N. Issuers | 37 | 37 | 37 | 57 |
| A+ to A− | | | | |
| Difference | −0.751*** | −0.599*** | 0.165* | 0.203*** |
| | [−5.85] | [−4.63] | [1.80] | [2.68] |
| N. Issuers | 232 | 232 | 233 | 475 |
| BBB+ to BBB− | | | | |
| Difference | −0.294*** | −0.223*** | 0.072 | 0.117** |
| | [−5.03] | [−3.82] | [1.11] | [2.42] |
| N. Issuers | 277 | 278 | 276 | 451 |
| BB+ to BB− | | | | |
| Difference | −0.492*** | −0.0574 | 0.478*** | 0.471*** |
| | [−4.76] | [−0.35] | [3.30] | [4.71] |
| N. Issuers | 165 | 163 | 165 | 295 |
| B+ to B− | | | | |
| Difference | −0.800*** | −0.457** | 0.369* | 0.430*** |
| | [−6.80] | [−2.46] | [1.82] | [3.20] |
| N. Issuers | 98 | 98 | 99 | 268 |
Figure 3.

Rating differences across CRAs. The figure plots rating differences for different pairs of CRAs. We use a numerical scale for ratings, where a lower rating score means a better rating (more optimistic or lower bankruptcy likelihood, see Table III for the full rating scale). In the figure, a negative number thus means that the first-mentioned rating agency gives on average a better rating than the other CRA in that comparison.

Next, we investigate the bond market reaction to the rating updates issued. We are particularly interested in the informational content of Fitch rating changes compared to the informational content of Moody's or S&P rating updates. To minimize issues related to selection, we limit ourselves in this test to the sample of issues that are rated by all three CRAs. Table IV presents the results of regressing end-of-quarter credit spread changes on dummy variables for these up- and downgrades for each CRA. All regressions on credit spread changes in the paper use standard errors clustered by issuer (unless stated otherwise) and include a large number of controls with time fixed effects.

Table IV. Change in Credit Spreads and Rating Changes
Using quarterly panel data between 2002 and 2008, we regress changes in credit spreads for bonds rated AAA to B− that are rated by all three CRAs on rating up- and downgrades for all three CRAs, changes in bond and firm characteristics, dummies for boundary effects and time fixed effects. Up- and downgrades are coded as dummies indicating whether each of the three CRAs upgraded or downgraded its rating. The following firm and bond controls are included but not shown (all in changes): leverage, liquidation/intrinsic value (PPE/total assets), R&D expenses (divided by total assets), ROA (return on assets), daily idiosyncratic equity volatility, historical equity beta (half-year daily corrected for bid-ask-bounces), log total assets (firm size, book value) and log offering size (issue size), redeemable (dummy for callability), duration and convexity. Fitch Upgrade, Breaks Tie is a dummy indicating that a Fitch upgrade made the issue qualify for IG, while Fitch Downgrade, Breaks Tie is a dummy indicating that a Fitch downgrade made the issue lose its IG qualification. Fitch Could Break Tie is a dummy indicating that the S&P and Moody's ratings are on opposite sides of the HY–IG boundary. Column (5) is restricted to issues rated A− or better by Moody's and S&P, whereas column (6) is restricted to issues rated BBB+ or worse by Moody's and S&P. t-statistics are in brackets (using robust standard errors clustered by issuer; N. issuer gives the number of issuers). *, **, and *** indicate significance at the 10%, 5%, and 1% level, respectively. F-test Fup=Fdown (p-value) gives the p-value for the coefficients on Fitch Upgrade and Fitch Downgrade being equal, while F-test Fup, tie=Fdown, tie (p-value) gives the p-value for the F-test of the coefficients on Fitch Upgraded, Breaks Tie, and Fitch Downgraded, Breaks Tie being equal.
| | (1) | (2) | (3) | (4) | (5) | (6) | (7) | (8) |
| Moody's Upgrade | −4.501** | | | | | | −2.369 | −3.125 |
| | [−1.97] | | | | | | [−1.11] | [−1.47] |
| Moody's Downgrade | 14.19*** | | | | | | 7.385* | 7.575** |
| | [4.00] | | | | | | [1.96] | [2.10] |
| S&P Upgrade | | −5.594** | | | | | −4.312 | −4.423 |
| | | [−2.07] | | | | | [−1.62] | [−1.59] |
| S&P Downgrade | | 22.34*** | | | | | 18.75*** | 17.32*** |
| | | [5.25] | | | | | [4.26] | [4.43] |
| Fitch Upgrade | | | −6.463** | −5.612* | 6.975 | −8.209*** | −4.807 | −3.892 |
| | | | [−2.10] | [−1.83] | [0.59] | [−2.70] | [−1.59] | [−1.30] |
| Fitch Downgrade | | | 12.98** | 10.52** | −0.595 | 22.30*** | 4.596 | 2.875 |
| Fitch Upgrade, Breaks Tie | | | | −17.59 | | −16.26 | | −14.94 |
| | | | | [−1.18] | | [−1.11] | | [−1.03] |
| Fitch Downgrade, Breaks Tie | | | | 31.53* | | 24.67 | | 26.65 |
| | | | | [1.76] | | [1.45] | | [1.49] |
| Fitch Could Break Tie | | | | 13.81** | | 9.475* | | 13.33** |
| | | | | [2.39] | | [1.75] | | [2.48] |
| Lagged Credit Spread | −0.253*** | −0.258*** | −0.251*** | −0.253*** | −0.400*** | −0.255*** | −0.259*** | −0.261*** |
| Adj. R2 | 0.531 | 0.533 | 0.531 | 0.533 | 0.45 | 0.605 | 0.534 | 0.536 |
| N. Issuers | 380 | 380 | 380 | 380 | 117 | 313 | 380 | 380 |
| F-test Fup=Fdown (p-value) | | | 0.15% | 0.80% | 56.30% | 0.00% | 15.00% | 30.80% |
| F-test Fup, tie=Fdown, tie (p-value) | | | | 2.82% | | 5.87% | | 6.07% |

The credit spread change regressions in columns 1 to 3 of Table IV indicate that all CRAs appear to be highly informative in single-CRA specifications. However, in the joint specification in column 7, only S&P and Moody's rating updates seem to contain relevant information associated with credit spread changes. For example, Moody's and S&P downgrades are related to credit spread increases of 8 and 17 basis points, respectively. However, Fitch rating updates are not statistically significantly associated with changes in credit spreads. In joint significance tests, we reject the hypothesis that the Fitch rating downgrade coefficients are equal to the S&P or Moody's rating downgrade coefficients, though not for the equivalent rating upgrade coefficients. When we restrict ourselves to the upper end of the rating spectrum (see column 5, where we only use issues with an average rating of A− or better), Fitch seems to contain no information even in the single-CRA specification. We cannot reject the hypothesis that the reactions to Fitch upgrades and downgrades are equal to each other in the presence of up- and downgrades from the other CRAs, while we can reject it for Moody's and S&P (see the Internet Appendix for these tests on Moody's and S&P).
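All of these specifications cluster standard errors by issuer. A one-regressor sketch of the cluster-robust (CR0) variance computation conveys the mechanics, assuming a simple OLS setting rather than the paper's full specification with controls and time fixed effects:

```python
from collections import defaultdict
from statistics import mean

def ols_clustered(y, x, cluster):
    """Simple OLS slope with cluster-robust (CR0) standard errors.

    Within-cluster score sums allow for arbitrary correlation of residuals
    inside each cluster (here: each issuer), as in the paper's regressions.
    """
    xbar, ybar = mean(x), mean(y)
    xd = [xi - xbar for xi in x]
    sxx = sum(v * v for v in xd)
    b = sum(v * (yi - ybar) for v, yi in zip(xd, y)) / sxx
    resid = [yi - ybar - b * v for v, yi in zip(xd, y)]
    scores = defaultdict(float)
    for v, u, g in zip(xd, resid, cluster):
        scores[g] += v * u                      # sum x*u within each cluster
    var_b = sum(s * s for s in scores.values()) / sxx**2
    return b, var_b**0.5

y = [1.0, 2.1, 2.9, 4.2, 5.0, 5.9]
x = [1, 2, 3, 4, 5, 6]
b, se = ols_clustered(y, x, cluster=["a", "a", "b", "b", "c", "c"])
assert 0.9 < b < 1.1 and se >= 0.0  # slope close to the true value of 1
```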

Table V. Credit Spread Regressions on Fitch Rating Additions
Using quarterly panel data between 2002 and 2008, we regress changes in credit spreads for AAA to B− rated bonds that are rated by Moody's and S&P on rating additions from Fitch, the relative ranking of those additions, whether additions happened at the HY–IG boundary, interactions with uncertainty measures, changes in bond and firm characteristics, and time fixed effects. Fitch Added, Better, Fitch Added, Equal, and Fitch Added, Worse are dummies indicating whether a Fitch rating has been added that is, respectively, better than, equal to, and worse than the average rating by Moody's and S&P. Fitch Added, Makes IG and Fitch Added, Denies IG are dummies that indicate whether the added Fitch rating makes the issue qualify for IG or not, conditional on Moody's and S&P ratings being on opposite sides of the boundary. See Table IV for descriptions of bond- and firm-level control variables. t-statistics are in brackets (using robust standard errors clustered by issuer in all columns except column 5, which uses double clustering by both issuer and time). *, **, and *** indicate significance at the 10%, 5%, and 1% level, respectively. F-test Fadded, IG = Fadded, HY (p-value) is the p-value of the F-test of Fitch Added, Makes IG and Fitch Added, Makes HY being equal. The sample comprises all issues rated by both Moody's and S&P with their average rating better or equal to B−, except in column 4, where their average rating is between BBB+ and BB−.
| | (1) | (2) | (3) | (4) | (5) | (6) | (7) |
| Fitch Added | −2.28 | | | | | | |
| Fitch Added, Better | | −5.832 | −5.254 | −8.644 | −5.254 | −7.243 | −6.197 |
| Fitch Added, Equal | | −0.205 | 0.0847 | 3.238 | 0.0847 | 2.14 | 1.598 |
| Fitch Added, Worse | | 4.422 | 5.089 | 6.347 | 5.089 | 3.063 | 5.383 |
| Fitch Added, Makes IG | | | −30.47** | −23.00** | −30.47*** | | |
| Fitch Added, Makes HY | | | 10.8 | 2.93 | 10.8 | | |
| At HY–IG Boundary | | | 15.32*** | 10.93** | 15.32** | | |
| Fitch Added, Equal × Analyst Dispersion | | | | | | −949.1 | |
| | | | | | | [−0.90] | |
| Analyst Dispersion | | | | | | 700.4*** | |
| Fitch Added, Equal × Rating Dispersion | | | | | | | −4.755 |
| | | | | | | | [−0.82] |
| S&P and Moody's Disagree | | | | | | | 2.417** |
| | | | | | | | [2.02] |
| Lagged Credit Spread | −0.257*** | −0.257*** | −0.258*** | −0.277*** | −0.258*** | −0.265*** | −0.258*** |
| Sample | All | All | All | BBB+ to BB− | All | All | All |
| Double Clustering | No | No | No | No | Yes | No | No |
| Adj. R2 | 0.531 | 0.531 | 0.532 | 0.626 | 0.532 | 0.535 | 0.531 |
| N. Issuers | 668 | 668 | 668 | 463 | 668 | 669 | 668 |
| F-test Fadded, IG = Fadded, HY (p-value) | | | 3.23% | 5.58% | 9.35% | | |

However, rating changes of Fitch at the HY–IG boundary do matter, that is, when Moody's and S&P ratings are on opposite sides of the HY–IG boundary and Fitch could be the tiebreaker and change the classification of the bond issue into IG versus HY. Economically, the credit spread change associated with Fitch changing the classification to IG rather than HY is about 49 basis points in the full sample (column 4, p-value of 2.82%), about 41 basis points in a sample of bonds rated BBB+ or worse (column 6, p-value of 5.87%), and again about 41 basis points in the full sample controlling for Moody's and S&P rating updates (column 8, p-value of 6.07%). These results are consistent with a regulatory certification effect and inconsistent with an information effect.

Table V presents regressions of price reactions to Fitch additions after the bond has been in our sample for at least one quarter without a Fitch rating but with both Moody's and S&P ratings. Here, the sample consists of all issues rated by both Moody's and S&P, and thus no longer conditions on also having a Fitch rating as in the sample used for Table IV. Table VI below considers selection directly by modeling the addition of a Fitch rating using Cox proportional hazard model regressions. If adverse selection effects were strong, one would expect the event of a Fitch addition by itself to be associated with a change in credit spreads. For example, if adverse selection leads only firms with poor prospects to request a (generally more optimistic) Fitch rating, we would expect a Fitch rating addition to be associated with an increase in the credit spread. However, column 1 in Table V indicates that a Fitch addition is not related to any change in credit spreads at all (coefficient of −2.28 basis points with a t-statistic of −0.65). This lack of an effect mitigates selection issues, although we do find a mild adverse selection effect in a robustness test in the Internet Appendix, where we restrict the sample to the precrisis period.
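The Cox proportional hazard estimation used in Table VI maximizes a partial likelihood over the risk sets at each event time. A single-covariate sketch using the Breslow form follows; the actual regressions include many covariates and clustered standard errors, and the data below are synthetic:

```python
from math import exp

def cox_fit(times, events, x, iters=30):
    """Fit a single-covariate Cox proportional hazard model by Newton's method.

    Maximizes the Breslow partial log-likelihood
    sum over events i of [beta*x_i - log(sum over risk set of exp(beta*x_j))].
    """
    beta = 0.0
    order = sorted(range(len(times)), key=lambda i: times[i])
    for _ in range(iters):
        grad = info = 0.0
        for i in order:
            if not events[i]:
                continue                              # censored: no event term
            risk = [j for j in order if times[j] >= times[i]]
            w = [exp(beta * x[j]) for j in risk]
            s0 = sum(w)
            s1 = sum(wj * x[j] for wj, j in zip(w, risk))
            s2 = sum(wj * x[j] ** 2 for wj, j in zip(w, risk))
            grad += x[i] - s1 / s0                    # score contribution
            info += s2 / s0 - (s1 / s0) ** 2          # information contribution
        step = grad / info
        beta += max(-1.0, min(1.0, step))             # damp Newton steps
        if abs(step) < 1e-10:
            break
    return beta

# Units with x = 1 tend to experience the event sooner -> positive loading
times = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x = [1, 1, 1, 0, 1, 0, 0, 1, 0, 0]
beta = cox_fit(times, [1] * 10, x)
assert beta > 0.0
```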

Table VI. Cox Regressions for Time to Adding Fitch Rating
Cox proportional hazard model regressions of the time to adding a Fitch rating on rating category dummies based on average Moody's and S&P (MSP) ratings, measures of uncertainty Analyst Dispersion (standard deviation of analyst earning forecasts normalized per dollar share value) and MSP Rating Dispersion (absolute value of the notches difference between Moody's and S&P), and whether the Fitch rating “could push” the issue to IG or A−. F Could Push IG and F Could Push A− are dummies indicating whether the Moody's and S&P ratings are on opposite sides of the IG or A− boundary, respectively. Other controls that are included but not shown are firm beta, leverage, PPE/assets, R&D expenses/assets, ROA, log of offering amount, time to maturity, time to maturity squared and redeemable (see Table IV for descriptions). Quarterly data for 2000 to 2008 are used. The sample consists of all issues with both Moody's and S&P ratings that on average are rated B− or better. Coefficients on the covariates in the partial hazard function are reported, and t-statistics are in brackets (using robust standard errors clustered by issuer). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% level, respectively. Pseudo R2 refers to McFadden (1973) pseudo R2.
                              (1)        (2)        (3)        (4)        (5)
MSP A+ to A− Rating           1.553***              1.548***   1.568***   1.524***
                             [3.04]                [3.06]     [3.08]     [3.06]
MSP BBB+ to BBB− Rating       1.146**               1.156**    1.119**    1.071**
                             [2.28]                [2.32]     [2.22]     [2.19]
MSP BB+ to BB− Rating         1.346**               1.481***   1.408***   1.292**
                             [2.53]                [2.79]     [2.60]     [2.49]
MSP B+ to B− Rating           1.131**               1.114**    1.159**    1.073**
                             [2.06]                [2.06]     [2.12]     [2.00]
Fitch Could Push IG           0.717***   0.693***                         0.702**
                             [2.83]     [2.83]                           [2.44]
Fitch Could Push A−           0.248      0.221                            0.231
                             [1.23]     [1.13]                           [1.00]
Avg MSP BB+                                                    0.292
Avg MSP BBB−                                                   0.235
MSP Rating Dispersion                                                    −0.128
S&P and Moody's Disagree     −0.307     −0.267     −0.254     −0.251
                             [−1.16]    [−0.90]    [−1.03]    [−1.02]
Analyst Dispersion           −4.988     −4.262     −6.275     −6.479     −5.979
                             [−0.58]    [−0.53]    [−0.69]    [−0.70]    [−0.64]
Idiosyncratic Volatility    −26.11***  −23.85***  −25.62***  −26.08***  −26.61***
                             [−3.95]    [−3.86]    [−3.86]    [−3.93]    [−3.88]
Log of Total Assets           0.296***   0.200**    0.290***   0.299***   0.289***
Pseudo R2                     0.029      0.019      0.029      0.029      0.028
N. Issuers                    813        813        813        813        813

Table V also fails to show any evidence in favor of an information production effect. When a Fitch rating is added that confirms the average Moody's and S&P rating, it does not lead to a significantly lower credit spread (see columns 2 to 7). The interaction of the added Fitch rating with measures of uncertainty also fails to show a significant effect (see the interaction with Analyst Dispersion in column 6 and the interaction with Notches of MSP Rating Dispersion in column 7). The negative (positive) sign of an added Fitch rating that is better (worse) than the average Moody's and S&P rating is consistent with rating shopping and information production, but is not statistically significant. Likewise, the coefficients on Fitch Added, Better and Fitch Added, Worse are not statistically different from each other.

However, columns 3 to 5 provide strong evidence in favor of the regulatory certification hypothesis. In the cases for which Moody's and S&P ratings are on opposite sides of the HY–IG boundary, an added Fitch rating that makes the issue qualify for an IG rating is associated with a substantial drop in credit spread. The difference between Fitch classifying such bond issues as IG rather than HY is associated with a difference of about 41 basis points (p-value of 3.23%) in the credit spread. This result is robust to using either the full sample or only those issues with average Moody's and S&P ratings between BBB+ and BB− (columns 3 and 4, respectively) as well as to double clustering credit spread changes in both issuer and time dimensions.25

B. Adding a Fitch Rating

This subsection considers the selection of Fitch as the third rater (all bond issues in our sample are restricted to be rated by both Moody's and S&P). In Table VI, we model the addition of a Fitch rating using Cox proportional hazard regressions, which include variables that may be related to each of the three hypotheses. In the Cox model, an “exit” is defined as the event of receiving a Fitch rating. The Cox model has the convenient property that one can focus on each subject's relative rank in the cross-section by ignoring the baseline hazard rate and maximizing only the partial likelihood function. Because the baseline hazard rate separates out (as in any proportional hazard model), it requires no specific parametric form that could influence our results.
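
To fix ideas, the partial-likelihood logic can be sketched in a few lines (a toy illustration with hypothetical data, not our estimation code; the covariate x stands in for any regressor, such as Fitch Could Push IG):

```python
import math

def cox_partial_loglik(beta, times, events, x):
    """Cox partial log-likelihood for a single covariate.

    At each event time t_i, the contribution is
    log( exp(beta*x_i) / sum_{j in risk set} exp(beta*x_j) );
    the baseline hazard h0(t) cancels from this ratio, so no
    parametric form for it is needed.
    """
    ll = 0.0
    for i in range(len(times)):
        if not events[i]:
            continue  # censored observation: no event-time contribution
        risk_set = [j for j in range(len(times)) if times[j] >= times[i]]
        denom = sum(math.exp(beta * x[j]) for j in risk_set)
        ll += beta * x[i] - math.log(denom)
    return ll

# Hypothetical data: subjects with x = 1 tend to exit (get a Fitch rating) earlier.
times  = [1.0, 2.0, 3.0, 4.0, 5.0]
events = [True, True, True, False, True]   # False = censored
x      = [1, 1, 0, 1, 0]

# A positive beta fits these data better than beta = 0.
print(cox_partial_loglik(0.5, times, events, x) > cox_partial_loglik(0.0, times, events, x))  # → True
```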

In our analysis, we employ several proxies for information uncertainty: (i) the absolute difference in the number of notches between Moody's and S&P ratings and a dummy equal to one if S&P and Moody's ratings are different, (ii) idiosyncratic volatility of daily stock returns, and (iii) equity analyst dispersion. We further include variables related to the relative importance of ratings, such as leverage, firm size, and issue offering size. A positive coefficient on the variables related to information uncertainty could be interpreted as evidence for information or rating shopping effects.

We investigate the certification effect by including the Fitch Could Push IG dummy, which equals one if the Moody's and S&P ratings are on opposite sides of the HY–IG boundary. This approach exploits the fact that regulations typically prescribe that the median rating be used to determine an issue's regulatory rating if it has three ratings, and the worst rating if it has two. Therefore, if the Moody's and S&P ratings are on opposite sides of the HY–IG boundary, an additional Fitch rating is decisive for whether the issue becomes IG. As a robustness check, we also include a dummy variable indicating whether the Moody's and S&P ratings are on opposite sides of the A− boundary. The A− boundary does not have the same regulatory importance as the HY–IG boundary, and thus its coefficient is expected to be insignificant.
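
The “worst of two, median of three” rule can be written out explicitly (an illustrative sketch using the notch convention 1 = AAA, 2 = AA+, …, so that BBB− = 10 and BB+ = 11; not an official NAIC implementation):

```python
def regulatory_rating(ratings):
    """Applicable rating under the 'worst of two, median of three' rule
    (ratings in notches, lower = better, e.g. 1 = AAA). A sketch of the
    NAIC-style rule described in the text, not an official implementation."""
    s = sorted(ratings)
    if len(s) == 2:
        return s[-1]   # worst of two
    if len(s) == 3:
        return s[1]    # median of three
    raise ValueError("expected two or three ratings")

# Moody's and S&P split at the boundary (BBB− = 10 is IG, BB+ = 11 is HY):
# with two ratings the issue is classified HY...
print(regulatory_rating([10, 11]))      # → 11 (worst of two: HY)
# ...but a third (Fitch) rating at BBB− breaks the tie in favor of IG.
print(regulatory_rating([10, 11, 10]))  # → 10 (median of three: IG)
```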

Finally, we add several other issue- and firm-level controls, such as rating group dummies based on the average Moody's and S&P rating, whether the issue is redeemable, liquidation values (using proxies for fixed assets and R&D expenses), time to maturity, the square of time to maturity, and industry dummies. Standard errors are again clustered by issuer. The sample consists of all issues that are rated by both Moody's and S&P over 2000 to 2008.

Empirically, we find that the coefficients on variables related to uncertainty (i.e., analyst dispersion, idiosyncratic equity volatility, and a dummy indicating Moody's and S&P rating differences as well as a variable measuring the size of the dispersion in notches) are insignificant or have the wrong (i.e., negative) sign for an information or rating shopping effect. For example, we find that issues with greater idiosyncratic volatility are less likely to get a Fitch rating, even though further information production may be relatively useful for those issues. We therefore find no support in the data for either the information or the rating shopping effects.

On the other hand, columns 1, 2, and 5 show that, if an issue has Moody's and S&P ratings on opposite sides of the HY–IG boundary, the (conditional) likelihood that the issue gets a Fitch rating increases considerably. The coefficient on Fitch Could Push IG is also economically significant. For example, the coefficient of 0.717 in column 1 implies that issues for which Fitch is the tiebreaker have about twice (2.05 = exp(0.717)) the hazard rate, that is, are about twice as likely to get a Fitch rating. We interpret this result as strong evidence in favor of a certification effect: it is precisely in those cases in which the marginal rating (i.e., Fitch) is decisive for the critical regulatory classification of the bond issue as IG or HY that Fitch is much more likely to (be asked to) give a rating.
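
The hazard-ratio interpretation is simple arithmetic on the reported coefficient:

```python
import math

# Exponentiating a Cox coefficient gives a hazard ratio. The coefficient of
# 0.717 on Fitch Could Push IG (column 1 of Table VI) implies that tiebreaker
# issues have roughly twice the conditional likelihood of receiving a Fitch
# rating, all else equal.
hazard_ratio = math.exp(0.717)
print(round(hazard_ratio, 2))  # → 2.05
```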

The downside of the Cox regression is that it essentially discards observations that already have a Fitch rating, thereby ignoring some potentially useful information. We therefore corroborate the result that regulatory certification is an important explanation for having a Fitch rating by directly modeling the existence of a Fitch rating using a logistic regression. Estimates of this regression can be found in the Internet Appendix; they confirm that Fitch Could Push IG is strongly positively associated with having a Fitch rating, while none of the three main measures of uncertainty provide any evidence in support of the information production hypothesis. These results are also economically large: the probability of having a Fitch rating is about 12% higher if the other two CRAs split at the HY–IG boundary. The Internet Appendix shows that these results are robust to double clustering.

C. CRA Performance

This subsection investigates the general performance of each CRA in default prediction. The main purpose is to corroborate our previous finding that, in general, Fitch rating changes or Fitch rating additions are not associated with credit spread changes unless Fitch is the tiebreaker around the HY–IG boundary. This finding predicts that differences between Fitch ratings and Moody's and S&P ratings do not significantly improve default prediction. To evaluate this conjecture, we perform two tests. First, we run logistic regressions of issue defaults on 1-year lagged credit ratings. Second, we calculate accuracy ratios for 1-year-ahead default prediction (i.e., Gini coefficients) to measure the rating performance of all three CRAs. This method is also employed by the CRAs for self-evaluation in their annual default studies; however, since sample periods and rated populations typically differ across CRAs, the self-reported results are not useful for comparative purposes.
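
For reference, the McFadden pseudo R2 reported in our tables is one minus the ratio of the model log-likelihood to the intercept-only log-likelihood. A minimal sketch with hypothetical default indicators and fitted probabilities (not our estimation code):

```python
import math

def mcfadden_pseudo_r2(ll_model, ll_null):
    """McFadden (1973) pseudo R2: 1 - LL_model / LL_null, where LL_null is
    the log-likelihood of an intercept-only model."""
    return 1.0 - ll_model / ll_null

def bernoulli_loglik(probs, outcomes):
    """Log-likelihood of binary outcomes under predicted probabilities."""
    return sum(math.log(p) if y else math.log(1.0 - p)
               for p, y in zip(probs, outcomes))

# Hypothetical 1-year default indicators and two sets of fitted probabilities.
defaults = [0, 0, 0, 1, 1, 0]
p_null = [sum(defaults) / len(defaults)] * len(defaults)  # intercept only
p_model = [0.05, 0.10, 0.10, 0.70, 0.60, 0.15]            # hypothetical fit

r2 = mcfadden_pseudo_r2(bernoulli_loglik(p_model, defaults),
                        bernoulli_loglik(p_null, defaults))
print(0.0 < r2 < 1.0)  # → True
```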

The sample that we use in this analysis differs from the samples used in our other analyses. Since defaults are relatively rare events, we maximize the size of our sample by incorporating as many issues as possible. We therefore include bonds from issuers for which we have no Compustat, CRSP, or I/B/E/S data, as well as bonds with ratings worse than B−/B3. We still restrict our attention to senior unsecured U.S. bonds in U.S. dollars. As before, we exclude bonds that are putable, exchangeable, convertible, perpetual, asset backed, or floating rate. We collect ratings for all bonds in FISD that are rated by all three CRAs between 2000 and 2008 and can be matched with the Moody's Default Risk Services Corporate database of issuer default events. To avoid overweighting issuers with many bonds outstanding that are likely to default at the same time, we weight each bond at each point in time by the inverse of the number of bonds outstanding for its issuer.26
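
The weighting scheme can be sketched as follows (hypothetical issuer and bond identifiers; an illustration of the idea, not our actual data-handling code):

```python
from collections import Counter

# Each bond observation is weighted by the inverse of the number of bonds its
# issuer has outstanding, so that issuers with many bonds (whose bonds tend to
# default together) do not dominate the default-prediction sample.
observations = [
    ("issuer_A", "bond_1"), ("issuer_A", "bond_2"), ("issuer_A", "bond_3"),
    ("issuer_B", "bond_4"),
]

bonds_per_issuer = Counter(issuer for issuer, _ in observations)
weights = [1.0 / bonds_per_issuer[issuer] for issuer, _ in observations]

# Each issuer receives total weight one, regardless of how many bonds it has.
print(round(sum(weights), 6))  # → 2.0
```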

Table VII reports the results of our default prediction study. Panel A shows that default prediction is best for Moody's and worst for Fitch. This ranking holds in terms of the pseudo R2 as well as how much each CRA adds to default prediction relative to the others. First, we can compare the pseudo R2 in flexible specifications with dummies for the various notched rating categories (i.e., AAA, AA+, AA, AA−, and so on down to B−).27 The pseudo R2 for Moody's is highest (37%, column 1), followed by S&P (33%, column 3), and lowest for Fitch (32%, column 5). In columns 2, 4, and 6, we instead use a linear specification of the CRA ratings in notches, that is, 1 corresponds to AAA, 2 to AA+, etc. The resulting pseudo R2s are quite similar, at 37%, 33%, and 31% for Moody's, S&P, and Fitch, respectively, which suggests that the linear specification is quite reasonable.

Table VII. CRA Default Prediction
Using yearly panel data between 2000 and 2008, we compare CRA performance with respect to default prediction at a 1-year horizon. Panel A shows results from logit regressions of issuer default events on rating scales, rating scale dummies, and rating differences for the complete universe of bonds that are rated by all three CRAs. The rating variables are in notches, where 1 corresponds to AAA, 2 to AA+, etc. The difference variables measure the first-named CRA's rating minus the second-named CRA's rating, in notches. Only marginal effects are reported (multiplied by 10,000), and t-statistics are in brackets (using robust standard errors clustered by issuer). Panel B shows accuracy ratios and their differences for all three CRAs at 1- and 2-year forecasting horizons. Standard errors are constructed using the jackknife method with resampling at the issuer level (equivalent to clustering by issuer). *, **, and *** indicate significance at the 10%, 5%, and 1% level, respectively.
Panel A: Default Prediction Logit Regressions

                             (1)       (2)       (3)       (4)       (5)       (6)
Moody's Rating                         4.23***
                                      [4.60]
S&P Rating                                                 5.02***
                                                          [4.98]
Fitch Rating                                                                   5.98***
                                                                              [5.30]
Moody's Rat. FE              Yes       No        No        No        No        No
Fitch Rat. FE                No        No        No        No        Yes       No
S&P Rat. FE                  No        No        Yes       No        No        No
Pseudo R2                    0.371     0.366     0.333     0.333     0.323     0.308
N. Issuers                   1,994     2,066     1,949     2,066     1,984     2,066

                             (7)       (8)       (9)       (10)      (11)      (12)
Moody's Rating                         4.25***                                 4.22***
                                      [4.59]                                  [4.55]
S&P Rating                                                 5.08***   4.22***
                                                          [4.97]    [4.55]
Fitch Rating                 4.25***             5.08***
                            [4.59]              [4.97]
Diff. Moody's and Fitch      3.94***  −0.306
                            [4.27]   [−0.75]
Diff. S&P and Fitch                              3.84***  −1.24
                                                [3.72]   [−1.49]
Diff. Moody's and S&P                                                3.57***  −0.646
                                                                    [4.06]   [−1.40]
Moody's Rat. FE              No        No        No        No        No        No
Fitch Rat. FE                No        No        No        No        No        No
S&P Rat. FE                  No        No        No        No        No        No
Pseudo R2                    0.367     0.367     0.337     0.337     0.367     0.367
N. Issuers                   2,066     2,066     2,066     2,066     2,066     2,066
Panel B: Accuracy Ratios

                            1-year horizon           2-year horizon
Moody's Rating              0.779***    25.38        0.713***    21.05
Fitch Rating                0.718***    18.50        0.657***    15.97
S&P Rating                  0.765***    24.12        0.704***    20.15
Moody's − Fitch Rating      0.062**      2.46        0.057**      2.25
S&P − Fitch Rating          0.047*       1.83        0.047**      1.97
Moody's − S&P Rating        0.015        1.31        0.009        0.80

In subsequent columns, we compare different pairs of ratings: Moody's and Fitch ratings in columns 7 and 8, S&P and Fitch ratings in columns 9 and 10, and Moody's and S&P ratings in columns 11 and 12. For each comparison, we find that adding the CRA with the lower stand-alone pseudo R2 (from columns 1 to 6) does not increase the pseudo R2. For example, the pseudo R2 in columns 7 and 8, which combine Moody's and Fitch ratings, equals 36.7%, basically identical to the pseudo R2 of 36.6% for Moody's ratings by themselves in column 2, but higher than the pseudo R2 of 30.8% for Fitch ratings by themselves in column 6. The difference between Fitch and Moody's ratings is insignificant in column 8, while it is significant in column 7. Thus, a Moody's rating adds predictive power to a Fitch rating, while the reverse is not the case. We show the same pattern in columns 9 and 10 for the comparison of Fitch with S&P. Taken together, these results suggest that, conditional on a Fitch rating, Moody's or S&P ratings provide significant additional information for 1-year-ahead default prediction, but not vice versa.
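
Because columns 7 and 8 (and likewise 9 and 10, and 11 and 12) contain the same two regressors written in two ways, their point estimates are algebraically linked. A quick consistency check on the Panel A estimates for columns 7 and 8 (our own arithmetic, assuming column 7 regresses on the Fitch level and column 8 on the Moody's level, each together with the Moody's−Fitch difference):

```python
# Column 7: y = c1*Fitch + d1*(Moodys - Fitch); column 8: y = c2*Moodys +
# d2*(Moodys - Fitch). Expanding both shows the estimates must satisfy
# d1 = c2 + d2 and c1 - d1 = -d2, up to rounding. This is a consistency
# check on the reported coefficients, not part of the paper's analysis.
c1, d1 = 4.25, 3.94     # column 7: Fitch level, Moody's-Fitch difference
c2, d2 = 4.25, -0.306   # column 8: Moody's level, Moody's-Fitch difference

print(abs(d1 - (c2 + d2)) < 0.01)    # → True
print(abs((c1 - d1) - (-d2)) < 0.01)  # → True
```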

Consistent with Panel A, the accuracy ratios (i.e., Gini coefficients) are highest for Moody's and lowest for Fitch. In Figure 4, we plot the cumulative fraction of defaults over the next year against the cumulative fraction of ratings (from worst to best); this curve is also called a Cumulative Accuracy Profile (CAP) curve. Here, a smaller area in the upper left-hand corner of the graph implies greater prediction accuracy. The accuracy ratio equals twice the difference between the area underneath the plotted curve and the area under the 45° line, so that it converges to one as prediction accuracy improves and equals zero if ratings are assigned completely at random. The graph shows that the Fitch curve lies clearly below (and is thus worse than) the other two over most of the rating spectrum.
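
The accuracy-ratio construction just described can be sketched as follows (a naive illustration with made-up data; ties across identical ratings are handled less carefully than in the paper's computations):

```python
def accuracy_ratio(ratings, defaults):
    """Accuracy ratio (Gini coefficient) from a CAP curve.

    Observations are sorted from worst to best rating; the CAP curve plots the
    cumulative fraction of defaults captured against the cumulative fraction of
    observations. The accuracy ratio is twice the area between the CAP curve
    and the 45-degree line: near one for accurate ordering (in large samples
    with rare defaults), zero for random ratings. `ratings` are numeric notches
    with larger = worse; `defaults` are 0/1 indicators.
    """
    n = len(ratings)
    total_defaults = sum(defaults)
    order = sorted(range(n), key=lambda i: -ratings[i])  # worst rating first
    area, cum = 0.0, 0.0
    for i in order:
        prev = cum
        cum += defaults[i] / total_defaults
        area += (prev + cum) / 2 / n  # trapezoid over one 1/n-wide step
    return 2.0 * (area - 0.5)

# Toy data: the two worst-rated bonds are exactly the two defaulters.
print(round(accuracy_ratio([10, 9, 8, 3, 2, 1], [1, 1, 0, 0, 0, 0]), 4))  # → 0.6667
```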

Figure 4.

Default prediction accuracy. The figure plots the cumulative fraction of defaults on U.S. corporate bonds over a 1-year horizon against the cumulative fraction of ratings (from worst to best) for Moody's, S&P, and Fitch based on data from 2000 to 2008. The accuracy ratios that rating agencies use for self-evaluation and that we report in Table VII, Panel B are based on the areas under the graphs. A larger area under the graph corresponds to better accuracy. This type of graph is also known as a CAP curve.

More formally, we find that the accuracy ratios of Moody's (77.9%) and S&P (76.5%) exceed that of Fitch (71.8%), and that these differences are statistically significant. There is no statistically significant difference between the accuracy ratios of Moody's and S&P (standard errors are calculated using a jackknife with resampling at the issuer level). We conclude that the results based on accuracy ratios and default prediction confirm the lack of support for the information production hypothesis in our data.28
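
The delete-one-issuer jackknife can be sketched generically (hypothetical numbers; in our application the statistic would be an accuracy ratio or a difference of accuracy ratios rather than a mean):

```python
import math

def jackknife_se(clusters, stat):
    """Delete-one-cluster jackknife standard error of stat(observations).

    `clusters` maps each cluster (issuer) to its list of observations, and
    `stat` is any scalar statistic of a flat list of observations. Deleting
    whole issuers at a time is equivalent to clustering by issuer.
    """
    keys = list(clusters)
    n = len(keys)
    replicates = []
    for k in keys:
        kept = [obs for key in keys if key != k for obs in clusters[key]]
        replicates.append(stat(kept))
    mean_rep = sum(replicates) / n
    var = (n - 1) / n * sum((r - mean_rep) ** 2 for r in replicates)
    return math.sqrt(var)

# Hypothetical per-bond quantities grouped by issuer.
clusters = {
    "issuer_A": [0.2, 0.1],
    "issuer_B": [0.4],
    "issuer_C": [0.0, 0.3, 0.1],
}
se = jackknife_se(clusters, lambda xs: sum(xs) / len(xs))
print(se > 0)  # → True
```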

D. Further Explorations of Regulatory Certification and Rating Shopping

Arguably, rating shopping is more worthwhile around the HY–IG boundary, which should lead to more rating shopping there. Under a certification effect, by contrast, even issuers at the boundary that do not expect to qualify for IG (and thus expect a worse Fitch rating) may apply for an additional Fitch rating, since achieving IG status by sheer luck remains possible. Thus, if there is rating shopping, one would expect added Fitch ratings at the boundary to be more optimistic than elsewhere in the rating spectrum.

As a proxy for the level of optimism in the additional, third Fitch rating, we consider whether the additional Fitch rating leads to a regulatory gain, defined as an improvement of the “median of three” rating over the “worst of two” rating.

Table VIII presents results of logistic regressions of regulatory gain on dummies indicating the location of a bond in the rating spectrum. Observations are conditioned to have split ratings from Moody's and S&P as otherwise a regulatory gain is impossible. The table provides suggestive evidence that the (additional) Fitch rating is more optimistic around the HY–IG boundary and thus that some rating shopping might be going on around the boundary.29

Table VIII. Relative Rating Levels
The table shows logit regressions of achieving a regulatory gain on rating category dummies, Analyst Dispersion (standard deviation of analyst earning forecasts normalized per dollar share value) as a measure of uncertainty, and whether the Fitch rating “could push” the issue to IG. F Could Push IG is a dummy indicating whether Moody's and S&P ratings are on opposite sides of the HY–IG boundary. Other controls that are included but not shown are firm beta, leverage, PPE/assets, R&D expenses/assets, ROA, log of offering amount, time to maturity, time to maturity squared, and redeemable (see Table IV for descriptions). Quarterly data for 2000 to 2008 are used. The sample consists of all issues rated by Moody's, S&P, and Fitch that on average are rated B− or better and for which Moody's and S&P disagree. Marginal effects are reported for the regressions and all t-statistics are in brackets (using robust standard errors clustered by issuer). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% level, respectively.
                              (1)        (2)        (3)
A+ to A−                      0.0691                0.0698
                             [0.43]                [0.42]
BBB+ to BBB−                 −0.155                −0.155
                             [−0.76]               [−0.75]
BB+ to BB−                   −0.0823               −0.016
                             [−0.34]               [−0.08]
B+ to B−                     −0.0516               −0.052
                             [−0.19]               [−0.19]
Fitch Could Push IG           0.187***   0.196***
Analyst Dispersion            5.461      5.366      5.192
Idiosyncratic Volatility     −1.432     −2.132     −1.735
Log of Total Assets           0.0351     0.0476     0.0316
Other Controls Included       Yes        Yes        Yes
Only with MSP Disagreement    Yes        Yes        Yes
Fitch Added                   Yes        Yes        Yes
Industry FE                   Yes        Yes        Yes
Pseudo R2                     0.15       0.114      0.131
N. Issuers                    170        170        170

Moreover, the regulatory gain from the third, additional Fitch rating is largest when the Fitch rating breaks the tie at the HY–IG boundary. If the Moody's and S&P ratings are on opposite sides of the boundary, a regulatory gain is about 20% more likely (see columns 1 and 2). Because Fitch ratings are generally more optimistic, the likelihood of a regulatory gain from a Fitch rating addition when Moody's and S&P disagree is about 65% on average. This likelihood climbs to about 85% if the Fitch rating could change the HY–IG regulatory classification (controlling for all else).

Next, one of the predictions in Table I is that of a precautionary extra rating. Given the demand shock from falling below the HY–IG boundary, issuers may want to hedge the risk of becoming HY by obtaining an extra rating. Unfortunately, ratings are too persistent to reliably estimate the frequency of rating changes from the rating history. Given these data constraints and the forward-looking nature of this effect, we instead analyze the precautionary effect the other way around, that is, by estimating the association between the probability of observing a future rating change and a dummy for having a Fitch rating, together with several controls for opacity and volatility. The results are in Table IX. Indeed, we find that a future rating change is positively related to having a Fitch rating, over and beyond the usual measures of volatility and opacity. Having a Fitch rating is associated with a quarterly transition probability that is 1.0% to 1.28% higher, which is economically sizable (Moody's and S&P average transition frequencies are 5.2% and 5.5%, respectively).

Table IX. Logistic Regressions for Having a Rating Transition
Logit regressions of having a rating transition next quarter on rating category dummies (AAA and AA+ are merged to avoid singularities), a dummy indicating whether the issue has a Fitch rating, and measures of uncertainty, namely idiosyncratic volatility, beta, Analyst Dispersion (the standard deviation of analyst earnings forecasts, normalized per dollar of share value), and MSP Rating Dispersion (the absolute value of the difference, in notches, between the Moody's and S&P ratings). See Table IV for descriptions of bond- and firm-level control variables. Quarterly data for 2000 to 2008 are used. The sample consists of all issues with both Moody's and S&P ratings. Only marginal effects are reported, and t-statistics are in brackets (using robust standard errors clustered by issuer). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% level, respectively.
                            Moody's Rating Change    S&P Rating Change
Fitch Rated                 0.0128***                0.0102***
Idiosyncratic Volatility    1.509***                 0.712***
Analyst Dispersion          0.0459                   0.0549
MSP Rating Dispersion       0.0128***                0.00611**
MSP Rating FE               Yes                      Yes
Year FE                     Yes                      Yes
Pseudo R2                   0.082                    0.067
N. Issuers                  818                      818

If the certification effect arises naturally in a setting with information-sensitive and -insensitive investors, one would expect a very liquid IG market and an illiquid HY market. Moreover, issues around the middle region should have low liquidity, which can be restored if Fitch signals that an issue is information-insensitive. However, if Fitch gives an HY rating, an issue at the boundary truly falls into the no-trade region and has exceptionally low liquidity. Table X confirms these predictions empirically. Bonds that qualify for HY based on their Moody's and S&P ratings have substantially lower turnover than those that qualify for IG. However, if Fitch pulls them into the IG category, this effect is offset. On the other hand, if Fitch could pull them into the IG category but instead gives an HY rating, liquidity drops dramatically, even after correcting for issue and time fixed effects as well as the on-the-run effect (proxied by age).

Table X. Turnover Regressions
OLS regressions of quarterly bond turnover, measured as aggregate trading volume divided by the total value of bonds outstanding, on rating category dummies, a dummy indicating whether the issue has a Fitch rating, and controls for off-the-run vs. on-the-run effects (Age). F Makes (Denies) IG is a dummy equal to one if the Moody's and S&P ratings are on opposite sides of the HY–IG boundary and the Fitch rating is IG (HY). All other control variables are dropped due to the use of both time and issue fixed effects. Monthly data for July 2002 to December 2008 are used. The sample consists of all issues with both Moody's and S&P ratings. t-statistics are in brackets (using robust standard errors clustered by issuer). *, **, and *** indicate statistical significance at the 10%, 5%, and 1% level, respectively.
                             (1)          (2)          (3)
MSP AAA to AA− Rating       −2.889***    −2.887***    −2.891***
MSP BBB+ to BBB− Rating      0.722        0.749        0.745
MSP BB+ to BB− Rating       −2.434*      −2.543*      −2.554*
MSP B+ to B− Rating         −5.368***    −5.270***    −5.280***
F Makes IG                                4.208***     4.213***
F Denies IG                              −5.801**     −5.767**
Fitch Rated                                            0.952
Time FE                      Yes          Yes          Yes
Issue FE                     Yes          Yes          Yes
Adj. R2                      0.278        0.279        0.279
N. Issuers                   739          739          739

IV. Conclusion

Credit ratings play an important role in the capital markets. They are used by regulators and market participants to establish capital requirements and, in a legal setting, to provide safe harbor for fiduciaries. This widespread dependency upon credit ratings has the potential to influence how CRAs are used by issuers and how their ratings are evaluated by the market. A number of theories have been proposed regarding how such dependency will affect the use of multiple CRAs, how the type of rating issued by CRAs depends upon their strategic position, and how the market interprets the informational output of rating agencies through the price formation process.

In this paper, we use bond issue credit ratings, characteristics, and market prices to empirically evaluate some of these proposed theories. We test three hypotheses: (i) “information production,” which posits that the third rater adds value-relevant information, (ii) “rating shopping” which proposes that issuers shop for a better rating conditional on receiving a disappointing one, and (iii) “regulatory certification,” which conjectures that a third agency plays the role of tiebreaker at the boundary of being classified as IG versus HY. The certification effect could arise naturally as an equilibrium outcome in a setting with information-sensitive and -insensitive investors and assets along the lines of Gorton and Pennacchi (1990) and Boot and Thakor (1993). An extra rating indicating the potential value to be gained from research could (partially) resolve a no-trade region around the HY–IG boundary.

Our empirical work contains several results. First, we find that significant differences exist across multiple credit ratings of the same bond issue at the same point in time, with Fitch ratings on average clearly more positive than Moody's and S&P ratings. This is consistent with Fitch playing a strategic role that reduces the threat that the other two CRAs could withhold IG ratings and extract compensation for regulatory certification, that is, Fitch being available to push bonds into the IG classification when the other two firms may disagree.

Bond price data reveal how the market regards a rating by the third agency. In general, CRAs provide useful information to the market about credit risk. However, we find no robust evidence that Fitch ratings provide additional information incorporated in bond prices, relative to the information already contained in Moody's and S&P ratings. Thus, even though Fitch ratings are on average clearly better (i.e., more optimistic) than Moody's and S&P ratings, there appears to be little information contained in these ratings that the bond market incorporates. This result is inconsistent with both the information and rating shopping hypotheses.

We find strong evidence that Fitch ratings have a regulatory certification effect. The likelihood of getting a Fitch rating is strongly associated with Moody's and S&P ratings being on opposite sides of the HY–IG boundary. This suggests that, in equilibrium, Fitch ratings are sought as a kind of “tiebreaker” in these cases. We find some suggestive evidence that Fitch ratings are relatively better if the Fitch rating is decisive for the IG classification, as compared to all other Fitch ratings. In particular, we find evidence that, if Moody's and S&P ratings are on opposite sides of the HY–IG boundary, the additional Fitch rating is more likely than otherwise to lead to an improvement in the regulatory rating classification (in this particular case to the IG classification). Overall, this result provides some evidence of rating shopping around the HY–IG boundary, or alternatively of the marginal rating being used for regulatory arbitrage.

In the cross-section of bond prices, we find that the certification effect is strongly associated with credit spreads. Controlling for the average Moody's and S&P rating, for issues whose Moody's and S&P ratings are on opposite sides of the HY–IG boundary, credit spreads are about 41 basis points lower when the Fitch rating pushes the issue into the IG category than when it pushes the issue into the HY category. Moreover, bond issues experiencing relatively many rating changes by Moody's and S&P are more likely to have a Fitch rating, suggesting a precautionary motive for getting a Fitch rating. Combined with additional results, for example, on the liquidity of the bonds, these findings are consistent with a third CRA arising in equilibrium as a tiebreaker that resolves a no-trade region in a setting with information-sensitive and -insensitive investors and assets.


  • 1

    For smaller corporate bond issues, Fitch is occasionally one of two raters. However, almost all bonds in our sample are rated by both Moody's and S&P (see also Figure 1). Specifically, about 95% of all bonds in our database with at least two ratings are rated by both S&P and Moody's. This lack of cross-sectional variation in having an S&P or Moody's rating means that we can only study the implications for having Fitch as a third rating. We remove from our sample all bond issues that do not have ratings from both S&P and Moody's. For this sample of bond issues rated by both Moody's and S&P and using quarterly observations for 2000–2008, about 60% of observations have a Fitch rating. As a result, the main focus of our paper is to consider the “marginal” role of Fitch ratings, while controlling for S&P and Moody's ratings. Throughout the paper, we only consider the three major CRAs, ignoring all others, as they are much smaller at this point.

  • 2

    NAIC is the organization of state insurance regulators.

  • 3

    For firms with split Moody's and S&P ratings, 13% of Fitch additions are such boundary cases.

  • 4

    There is some evidence that regulators are concerned about such “ratings arbitrage.” See, for example, proposals for the new Basel II Accord made in July 2008: “If [an issue] has multiple ratings, the applicable rating would be the lowest rating. This approach for determining the applicable rating differs from the New Accord. In the New Accord, if an exposure has two ratings, a banking organization would apply the lower rating to the exposure to determine the risk weight. If an exposure has three or more ratings, the banking organization would use the second lowest rating to risk weight the exposure. The agencies believe that the proposed approach, which is designed to mitigate the potential for ratings arbitrage, more reliably promotes safe and sound banking practices.” Source: http://www.occ.treas.gov/fr/fedregister/73fr43982.pdf.

  • 5

    Indeed, some ratings have a point-in-time perspective, whereas others (including the three major CRAs) employ a through-the-cycle perspective. Similarly, while some rating agencies aim to reflect cross-sectional variation in default probabilities (like S&P and Fitch), others aim to also incorporate loss given default and reflect dispersion in expected loss (like Moody's).

  • 6

    This rule is likely to be revised in the future.

  • 7

    See http://www.occ.treas.gov/law/basel.htm for an overview of legal and regulatory news pertaining to the Basel Accords from the Office of the Comptroller of the Currency (OCC).

  • 8

    Campbell and Taksler (2003) report that about one third of all corporate bonds are held by insurance companies, about 15% by pension and retirement funds, 5% to 10% by mutual funds, and 5% by commercial banks; thus, approximately 60% of this market is held by institutions that qualify for ratings-based constraints.

  • 9

    The Barclays U.S. Corporate IG Index Factsheet is available at https://ecommerce.barcap.com/indices/index.dxml.

  • 10

    Quoting the NAIC report: “A security rated and monitored by two NRSROs is assigned the lowest of the two ratings. A security rated by three or more NRSROs is ordered according to their NAIC equivalents and the rating falling second lowest is selected, even if that rating is equal to that of the first lowest.” This report can be found at http://www.naic.org/documents/committees_e_rating_agency_comdoc_naic_staff_report_use_of_ratings.doc. See also Basel Committee on Banking Supervision (2000). If an issue has only one rating, that rating will be used. However, several regulations prohibit institutional investors from investing in issues with only one rating.

  • 11

    There have been some time series changes in NAIC regulations, but these changes do not significantly affect the validity of our tiebreaking assumption at any point in time, that is, that the worst of two ratings or the medium of three ratings is used for NAIC classifications. First, the NAIC issues its own ratings. From 1994 to 2001, the Securities Valuation Office (SVO) of the NAIC assigned an NAIC rating to each security. Anecdotal evidence suggests that the ratings from CRAs were critical, but that the final decision was at the NAIC analyst's discretion. In 2001, a Provisional Exemption rule was introduced under which bonds with standard features would be assigned an NAIC 1 or NAIC 2 rating (i.e., allowing smaller capital charges than HY) automatically if at least one CRA rated it A− or higher, or if at least two CRAs rated it BBB− or higher, without the interference of an SVO analyst. Effectively, this came down to a middle rating rule (see http://www.naic.org/documents/svo_research_SVO_jan01cc.pdf). Second, on January 1, 2004, the NAIC implemented a Filing Exemption rule, stating that any issue rated by one or more CRAs would be assigned an NAIC rating based on the CRA-equivalent rating. In the case of split ratings, the “second best” rating would be taken (see http://www.naic.org/documents/svo_FE_FAQ.pdf). Third, this second best rule was changed to a “second worst” rule in 2007. However, both the second best and the second worst rule effectively boil down to a “worst of two if only two and medium of three ratings” rule in view of the low market share of the other CRAs besides the big three. Our contact within the NAIC SVO argued that these guidelines were generally well followed by the individual state regulators.

  • 12

See also Chernenko and Sunderam (2010) on the effects of market segmentation due to credit ratings on bond issuance and investments.

  • 13

    For simplicity, one could think about type I investors as commercial banks, insurance companies, and pension funds, where the natural demand for bonds stems from the random flow of deposits and claims, and type II investors as hedge funds and proprietary trading desks.

  • 14

Type II investors do not suffer the negative utility effect of uncertainty; if they need to trade due to liquidity shocks, they trade among themselves on an equally informed basis.

  • 15

For type I investors, the losses due to informed trading prevent them from investing in this region: they realize that they are at an informational disadvantage and thus do not enter this market, while the limited gains in this intermediate region do not make it worthwhile for type II investors to produce costly information.

  • 16

    An alternative way to hedge is by increasing the average maturity of the debt. However, this is costly since, in this region of the rating spectrum, the term structure of credit spreads is typically upward sloping.

  • 17

    This is not necessarily true when a rating agency rates too optimistically, but if credit spreads do not decrease, there seems to be no benefit and thus no reason for rating shopping.

  • 18

Adverse selection may explain why unsolicited ratings are on average worse than solicited ones. That is, firms that receive a favorable unsolicited rating no longer apply for a solicited rating, whereas firms with an unfavorable unsolicited rating pay for another (solicited) opinion. On the other hand, CRAs could create a holdup problem by underestimating the creditworthiness of companies in their unsolicited ratings to prompt those companies to subsequently seek (improved) paid-for, solicited ratings. The general conclusion of this literature is that, for industrials, unsolicited ratings are lower than solicited ratings, and that this difference is largely due to adverse selection of debt issuers. There seems to be some evidence for holdup by CRAs, but this is concentrated mainly among financials. Our data set does not include information on unsolicited ratings for U.S. corporate bonds, so this paper does not address these findings directly. However, several papers report a low incidence rate of unsolicited ratings. For example, Partnoy (2006) estimates an incidence rate of approximately 1%.

  • 19

    To avoid capturing timing mismatches between (multiple) rating transitions, we require that any particular ratings situation exists for at least one quarter. This will also mitigate concerns about not correcting for credit watches and credit outlooks (for these variables our data are too sparse to be useful). Time variation in ratings is hard to measure since ratings are rather persistent. Therefore, we do not explicitly include time variation in ratings as a variable in our regressions but rather analyze the correlation between having a Fitch rating and the likelihood of experiencing rating changes.

  • 20

    The TRACE database starts in July 2002.

  • 21

We remove all trades that include a commission, have a settlement period of more than 5 days, or are canceled. For trades labeled as “corrected,” we apply the reported correction. Moreover, we remove all trades with a negative reported yield, since these are mainly driven by implicit option premia in the yield. We also identify and remove trades with a settlement date later than or equal to the maturity date. Furthermore, we find several records that we suspect are duplicates, resulting from both parties to a trade reporting to the system; we filter out duplicate trades with identical price, trading time, and volume. Moreover, some of the yield changes are extremely high or low: we remove trades with credit spreads of more than 1,000 bps and credit spread changes of more than 500 bps. Finally, we delete all issues with a duration of less than 1 year.
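The filtering steps in this footnote can be sketched as a single pass over the trade records. This is a simplified illustration, not the authors' code: all field names are hypothetical, the thresholds are those stated in the text, and the “corrected” trade handling is omitted for brevity:

```python
def clean_trades(trades):
    """Apply the TRACE filters described above to a list of trade
    records (dicts with hypothetical field names); return survivors."""
    seen = set()
    kept = []
    for t in trades:
        if t.get("commission", 0) > 0:
            continue                      # trade includes a commission
        if t["days_to_settlement"] > 5:
            continue                      # settlement period > 5 days
        if t["status"] == "canceled":
            continue                      # canceled trade
        if t["yield"] < 0:
            continue                      # negative reported yield
        if t["settlement_date"] >= t["maturity_date"]:
            continue                      # settles on or after maturity
        if t["credit_spread_bps"] > 1000:
            continue                      # spread above 1,000 bps
        if abs(t["spread_change_bps"]) > 500:
            continue                      # spread change above 500 bps
        if t["duration_years"] < 1:
            continue                      # duration below 1 year
        key = (t["price"], t["trade_time"], t["volume"])
        if key in seen:
            continue                      # duplicate two-sided report
        seen.add(key)
        kept.append(t)
    return kept

# Illustrative records (all field names and values hypothetical).
good = {"commission": 0, "days_to_settlement": 2, "status": "reported",
        "yield": 5.2, "settlement_date": 10, "maturity_date": 3650,
        "credit_spread_bps": 180, "spread_change_bps": 12,
        "duration_years": 4.5, "price": 101.25,
        "trade_time": "10:31:00", "volume": 500000}
dup = dict(good)                                    # two-sided duplicate
bad = dict(good, **{"yield": -0.5, "price": 99.0})  # negative yield
cleaned = clean_trades([good, dup, bad])            # keeps only `good`
```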

  • 22

CUSIPs of non-exchange-traded bonds do not change in the event of mergers, acquisitions, etc. (see http://www.cusip.com).

  • 23

    FISD confirmed that, as of mid-2003, they have been using automated rating feeds from the CRAs, whereas before that time these ratings were collected by hand, increasing the potential for data errors in the earlier period.

  • 24

    The Internet Appendix for this article is available online in the “Supplements and Datasets” section at http://www.afajof.org/supplements.asp.

  • 25

As suggested by Cantor and Packer (1997), ratings by Fitch could be inflated. To address this issue, we repeat the analysis of Table V, correcting all Fitch ratings by one notch (except for the tiebreaking at the boundary). The results can be found in the Internet Appendix and are consistent with the results reported in Table V. Furthermore, we show similar results in levels in the Internet Appendix, also exploiting observations that already had a Fitch rating when they entered the sample. If anything, the results are even stronger, since an agreeing Fitch rating is associated with a higher credit spread. The effect of certification is statistically and economically very similar. Finally, one might be concerned that the large movements in credit spreads at the onset of the crisis drive some of our results. Results in the Internet Appendix for the sample ending in June 2007 indicate that this is not the case.

  • 26

That is, the weighting is done in a weighted least-squares sense.
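To fix the idea of weighted least squares, a minimal one-regressor sketch (illustrative only; this is not the paper's estimation code):

```python
def wls_slope(x, y, w):
    """Weighted least squares for the one-regressor, no-intercept model
    y_i = b * x_i + e_i: minimize sum_i w_i * (y_i - b * x_i)^2, which
    has the closed-form solution b = sum(w*x*y) / sum(w*x^2)."""
    num = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    den = sum(wi * xi * xi for wi, xi in zip(w, x))
    return num / den
```

Each observation's squared residual is scaled by its weight, so heavily weighted observations pull the fitted coefficient toward themselves.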

  • 27

For all nonlinear regressions in the paper, we report McFadden (1973) pseudo-R²s.
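McFadden's pseudo-R² compares the fitted model's log-likelihood to that of an intercept-only (null) model: R² = 1 − ln L_model / ln L_null. A minimal sketch for a binary outcome (illustrative only; function names are our own):

```python
import math

def mcfadden_pseudo_r2(loglik_model, loglik_null):
    """McFadden (1973) pseudo R^2: 1 - lnL_model / lnL_null, where
    lnL_null is the log-likelihood of the intercept-only model."""
    return 1.0 - loglik_model / loglik_null

def bernoulli_loglik(y, p):
    """Log-likelihood of binary outcomes y under predicted probabilities p."""
    return sum(math.log(pi) if yi == 1 else math.log(1.0 - pi)
               for yi, pi in zip(y, p))

# Example: the intercept-only model predicts the sample mean for everyone,
# so a model with no extra explanatory power has pseudo R^2 of zero.
y = [1, 0, 0, 1, 0]
p_null = [sum(y) / len(y)] * len(y)       # 0.4 for every observation
ll_null = bernoulli_loglik(y, p_null)
r2_null = mcfadden_pseudo_r2(ll_null, ll_null)   # 0.0
```

A perfect model has ln L = 0, giving pseudo-R² = 1; unlike the OLS R², intermediate values are not shares of explained variance.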

  • 28

As a robustness check, we also investigate the predictive power of Fitch in the event that (i) Moody's and S&P disagree and (ii) it breaks a tie at the HY–IG boundary. Results can be found in the Internet Appendix. For disagreements, the additional predictive power of Fitch has the correct sign, but is not significant at conventional levels. Likewise, the tiebreaking role around the HY–IG boundary does not yield a significant effect, and the coefficients have the same sign regardless of whether Fitch pulls the issue above or below the boundary.

  • 29

    We thank our NBER discussant, Michael Brennan, for suggesting this test.