"[It] shall be unlawful for a registered public accounting firm [...] that performs for any issuer any audit [...] to provide to that issuer [...] any non-audit service..." Sarbanes-Oxley Act of 2002, Section 201(g)
That overly optimistic credit ratings contributed significantly to the Great Financial Crisis is now widely acknowledged (see, for example, here and here). One welcome result has been a wave of research that highlights the influence of biased credit ratings on the real economy and identifies potential remedies. In this note, we examine stylized facts about ratings performance that emerge from this new work; discuss the economic impact of ratings; and, finally, consider remedies for conflicts of interest that contribute to the problem.
Starting with the facts, in a recent study, Cornaggia et al examine all of Moody’s ratings on non-callable bonds from 1980 to 2010. The authors find a link between credit rating agency (CRA) revenues from different asset classes and the differential performance of comparably rated instruments. For example, from 2000 to 2008, ratings of structured financial assets (SF)—including commercial and residential mortgage-backed securities, asset-backed securities and collateralized debt obligations—constituted Moody’s largest revenue source. These also exhibited outsized default rates across all ratings buckets. By contrast, public finance (PF) bonds, which consistently provided the smallest share of revenues, defaulted at a much lower rate (see chart below).
Five-year default rates of Moody's A-rated and investment-grade rated debt instruments by asset class, 1980-2010
The pattern of Moody’s downgrades and upgrades also is consistent with a revenue-driven upward ratings bias. Were initial ratings symmetrical—so that the probabilities of upgrades and downgrades were similar—one would expect little difference over the long run between the frequency of ratings upgrades and downgrades for assets that were initially rated comparably. However, the differences have been large, with SF issues having been downgraded more than 10 times as frequently as they were upgraded. By contrast, municipal and sovereign ratings may be biased downward, as they were upgraded slightly more frequently than they were downgraded (see Table 3 in Cornaggia et al).
What made initial SF ratings so vulnerable? Most likely, the optimistic assumptions built into the SF ratings models that CRAs employed. Together with the obvious conflict of interest, it is precisely these assumptions that have undermined investor and regulator confidence in CRAs. No less important, the enormous performance differences across asset classes undercut the assertion of CRAs that comparable ratings imply similar default implications.
As a result, regulators are right to remain skeptical about reliance on ratings in portfolio management. As Cornaggia et al demonstrate, portfolios using comparably rated instruments from different asset classes would result in vastly different outcomes for banks that otherwise meet international capital standards. For example, in a range of simulations where a bank employing the Basel standardized approach to computing capital adequacy uses default data from the corporate asset class to model a similarly rated structured bond portfolio, losses averaged 99% of the bank’s capital (see Table 9 in the paper)! (Keep in mind that portfolio managers wishing to take greater risk in a concealed fashion also may prefer inflated ratings, so that conflicts of interest are not limited to CRAs themselves.)
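To see why asset-class differences in default performance matter for capital adequacy, consider a stylized sketch (not the paper’s simulation) of the standardized-approach arithmetic. The 8% minimum capital ratio and the 50% risk weight for A-rated claims are standard Basel II figures; the five-year default rates and the loss-given-default below are purely hypothetical, chosen only to illustrate the mechanism.

```python
# Illustrative sketch: the same Basel standardized-approach capital charge
# can cover very different expected losses when comparably rated assets
# from different classes default at different rates.

RISK_WEIGHT_A = 0.50      # Basel II standardized weight for A-rated corporate claims
MIN_CAPITAL_RATIO = 0.08  # 8% minimum capital against risk-weighted assets

# Hypothetical five-year default rates for A-rated paper by asset class;
# the actual figures are in the paper's data, not reproduced here.
default_rates = {"corporate": 0.005, "structured": 0.05}
LGD = 0.6  # assumed loss-given-default

portfolio = 1_000.0  # portfolio par value
capital = portfolio * RISK_WEIGHT_A * MIN_CAPITAL_RATIO  # ~40.0 of required capital

for asset_class, pd5 in default_rates.items():
    expected_loss = portfolio * pd5 * LGD
    print(f"{asset_class}: expected loss {expected_loss:.1f} "
          f"vs required capital {capital:.1f}")
```

With these made-up numbers, the capital that comfortably absorbs corporate-style losses is mostly consumed by structured-style losses—the qualitative point behind the paper’s far starker 99% result.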
Turning to economic impact, market participants often question the value and informativeness of ratings, noting that they have frequently been lagging indicators of default risk premia. For similar reasons, critics question whether ratings influence the real economy, or merely reflect it. Unfortunately, the complex relationship between ratings and the economy (and the existence of numerous third factors that can affect both) makes it difficult to distinguish cause and effect.
Yet, ratings can influence the cost of and access to capital and other resources. One channel is that certain intermediaries have various quality hurdles for including assets in their portfolios. Ratings also affect capital requirements: for example, looking at sovereign claims, the Basel standardized approach imposes a weight ranging from zero (for AAA to AA-) to 150 percent (below B-). Finally, rating downgrades can trigger contractual changes, ranging from bond covenants to margin requirements, as well as influence a firm’s ability to attract workers and form supplier relationships.
Recent research by Almeida et al (forthcoming in the Journal of Finance) has gone a long way toward establishing that ratings do matter. These authors show that downgrades can prompt firms to diminish leverage and investment, with negative spillovers to the economy.
To demonstrate causality, Almeida et al exploit a familiar feature of credit ratings: the sovereign ratings ceiling. While CRAs do make exceptions, firms rarely enjoy a credit rating higher than that of their home country. Not only is sovereign distress generally associated with broader economic weakness (and, in some cases, with fragility in the financial system) that can affect corporate creditworthiness, but a government in need can always tax firms within its jurisdiction. Because of the sovereign ceiling, when a sovereign is downgraded, firms that are rated at the upper bound (“bound firms”) are far more likely to be downgraded than firms that are lower rated (“control firms”).
For their sample of sovereign downgrades since 1997, Almeida et al find that the subsequent downgrades of bound firms were 0.7 notches greater than for control firms (a notch is the gap between two neighboring credit ratings, such as A and A-). Furthermore, that differential impact is associated with a larger decline of investment and a larger increase in bond yields for bound firms. The implication is that neither investors nor policymakers (including sovereign debt managers) can be indifferent to the impact of ratings on general economic conditions.
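Notch distance, as used in the 0.7-notch result above, is just a count of steps on the ordinal rating scale. A minimal sketch (using an illustrative S&P-style scale; agencies differ in their exact ladders):

```python
# Minimal sketch of "notch" distance: map an illustrative rating ladder
# to integers and count steps between two ratings.

SCALE = ["AAA", "AA+", "AA", "AA-", "A+", "A", "A-",
         "BBB+", "BBB", "BBB-", "BB+", "BB", "BB-",
         "B+", "B", "B-", "CCC+", "CCC", "CCC-"]
RANK = {r: i for i, r in enumerate(SCALE)}

def notches(frm: str, to: str) -> int:
    """Positive values indicate a downgrade of that many notches."""
    return RANK[to] - RANK[frm]

print(notches("A", "A-"))  # the one-notch example in the text
```

On this scale, the paper’s finding says that after a sovereign downgrade, bound firms moved on average 0.7 steps further down the ladder than otherwise similar control firms.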
Finally, are there remedies to the conflict of interest that seems to pervade our current system of ratings? More than a decade ago, in the aftermath of the Enron and WorldCom disasters, and the subsequent collapse of Arthur Andersen, the Sarbanes-Oxley Act (SOX) severely restricted auditors’ provision of non-auditing services to their clients (see the opening quote). The goal was to ensure that auditors would act independently to prevent massive corporate fraud. And, at least so far, this aspect of the system seems to be working.
But, as the evidence suggests, credit rating agencies are a different matter. As Baghai and Becker (BB) highlight, non-ratings revenues remain a source of conflict of interest. In 2015, for example, Moody’s reported $2.3 billion in ratings-related revenues for Moody’s Investors Service, and an additional $1.2 billion from other services from Moody’s Analytics.
To what extent do non-rating revenues influence ratings? To answer this question, BB exploit recent data from India, where regulators began in 2010 to require that CRAs disclose the existence of non-ratings business with rated issuers, and where some CRAs voluntarily disclose their non-ratings revenues. This unusual data set allows the researchers to filter multiple ratings on a specific debt instrument according to whether the CRA provides non-ratings services.
BB draw three conclusions:
- CRAs that have non-ratings business with an issuer provide a higher rating (by 0.3 notches) than CRAs that don't.
- The larger the revenue from non-ratings business, the larger the ratings gap between the two types of CRAs (with a one-standard-deviation increase in non-rating fees adding 0.3 notches to the rating).
- Within each broad rating category, issuers that pay for non-ratings business have higher one-year default rates (see chart below).
One-year default rates by ratings category, 2010-2015
It is possible that the higher ratings provided by CRAs that receive non-ratings revenues reflect greater knowledge acquired about the issuer. Were that the case, the higher ratings would be a more accurate indicator of prospective creditworthiness. However, the sizable differences in default performance point decisively the other way—especially for firms that are rated a bit shy of investment grade (see BB category in the chart).
So, how should regulators respond to this conflict of interest? Should they go as far as SOX did for auditors and restrict CRAs from providing non-ratings services? In our view, the case for isolating rating agencies from non-ratings business is much less compelling than the one for isolating audit services from non-audit revenues. Most important, while each firm has only one auditor, debt instruments can be, and frequently are, rated by multiple CRAs. This means that, so long as investors have access to multiple ratings and are informed of the existence and scale of non-ratings revenues, they can appropriately discount ratings that may be tainted.
These considerations imply that greater transparency and competition could be effective in addressing CRA conflict. As BB suggest, transparency should include disclosure of ratings and non-rating services revenues received from specific issuers. Other forms of disclosure—for example, regarding “indicative ratings”—would be helpful in hindering ratings shopping, while greater disclosure by issuers could limit the pressure on CRAs to cater. Finally, greater competition among CRAs would facilitate multiple ratings.
The bottom line: Credit ratings matter, but they are subject to a range of conflicts of interest. To help investors make more informed decisions, regulators should shine a bright light on what CRAs do, requiring more detailed disclosure even as they reduce barriers to entry in the ratings business.