FIL-62-2003 Attachment

[Federal Register: August 4, 2003 (Volume 68, Number 149)]
[Notices]               
[Page 45949-45988]
From the Federal Register Online via GPO Access [wais.access.gpo.gov]
[DOCID:fr04au03-137]                        

 


-----------------------------------------------------------------------

DEPARTMENT OF THE TREASURY

Office of the Comptroller of the Currency

[Docket No. 03-15]

FEDERAL RESERVE SYSTEM

[Docket No. OP-1153]

FEDERAL DEPOSIT INSURANCE CORPORATION

DEPARTMENT OF THE TREASURY

Office of Thrift Supervision

[No. 2003-28]


Internal Ratings-Based Systems for Corporate Credit and 
Operational Risk Advanced Measurement Approaches for Regulatory Capital

AGENCIES: Office of the Comptroller of the Currency (OCC), Treasury; 
Board of Governors of the Federal Reserve System (Board); Federal 
Deposit Insurance Corporation (FDIC); and Office of Thrift Supervision 
(OTS), Treasury.

ACTION: Draft supervisory guidance with request for comment.

-----------------------------------------------------------------------

SUMMARY: The OCC, Board, FDIC, and OTS (the Agencies) are publishing 
for industry comment two documents that set forth draft supervisory 
guidance for implementing proposed revisions to the risk-based capital 
standards in the United States. These proposed revisions, which would 
implement the New Basel Capital Accord in the United States, are 
published as an advance notice of proposed rulemaking (ANPR) elsewhere 
in today's Federal Register. Under the advanced approaches for credit 
and operational risk described in the ANPR, banking organizations would 
use internal estimates of certain risk components as key inputs in the 
determination of their regulatory capital requirements. The Agencies 
believe that supervisory guidance is necessary to balance the 
flexibility inherent in the advanced approaches with high standards 
that promote safety and soundness and encourage comparability across 
institutions.
   The first document sets forth Draft Supervisory Guidance on 
Internal Ratings-Based Systems for Corporate Credit (corporate IRB 
guidance). This document describes supervisory expectations for 
institutions that intend to adopt the advanced internal ratings-based 
approach (A-IRB) for credit risk as set forth in today's ANPR. The 
corporate IRB guidance is intended to provide supervisors and 
institutions with a clear description of the essential components and 
characteristics of an acceptable A-IRB framework. The guidance focuses 
specifically on corporate credit portfolios; further guidance is 
expected at a later date on other credit portfolios (including, for 
example, retail and commercial real estate portfolios).
   The second document sets forth Draft Supervisory Guidance on 
Operational Risk Advanced Measurement Approaches for Regulatory Capital 
(AMA guidance). This document outlines supervisory expectations for 
institutions that intend to adopt an advanced measurement approach 
(AMA) for operational risk as set forth in today's ANPR.
   The Agencies are seeking comments on the supervisory standards set 
forth in both documents. In addition to seeking comment on specific 
aspects of the supervisory guidance set forth in the documents, the 
Agencies are seeking comment on the extent to which the supervisory 
guidance strikes the appropriate balance between flexibility and 
specificity. Likewise, the Agencies are seeking comment on whether an 
appropriate balance has been struck between the regulatory requirements 
set forth in the ANPR and the supervisory standards set forth in these 
documents.

DATES: Comments must be received no later than November 3, 2003.

ADDRESSES: Comments should be directed to:
   OCC: Please direct your comments to: Office of the Comptroller of 
the Currency, 250 E Street, SW., Public Information Room, Mailstop 1-5, 
Washington, DC 20219, Attention: Docket No. 03-15; fax number (202) 
874-4448; or Internet address: regs.comments@occ.treas.gov. Due to 
delays in paper mail delivery in the Washington area, we encourage the 
submission of comments by fax or e-mail whenever possible. Comments may 
be inspected and photocopied at the OCC's Public Information Room, 250 
E Street, SW., Washington, DC. You may make an appointment to inspect 
comments by calling (202) 874-5043.
   Board: Comments should refer to Docket No. OP-1153 and may be 
mailed to Ms. Jennifer J. Johnson, Secretary, Board of Governors of the 
Federal Reserve System, 20th Street and Constitution Avenue, NW., 
Washington, DC, 20551. However, because paper mail in the Washington 
area and at the Board of Governors is subject to delay, please consider 
submitting your comments by e-mail to regs.comments@federalreserve.gov, 
or faxing them to the Office of the Secretary at 202/452-3819 or 202/
452-3102. Members of the public may inspect comments in Room MP-500 of 
the Martin Building between 9 a.m. and 5 p.m. on weekdays pursuant to 
Sec.  261.12, except as provided in Sec.  261.14, of the Board's Rules 
Regarding Availability of Information, 12 CFR 261.12 and 261.14.
   FDIC: Written comments should be addressed to Robert E. Feldman, 
Executive Secretary, Attention: Comments, Federal Deposit Insurance 
Corporation, 550 17th Street, NW., Washington, DC, 20429. Commenters 
are encouraged to submit comments by facsimile transmission to (202) 
898-3838 or by electronic mail to Comments@FDIC.gov. Comments also may 
be hand-delivered to the guard station at the rear of the 550 17th 
Street Building (located on F Street) on business days between 8:30 
a.m. and 5 p.m. Comments may be inspected and photocopied at the FDIC's 
Public Information Center, Room 100, 801 17th Street, NW., Washington, 
DC between 9 a.m. and 4:30 p.m. on business days.
   OTS: Send comments to Regulation Comments, Chief Counsel's Office, 
Office of Thrift Supervision, 1700 G Street, NW., Washington, DC 20552, 
Attention: No. 2003-28. Delivery: Hand deliver comments to the Guard's 
desk, east lobby entrance, 1700 G Street, NW., from 9 a.m. to 4 p.m. on 
business days, Attention: Regulation Comments, Chief Counsel's Office, 
Attention: No. 2003-28. Facsimiles: Send facsimile transmissions to FAX 
Number (202) 906-6518, Attention: No. 2003-28. E-mail: Send e-mails to 
regs.comments@ots.treas.gov, Attention: No. 2003-28, and include your 
name and telephone number. Due to temporary disruptions in mail service 
in the Washington, DC area, commenters are encouraged to send comments 
by fax or e-mail, if possible.

FOR FURTHER INFORMATION CONTACT:
   OCC: Corporate IRB guidance: Jim Vesely, National Bank Examiner, 
Large Bank Supervision (202/874-5170 or james.vesely@occ.treas.gov); 
AMA guidance: Tanya Smith, Senior International Advisor, International 
Banking & Finance (202/874-4735 or tanya.smith@occ.treas.gov).
   Board: Corporate IRB guidance: David Palmer, Supervisory Financial 
Analyst, Division of Banking Supervision and Regulation (202/452-2904 
or david.e.palmer@frb.gov); AMA guidance: T. Kirk Odegard, Supervisory 
Financial Analyst, Division of Banking Supervision and Regulation (202/
530-6225 or thomas.k.odegard@frb.gov). For users of Telecommunications 
Device for the Deaf (``TDD'') only, contact 202/263-4869.
   FDIC: Corporate IRB guidance and AMA guidance: Pete D. Hirsch, 
Basel Project Manager, Division of Supervision and Consumer Protection 
(202/898-6751 or phirsch@fdic.gov).
   OTS: Corporate IRB guidance and AMA guidance: Michael D. Solomon, 
Senior Program Manager for Capital Policy (202/906-5654); David W. 
Riley, Project Manager (202/906-6669), Supervision Policy; Teresa A. 
Scott, Counsel (Banking and Finance) (202/906-6478); or Eric 
Hirschhorn, Principal Financial Economist (202/906-7350), Regulations 
and Legislation Division, Office of the Chief Counsel, Office of Thrift 
Supervision, 1700 G Street, NW., Washington, DC 20552.

Document 1: Draft Supervisory Guidance on Internal Ratings-Based 
Systems for Corporate Credit

Table of Contents

I. Introduction
   A. Purpose
   B. Overview of Supervisory Expectations
   1. Ratings Assignment
   2. Quantification
   3. Data Maintenance
   4. Control and Oversight Mechanisms
   C. Scope of Guidance
   D. Timing
II. Ratings for IRB Systems
   A. Overview
   B. Credit Ratings
   1. Rating Assignment Techniques
   a. Expert Judgment
   b. Models
   c. Constrained Judgment
   C. IRB Ratings System Architecture
   1. Two-Dimensional Rating System
   a. Definition of Default
   b. Obligor Ratings
   c. Loss Severity Ratings
   2. Other Considerations of IRB Rating System Architecture
   a. Timeliness of Ratings
   b. Multiple Ratings Systems
   c. Recognition of the Risk Mitigation Benefits of Guarantees
   3. Validation Process
   a. Ratings System Developmental Evidence
   b. Ratings System Ongoing Validation
   c. Back-Testing
III. Quantification of IRB Systems
   A. Introduction
   1. Stages of the Quantification Process
   2. General Principles for Sound IRB Quantification
   B. Probability of Default (PD)
   1. Data
   2. Estimation
   3. Mapping
   4. Application
   C. Loss Given Default (LGD)
   1. Data
   2. Estimation
   3. Mapping
   4. Application
   D. Exposure at Default (EAD)
   1. Data
   2. Estimation
   3. Mapping
   4. Application
   E. Maturity (M)
   F. Validation
Appendix to Part III: Illustrations of the Quantification Process
IV. Data Maintenance
   A. Overview
   B. Data Maintenance Framework
   1. Life Cycle Tracking
   2. Rating Assignment Data
   3. Example Data Elements
   C. Data Element Functions
   1. Validation and Refinement
   2. Developing Parameter Estimates
   3. Applying Rating System Improvements Historically
   4. Calculating Capital Ratios and Reporting to the Public
   5. Supporting Risk Management
   D. Managing Data Quality and Integrity
   1. Documentation and Definitions
   2. Electronic Storage
   3. Data Gaps
V. Control and Oversight Mechanisms
   A. Overview
   B. Independence in the Rating Approval Process
   C. Transparency
   D. Accountability
   1. Responsibility for Assigning Ratings
   2. Responsibility for Rating System Performance
   E. Use of Ratings
   F. Rating System Review (RSR)
   G. Internal Audit
   1. External Audit
   H. Corporate Oversight

I. Introduction

A. Purpose

   This document describes supervisory expectations for banking 
organizations (institutions) adopting the advanced internal ratings-
based approach (IRB) for the determination of minimum regulatory risk-
based capital requirements. The focus of this guidance is corporate 
credit portfolios. Retail, commercial real estate, securitizations, and 
other portfolios will be the focus of later guidance. This draft 
guidance should be considered with the advance notice of proposed 
rulemaking (ANPR) on revisions to the risk-based capital standard 
published elsewhere in today's Federal Register.
   The primary objective of IRB is to enhance the sensitivity of 
regulatory capital requirements to credit risk. To accomplish that 
objective, IRB harnesses a bank's own risk rating and quantification 
capabilities. In general, the IRB approach reflects and extends recent 
developments in risk management and banking supervision. However, the 
degree to which any individual bank will need to modify its own credit 
risk management practices to deliver accurate and consistent IRB risk 
parameters will vary from institution to institution.
   This guidance is intended to provide supervisors and institutions 
with a clear description of the essential components and 
characteristics of an acceptable IRB framework. Toward that end, this 
document sets forth IRB system supervisory standards that are 
highlighted in bold and designated by the prefix ``S.'' Whenever 
possible, these supervisory standards are principle-based to enable 
institutions to implement the framework flexibly. However, when 
prudential concerns or the need for standardization override the desire 
for flexibility, the supervisory standards are more detailed. 
Ultimately, institutions must have credit risk management practices 
that are consistent with the substance and spirit of the standards in 
this guidance.
   The IRB conceptual framework outlined in this document is intended 
neither to dictate the precise manner by which institutions should seek 
to meet supervisory expectations, nor to provide technical guidance on 
how to develop such a framework. As institutions develop their IRB 
systems in anticipation of adopting them for regulatory capital 
purposes, supervisors will be evaluating, on an individual bank basis, 
the extent to which institutions meet the standards outlined in this 
document. In evaluating institutions, supervisors will rely on this 
supervisory guidance as well as examination procedures, which will be 
developed separately. This document assumes that readers are familiar 
with the proposed IRB approach to calculating minimum regulatory 
capital articulated in the ANPR.

B. Overview of Supervisory Expectations

   Rigorous credit risk measurement is a necessary element of advanced 
risk management. Qualifying institutions will use their internal rating 
systems to associate a probability of default (PD) with each obligor 
grade, as well as a loss given default (LGD) with each credit facility. 
In addition, institutions will estimate exposure at default (EAD) and 
will calculate the effective remaining maturity (M) of credit 
facilities.
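   The relationships among these parameters can be made concrete with a 
minimal sketch (in Python; all names and values below are hypothetical 
and illustrative only, not the regulatory capital formula):

    # Illustrative sketch; names and values are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class IRBParameters:
        pd: float    # probability of default for the obligor grade (one-year horizon)
        lgd: float   # loss given default for the facility, as a fraction of EAD
        ead: float   # exposure at default, in dollars
        m: float     # effective remaining maturity, in years

    facility = IRBParameters(pd=0.01, lgd=0.45, ead=1_000_000, m=2.5)

    # Expected loss combines three of the four parameters; the minimum
    # capital requirement itself comes from a separate supervisory formula.
    expected_loss = facility.pd * facility.lgd * facility.ead
    print(f"Expected loss: ${expected_loss:,.0f}")  # $4,500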
   Qualifying institutions will be expected to have an IRB system 
consisting of four interdependent components:
   [sbull] A system that assigns ratings and validates their accuracy 
(Section II),
   [sbull] A quantification process that translates risk ratings into 
IRB parameters (Section III),
   [sbull] A data maintenance system that supports the IRB system 
(Section IV), and
   [sbull] Oversight and control mechanisms that ensure the system is 
functioning as intended and producing accurate ratings (Section V).
   Together these rating, quantification, data, and oversight 
mechanisms present a framework for defining and improving the 
evaluation of credit risk.
   It is expected that rating systems will operate dynamically. As 
ratings are assigned, quantified and used, estimates will be compared 
with actual results and data will be maintained and updated to support 
oversight and validation efforts and to better inform future estimates. 
The rating system review and internal audit functions will serve as 
control mechanisms that ensure that the process of ratings assignment 
and quantification function according to policy and design and that 
noncompliance and weaknesses are identified, communicated to senior 
management and the board, and addressed. Rating systems with 
appropriate data and oversight feedback mechanisms foster a learning 
environment that promotes integrity in the rating system and continuing 
refinement.
   IRB systems need the support and oversight of the board and senior 
management to ensure that the various components fit together 
seamlessly and that incentives to make the system rigorous extend 
across line, risk management, and other control groups. Without strong 
board and senior management support and involvement, rating systems are 
unlikely to provide accurate and consistent risk estimates during both 
good and bad times.
   The new regulatory minimum capital requirement is predicated on an 
institution's internal systems being sufficiently advanced to allow a 
full and accurate assessment of its risk exposures. Under the new 
framework, an institution could experience a considerable capital 
shortfall in the most difficult of times if its risk estimates are 
materially understated. Consequently, the IRB framework demands a 
greater level of validation work and controls than supervisors have 
required in the past. When properly implemented, the new framework 
holds the potential for better aligning minimum capital requirements 
with the risk taken, pushing capital requirements higher for 
institutions that specialize in riskier types of lending, and lower for 
those that specialize in safer risk exposures.
   Supervisors will evaluate compliance with the supervisory standards 
for each of the four components of an IRB system. However, evaluating 
compliance with each of the standards individually will not be 
sufficient to determine an institution's overall compliance. Rather, 
supervisors and institutions must also evaluate how well the various 
components of an institution's IRB system complement and reinforce one 
another to achieve the overall objective of accurate measures of risk. 
In performing their evaluation, supervisors will need to exercise 
considerable supervisory judgment, both in evaluating the individual 
components and the overall IRB framework. A summary of the key 
supervisory expectations for each of the IRB components follows.
Ratings Assignment
   The first component of an IRB system involves the assignment and 
validation of ratings (see Section II). Ratings must be accurately and 
consistently applied to all corporate credit exposures and be subject 
to initial and ongoing validation. Institutions will have latitude in 
designing and operating IRB rating systems subject to five broad 
standards:
   Two-dimensional risk-rating system--IRB institutions must be able 
to make meaningful and consistent differentiations among credit 
exposures along two dimensions--obligor default risk and loss severity 
in the event of a default.
   Rank order risks--IRB institutions must rank obligors by their 
likelihood of default, and facilities by the loss severity expected in 
default.
   Calibration--IRB obligor ratings must be calibrated to values of 
the probability of default (PD) parameter and loss severity ratings 
must be calibrated to values of the loss given default (LGD) parameter.
   Accuracy--Actual long-run default frequencies for obligor 
rating grades must closely approximate the PDs assigned to those grades 
and realized loss rates on loss severity grades must closely 
approximate the LGDs assigned to those grades.
   Validation process--IRB institutions must have ongoing validation 
processes for rating systems that include the evaluation of 
developmental evidence, process verification, benchmarking, and the 
comparison of predicted parameter values to actual outcomes (back-
testing).
Quantification
   The second component of an IRB system is a quantification process 
(see Section III). Since obligor and facility ratings may be assigned 
separately from the quantification of the associated PD and LGD 
parameters, quantification is addressed as a separate process. The 
quantification process must produce values not only for PD and LGD but 
also for EAD and for the effective remaining maturity (M). The 
quantification of those four parameters is expected to be the result of 
a disciplined process. The key considerations for effective 
quantification are as follows:
   Process--IRB institutions must have a fully specified process 
covering all aspects of quantification (reference data, estimation, 
mapping, and application).
   Documentation--The quantification process, including the role and 
scope of expert judgment, must be fully documented and updated 
periodically.
   Updating--Parameter estimates and related documentation must be 
updated regularly.
   Review--A bank must subject all aspects of the quantification 
process, including design and implementation, to an appropriate degree 
of independent review and validation.
   Constraints on Judgment--Judgmental adjustments may be an 
appropriate part of the quantification process, but must not be biased 
toward lower risk estimates.
   Conservatism--Parameter estimates must incorporate a degree of 
conservatism that is appropriate for the overall robustness of the 
quantification process.
Data Maintenance
   The third component of an IRB system is an advanced data management 
system that produces credible and reliable risk estimates (see Section 
IV). The broad standard governing an IRB data maintenance system is that 
it supports the requirements for the other IRB system components, as 
well as the institution's broader risk management and reporting needs. 
Institutions will have latitude in managing their data, subject to the 
following key data maintenance standards:
   Life Cycle Tracking--Institutions must collect, maintain, and 
analyze essential data for obligors and facilities throughout the life 
and disposition of the credit exposure.
   Rating Assignment Data--Institutions must capture all significant 
quantitative and qualitative factors used to assign the obligor and 
loss severity rating.
   Support of IRB System--Data collected by institutions must be of 
sufficient depth, scope, and reliability to:
   [sbull] Validate IRB system processes,
   [sbull] Validate parameters,
   [sbull] Refine the IRB system,
   [sbull] Develop internal parameter estimates,
   [sbull] Apply improvements historically,
   [sbull] Calculate capital ratios,
   [sbull] Produce internal and public reports, and
   [sbull] Support risk management.
Control and Oversight Mechanisms
   The fourth component of an IRB system comprises control and 
oversight mechanisms that ensure that the various components of the IRB 
system are functioning as intended (see Section V). Given the various 
uses of internal risk ratings, including their direct link to 
regulatory capital requirements, there is enormous, sometimes 
conflicting, pressure on banks' internal rating systems. Control 
structures are subject to the following broad standards:
   Interdependent System of Controls--IRB institutions must implement 
a system of interdependent controls that include the following 
elements:
   [sbull] Independence,
   [sbull] Transparency,
   [sbull] Accountability,
   [sbull] Use of ratings,
   [sbull] Rating system review,
   [sbull] Internal audit, and
   [sbull] Board and senior management oversight.
   Checks and Balances--Institutions must combine the various control 
mechanisms in a way that provides checks and balances for ensuring IRB 
system integrity.
   The system of oversight and controls required for an effective IRB 
system may operate in various ways within individual institutions. This 
guidance does not prescribe any particular organizational structure for 
IRB oversight and control mechanisms. Banks have broad latitude to 
implement structures that are most effective for their individual 
circumstances, as long as those structures support and enhance the 
institution's ability to satisfy the supervisory standards expressed in 
this document.

C. Scope of Guidance

   This draft guidance reflects work performed by supervisors to 
evaluate and compare current practices at institutions with the 
concepts and requirements for an IRB framework. For instances in which 
a range of practice was observable, examples are provided on how 
certain practices may or may not qualify. However, in many other 
instances, practices were at such an early stage of development that it 
was not feasible to describe specific examples. In those cases, 
requirements tend to be principle-based and without examples. Given 
that institutions are still in the early stages of developing 
qualifying IRB systems, it is expected that this guidance will evolve 
over time to more explicitly take into account new and improving 
practices.

D. Timing

   S. An IRB system must be fully operational at least one year prior to 
the institution's intended start date for the advanced approach.
   As noted in the ANPR, the significant challenge of implementing a 
fully compliant IRB system requires that institutions and supervisors 
have sufficient time to observe whether the IRB system is delivering 
risk-based capital figures with a high level of integrity. The ability 
to observe the institution's ratings architecture, validation, data 
maintenance and control functions in a fully operating environment 
prior to implementation will help identify how well the IRB system 
design functions in practice. This will be particularly important given 
that in the first year of implementation institutions will not only be 
subject to the new minimum capital requirements, but will also be 
disclosing risk-based capital ratios for the public to rely upon in the 
assessment of the institution's financial health.

II. Ratings for IRB Systems

A. Overview

   This chapter describes the design and operation of risk-rating 
systems that will be acceptable in an internal ratings-based (IRB) 
framework. Banks will have latitude in designing and operating IRB 
rating systems, subject to five broad standards:
   Two-dimensional risk-rating system--IRB institutions must be able 
to make meaningful and consistent differentiations among credit 
exposures along two dimensions--obligor default risk and loss severity 
in the event of a default.
   Rank order risks--IRB institutions must rank obligors by their 
likelihood of default, and facilities by the loss severity expected in 
default.
   Calibration--IRB obligor ratings must be calibrated to values of 
the probability of default (PD) parameter and loss severity ratings 
must be calibrated to values of the loss given default (LGD) parameter.
   Accuracy--Actual long-run default frequencies for obligor 
rating grades must closely approximate the PDs assigned to those grades 
and actual loss rates on loss severity grades must closely approximate 
the LGDs assigned to those grades.
   Validation process--IRB institutions must have ongoing validation 
processes for rating systems that include the evaluation of 
developmental evidence, process verification, benchmarking, and the 
comparison of predicted parameter values to actual outcomes (back-
testing).

B. Credit Ratings

   In general, a credit rating is a summary indicator of the relative 
risk on a credit exposure. Credit ratings can take many forms. The most 
widely known credit ratings are the public agency ratings, which are 
expressed as letters; bank internal ratings tend to be expressed as 
whole numbers--for example, 1 through 10. Some rating model outputs are 
expressed in terms of probability of default or expected default 
frequency, in which case they may be more than relative measures of 
risk. Regardless of the form, meaningful credit ratings share two 
characteristics:
   [sbull] They group credits to discriminate among possible outcomes.
   [sbull] They rank the perceived levels of credit risk.
   Banks have used credit ratings of various types for a variety of 
purposes. Some ratings are intended to rank obligors by risk of default 
and some are intended to rank facilities\1\ by expected loss, which 
incorporates risk of default and loss severity. Bank rating systems 
that are geared solely to expected loss will need to be amended to meet 
the two-dimensional requirements of the IRB approach.
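   The need for two dimensions can be illustrated with a brief sketch 
(hypothetical values): two facilities with identical expected loss can 
carry very different default risk and loss severity, and a rating 
system geared solely to expected loss cannot distinguish them.

    # Illustrative sketch; values are hypothetical.
    # Facility A: obligor likely to default, but losses would be mild.
    # Facility B: obligor unlikely to default, but a default would be a total loss.
    facility_a = {"pd": 0.10, "lgd": 0.10}
    facility_b = {"pd": 0.01, "lgd": 1.00}

    for name, f in (("A", facility_a), ("B", facility_b)):
        # Both print 0.010: expected loss alone hides the difference.
        print(name, "expected loss per unit of exposure:", f["pd"] * f["lgd"])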
Rating Assignment Techniques
   Banks use different techniques, such as expert judgment and models, 
to assign credit risk ratings. For banks using the IRB approach, how 
ratings are assigned is important because different techniques will 
require different validation processes and control mechanisms to ensure 
the integrity of the rating system. To assist the discussion of rating 
architecture requirements, described below are some of the current 
rating assignment techniques. Any of these techniques--expert judgment, 
models, constrained judgment, or a combination thereof--could be 
acceptable within an IRB system, provided the bank meets the standards 
outlined in this document.
---------------------------------------------------------------------------

   \1\ Facilities--loans, lines, or other separate extensions of 
credit to an obligor.
---------------------------------------------------------------------------

Expert Judgment
   Historically, banks have used expert judgment to assign ratings to 
commercial credits. With this technique, an individual weighs relevant 
information and reaches a conclusion about the appropriate risk rating. 
Presumably, the rater makes informed judgments based on knowledge 
gained through experience and training.
   The key feature of expert-judgment systems is flexibility. The 
prevalence of judgmental rating systems reflects the view that the 
determinants of default are too complicated to be captured by a single 
quantitative model. The quality of management is often cited as an 
example of a risk determinant that is difficult to assess through a 
quantitative model. In order to foster internal consistency, banks 
employing expert judgment rating systems typically provide narrative 
guidelines that set out ratings criteria. However, the expert must 
decide how narrative guidelines apply to a given set of circumstances.
   The flexibility possible in the assignment of judgmental ratings 
has implications for the types of ratings review that are feasible. As 
part of the ratings validation process, banks will attempt to confirm 
that raters follow bank policy. However, two individuals exercising 
judgment can use the same information to support different ratings. 
Thus, the review of an expert judgment rating system will require an 
expert who can identify the impact of policy and the impact of judgment 
on a rating.
Models
   In recent years, models have been developed for use in rating 
commercial credits. In a model-based approach, inputs are numeric and 
provide quantitative and qualitative information about an obligor. The 
inputs are combined using mathematical equations to produce a number 
that is translated into a categorical rating. An important feature of 
models is that the rating is perfectly replicable by another party, 
given the same inputs.
   The models used in credit rating can be distinguished by the 
techniques used to develop them. Some models may rely on statistical 
techniques while others rely on expert-judgment techniques.
   Statistical models. Statistically developed models are the result 
of statistical optimization, in which well-defined mathematical 
criteria are used to choose the model that has the closest fit to the 
observed data. Numerous techniques can be used to build statistical 
models; regression is one widely recognized example. Regardless of the 
specific statistical technique, a knowledgeable independent reviewer 
will have to exercise judgment in evaluating the reasonableness of a 
model's development, including its underlying logic, the techniques 
used to handle the data, and the statistical model building techniques.
   Expert-derived models.\2\ Several banks have built rating models by 
asking their experts to decide what weights to assign to critical 
variables in the models. Drawing on their experience, the experts first 
identify the observable variables that affect the likelihood of 
default. They then reach agreement on the weights to be assigned to 
each of the variables. Unlike statistical optimization, the experts are 
not necessarily using clear, consistent criteria to select the weights 
attached to the variables. Indeed, expert-judgment model building is 
often a practical choice when there is not enough data to support 
statistical model building. Despite its dependence on expert judgment, 
this method can be called model-based as long as the result--the 
equation, most likely with linear weights--is used as the basis to rate 
the credits. Once the equation is set, the model shares the feature of 
replicability with statistically derived models. Generally, independent 
credit experts use judgment to evaluate the reasonableness of the 
development of these models.
---------------------------------------------------------------------------

   \2\ Some banks have developed credit rating models that they 
refer to as ``scorecards,'' but they have used expert judgment to 
derive the weights. While they are models, they are not scoring 
models in the now conventional use of the term. In its conventional 
use, the term scoring model is reserved for a rating model derived 
using statistical techniques.
---------------------------------------------------------------------------
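   As a hedged illustration of how an expert-derived model achieves 
replicability, the sketch below (with hypothetical variables and 
expert-chosen weights) applies a fixed linear equation; given the same 
inputs, any party obtains the same score and hence the same rating.

    # Illustrative sketch; the variables and expert-chosen weights are hypothetical.
    WEIGHTS = {"leverage": -0.40, "interest_coverage": 0.35, "liquidity": 0.25}

    def model_score(inputs: dict) -> float:
        """Fixed linear equation set by credit experts, applied mechanically."""
        return sum(WEIGHTS[k] * inputs[k] for k in WEIGHTS)

    # Identical inputs always yield the identical score (replicability).
    print(model_score({"leverage": 0.6, "interest_coverage": 2.0, "liquidity": 1.1}))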

Constrained Judgment
   The alternatives just described present the extremes, but in 
practice, many banks use rating systems that combine models with 
judgment. Two approaches are common.
   Judgmental systems with quantitative guidelines or model results as 
inputs. Historically, the most common approach to rating has involved 
individuals exercising judgment about risks, subject to policy 
guidelines containing quantitative criteria such as minimum values for 
particular financial ratios. Banks develop quantitative criteria to 
guide individuals in assigning ratings, but often believe that those 
criteria do not adequately reflect the information needed to assign a 
rating.
   One version of this constrained judgment approach features a model 
output as one among several criteria that an individual may consider in 
assigning ratings. The individual assigning the rating is responsible 
for prioritizing the criteria, reconciling conflicts between criteria, 
and if warranted, overriding some criteria. Even if individuals 
incorporate model results as one of the factors in their ratings, they 
will exercise judgment in deciding what weight to attach to the model 
result. The appeal of this approach is that the model combines many 
pieces of information into a single output, which simplifies analysis, 
while the rater retains flexibility regarding the use of the model 
output.
   Model-based ratings with judgmental overrides. When banks use 
rating models, individuals are generally permitted to override the 
results under certain conditions and within tolerance levels for 
frequency. Credit-rating systems in which individuals can override 
models raise many of the same issues presented separately by pure 
judgment and model-based systems. If overrides are rare, the system can 
be evaluated largely as if it is a model-based system. If, however, 
overrides are prevalent, the system will be evaluated more like a 
judgmental system.
   Since constrained judgment systems combine features of both expert 
judgment and model-based systems, their evaluation will require the 
skills required to evaluate both of these other systems.

C. IRB Ratings System Architecture

Two-Dimensional Rating System
   S. IRB risk rating systems must have two rating dimensions--obligor 
and loss severity ratings.
   S. IRB obligor and loss severity ratings must be calibrated to 
values of the probability of default (PD) and the loss given default 
(LGD), respectively.
   Regardless of the type of rating system(s) used by an institution, 
the IRB approach imposes some specific requirements. The first 
requirement is that an IRB rating system must be two-dimensional. Banks 
will assign obligor ratings, which will be associated with a PD. They 
will also either assign a loss severity rating, which will be 
associated with LGD values, or directly assign LGD values to each 
facility. The process of assigning the obligor and loss severity 
ratings--hereafter referred to as the rating system--is discussed 
below, and the process of calibrating obligor and loss severity ratings 
to PD and LGD parameters is discussed in Section III.
   S. Banks must record obligor defaults in accordance with the IRB 
definition of default.
Definition of Default
   The consistent identification of defaults is fundamental to any IRB 
rating system. For IRB purposes, a default is considered to have 
occurred with regard to a particular obligor when either or both of the 
two following events have taken place:
   [sbull] The obligor is past due more than 90 days on any material 
credit obligation to the banking group. Overdrafts will be considered as 
being 
past due once the customer has breached an advised limit or been 
advised of a limit smaller than current outstandings.
   [sbull] The bank considers that the obligor is unlikely to pay its 
credit obligations to the banking group in full, without recourse by 
the bank to actions such as liquidating collateral (if held).
   Any obligor (or its underlying credit facilities) that meets one or 
more of the following conditions is considered unlikely to pay and 
therefore in default:
   [sbull] The bank puts the credit obligation on non-accrual status.
   [sbull] The bank makes a charge-off or account-specific provision 
resulting from a significant perceived decline in credit quality 
subsequent to the bank taking on the exposure.
   [sbull] The bank sells the credit obligation at a material credit-
related economic loss.
   [sbull] The bank consents to a distressed restructuring of the 
credit obligation where this is likely to result in a diminished 
financial obligation caused by the material forgiveness, or 
postponement, of principal, interest or (where relevant) fees.
   [sbull] The bank has filed for the obligor's bankruptcy or a 
similar order in respect of the obligor's credit obligation to the 
banking group.
   [sbull] The obligor has sought or has been placed in bankruptcy or 
similar protection where this would avoid or delay repayment of the 
credit obligation to the banking group.
   While most conditions of default currently are identified by bank 
reporting systems, institutions will need to augment data capture 
systems to collect those default circumstances that may not have been 
traditionally identified. These include facilities that are current and 
still accruing but where the obligor declared or was placed in 
bankruptcy. They must also capture so-called ``silent defaults''--
defaults when the loss on a facility was avoided by liquidating 
collateral.
   Loan sales on which a bank experiences a material loss due to 
credit deterioration are considered defaults. Material credit-related 
losses are defined as XX. (The Agencies seek comment on how to define 
``material'' loss in the case of loans sold at a discount). Banks 
should ensure that they have adequate systems to identify such 
transactions and to maintain adequate records so that reviewers can 
assess the adequacy of the institution's decision-making process in 
this area.
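   A minimal sketch of how these default conditions might be encoded in 
a bank's data systems follows (in Python; all field names are 
hypothetical assumptions, not required data elements):

    # Illustrative sketch; field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class ObligorStatus:
        max_days_past_due: int = 0
        on_nonaccrual: bool = False
        charged_off: bool = False               # charge-off or account-specific provision
        sold_at_credit_loss: bool = False       # sold at a material credit-related loss
        distressed_restructuring: bool = False
        bank_filed_bankruptcy: bool = False     # bank filed for the obligor's bankruptcy
        in_bankruptcy: bool = False             # obligor sought or was placed in bankruptcy

    def is_irb_default(o: ObligorStatus) -> bool:
        """Flag an obligor as defaulted under the IRB definition above."""
        past_due = o.max_days_past_due > 90
        unlikely_to_pay = (o.on_nonaccrual or o.charged_off or o.sold_at_credit_loss
                           or o.distressed_restructuring or o.bank_filed_bankruptcy
                           or o.in_bankruptcy)
        return past_due or unlikely_to_pay

    # A facility that is current and accruing, but whose obligor is in
    # bankruptcy, is still an IRB default -- one of the cases traditional
    # reporting systems may miss.
    print(is_irb_default(ObligorStatus(in_bankruptcy=True)))  # True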
Obligor Ratings
   S. Banks must assign discrete obligor grades.
   While banks may use models to estimate probabilities of default for 
individual obligors, the IRB approach requires banks to group the 
obligors into discrete grades. Each obligor grade, in turn, must be 
associated with a single PD.
   S. The obligor-rating system must result in a ranking of obligors 
by likelihood of default.
   The proper operation of the obligor-rating system will feature a 
ranking of obligors by likelihood of default. For example, if a bank 
uses a rating system based on a 10-point scale, with 1 representing 
obligors of highest financial strength and 10 representing defaulted 
obligors, grades 2 through 9 should represent groups of ever-increasing 
risk. In a rating system in which risk increases with the grade, an 
obligor with a grade 4 is riskier than an obligor with a grade 2, but 
need not be twice as risky.
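   For illustration, the sketch below calibrates such a 10-point scale 
to hypothetical PD values and checks the rank-ordering property (the PD 
values are assumptions for illustration only):

    # Hypothetical PD calibration for a 10-grade scale; grade 10 is the default grade.
    grade_pd = {
        1: 0.0002, 2: 0.0008, 3: 0.002, 4: 0.005, 5: 0.010,
        6: 0.025, 7: 0.060, 8: 0.120, 9: 0.250, 10: 1.0,
    }

    # Rank ordering: likelihood of default must increase with the grade,
    # though not necessarily proportionally (grade 4 need not be twice grade 2).
    pds = [grade_pd[g] for g in sorted(grade_pd)]
    assert all(a < b for a, b in zip(pds, pds[1:])), "grades must rank obligors by default risk"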
   S. Separate exposures to the same obligor must be assigned to the 
same obligor rating grade.
   As noted above, the IRB framework requires that the obligor rating 
be distinct from the loss severity rating, which is assigned to the 
facility. Collateral and other facility characteristics should not 
influence the obligor rating. For example, in a 1-to-10 rating system, 
where risk increases with the number grade, a defaulted borrower with a 
fully cash-secured transaction should be rated a 10--defaulted--
regardless of the remote expectation of loss. Likewise, a borrower 
whose financial condition warrants the highest investment grade rating 
should be rated a 1 even if the bank's transactions are subordinate to 
other creditors and unsecured. Since the rating is assigned to the 
obligor and not the facility, separate exposures to the same obligor 
must be assigned to the same obligor rating grade.
   At the bottom of any IRB system rating scale is a default grade. 
Once an obligor is considered to be in default for IRB purposes, that 
obligor must be assigned a default grade until such time as its 
financial condition and performance improve sufficiently to clearly 
meet the bank's internal rating definition for one of its non-default 
grades. Once an obligor is in default on any material credit obligation 
to the subject bank, all of its facilities at that institution are 
considered to be in default.
   S. In assigning an obligor to a rating category, the bank must 
assess the risk of obligor default over a period of at least one year.
   S. Obligor ratings must reflect the impact of financial distress.
   In assigning an obligor to a rating category, the bank must assess 
the risk of obligor default over a period of at least one year. This 
use of a one-year assessment horizon does not mean that a bank should 
limit its consideration to outcomes for that obligor that are most 
likely over that year; the rating must take into account possible 
adverse events that might increase an obligor's likelihood of default.
Rating Philosophy--Decisions Underlying Ratings Architecture
   S. Banks must adopt a ratings philosophy. Policy guidelines should 
describe the ratings philosophy, particularly how quickly ratings are 
expected to migrate in response to economic cycles.
   S. A bank's capital management policy must be consistent with its 
ratings philosophy in order to avoid capital shortfalls in times of 
systematic economic stress.
   In the IRB framework, banks assign obligors to groups that are 
expected to share common default frequencies. That general description, 
however, still leaves open different possible implementations, 
depending on how the bank defines the set of possible adverse events 
that the obligor might face. A bank must decide whether obligors are 
grouped by expected common default frequency over the next year (a so-
called point-in-time rating system) or by an expected common default 
frequency over a wider range of possible stress outcomes (a so-called 
through-the-cycle rating system). Choosing between a point-in-time 
system and a through-the-cycle system yields a rating philosophy.
   In point-in-time rating systems, obligors are assigned to groups 
that are expected to share a common default frequency in a particular 
year. Point-in-time ratings change from year to year as borrowers' 
circumstances change, including changes due to the economic 
possibilities faced by the borrowers. Since the economic circumstances 
of many borrowers reflect the common impact of the general economic 
environment, the transitions in point-in-time ratings will reflect that 
systematic influence. A Merton-style probability of default prediction 
model is commonly believed to be an example of a point-in-time approach 
to rating (although that may depend on the specific implementation of 
the model).
   Through-the-cycle rating systems do not ask the question: what is 
the probability of default over the next year? Instead, they assign 
obligors to groups that would be expected to share 
a common default frequency if the borrowers in them were to experience 
distress, regardless of whether that distress is in the next year. 
Thus, as the descriptive title suggests, this rating philosophy 
abstracts from the near-term economic possibilities and considers a 
richer assessment of the possibilities. Like point-in-time ratings, 
through-the-cycle ratings will change from year to year due to changes 
in borrower circumstance. However, since this rating philosophy 
abstracts from the immediate economic circumstance and considers the 
implications of hypothetical stress circumstances, year to year 
transitions in ratings will be less influenced by changes in the actual 
economic environment. The ratings agencies are commonly believed to use 
through-the-cycle rating approaches.
   Current practice in many banks in the U.S. is to rate obligors 
using an approach that combines aspects of both point-in-time and 
through-the-cycle approaches. The explanation provided by banks that 
combine those approaches is that they want rating transitions to 
reflect the directional impact of changes in the economic environment, 
but that they do not want all of the volatility in ratings associated 
with a point-in-time approach.
   Regardless of which ratings philosophy a bank chooses, an IRB bank 
must articulate clearly its approach and the implications of that 
choice. As part of the choice of rating philosophy, the bank must 
decide whether the same ratings philosophy will be employed for all of 
the bank's portfolios. And management must articulate the implications 
that the bank's ratings philosophy has on the bank's capital planning 
process. If a bank chooses a ratings philosophy that is likely to 
result in ratings transitions that reflect the impact of the economic 
cycle, its capital management policy must be designed to avoid capital 
shortfalls in times of systematic economic stress.
Obligor-Rating Granularity
   S. An institution must have at least seven obligor grades that 
contain only non-defaulted borrowers and at least one grade to which 
only defaulted borrowers are assigned.
   The number of grades used in a rating system should be sufficient 
to reasonably ensure that management can meaningfully differentiate 
risk in the portfolio, without being so large that it limits the 
practical use of the rating system. To determine the appropriate number 
of grades beyond the minimum seven non-default grades, each institution 
must perform its own internal analysis.
   S. An institution must justify the number of obligor grades used in 
its rating system and the distribution of obligors across those grades.
   The mere existence of an exposure concentration in a grade (or 
grades) does not, by itself, reflect weakness in a rating system. For 
example, banks may focus on a particular type of lending, such as 
asset-based lending, in which the borrowers may have similar default 
risk. Banks with such focused lending activities may use close to the 
minimum number of obligor grades, while banks with a broad range of 
lending activities should have more grades. However, banks with a high 
concentration of obligors in a particular grade are expected to perform 
a thorough analysis that supports such a concentration.
   A significant concentration within an obligor grade may be 
suspected if the financial strength of the borrowers within that grade 
varies considerably. If obligors seem unduly concentrated, then 
management should ask the following questions:
   [sbull] Are the criteria for each grade clear? Those rating 
criteria may be too vague to allow raters to make clear distinctions. 
Ambiguity may be an issue throughout the rating scale or it may be 
limited to the most commonly used ratings.
   [sbull] How diverse are the obligors? That is, how many market 
segments (for example, large commercial, middle market, private 
banking, small business, geography, etc.) are significantly represented 
in the bank's borrower population? If a bank's commercial loan 
portfolio is not concentrated in one market segment, its risk rating 
distribution is not likely to be concentrated.
   [sbull] How broad are the bank's internal rating categories 
compared to those of other lenders? The bank may be able to learn 
enough from publicly available information to adjust its rating 
criteria.
   Some banks use ``modifiers'' to provide more risk differentiation 
to a given rating system. A risk rating modified with a plus, minus or 
other indicator does not constitute a separate grade unless the bank 
has developed a distinct rating definition and criteria for the 
modified grade. In the absence of such distinctions, grades such as 5, 
5+, and 5- are viewed as a single grade for regulatory capital purposes 
regardless of the existence of the modifiers.
Loss Severity Ratings
   S. Banks must rank facilities by the expected severity of the loss 
upon default.
   The second dimension of an IRB system is the loss severity rating, 
which is calibrated to LGD. A facility's LGD estimate is the loss the 
bank is likely to incur in the event that the obligor defaults, and is 
expressed as a percentage of exposure at the time of default. LGD 
estimates can be assigned either through the use of a loss severity 
rating system or they can be directly assigned to each facility.
   LGD analysis is still in very early stages of development relative 
to default risk modeling. Academic research in this area is relatively 
sparse, data are not abundant, and industry practice is still widely 
varying and evolving. Given the lack of data and the lack of research 
into LGD modeling, some banks are likely, as a first step, to segment 
their portfolios by a handful of available characteristics and 
determine the appropriate LGDs for those segments. Over time, banks' 
LGD methodologies are expected to evolve. Long-standing banking 
experience and existing research on LGD, while preliminary, suggests 
that collateral values, seniority, industry, etc. are predictive of 
loss severity.
   S. Banks must have empirical support for LGD rating systems 
regardless of whether they use an LGD grading system or directly assign 
LGD estimates.
   Whether a bank chooses to assign LGD values directly or, 
alternatively, to rate facilities and then quantify the LGD for the 
rating grades, the key requirement is that it will need to identify 
facility characteristics that influence LGD. Each of the loss severity 
rating categories must be associated with an empirically supported LGD 
estimate. In much the same way an obligor-rating system ranks exposures 
by the probability of default, a facility rating system must rank 
facilities by the likely loss severity.
   Regardless of the method used to assign LGDs (loss severity grades 
or direct LGD estimation), data used to support the methodology must be 
gathered systematically. For many banks, the quality and quantity of 
data available to support the LGD estimation process will have an 
influence on the method they choose.
Stress Condition LGDs
   S. Loss severity ratings must reflect losses expected during 
periods with a relatively high number of defaults.
   Like obligor ratings, which group obligors by expected default 
frequency, loss severity ratings assign facilities to groups that are 
expected to experience a common loss severity. However, the different 
treatment accorded to PD and LGD in the model used to calculate IRB 
capital requirements mandates an asymmetric treatment of obligor and 
loss severity ratings. Obligor 
ratings assign obligors to groups that are expected to experience 
common default frequencies across a number of years, some of which are 
years of general economic stress and some of which are not. In 
contrast, loss severity ratings (or estimates) must pertain to losses 
expected during periods with a high number of defaults--particular 
years that can be called stress conditions. For cases in which loss 
severities do not have a material degree of cyclical variability, use 
of a long-run default-weighted average is appropriate, although stress 
condition LGD generally exceeds these averages.
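   The distinction between a default-weighted average and a simple 
average of annual loss rates can be illustrated with a short sketch 
(hypothetical annual data): weighting each year by its number of 
defaults pulls the long-run average toward the severities experienced 
in high-default years.

    # Hypothetical annual history: (defaults observed, average realized LGD that year).
    history = [(10, 0.30), (12, 0.32), (80, 0.55), (15, 0.35)]  # third year is a stress year

    # Default-weighted long-run average: each year weighted by its default count.
    total_defaults = sum(n for n, _ in history)
    dw_lgd = sum(n * lgd for n, lgd in history) / total_defaults

    # Simple (time-weighted) average of annual LGDs, for comparison.
    tw_lgd = sum(lgd for _, lgd in history) / len(history)

    print(f"default-weighted LGD: {dw_lgd:.3f}")  # about 0.479, pulled toward the stress year
    print(f"time-weighted LGD:    {tw_lgd:.3f}")  # 0.380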
Loss Severity Rating/LGD Granularity
   S. Banks must have a sufficiently fine loss severity grading system 
or prediction model to avoid grouping facilities with widely varying 
LGDs together.
   While there is no stated minimum number of loss severity grades, 
the systems that provide LGD estimates must be flexible enough to 
adequately segment facilities with significantly varying LGDs. Banks 
should have a sufficiently fine LGD grading system or LGD prediction 
model to avoid grouping facilities with widely varying LGDs together. 
For example, a bank using a loss severity rating-scale approach that 
has credit products with a variety of collateral packages or financing 
structures would be expected to have more LGD grades than those 
institutions with fewer options in their credit products.
Other Considerations of IRB Rating System Architecture
Timeliness of Ratings
   S. All risk ratings must be updated whenever new relevant 
information is received, but must be updated at least annually.
   A bank must have a policy that requires a dynamic ratings approach 
ensuring that obligor and loss severity ratings reflect current 
information. That policy must also specify minimum financial reporting 
and collateral valuation requirements. For example, at the time of 
servicing events, banks typically receive updated financial information 
on obligors. For cases in which loss severity grades or estimates are 
dependent on collateral values or other factors that change 
periodically, that policy must take into account the need to update 
these factors.
   Banks' policies may include an alternative rating update timetable 
for exposures below a de minimis amount that is justified by the lack 
of materiality of the potential impact on capital. For example, some 
banks use triggering events to prompt an update of their ratings on de 
minimis exposures rather than adhering to a specific timetable.
Multiple Ratings Systems
   Some banks may develop one risk-rating system that can be used 
across the entire commercial loan portfolio. However, a bank can choose 
to deploy any number of rating systems as long as all exposures are 
assigned PD and LGD values. A different rating system could be used for 
each business line and each rating system could use a different rating 
scale. A bank could also use a different rating system for each 
business line with each system using a common rating scale. Rating 
models could be used for some portfolios and expert judgment systems 
for others. An institution's complexity and sophistication, as well as 
the size and range of products offered, will affect the types and 
numbers of rating systems employed.
   While using a number of rating systems is feasible, such a practice 
might make it more difficult to meet supervisory standards. Each rating 
system must conform to the standards in this guidance and must be 
validated for accuracy and consistency. The requirement that each 
rating system be calibrated to parameter values imposes the ultimate 
constraint, which is that ratings be applied consistently.
Recognition of the Risk Mitigation Benefits of Guarantees
   S. Banks reflecting the risk-mitigating effect of guarantees must 
do so by either adjusting PDs or LGDs, but not both.
   S. To recognize the risk-mitigating effects of guarantees, 
institutions must ensure that the written guarantee is evidenced by an 
unconditional and legally enforceable commitment to pay that remains in 
force until the debt is satisfied in full.
   Adjustments for guarantees must be made in accordance with specific 
criteria contained in the bank's credit policy. The criteria should be 
plausible and intuitive, and should address the guarantor's ability and 
willingness to meet its obligations. Banks are expected to gather 
evidence that confirms the risk-mitigating effect of guarantees.
   Other forms of written third-party support (for example, comfort 
letters or letters of awareness) that are not legally binding should 
not be used to adjust PD or LGD unless a bank can demonstrate through 
analysis of internal data the risk-mitigating effect of such support. 
Banks may not adjust PDs or LGDs to reflect implied support or verbal 
assurances.
   Regardless of the method used to recognize the risk-mitigating 
effects of guarantees, a bank must adopt an approach that is applied 
consistently over time and across the portfolio. Moreover, the onus is 
on the bank to demonstrate that its approach is supported by logic and 
empirical results. While guarantees may provide grounds for adjusting 
PD or LGD, they cannot result in a lower risk weight than that assigned 
to a similar direct obligation of the guarantor.\3\
---------------------------------------------------------------------------

   \3\ The probability that an obligor and a guarantor (who 
supports the obligor's debt) will both default on a debt is lower 
than the probability that either the obligor or the guarantor will 
default. This favorable risk-mitigation effect is known as the 
reduced likelihood of ``double default.'' In determining their 
rating criteria and procedures, banks are not permitted to consider 
possible favorable effects of imperfect expected correlation between 
default events for the borrower and guarantor for purposes of 
regulatory capital requirements. Thus, the adjusted risk weight 
cannot reflect the risk mitigation of double default. The ANPR 
solicits public comment on the double-default issues.
---------------------------------------------------------------------------

Validation Process
   S. IRB rating system architecture must be designed to ensure rating 
system accuracy.
   As part of their IRB rating system architecture, banks must 
implement a process to ensure the accuracy of their rating systems. 
Rating system accuracy is defined as the combination of the following 
outcomes:
   [sbull] The actual long-run average default frequency for each 
rating grade is not significantly greater than the PD assigned to that 
grade.
   [sbull] The actual stress-condition loss rates experienced on 
defaulted facilities are not significantly greater than the LGD 
estimates assigned to those facilities.
   Some differences across individual grades between observed outcomes 
and the estimated parameter inputs to the IRB equations can be 
expected. But if systematic differences suggest a bias toward lowering 
regulatory capital requirements, the integrity of the rating system 
(of the PD dimension, the LGD dimension, or both) becomes suspect. Validation
is the set of activities designed to give the greatest possible 
assurances of ratings system accuracy.
   S. Banks must have ongoing validation processes that include the 
review of developmental evidence, ongoing monitoring, and the 
comparison of predicted parameter values to actual outcomes (back-
testing).
   Validation is an integral part of the rating system architecture. 
Banks must have processes designed to give

[[Page 45957]]

reasonable assurances of their rating systems' accuracy. The ongoing 
process to confirm and ensure rating system accuracy consists of:
   [sbull] The evaluation of developmental evidence,
   [sbull] Ongoing monitoring of system implementation and 
reasonableness (verification and benchmarking), and
   [sbull] Back-testing (comparing actual to predicted outcomes).
   IRB institutions are expected to employ all of the components of 
this process. However, the data to perform comprehensive back-testing 
will not be available in the early stages of implementing an IRB rating 
system. Therefore, banks will have to rely more heavily on 
developmental evidence, quality control tests, and benchmarking to 
assure themselves and other interested parties that their rating 
systems are likely to be accurate. Since the time delay before rating 
systems can be back-tested is likely to be an important issue--because 
of the rarity of defaults in most years and the bunching of defaults in 
a few years--the other parts of the validation process will assume 
greater importance. If rating processes are developed in a learning 
environment in which banks attempt to change and improve ratings, back-
testing may be delayed even further. Validation in its early stages
will depend on bank management's exercising informed judgment about the 
likelihood of the rating system working--not simply on empirical tests.
Ratings System Developmental Evidence
   The first source of support for the validity of a bank's rating 
system is developmental evidence. Evaluating developmental evidence 
involves making a reasonable assessment of the quality of the rating 
system by analyzing its design and construction. Developmental evidence 
is intended to answer the question, Could the rating system be expected 
to work reasonably if it is implemented as designed? That evidence 
must be revisited whenever the bank makes a change to its rating 
system; if a bank adopts a rating system and makes no changes, this 
step need not be revisited. However, since rating systems
are likely to change over time as the bank learns about the 
effectiveness of the system and incorporates the results of those 
analyses, the evaluation of developmental evidence is likely to be an 
ongoing part of the process. The particular steps taken in evaluating 
developmental evidence will depend on the type of rating system.
   Generally, the evaluation of developmental evidence will include a 
body of expert opinion. For example, developmental evidence in support 
of a statistical rating model must include information on the logic 
that supports the model and an analysis of the statistical model-
building techniques. In contrast, developmental evidence in support of 
a constrained-judgment system that features guidance values of 
financial ratios might include a description of the logic and evidence 
relating the values of the ratios to past default and loss outcomes.
   Regardless of the type of rating system, the developmental evidence 
will be more persuasive when it includes empirical evidence on how well 
the ratings might have worked in the past. This evidence should be 
available for a statistical model since such models are chosen to 
maximize the fit to outcomes in the development sample. In addition, 
statistical models should be supported by evidence that they work well 
outside the development sample. Use of ``holdout'' sample evidence is a 
good model-building practice to ensure that the model is not merely a 
statistical quirk of the particular data set used to build the model.
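
   For illustration only, the following sketch shows one way out-of-
sample evidence of this kind might be produced. The model form (a 
logistic regression on two financial ratios), the metric (area under 
the ROC curve), and the data are all assumptions adopted for the 
example; they are not prescribed techniques.

```python
# Illustrative sketch only: testing a statistical default model outside
# its development sample. Model choice and metric are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical reference data: financial ratios and default outcomes.
n = 5000
leverage = rng.normal(0.5, 0.2, n)       # debt / assets
cash_flow = rng.normal(0.1, 0.05, n)     # cash flow / assets
log_odds = -4.0 + 3.0 * leverage - 8.0 * cash_flow
defaulted = rng.random(n) < 1 / (1 + np.exp(-log_odds))

X = np.column_stack([leverage, cash_flow])

# Reserve a holdout sample that plays no role in model building.
X_dev, X_hold, y_dev, y_hold = train_test_split(
    X, defaulted, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_dev, y_dev)

# Discrimination on the holdout sample guards against a model that is
# merely a statistical quirk of the development data set.
auc_dev = roc_auc_score(y_dev, model.predict_proba(X_dev)[:, 1])
auc_hold = roc_auc_score(y_hold, model.predict_proba(X_hold)[:, 1])
print(f"AUC development: {auc_dev:.3f}  AUC holdout: {auc_hold:.3f}")
```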
   Empirical developmental evidence of rating effectiveness will be 
more difficult to produce for a judgmental rating system. Such evidence 
would require asking raters how they would have rated past credits for 
which they did not know the outcomes. Those retrospective ratings could 
then be compared to the outcomes to determine whether the ratings were 
correct on average. Conducting such tests, however, will be difficult 
because historical data sets may not include all of the information 
that an individual would have actually used in making a judgment about 
a rating.
   The sufficiency of the developmental evidence will itself be a 
matter of informed expert opinion. Even if the rating system is model-
based, an evaluation of developmental evidence will entail judging the 
merits of the model-building technique. Although no bright-line tests
are feasible because expert judgment is essential to the evaluation of 
rating system development, experts will be able to draw conclusions 
about whether a well-implemented system would be likely to perform 
satisfactorily.
Ratings System Ongoing Validation
   The second source of analytical support for the validity of a bank 
rating system is the ongoing analysis intended to confirm that the 
rating system is being implemented and continues to perform as 
intended. Such analysis involves process verification and benchmarking.
Process Verification
   Verification activities address the question, Are the ratings being 
assigned as intended? Specific verification activities will depend on 
the rating approach. If a model is used for rating, verification 
analysis begins by confirming that the computer code used to deploy the 
model is correct. The computer code can be verified in a number of 
established ways. For example, a qualified expert can duplicate the 
code or check the code line by line. Process verification for a model 
will also include confirmation that the correct data are being used in 
the model.
   For expert-judgment and constrained-judgment systems, verification 
requires other individual reviewers to evaluate whether the rater 
followed rating policy. The primary requirements for verification of 
ratings assigned by individuals are:
   [sbull] A transparent rating process,
   [sbull] A database with information used by the rater, and
   [sbull] Documentation of how the decisions were made.
   The specific steps will depend on how much the process incorporates 
specific guidelines and how much the exercise of judgment is allowed. 
As the dependence on specific guidelines increases, other individuals 
can more easily confirm that guidelines were followed by reference to 
sufficient documentation. As the dependence on judgment rises, the 
ratings review function will have to be staffed increasingly by experts 
with appropriate skills and knowledge about the rating policies of the 
bank.
   Ratings process verification also includes override monitoring. If 
individuals have the ability to override either models or policies in a 
constrained-judgment system, the bank should have both a policy stating 
the tolerance for overrides and a monitoring system for identifying the 
occurrence of overrides. A reporting system capturing data on reasons 
for overrides will facilitate learning about whether overrides improve 
accuracy.
Benchmarking
   S. Banks must benchmark their internal ratings against internal, 
market, and other third-party ratings.
   Benchmarking is the set of activities that uses alternative tools 
to draw inferences about the correctness of ratings before outcomes are 
actually

[[Page 45958]]

known. The most important type of benchmarking of a rating system is to 
ask whether another rater or rating method attaches the same rating to 
a particular obligor or facility. Regardless of the rating approach, 
the benchmark can be either a judgmental or a model-based rating. 
Examples of such benchmarking include:
   [sbull] Ratings reviewers completely re-rate a sample of credits 
rated by individuals in a judgmental system.
   [sbull] An internally developed model is used to rate credits rated 
earlier in a judgmental system.
   [sbull] Individuals rate a sample of credits rated by a model.
   [sbull] Internal ratings are compared against results from external 
agencies or external models.
   Because it will take considerable time before outcomes will be 
available, using alternative ratings as benchmarks will be a very 
important validation device. Such benchmarking must be applied to all 
rating approaches, and the benchmark can be either a model or judgment. 
At a minimum, banks must establish a process in which a representative 
sample of their internal ratings is compared to third-party ratings 
(e.g., independent internal raters, external rating agencies, models, 
or other market data sources) of the same credits.
   Benchmarking also includes activities designed to draw broader 
inferences about whether the rating system--as opposed to individual 
ratings--is working as expected. The bank can look for consistency in 
ranking or consistency in the values of rating characteristics for 
similarly rated credits. Examples of such benchmarking activities 
include:
   [sbull] Analyzing the characteristics of obligors that have 
received common ratings.
   [sbull] Monitoring changes in the distribution of ratings over 
time.
   [sbull] Calculating a transition matrix from changes in ratings in 
a bank's portfolio and comparing it to historical transition matrices 
from internal bank data or publicly available ratings (a brief sketch 
of this calculation appears below).
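
   As a purely illustrative sketch of the last activity in this list, 
the following code computes a one-year transition matrix from paired 
beginning- and end-of-year grades. The grade scale and the observations 
are hypothetical.

```python
# Illustrative sketch only: a one-year ratings transition matrix built
# from hypothetical beginning- and end-of-year grades.
import numpy as np

GRADES = ["1", "2", "3", "4", "5", "default"]
idx = {g: i for i, g in enumerate(GRADES)}

# Hypothetical paired observations: (grade at start of year, grade at end).
observations = [
    ("2", "2"), ("2", "3"), ("3", "3"), ("3", "4"),
    ("4", "4"), ("4", "default"), ("1", "1"), ("5", "4"),
]

counts = np.zeros((len(GRADES), len(GRADES)))
for start, end in observations:
    counts[idx[start], idx[end]] += 1

# Convert counts to row-wise transition frequencies; rows with no
# observations are left as zeros.
row_totals = counts.sum(axis=1, keepdims=True)
transition = np.divide(counts, row_totals,
                       out=np.zeros_like(counts), where=row_totals > 0)

# The resulting matrix can be compared to historical internal or
# publicly available transition matrices.
print(transition.round(2))
```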
   While benchmarking activities allow for inferences about the 
correctness of the ratings system, they are not the same thing as back-
testing. The benchmark itself is a prediction and may be in error. If 
benchmarking evidence suggests a pattern of rating differences, it 
should lead the bank to investigate the source of the differences. 
Thus, the benchmarking process illustrates the possibility of feedback 
from ongoing validation to model development, underscoring the 
characterization of validation as a process.
Back Testing
   S. Banks must develop statistical tests to back-test their IRB 
rating systems.
   S. Banks must establish internal tolerance limits for differences 
between expected and actual outcomes.
   S. Banks must have a policy that requires remedial actions be taken 
when policy tolerances are exceeded.
   The third component of a validation process is back-testing, which 
is the comparison of predictions with actual outcomes. Back-testing of 
IRB systems is the empirical test of the accuracy of the parameter 
values, PD and LGD, associated with obligor and loss severity ratings, 
respectively. For IRB rating systems, back-testing addresses the 
combined effectiveness of the assignment of obligor and loss severity 
ratings and the calibration of the parameters PD and LGD attached to 
those ratings.
   At this time, there is no generally agreed-upon statistical test of 
the accuracy of IRB systems. Banks must develop statistical tests to 
back-test their IRB rating systems. In addition, banks must have a 
policy that specifies internal tolerance limits for comparing back-
testing results. Importantly, that policy must outline the actions that 
would be taken whenever policy limits are exceeded.
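
   Because no particular test is prescribed, the following sketch is 
offered only as an illustration of one possibility: a one-sided 
binomial test of whether a grade's realized default count is consistent 
with its assigned PD, with the internal tolerance expressed as a 
significance level. The test, the tolerance, and the figures are all 
assumptions.

```python
# Illustrative sketch only: a one-sided binomial back-test of a grade
# PD. The test form, the 5 percent tolerance, and the data are assumed.
from scipy.stats import binom

assigned_pd = 0.010     # PD assigned to the grade
n_obligors = 800        # obligors in the grade during the year
n_defaults = 14         # realized defaults

# Probability of observing at least n_defaults if assigned_pd is correct.
p_value = binom.sf(n_defaults - 1, n_obligors, assigned_pd)

TOLERANCE = 0.05        # internal policy limit (assumed)
if p_value < TOLERANCE:
    print(f"p={p_value:.4f}: exceeds tolerance; remedial review required")
else:
    print(f"p={p_value:.4f}: realized defaults consistent with assigned PD")
```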
   As a combined test of ratings effectiveness, back-testing is a 
conceptual bridge between the ratings system architecture discussed in 
this chapter and the quantification of parameters, discussed in Chapter 
2. The final section of Chapter 2 discusses back-testing as one type of 
quantitative test required to validate the quantification of parameter 
values.

III. Quantification of IRB Systems

   Ratings quantification is the process of assigning numerical values 
to the four key components for internal ratings-based assessments of 
credit-risk capital: probability of default (PD), loss given default 
(LGD), exposure at default (EAD), and maturity (M). Section A 
establishes an organizing framework for considering IRB quantification 
and develops general principles that apply to the entire process. 
Sections B through D cover specific principles or supervisory standards 
that apply to PD, LGD, and EAD, respectively. The maturity component, 
which is much less dependent on statistical estimates and the use of 
data, receives somewhat different treatment in section E. Validation of 
the quantification process is covered in section F.

A. Introduction

Stages of the Quantification Process
   With the exception of maturity, the risk components are 
unobservable and must be estimated. The estimation must be consistent 
with sound practice and supervisory standards. In addition, a bank must 
have processes to ensure that these estimates remain valid.
   Calculation of risk components for IRB involves two sets of data: 
the bank's actual portfolio data, consisting of current credit 
exposures assigned to internal grades, and a ``reference data set,'' 
consisting of a set of defaulted credits (in the case of LGD and EAD 
estimation) or both defaulted and non-defaulted credits (in the case of 
PD estimation). The bank estimates a relationship between the reference 
data set and probability of default, loss severity, or exposure; then 
this estimated relationship is applied to the actual portfolio data for 
which capital is being assessed.
   Quantification proceeds through four logical stages: obtaining 
reference data; estimating the reference data's relationship to the 
parameters; mapping the correspondence between the reference data and 
the portfolio's data; and applying the relationship between reference 
data and parameters to the portfolio's data. (Readers may find it 
helpful to refer to the appendix to this chapter, which illustrates how 
this four-stage framework can be applied to ratings quantification 
approaches in practice.) An evaluation of any bank's IRB quantification 
process focuses on understanding how the bank implements each stage for 
each of the key parameters, and on assessing the adequacy of the bank's 
approach.
   Data--First, the bank constructs a reference data set, or source of 
data, from which parameters can be estimated.
   Reference data sets include internal data, external data, and 
pooled internal/external data. Important considerations include the 
comparability of the reference data to the current credit portfolio, 
whether the sample period ``appropriately'' includes periods of stress, 
and the definition of default used in the reference data. The reference 
data must be described using a set of observed characteristics; 
consequently, the data set must contain variables that can be used for 
this characterization. Relevant characteristics might include external 
debt ratings, financial measures, geographic regions, or any other 
factors that are believed to be

[[Page 45959]]

related in some way to PD, LGD, or EAD. More than one reference data 
set may be used.
   Estimation--Second, the bank applies statistical techniques to the 
reference data to determine a relationship between characteristics of 
the reference data and the parameters (PD, LGD, or EAD).
   The result of this step is a model that ties descriptive 
characteristics of the obligor or facility in the reference data set to 
PD, LGD, or EAD estimates. In this context, the term `models' is used 
in the most general sense; a model may be simple, such as the 
calculation of averages, or more complicated, such as an approach based 
on advanced regression techniques. This step may include adjustments 
for differences between the IRB definition of default and the default 
definition in the reference data set, or adjustments for data 
limitations. More than one estimation technique may be used to generate 
estimates of the risk components, especially if there are multiple sets 
of reference data or multiple sample periods.
   Mapping--Third, the bank creates a link between its portfolio data 
and the reference data based on common characteristics.
   Variables or characteristics that are available for the current 
portfolio must be mapped to the variables used in the default, loss-
severity, or exposure model. (In some cases, the bank constructs the 
link for a representative exposure in each internal grade, and the 
mapping is then applied to all credits within a grade.) An important 
element of mapping is making adjustments for differences between 
reference data sets and the bank's portfolio. The bank must create a 
mapping for each reference data set and for each combination of 
variables used in any estimation model.
   Application--Fourth, the bank applies the relationship estimated 
for the reference data to the actual portfolio data.
   The ultimate aim of quantification is to attribute a PD, LGD, or 
EAD to each exposure within the portfolio, or to each internal grade if 
the mapping was done at the grade level. This step may include 
adjustments to default frequencies or loss rates to ``smooth'' the 
final parameter estimates. If the estimates are applied to individual 
transactions, the bank must in some way aggregate the estimates at the 
grade level. In addition, if multiple data sets or estimation methods 
are used, the bank must adopt a means of combining the various 
estimates.
   A number of examples are given in this chapter to aid exposition 
and interpretation. None of the examples is sufficiently detailed to 
incorporate all the considerations discussed in this chapter. Moreover, 
technical progress in the area of quantification is rapid. Thus, banks 
should not interpret an example that is consistent with the standard 
being discussed, and that resembles the bank's current practice, as 
creation of a ``safe harbor'' or as an indication that the bank's 
practice will be approved as-is. Banks should consider this guidance in 
its entirety when determining whether systems and practices are 
adequate.
General Principles for Sound IRB Quantification
   Several core principles apply to all elements of the overall 
ratings quantification process; those general principles are discussed 
in this introductory section. Each of these principles is, in effect, a 
supervisory standard for IRB systems. Other supervisory standards, 
specific to particular elements or parameters, are discussed in the 
relevant sections.
   Supervisory evaluation of IRB quantification requires consideration 
of all of these principles and standards, both general and specific. 
Particular practical approaches to ratings quantification may be highly 
consistent with some standards, and less so with others. In any 
particular case, an ultimate assessment relies on the judgment of 
supervisors to weigh the strengths and weaknesses of a bank's chosen 
approach, using these supervisory standards as a guide.
   S. IRB institutions must have a fully specified process covering 
all aspects of quantification (reference data, estimation, mapping, and 
application). The quantification process, including the role and scope 
of expert judgment, must be fully documented and updated periodically.
   A fully specified quantification process must describe how all four 
stages (data, estimation, mapping, and application) are implemented for 
each parameter. Documentation promotes consistency and allows third 
parties to review and replicate the entire process. Examples of third 
parties that might use the documentation include rating-system 
reviewers, auditors, and bank supervisors. Periodic updates to the 
process must be conducted to ensure that new data, analytical 
techniques, and evolving industry practice are incorporated into the 
quantification process.
   S. Parameter estimates and related documentation must be updated 
regularly.
   The parameter estimates must be updated at least annually, and the 
process for doing so must be documented in bank policy. The update 
should also evaluate the judgmental adjustments embedded in the 
estimates; new data or techniques may suggest a need to modify those 
adjustments. Particular attention should be given to new business lines 
or portfolios in which the mix of obligors is believed to have changed 
substantially. A material merger, acquisition, divestiture, or exit 
clearly raises questions about the continued applicability of the 
process and should trigger an intensive review and updating.
   The updating process is particularly relevant for the reference 
data stage because new data become available all the time. New data 
must be incorporated into the PD, LGD, and EAD estimates using a 
well-defined process.
   S. A bank must subject all aspects of the quantification process, 
including design and implementation, to an appropriate degree of 
independent review and validation.
   An independent review is an assessment conducted by persons not 
accountable for the work being reviewed. The reviewers may be either 
internal or external parties. The review serves as a check that the 
quantification process is sound and works as intended; it should be 
broad-based, and must include all of the elements of the quantification 
process that lead to the ultimate estimates of PD, LGD, and EAD. The 
review must cover the full scope of validation: evaluation of the 
integrity of data inputs, analysis of the internal logic and 
consistency of the process, comparison with relevant benchmarks, and 
appropriate back-testing based on actual outcomes.
   S. Judgmental adjustments may be an appropriate part of the 
quantification process, but must not be biased toward lower estimates 
of risk.
   Judgment will inevitably play a role in the quantification process 
and may materially affect the estimates. Judgmental adjustments to 
estimates are often necessary because of some limitations on available 
reference data or because of inherent differences between the reference 
data and the bank's portfolio data. The bank must ensure that 
adjustments are not biased toward optimistically low parameter 
estimates for PD, LGD, and EAD. Individual assumptions are less 
important than broad patterns; consistent signs of judgmental decisions 
that lower parameter estimates materially may be evidence of bias.

[[Page 45960]]

   The reasoning and empirical support for any adjustments, as well as 
the mechanics of the calculation, must be documented. The bank should 
conduct sensitivity analysis to demonstrate that the adjustment 
procedure is not biased toward reducing capital requirements. The 
analysis must consider the impact of any judgmental adjustments on 
estimates and risk weights, and must be fully documented.
   S. Parameter estimates must incorporate a degree of conservatism 
that is appropriate for the overall robustness of the quantification 
process.
   Estimates of PD, LGD, and EAD should be as precise and accurate as 
possible. However, these estimates are statistics, and thus inherently 
subject to uncertainty and potential error. It is often possible to be 
reasonably confident that a risk
component or other parameter lies within a particular range, but 
greater precision is difficult to achieve. Aspects of the ratings 
quantification process that are apt to introduce uncertainty and 
potential error include the following:
   [sbull] The estimation of coefficients of particular variables in a 
regression-based statistical default or severity model.
   [sbull] The calculation of average default or loss rates for 
particular categories of credits in external default databases.
   [sbull] The mapping between portfolio obligors or facilities and 
reference data when the set of common characteristics does not align 
exactly.
   A general principle of the IRB approach is that a bank must adjust 
estimates conservatively in the presence of uncertainty or potential 
error. In many cases this corresponds to assigning a final parameter 
estimate that increases required capital relative to the best estimate 
produced through sound-practice estimation techniques. The extent of 
this conservative adjustment should be related to factors such as the 
relevance of the reference data, the quality of the mapping, the 
precision of the statistical estimates, and the amount of judgment used 
throughout the process. Margins of conservatism need not be added at 
each step; indeed, that could produce an excessively conservative 
result. The overall margin of conservatism should adequately account 
for all uncertainties and weaknesses; this is the general 
interpretation of requirements to incorporate appropriate degrees of 
conservatism. Improvements in the quantification process (use of better 
data, estimation techniques, and so on) may reduce the appropriate 
degree of conservatism over time.
   Estimates of PD, LGD, EAD, or other parameters or coefficients 
should be presented with an accompanying sense of the statistical 
precision of the estimates; this facilitates an assessment of the 
appropriate degree of conservatism.
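
   As one illustration of pairing an estimate with a sense of its 
statistical precision, the sketch below computes a grade-level mean 
default rate together with a one-sided upper confidence bound. The 
normal approximation, the confidence level, and the data are 
assumptions; using a value toward the upper bound is one possible way, 
not the only way, of incorporating a margin of conservatism.

```python
# Illustrative sketch only: reporting a PD estimate together with its
# statistical precision. The normal approximation and the 95 percent
# confidence level are assumptions for illustration.
import math

# Hypothetical yearly realized default rates for one grade.
yearly_rates = [0.004, 0.006, 0.015, 0.005, 0.022, 0.007, 0.009]

n = len(yearly_rates)
mean = sum(yearly_rates) / n
var = sum((r - mean) ** 2 for r in yearly_rates) / (n - 1)
std_err = math.sqrt(var / n)

# One-sided 95 percent upper bound (z = 1.645 under normality).
upper_bound = mean + 1.645 * std_err

print(f"best estimate:   {mean:.4f}")
print(f"upper 95% bound: {upper_bound:.4f}  (candidate conservative PD)")
```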

B. Probability of Default (PD)

Data
   To estimate PD accurately, a bank must have a comprehensive 
reference data set with observations that are comparable to the bank's 
current portfolio of obligors. Clearly, the data set used for 
estimation should be similar to the portfolio to which such estimates 
will be applied. The same comparability standard applies to both 
internal and external data sets.
   To ensure ongoing applicability of the reference data, a bank must 
assess the characteristics of its current obligors relative to the 
characteristics of obligors in the reference data. Such variables might 
include qualitative and quantitative obligor information, internal and 
external ratings, rating dates, and line of business or geography. To
this end, a bank must maintain documentation that fully describes all 
explanatory variables in the data set, including any changes to those 
variables over time. A well-defined and documented process must be in 
place to ensure that the reference data are updated as frequently as is 
practical, as fresh data become available or portfolio changes make 
updates necessary.
   S. The sample period for the reference data must be at least five years, 
and must include periods of economic stress during which default rates 
were relatively high.
   To foster more robust estimation, banks should use longer time 
series when more than five years of data are available. However, the 
benefits of using a longer time series (longer than five years) may 
have to be weighed against a possible loss of data comparability. The 
older the reference data, the less similar they are likely to be to the 
bank's current portfolio; striking the correct balance is a matter of 
judgment. Reference obligors must not differ from the current portfolio 
obligors systematically in ways that seem likely to be related to 
obligor default risk. Otherwise, the derived PD estimates may not be 
applicable to the current portfolio.
   Note that this principle does not simply restate the requirement 
for five years of data: periods of stress during which default rates 
are relatively high must be included in the data sample. Exclusion of 
such periods biases PD estimates downward and unjustifiably lowers 
regulatory capital requirements.

   Example. A bank's reference data set covers the years 1987 
through 2001. Each year includes identical data elements, and each 
year is similarly populated. For its grade PD estimates, the bank 
relies upon data from a sub-sample covering 1992 through 2001. The 
bank provides no justification for dropping the years from 1987 
through 1991. The bank contends that it is not necessary to include 
those data, as the reference sample it uses for estimation 
satisfies the five-year requirement. This practice is not consistent
with the standard because the bank has not supported its decision to 
ignore available data. The fact that the excluded years include a 
recession would raise particular concerns.

   S. The definition of default within the reference data must be 
reasonably consistent with the IRB definition of default.
   Regardless of the source of the reference data, a bank must apply 
the same default definition throughout the quantification processes. 
This fosters consistent estimation across parameters and reduces the 
potential for undesired bias. In addition, consistent application of 
the same definition across banks will permit true horizontal analysis 
by supervisors and engaged market participants.
   This standard applies to both internal and external reference data. 
For internal data, a bank's default definition is expected to be 
consistent with the IRB definition going forward. Banks will be 
expected to make appropriate adjustments to their data systems such 
that all defaults as defined for IRB are captured by the time a bank 
fully implements its IRB system. For any historical or external data 
that do not fully comply with the IRB definition of default, a bank 
must make conservative adjustments to reflect such discrepancies. 
Larger discrepancies require larger adjustments for conservatism.

   Example. To identify defaults in its historical data, a bank 
applies a consistent definition of ``placed on nonaccrual.'' This 
definition is used in the bank's quantification exercises to 
estimate PD, LGD, and EAD. The bank recognizes that use of the 
nonaccrual definition fails to capture certain defaults as 
identified in the IRB rules. Specifically, the bank indicates that 
the following kinds of defaulted facilities would not have been 
placed on nonaccrual: (1) Credit obligations that were sold at a 
material credit-related economic loss, and (2) distressed 
restructurings. To be consistent with the standard, the bank must 
make a well-supported adjustment to its grade PD estimates to 
reflect the difference in the default definitions.
Estimation
   Estimation of PD is the process by which characteristics of the 
reference

[[Page 45961]]

data are related to default frequencies.\4\ The relevant 
characteristics that help to determine the likelihood of default are 
referred to as ``drivers of default''. Drivers might include variables 
such as financial ratios, management expertise, industry, and 
geography.
---------------------------------------------------------------------------

   \4\ The New Basel Capital Accord produced by the Basel Committee 
on Banking Supervision discusses three techniques for PD estimation. 
IRB banks are not constrained to select from among these three 
techniques; they have broad flexibility to implement appropriate 
approaches to quantification. The three Basel techniques are best 
regarded not as a complete taxonomy of the possible approaches to PD 
estimation, but rather as illustrations of a few of the many 
possible approaches.
---------------------------------------------------------------------------

   S. Estimates of default rates must be empirically based and must 
represent a long-run average.
   Estimates must capture average default experience over a reasonable 
mix of high-default and low-default years of the economic cycle. The 
average is labeled ``long-run'' because a long observation period would 
span both peaks and valleys of the economic cycle. The emphasis should 
not be on time-span; the long-run average concept captures the breadth, 
not the length, of experience.
   If the reference data are characterized by internal or external 
rating grades, one estimation approach is to calculate the mean of one-
year realized default rates for each grade, giving equal weight to each 
year's realized default rate. PD estimates generally should be 
calculated in this manner.
   Another approach is to pool obligors in a given grade over a number 
of years and then calculate the mean default rate. In this case, each 
year's default rate is weighted by the number of obligors. This 
approach may underestimate default rates. For example, if lending 
declines in recessions so that obligors are fewer in those years than 
in others, weighting by number of obligors would dilute the effect of 
the recession year on the overall mean. The obligor-weighted 
calculation, or another approach, will be allowed only if the bank can 
demonstrate that this approach provides a better estimate of the long-
run average PD. At a minimum, this would involve comparing the results 
of both methods.
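
   The sketch below contrasts the two calculations just described, 
using hypothetical counts in which lending declined during a recession 
year; it illustrates how the obligor-weighted mean dilutes the high-
default year relative to the equal-weighted mean of yearly default 
rates.

```python
# Illustrative sketch only: equal-weighted versus obligor-weighted
# long-run average default rates for one grade (hypothetical counts).

# (year, obligors in grade, defaults) -- the middle year is a recession
# with high defaults but, because lending declined, fewer obligors.
history = [
    (1998, 1000, 10),   # 1.0%
    (1999,  400, 20),   # 5.0% (recession, smaller book)
    (2000, 1000, 12),   # 1.2%
]

# Equal weight to each year's realized default rate.
yearly_rates = [d / n for _, n, d in history]
equal_weighted = sum(yearly_rates) / len(yearly_rates)

# Pooling obligors weights each year by its obligor count.
obligor_weighted = (sum(d for _, _, d in history)
                    / sum(n for _, n, _ in history))

print(f"equal-weighted mean:   {equal_weighted:.4f}")   # 0.0240
print(f"obligor-weighted mean: {obligor_weighted:.4f}") # 0.0175
```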
   Statistical default prediction models may also play a role in PD 
estimation. For example, the characteristics of the reference data 
might include financial ratios or a distance-to-default measure, as 
defined by a specific implementation of a Merton-style structural 
model.
   For a model-based approach to meet the requirement that ultimate 
grade PD estimates be long-run averages, the reference data used in the 
default model must meet the long-run requirement. For example, a model 
can be used to relate financial ratios to likelihood of default based 
on the outcome for the firms--default or non-default. Such a model must 
be calibrated to capture the default experience over a reasonable mix 
of good and bad years of the economic cycle. The same requirement would 
hold for a structural model; distance to default must be calibrated to 
default frequency using long-run experience. This applies to both 
internal and vendor models, and a bank must verify that this 
requirement is met.

   Example 1. A bank uses external data from a rating agency to 
estimate PD. The PD estimate for each agency grade is calculated as 
the mean of yearly realized default rates over a time period (1980 
through 2001) that includes several recessions and high-default 
years. The bank provides support that this time period adequately 
represents long-run experience. This illustrates an estimation 
method that is consistent with the standard.
   Example 2a. Like the institution in example 1, a bank maps 
internal ratings to agency grades. The estimates for the agency 
grades are set indirectly, using the default probabilities from a 
default prediction model. The bank does so because although it links 
internal and agency grades, the bank views the default model's 
results as more predictive than the historical agency default 
experience. For each agency grade, the bank calculates a PD estimate 
as the mean of the model-based default probabilities for the agency-
rated obligors. In order to meet the long-run requirement, the bank 
calculates the estimates over the seven years from 1995 through 
2001. The bank demonstrates that this time period includes a 
reasonable mix of high-default and low-default experience. This 
estimation method is consistent with the standard.
   Example 2b. In a variant of example 2a, a bank uses the mean 
default frequency per agency rating grade for a single year, such as 
2001. Empirical evidence shows that the mean default frequency for 
agency grades varies substantially from year to year, so a single 
year cannot reflect the full range of experience; a long-run 
average, by contrast, should be relatively stable from year to year. 
This year-to-year instability makes the single-year method 
unacceptable.
   Example 2c. Another bank calculates the agency grade PD 
estimates as the median default probability of companies in that 
grade. The bank does so without demonstrating that the median is a 
better statistical estimator than the mean. This estimation method 
is not consistent with the standard. A median gives less weight to 
obligors with high estimated default probabilities than a simple 
mean does. The difference between mean and median can be material 
because distributions of credits within grades often are 
substantially skewed toward higher default probabilities: the 
riskier obligors within a grade tend to have individual default 
probabilities that are substantially worse than the median, while 
the least risky have default probabilities only somewhat better than 
the median.

   S. Judgmental adjustments may play an appropriate role in PD 
estimation, but must not be biased toward lower estimates.
   The following examples illustrate how supervisors will evaluate 
adjustments:

   Example 1. A bank uses the last five years of internal default 
history to estimate grade PDs. However, the bank recognizes that the 
internal experience does not include any high-default years. In
order to remedy this and still take advantage of its experience, the 
bank uses external agency data to adjust the estimates upward. Using 
the agency data, the bank calculates the ratio between the long-run 
average and the mean default rate per grade over the last five 
years. The bank assumes that the relationship observed in the agency 
data applies to its portfolio, and adjusts the estimates for the 
internal data accordingly. This practice is consistent with the 
standard.
   Example 2. A bank uses internal default experience to estimate 
grade PDs. However, the bank has historically failed to recognize 
defaults when the loss on the defaulted obligation was avoided by
seizing collateral. The bank makes no adjustment for such missing 
defaults. The realized default rate using the more inclusive 
definition would be higher than that observed by the bank (and loss 
severity rates would be correspondingly lower). This practice would 
not be consistent with the standard, unless the bank demonstrates 
that the necessary adjustment is immaterial.
Mapping
   Mapping is the process of establishing a correspondence between the 
bank's current obligors and the reference obligor data used in the 
default model. Hence, mapping involves identifying how default-related 
characteristics of the current portfolio correspond to the 
characteristics of reference obligors. Such characteristics might 
include financial and nonfinancial variables, and assigned ratings or 
grades.
   Mapping can be thought of as taking each obligor in the bank's 
portfolio and characterizing it as if it were part of the reference 
data. There are two broad approaches to the mapping process:
   Obligor mapping: Each portfolio obligor is mapped to the reference 
data based on its individual characteristics. For example, if a bank 
applies a default model, a default probability will be generated for 
each obligor. That individual default probability is then used to 
assign each obligor to a particular internal grade, based on the bank's 
established criteria. To obtain a final estimate of the grade PD in the 
subsequent application stage, the bank averages the default 
probabilities of individual obligors within each grade.
   Grade mapping: Characteristics of the obligors within an internal 
grade are

[[Page 45962]]

averaged or otherwise summarized to construct a ``typical'' or 
representative obligor for each grade. Then, the bank maps that 
representative obligor to the reference data. For example, if the bank 
uses a default model, the default probability associated with that 
typical obligor will serve as the grade PD in the application stage. 
Alternatively, the bank may map the typical obligor to a particular 
external rating grade based on quantitative and qualitative 
characteristics, and assign the long-run default rate for that rating 
to the internal grade in the application stage.
   Either grade mapping or obligor mapping can be part of the 
quantification process; either method can produce a single PD estimate 
for each grade in the application stage. However, in the absence of 
other compelling considerations, banks should use obligor mapping for 
two reasons:
   [sbull] First, default probabilities are nonlinear under many 
estimation approaches. As a result, the default probability of the 
typical obligor--the result of a grade mapping approach--is often lower 
than the mean of the individual obligor default probabilities from the 
obligor mapping approach. For example, consider a bank that maps to the 
S&P scale and uses historical S&P bond default rates. For ease of 
illustration, suppose that one internal grade contains only three 
obligors that individually map to BB, BB-, and B+. The historical 
default rates for these three grades are 1.07, 1.76, and 3.24 percent, 
respectively (based on 1981-2001 data). Using obligor mapping, those 
rates would be assigned directly to the three obligors, yielding a mean 
PD of 2.02 percent for the grade. Using grade mapping, the grade PD 
would be only 1.76 percent, because the grade's typical obligor is 
rated BB-. (A brief sketch following this list reproduces the 
calculation.)
   [sbull] Second, a hypothetical obligor with a grade's average 
characteristics may not represent well the risks presented by the 
grade's typical obligor. For example, a bank might observe that 
obligors with high leverage and low earnings variability have about the 
same default risk as obligors with low leverage and high earnings 
variability. These two types of obligors might both end up in the same 
grade, for example, Grade 6. If so, the typical obligor in Grade 6 
would have moderate leverage and moderate earnings variability--a 
combination that might fail to reflect any of the individual obligors 
in Grade 6, and that could easily result in a PD for the grade that is 
too low.
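
   The brief sketch below reproduces the calculation from the first 
item above, using the historical default rates quoted there; everything 
else about the grade is hypothetical.

```python
# Illustrative sketch only: obligor mapping versus grade mapping, using
# the historical default rates quoted in the text (1981-2001 data).
rates = {"BB": 0.0107, "BB-": 0.0176, "B+": 0.0324}

# One internal grade containing three obligors mapped individually.
obligor_ratings = ["BB", "BB-", "B+"]

# Obligor mapping: average the individual default probabilities.
obligor_pd = sum(rates[r] for r in obligor_ratings) / len(obligor_ratings)

# Grade mapping: the "typical" obligor of this grade is rated BB-.
grade_pd = rates["BB-"]

print(f"obligor-mapped grade PD: {obligor_pd:.4f}")  # 0.0202
print(f"grade-mapped PD:         {grade_pd:.4f}")    # 0.0176
```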
   A bank electing to use grade mapping instead of obligor mapping 
should be especially careful in choosing a ``typical'' obligor for each 
grade. Doing so typically requires that the bank examine the actual 
distribution of obligors within each grade, as well as the 
characteristics of those obligors. Banks should be aware that different 
measures of central tendency (such as mean, median, or mode) will give 
different results, and that these different results may have a material 
effect on a grade's PD; they must be able to justify their choice of a 
measure. Banks must have a clear and consistent policy toward the 
calculation.
   S. The mapping must be based on a robust comparison of available 
data elements that are common to the portfolio and the reference data.
   Sound mapping practice uses all common elements that are available 
in the data as the basis for mapping. If a bank chooses to ignore 
certain common variables or to weight some variables more heavily than 
others, those choices must be supported. Mapping should also take into 
account differences in rating philosophy (for example, point-in-time or 
through-the-cycle) between any ratings embedded in the reference data 
set and the bank's own rating regime.
   A mapping should be plausible, and should be consistent with the 
rating philosophy established by the bank as part of its obligor rating 
policy. For a bank that uses grade mapping, levels and ranges of key 
variables within each internal grade should be close to values of 
similar variables for corresponding obligors within the reference data.
   The standard allows for use of a limited set of common variables 
that are predictive of default risk, in part to permit flexibility in 
early years when data may be far from ideal. Nevertheless, banks will 
eventually be expected to use variables that are widely recognized as 
the most reliable predictors of default risk in mapping exercises. In 
the meantime, banks relying on data elements that are weak predictors 
must compensate by making their estimates more conservative. For 
example, leverage and cash flow are widely recognized to be reliable 
predictors of corporate default risk. Borrower size is also predictive, 
but less so. A mapping based solely on size is by nature less reliable 
than one based on leverage, cash flow, and size.

   Example 1. In estimating PD, a bank relies on observed default 
rates on bonds in various agency grades for PD quantification. To 
map its internal grades to the agency grades, the bank identifies 
variables that together explain much of the rating variation in the 
bond sample. The bank then conducts a statistical analysis of those 
same variables within its portfolio of obligors, using a 
multivariate distance calculation to assign each portfolio obligor 
to the external rating whose characteristics it matches most closely 
(for example, assigning obligors to ratings so that the sum of 
squared differences between the external grade averages and the 
obligor's characteristics is minimized). This practice is broadly 
consistent with the standard.
   Example 2. A bank uses grade mapping to link portfolio obligors 
to the reference data set described by agency ratings. The bank 
looks at publicly rated portfolio obligors within an internal grade 
to determine the most common external rating, does the same for all 
grades, and creates a correspondence between internal and external 
ratings. The strength of the correspondence is a function of the 
number of externally rated obligors within each grade, the 
distribution of those external ratings within each grade, and the
similarity of externally rated obligors in the grade to those not 
externally rated. This practice is broadly consistent with this 
standard, but would require a comparison of rating philosophies and 
may require adjustments and the addition of margins of conservatism.
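
   The following sketch illustrates, in simplified form, the 
multivariate distance calculation described in Example 1. The grade 
profiles, the two variables, and the obligor figures are assumptions; 
in practice the variables would ordinarily be standardized before 
distances are computed.

```python
# Illustrative sketch only: assigning an obligor to the external grade
# whose average characteristics it matches most closely (all figures
# assumed; variables would normally be standardized first).

# Average (leverage, cash flow / assets) per external grade.
grade_profiles = {
    "BBB": (0.35, 0.12),
    "BB":  (0.50, 0.08),
    "B":   (0.65, 0.04),
}

def closest_grade(obligor):
    """Return the grade minimizing the sum of squared differences."""
    def distance(profile):
        return sum((o - p) ** 2 for o, p in zip(obligor, profile))
    return min(grade_profiles, key=lambda g: distance(grade_profiles[g]))

print(closest_grade((0.55, 0.07)))   # -> "BB"
```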
   S. A mapping process must be established for each reference data 
set and for each estimation model.
   Banks should never assume that a mapping is self-evident. Even a 
rating system that has been explicitly designed to replicate external 
agency ratings may or may not be effective in producing a replica; 
formal mapping is still necessary. Indeed, in such a system the kind of 
analysis involved in mapping may help identify inconsistencies in the 
rating process itself.
   A mapping process is needed even where the reference obligors come 
from internal historical experience. Banks must not assume that 
internal data do not require mapping, because changes in bank strategy 
or external economic forces may alter the composition of internal 
grades or the nature of the obligors in those grades over time. 
Mappings must be reaffirmed regardless of whether rating criteria or 
other aspects of the ratings system have undergone explicit changes 
during the period covered by the reference data set.
   Banks often use multiple reference data sets, and then combine the 
resulting estimates to get a grade PD. A bank that does that must 
conduct a rigorous mapping process for each data set.
   Supervisors expect all meaningful characteristics of obligors to be 
factored directly into the rating process; this should include 
characteristics like the obligor's industry or physical location. But 
in some circumstances, certain effects related to industry, geography, 
or other factors are not reflected in rating assignments or default 
estimates. In such cases, it may be appropriate for banks to capture 
the impact of the

[[Page 45963]]

omissions by using different mappings for different business lines or 
types of obligors. Supervisors expect this practice to be transitional; 
banks will eventually be required to incorporate the omitted effects 
into the rating system and the estimation process as they are uncovered 
and documented, rather than adjusting the mapping.

   Example 1. The bank maps its internal grades carefully to one 
rating agency, and then assumes a correspondence to another agency's 
scale despite known differences in the rating methods of the two 
agencies. The bank then applies a mean of the grade default rates 
from these two public debt-rating agencies to its internal grades. 
This practice is not consistent with the standard, because the bank 
should map to each agency's scale separately.
   Example 2. A bank uses internal historical data as its reference 
data. The bank computes a mean default rate for each grade as the 
grade PD for capital purposes, and asserts that mapping is 
unnecessary because ``its strong credit culture ensures that a 4 is 
always a 4.'' This practice is not consistent with the standard, 
because no mapping has been done; there is no assurance that a 
representative obligor in a grade today is comparable to an obligor 
in that same grade in the past.

   S. The mapping must be updated and independently validated 
regularly.
   The appropriate mapping between a bank's portfolio and the 
reference data may change over time. For example, relationships between 
internal grades and external agency grades may change during the 
economic cycle because of differences in rating philosophy. Similarly, 
distance-to-default measures for obligors in a given grade may not be 
constant over time. These likely changes make it imperative that the 
bank update all mappings regularly.
   Sound validation practices may include tests for internal 
consistency such as ``reverse mapping.'' Using this technique, a bank 
evaluates obligors from the reference data set as if they were subject 
to the bank's rating system (that is, part of the bank's current 
portfolio). The bank's mapping is then applied to these reverse-mapped 
obligors to see whether the mapped characterization of the reference 
obligor is consistent with that of the initial evaluation.\5\ Another 
valuable technique is to apply different mapping methods and compare 
the results. For example, mappings based on financial ratio comparisons 
can be rechecked using mappings based on available external ratings.
---------------------------------------------------------------------------

   \5\ For example, suppose a bank asserts that its Grade 3 
corresponds to an S&P rating of A. Applying reverse mapping, the 
bank would take a sample of A-rated obligors from the reference 
data, run them through the bank's rating process (perhaps a 
simplified version), and check to see that those obligors usually 
receive a grade of 3 on the bank's internal scale.
---------------------------------------------------------------------------

   Example. A bank mapped its internal grades to the rating scale 
of one public debt-rating agency in 1992. Since then, the bank has 
completed a major acquisition of another large bank and 
significantly changed its business mix in other ways. The bank 
continues to use the same mapping, without reassessing its validity. 
This practice is not consistent with the standard.
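
   A purely illustrative sketch of the reverse-mapping check described 
in footnote 5 follows. The simplified rating criteria and the sample of 
A-rated reference obligors are hypothetical.

```python
# Illustrative sketch only: "reverse mapping" check of an asserted
# correspondence (internal Grade 3 <-> agency rating A). The simplified
# internal rating function and the sample are hypothetical.
from collections import Counter

def internal_grade(leverage, cash_flow):
    """Hypothetical, simplified version of the bank's rating criteria."""
    if leverage < 0.3 and cash_flow > 0.10:
        return 2
    if leverage < 0.5 and cash_flow > 0.05:
        return 3
    return 4

# Sample of A-rated obligors drawn from the reference data (assumed).
a_rated_sample = [(0.40, 0.08), (0.35, 0.09), (0.45, 0.06), (0.25, 0.12)]

grades = Counter(internal_grade(lev, cf) for lev, cf in a_rated_sample)
print(grades)   # most obligors should land in Grade 3 if the mapping holds
```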
Application
   In the application stage, the bank applies the PD estimation method 
to the current portfolio of obligors using the mapping process. It 
obtains final PD estimates for each rating grade, which will be used to 
calculate minimum regulatory capital. To arrive at those estimates, a 
bank may adjust the raw results derived from the estimation stage. For 
example, it might aggregate individual obligor default probabilities to 
the rating grade level, or smooth results because a rating grade's PD 
estimate was higher than a lower quality grade. The bank must explain 
and support all adjustments when documenting its quantification 
process.

   Example. A bank uses external data to estimate long-run average 
PDs for each grade. The resulting PD estimate for Grade 2 is 
slightly higher than the estimate for Grade 3, even though Grade 2 
is supposedly of higher credit quality. The bank uses statistics to 
demonstrate that this anomaly occurred because defaults are rare in 
the highest quality rating grades. The bank judgmentally adjusts the 
PD estimates for grades 2 and 3 to preserve the expected 
relationship between obligor grade and PD, but requires that total 
risk-weighted assets across both grades using the adjusted PD 
estimates be no less than total risk-weighted assets based on the 
unadjusted estimates, using a typical distribution of obligors 
across the two grades. Such an adjustment during the application 
stage is consistent with this guidance.

   S. IRB institutions that aggregate the default probabilities of 
individual portfolio obligors when calculating PD estimates for 
internal grades must have a clear policy governing the aggregation 
process.
   As noted above, mapping may be grade-based or obligor-based. Grade-
based mappings naturally provide a single PD per grade, because the 
estimated default model is applied to the representative obligor for 
each grade. In contrast, obligor-based mappings must aggregate in some 
manner the individual PD estimates to the grade level. The expectation 
is that the grade PD estimate will be calculated as the mean. The bank 
will be allowed to calculate this estimate differently only if it can 
demonstrate that the alternative method provides a better estimate of 
the long-run average PD. To obtain this evidence, the bank must at 
least compare the results of both methods.
   S. IRB institutions that combine estimates from multiple sets of 
reference data must have a clear policy governing the combination 
process, and must examine the sensitivity of the results to alternative 
combinations.
   Because a bank should make use of as much information as possible 
when mapping, it will usually use multiple data sets. The manner in 
which the data or the estimates from those multiple data sets are 
combined is extremely important. A bank must document its justification 
for the particular combination methods selected. Those methods must be 
subject to appropriate approval and oversight.
   The data may come from the same basic data source but from 
different time periods or from different data sources altogether. For 
example, banks often combine internal data with external data, use 
external data from different sample periods, or combine results from 
corporate-bond default databases with results from equity-based models 
of obligor default. Different combinations will produce different PD 
estimates. The bank should investigate alternative combinations and 
document the impact on the estimates. When ultimate results are highly 
sensitive to how estimates from different data sources are combined, 
the bank must choose among the alternatives conservatively.
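
   As an illustration of documenting this sensitivity, the sketch below 
combines grade PD estimates from two hypothetical reference data sets 
under several weighting schemes. The weights and figures are 
assumptions; materially different results would call for a conservative 
choice.

```python
# Illustrative sketch only: sensitivity of a combined grade PD estimate
# to the weights placed on two reference data sets (figures assumed).
internal_pd = 0.009    # estimate from internal reference data
external_pd = 0.015    # estimate from an external default database

for w_internal in (0.25, 0.50, 0.75):
    combined = w_internal * internal_pd + (1 - w_internal) * external_pd
    print(f"internal weight {w_internal:.2f}: combined PD {combined:.4f}")

# When results are this sensitive to the weighting, the standard calls
# for a conservative choice among the alternatives.
```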

C. Loss Given Default (LGD)

   The LGD estimation process is similar to the PD estimation process. 
The bank identifies a reference data set of defaulted credits and 
relevant descriptive characteristics. Once the bank obtains these data 
sets (with the facility characteristics), it must select a technique to 
estimate the economic loss per dollar of exposure at default, for a 
defaulted exposure with a given array of characteristics. The bank's 
portfolio must then be mapped, so that the model can be applied to 
generate an estimate of LGD for each portfolio transaction or severity 
grade.
Data
   Unlike reference data sets used for PD estimation, data sets for 
severity estimation contain only exposures to defaulting obligors. At 
least two broad categories of data are necessary to produce LGD 
estimates.
   First, data must be available to calculate the actual economic loss 
experienced for each defaulted facility. Such data may include the 
market value of the facility at default, which can be

[[Page 45964]]

used to proxy a recovery rate. Alternatively, economic loss may be 
calculated using the exposure at the time of default, loss of 
principal, interest, and fees, the present value of subsequent 
recoveries and related expenses (or the costs as calculated using an 
approved allocation method), and the appropriate discount rate.
   Second, factors must be available to group the defaulted facilities 
in meaningful ways. Characteristics that are likely to be important in 
predicting loss rates include whether or not the facility is secured 
and the type and coverage of collateral if the facility is secured, 
seniority of the claim, general economic conditions, and obligor's 
industry. Although these factors have been found to be significant in 
existing academic and industry studies, a bank's quantification of LGD 
certainly need not be limited to these variables. For example, a bank 
might expand its loss severity research by examining many other 
potential drivers of severity (characteristics of an obligor that might 
help the bank predict the severity of a loss), including obligor size, 
line of business, geographic location, facility type, obligor ratings 
(internal or external), historical internal severity grade, or tenor of 
the relationship.
   A bank must ensure that the reference data remains applicable to 
its current portfolio of facilities. It must implement established 
processes to ensure that reference data sets are updated when new data 
become available. All data sources, variables, and the overall 
processes concerning data collection and maintenance must be fully 
documented, and that documentation should be readily available for 
review.
   S. The sample period for the reference data must be at least seven 
years, and must include periods of economic stress during which 
defaults were relatively high.
   Seven years is the minimum sample period for the LGD reference 
data. A longer sample period is desirable, because more default 
observations will be available for analysis and may serve to refine 
severity estimates. In any case, a bank must select a sample period 
that includes episodes of economic stress, which are defined as periods 
with a relatively high number of defaults. Inclusion of stress periods 
increases the size and potentially the breadth of the reference data 
set. According to some empirical studies, the average loss rate is 
higher during periods of stress.

   Example. A bank intends to rely primarily on internal data when 
quantifying all parameter estimates, including LGD. Its internal 
data cover the period 1994 through 2000. The bank will continue to 
extend its data set as time progresses. Its current policy mandates 
that credits be resolved within two years of default, and the data 
set contains the most recent data available. Although the current 
data set satisfies the seven-year requirement, the bank is aware 
that it does not include stress periods. In comparing its loss 
estimates with rates published in external studies for similarly 
stratified data, the bank observes that its estimates are 
systematically lower. To be consistent with the standard, the bank 
must take steps to include stress periods in its estimates.

   S. The definition of default within the reference data must be 
reasonably consistent with the IRB definition of default.
   This standard parallels a similar standard in the section on PD. 
The following examples illustrate how it applies in the case of LGD.

   Example 1. For LGD estimation, a bank includes in its default 
database only defaulted facilities that actually experienced a loss, 
and excludes credits for which no loss was recorded because 
liquidated collateral fully covered the exposure (effectively applying 
a ``loss given loss'' concept). This practice is not consistent with 
the standard because the bank's default definition for LGD is 
narrower than the IRB definition.
   Example 2. A bank relies on external data sources to estimate 
LGD because it lacks sufficient internal data. One source uses 
``bankruptcy filing'' to indicate default while another uses 
``missed principal or interest payment,'' and the two sources result 
in significantly different loss estimates for the severity grades 
defined by the bank. The bank's practice is not consistent with the 
standard, and the bank should determine whether the definitions used 
in the reference data sets differ substantially from the IRB 
definition. If so, and the differences are difficult to quantify, 
the bank should seek other sources of reference data. For more minor 
differences, the bank may be able to make appropriate adjustments 
during the estimation stage.
Estimation
   Estimation of LGD is the process by which characteristics of the 
reference data are related to loss severity. The relevant 
characteristics that help explain how severe losses tend to be upon 
default might include variables such as seniority, collateral, facility 
type, or business line.
   S. The estimates of loss severity must be empirically based and 
must reflect the concept of ``economic loss.''
   Loss severity is defined as economic loss, which is different from 
accounting measures of loss. Economic loss captures the value of 
recoveries and direct and indirect costs discounted to the time of 
default, and it should be measured for each defaulted facility. The 
scope of the cash flows included in recoveries and costs is meant to be 
broad. Workout costs that can be clearly attributed to certain 
facilities or types of facilities must be reflected in the bank's LGD 
assignments for those exposures. When such allocation is not practical, 
the bank may assign those costs using factors based on broad averages.
   A bank must establish a discount rate that reflects the time value 
of money and the opportunity cost of funds to apply to recoveries and 
costs. The discount rate must be no less than the contract interest 
rate on new originations of a type similar to the transaction in 
question, for the lowest-quality grade in which a bank originates such 
transactions.\6\ Where possible, the rate should reflect the fixed rate 
on newly originated exposures with term corresponding to the average 
resolution period of defaulting assets.
---------------------------------------------------------------------------

   \6\ The appropriate discount rate for IRB purposes may differ 
from the contract rate required under FAS 114 for accounting 
purposes.
---------------------------------------------------------------------------

   Ideally, severity should be measured once all recoveries and costs 
have been realized. However, a bank may not resolve a defaulted 
obligation for many years following default. For practical purposes, 
banks may choose to close the period of observation before this final 
resolution occurs--that is, at a point in time when most costs have 
been incurred and when recoveries are substantially complete. Banks 
that do so should estimate the additional costs and recoveries that 
would likely occur beyond this period and include them in the LGD 
estimates. A bank must document its choice of the period of 
observation, and how it estimated additional costs and recoveries 
beyond this period.
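   To illustrate the economic-loss concept, the sketch below (Python; 
the facility, cash flows, and discount rate are hypothetical, and actual 
discounting conventions must satisfy the standards above) computes LGD 
as the exposure at default less the present value of net recoveries, 
discounted to the date of default.

def economic_lgd(ead, net_cash_flows, discount_rate):
    """LGD as economic loss per dollar of exposure at default.

    ead            -- exposure at default
    net_cash_flows -- list of (years_after_default, recoveries_less_costs)
    discount_rate  -- annual rate reflecting time value and opportunity cost
    """
    pv = sum(cf / (1 + discount_rate) ** t for t, cf in net_cash_flows)
    return (ead - pv) / ead

# A $1,000,000 defaulted facility with two net recoveries during workout:
print(economic_lgd(1_000_000, [(0.5, 300_000), (1.5, 450_000)], 0.09))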
   LGD for each type of exposure must be the loss per default 
(expressed as a percentage of exposure at default) expected during 
periods when default rates are relatively high. This expected loss rate 
is referred to as ``stress-condition LGD.'' For cases in which loss 
severities do not have a material degree of cyclical variability, use 
of the long-run default-weighted average is appropriate, although 
stress-condition LGD generally exceeds this average.
   The drivers of severity can be linked to loss estimates in a number 
of ways. One approach is to segment the reference defaults into groups 
that do not overlap. For example, defaults could be grouped by business 
line, predominant collateral type, and loan-to-value coverage. The LGD 
estimate for each category is the mean loss calculated over the 
category's defaulted facilities. Loss must be calculated as the 
default-weighted average (where individual defaults receive equal 
weight) rather than the average of

[[Page 45965]]

annual loss rates, and must be based on results from periods during 
which defaults were relatively numerous if loss rates are materially 
cyclical.
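   The distinction drawn above can be made concrete with a small, 
hypothetical illustration (Python): a default-weighted average gives 
each defaulted facility equal weight, whereas an average of annual loss 
rates gives each year equal weight and can understate severity when 
high-default years also show high loss rates.

yearly_losses = {
    2001: [0.45, 0.50, 0.55, 0.60],  # high-default year: four losses
    2002: [0.20],                    # benign year: one loss
}

all_losses = [lgd for year in yearly_losses.values() for lgd in year]
default_weighted = sum(all_losses) / len(all_losses)  # 0.46

annual_means = [sum(y) / len(y) for y in yearly_losses.values()]
average_of_annual = sum(annual_means) / len(annual_means)  # 0.3625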
   Banks can also draw estimates of LGD from a statistical model. For 
example, they can build a regression model of severity using data on 
loss severity and some quantitative measures of the loss drivers. Any 
model must meet the requirements for model validation discussed in 
Chapter 1. Other methods for computing LGD could also be appropriate.

   Example 1. A bank has internal data on defaulted facilities, 
including information on business line, facility type, seniority, 
and predominant collateral type (if the facility is secured). The 
data allow for a reasonable calculation of economic loss. The data 
span eight years and include three years that can be termed high-
default years. After analyzing the economic cycle using internal and 
external data, the bank concludes that the data show no evidence of 
material cyclical variability in loss severities, and that the 
default data span enough experience to allow estimation of a long-
run average. On the basis of preliminary analysis, the bank 
determines that the drivers of loss severity for large corporate 
facilities are similar to those for middle-market loans, and that 
the two groups can be estimated as a pool. Again on the basis of 
preliminary analysis, the bank segments this pool by seniority and 
by six collateral groupings, including unsecured. These groupings 
contain enough defaults to allow reasonably precise estimates. The 
loss severity estimates are then calculated by averaging loss rates 
within each segment. This practice is consistent with the standard.
   Example 2. A bank uses internal data in which information on 
security and seniority is lacking. The bank groups corporate and 
middle-market defaulted facilities into a single pool and calculates 
the LGD estimate as the mean loss rate. No adjustments for the lack 
of data are made in the estimation or application steps. This 
practice is unacceptable because there is ample external evidence 
that security and seniority matter in these segments. A bank with 
such limited internal default data must incorporate external or 
pooled data into the estimation.
   Example 3. A bank determines that a business unit--for example, 
a unit dedicated to a particular type of asset-based lending--forms 
a homogeneous pool for the purposes of estimating loss severity. 
That is, although the facilities in this pool may differ in some 
respects, the bank determines that they share a similar loss 
experience in default. The bank must provide reasonable support for 
this pooling through analysis of lending practices and available 
internal and external data. In this example, the mean of a single 
segment is consistent with the standard.

   S. Judgmental adjustments may play an appropriate role in LGD 
estimation, but must not be biased toward lower estimates.
   It is difficult to make general statements about good and bad 
practices in this area, because adjustments can take many different 
forms. The following examples illustrate how supervisors would be 
likely to evaluate particular adjustments observed in practice.
   Example 1. A bank divides observed defaults into segments 
according to collateral type. One of the segments has too few 
observations to produce a reliable estimate. Relying on external 
data and judgment, the bank determines that the segment's estimated 
severity of loss falls somewhere between the estimates for two other 
categories. This segment's severity is set judgmentally to be the 
mean of the estimates for the other segments. This practice is 
consistent with the standard.
   Example 2. A bank does not know when recoveries (and related 
costs) occurred in a portfolio segment; therefore, it cannot 
properly discount the segment's cash flows. However, the bank has 
sufficient internal data to calculate economic loss for defaulted 
facilities in another portfolio segment. The bank can support the 
assumption that the timing of cash flows for the two segments is 
comparable. Using the available data and informed judgment, the bank 
estimates that the measured loss without discounting should be 
grossed up to account for the time value of money and the 
opportunity cost of funds. This practice is consistent with the 
standard.
   Example 3. A bank segments internal defaults in a business unit 
by some factors, including collateral. Although the available 
internal and external evidence indicates a higher LGD, the bank 
judgmentally assigns a loss estimate of 2 percent for facilities 
secured by cash collateral. The basis for this adjustment is that 
the lower estimate is justified by the expectation that the bank 
would do a better job of following policies for monitoring cash 
collateral in the future. Such an adjustment is generally not 
appropriate because it is based on projections of future performance 
rather than realized experience. This practice is not consistent 
with the standard.
Mapping
   LGD mapping follows the same general principles that PD mapping 
does. A mapping must be plausible and must be based on a comparison of 
severity-related data elements common to both the reference data and 
the current portfolio. The mapping approach is expected to be unbiased, 
such that the exercise of judgment does not consistently lower LGD 
estimates. The default definitions in the reference data and the 
current portfolio of obligors should be comparable. The mapping process 
must be updated regularly, well-documented, and independently reviewed.
   S. A bank must conduct a robust comparison of available common 
elements in the reference data and the portfolio.
   Mapping involves matching facility-specific data elements available 
in the current portfolio to the factors in the reference data set used 
to estimate expected loss severity rates. Examples of factors that 
influence loss rates include collateral type and coverage, seniority, 
industry, and location.
   At least three kinds of mapping challenges may arise. First, even 
if similarly named variables are available in the reference data and 
portfolio data, they may not be directly comparable. For example, the 
definition of particular collateral types, or the meaning of 
``secured,'' may vary from one application to another. Hence, a bank 
must ensure that linked variables are truly similar. Although 
adjustments to enhance comparability can be appropriate, they must be 
rigorously developed and documented. Second, levels of aggregation may 
vary. For example, the reference data may only broadly identify 
collateral types, such as financial and nonfinancial. The bank's 
information systems for its portfolio might supply more detail, with a 
wide variety of collateral type identifiers. To apply the estimates 
derived from the reference data, the internal data must be regrouped to 
match the coarser level of aggregation in the reference data. Third, 
reference data often do not include workout costs and will often use 
different discounting. Judgmental adjustments for such problems must be 
well-documented and, as much as possible, empirically based.
   S. A mapping process must be established for each reference data 
set and for each estimation model.
   Mapping is never self-evident. Even when reference data are drawn 
from internal default experience, a bank must still link the 
characteristics of the reference data with those of the current 
portfolio.
   Different data sets and different approaches to severity estimation 
may be entirely appropriate, especially for different business segments 
or product lines. Each mapping process must be specified and 
documented.
Application
   At the application stage, banks apply the LGD estimation framework 
to their current portfolio of credit exposures. Doing so might require 
them to aggregate individual LGD estimates into broader averages (for 
example, into discrete severity grades) or to combine estimates in 
various ways.
   The inherent variability of recovery, due in part to unanticipated 
circumstances, demonstrates that no facility type is wholly risk-free, 
regardless of structure, collateral type, or collateral coverage. The 
existence of

[[Page 45966]]

recovery risk dictates that application of a zero percent LGD is not 
acceptable.
   S. IRB institutions that aggregate LGD estimates for severity 
grades from individual exposures within those grades must have a clear 
policy governing the aggregation process.
   Banks with discrete severity grades compute a single estimate of 
LGD for a representative exposure within each of those grades. If a 
bank with a discrete scale of severity grades maps those grades to the 
reference data using grade mapping, there will be a single estimate of 
LGD for each grade, and the bank does not need to aggregate further. 
However, if the bank maps at the individual transaction level, the bank 
may then choose to aggregate those individual LGD estimates to the 
grade level and use the grade LGD in capital calculations. Because 
different methods of aggregation are possible, a bank must have a clear 
policy regarding how aggregation should be accomplished; in general, 
simple averaging is preferred. (This standard is irrelevant for banks 
that choose to assign LGD estimates directly to individual exposures 
rather than grades, because aggregation is not required in that case.)
   S. An IRB institution must have a policy describing how it combines 
multiple sets of reference data.
   Multiple data sets may produce superior estimates of loss severity, 
if the results are appropriately combined. Combining such sets 
differently usually produces different estimates of LGD. As a matter of 
internal policy, a bank should investigate alternative combinations, 
and document the impact on the estimates. If the results are highly 
sensitive to the manner in which different data sources are combined, 
the bank must choose conservatively among the alternatives.

D. Exposure at Default (EAD)

   Compared with PD and LGD quantification, EAD quantification is less 
advanced. As such, it is addressed in somewhat less detail in this 
guidance than are PD and LGD quantification. Banks should continue to 
innovate in the area of EAD estimation, refining and improving practices 
in EAD measurement and prediction. Additional supervisory guidance will 
be provided as more data become available and estimation techniques 
evolve.
   A bank must provide an estimate of expected EAD for each facility 
in its portfolio. EAD is defined as the bank's expected gross dollar 
exposure of the facility upon the obligor's default. For fixed 
exposures like term loans, EAD is equal to the current amount 
outstanding. For variable exposures such as loan commitments or lines 
of credit, exposure is equal to current outstandings plus an estimate 
of additional drawings up to the time of default. This additional 
drawdown, identified as loan equivalent exposure (LEQ) in many 
institutions, is typically expressed as a percentage of the current 
total committed but undrawn amount. EAD can thus be represented as:

EAD = current outstanding + LEQ x (total committed - current outstanding)

As it is the LEQ that must be estimated, LEQ is the focus of this 
guidance.
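   A minimal sketch of the identity above (Python; the commitment 
figures and the LEQ value are hypothetical):

def ead(outstanding, committed, leq):
    """EAD = current outstanding + LEQ x (total committed - current outstanding)."""
    return outstanding + leq * (committed - outstanding)

# A $10 million commitment, $4 million drawn, estimated LEQ of 40 percent:
print(ead(outstanding=4_000_000, committed=10_000_000, leq=0.40))  # 6400000.0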
   Even though EAD estimation is less sophisticated than PD and LGD 
estimation, a bank still develops EAD estimates by working through the 
four stages that produce the other types of quantification: The bank 
must use a reference data set; it must apply an estimation technique to 
produce an expected total dollar exposure at default for a facility 
with a given array of characteristics; it must map its current 
portfolio to the reference data; and, by applying the estimation model, 
it must generate an EAD estimate for each portfolio facility or 
facility-type, as the case may be.
Data
   Like reference data sets used for LGD estimation, LEQ data sets 
contain only exposures to defaulting obligors. In many cases, the same 
reference data may be used for both LGD and LEQ. In addition to 
relevant descriptive characteristics (referred to as ``drivers'') that 
can be used in estimation, the reference data must include historical 
information on the exposure (both drawn and undrawn amounts) as of some 
date prior to default, as well as the drawn exposure at the date of 
default.
   As discussed below under ``Estimation,'' LEQ estimates may be 
developed using either a cohort method or a fixed-horizon method. The 
bank's reference data set must be structured so that it is consistent 
with the estimation method the bank applies. Thus, the data must 
include information on the total commitment, the undrawn amount, and 
the exposure drivers for each defaulted facility, either at fixed 
calendar dates for the cohort method or at a fixed interval prior to 
the default date for the fixed-horizon method.
   The reference data must contain variables that enable the bank to 
group the exposures to defaulted obligors in meaningful ways. Obligor 
and facility risk ratings are commonly believed to be significant 
characteristics for predicting additional drawdown. Since less 
empirical research has been done on EAD estimation, little is known 
about other potential drivers of EAD. Among the many possibilities, 
banks may consider time from origination, time to expiration or 
renewal, economic conditions, risk rating changes, or certain types of 
covenants. Some potential drivers may be linked to a bank's credit risk 
management skills, while others may be exogenous. Industry practice is 
likely to improve as banks extend their research to identify other 
meaningful drivers of EAD.
   A bank must ensure continued applicability of the reference data to 
its current portfolio of facilities. The reference data must include 
the types of variable exposures found in a bank's current portfolio. 
The definitions of default and exposure in the reference data should be 
consistent with the IRB definition of default, and consistent with the 
definitions used for PD and LGD quantification. Established processes 
must be in place to ensure that reference data sets are updated when 
new data are available. All data sources, variables, and the overall 
processes governing data collection and maintenance must be fully 
documented, and that documentation should be readily available for 
review.
   Seven years of data are required for EAD (or LEQ) estimation. The 
sample should include periods during which default rates were 
relatively high, and ideally cover a complete economic cycle.
Estimation
   To derive LEQ estimates, characteristics of the reference data are 
related to additional drawings preceding a default event. The 
estimation process must be capable of producing a plausible estimate of 
LEQ to support the EAD calculation for each facility. Two broad types 
of estimation methods are used in practice, the cohort method and the 
fixed-horizon method.
   Under the cohort method, a bank groups defaults into discrete 
calendar periods (such as a year or a quarter). The bank then estimates 
the relationship between the drivers as of the start of that calendar 
period, and EAD or LEQ for each exposure to a defaulter. For each 
exposure category (that is, for each combination of exposure drivers 
identified by the bank), the LEQ estimate is calculated as the mean 
additional drawing for facilities in that category. To combine results 
for multiple periods into a single long-run average, the period-by-
period means should be weighted by the proportion of defaults occurring 
in each period.
   Under the fixed-horizon method, for each exposure to a defaulted 
obligor the

[[Page 45967]]

bank compares additional drawdowns to the total committed but undrawn 
amount that existed at the start of a fixed interval prior to the date 
of the default (the horizon). For example, the bank might base its 
estimates on a reference data set that supplies the actual exposure at 
default along with the drawn and undrawn amounts (as well as relevant 
drivers) at a date a fixed number of months prior to the date of each 
default, regardless of the actual calendar date on which the default 
occurred. Estimates of LEQ are computed from the average drawdowns that 
occur over the fixed-horizon interval, for whatever combinations of the 
driving variables the bank has determined are relevant for explaining 
and predicting exposure at default.
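   The sketch below (Python; data are hypothetical) illustrates the 
cohort method: for each defaulted facility, the realized drawdown factor 
is the additional drawing between the cohort date and default, divided 
by the undrawn amount at the cohort date; period means are then combined 
into a long-run average weighted by each period's share of defaults. 
Under the fixed-horizon method the same calculation would apply, except 
that the observation date is set a fixed interval before each default 
rather than at the start of a calendar period.

cohorts = {
    # period: list of (undrawn_at_cohort_date, additional_drawing_by_default)
    2001: [(100.0, 55.0), (200.0, 90.0), (50.0, 30.0)],
    2002: [(80.0, 20.0)],
}

period_means = {p: sum(d / u for u, d in obs) / len(obs)
                for p, obs in cohorts.items()}
total_defaults = sum(len(obs) for obs in cohorts.values())
long_run_leq = sum(period_means[p] * len(obs) / total_defaults
                   for p, obs in cohorts.items())
print(long_run_leq)  # 0.4625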
   Evidence may indicate that LEQ estimates are positively correlated 
with economic downturns; that is, it may be that LEQs increase during 
high-default periods. If so, the higher drawdowns that occur during 
high-default periods are denoted ``stress-condition LEQs,'' analogous 
to the ``stress-condition LGDs'' discussed earlier in this chapter. For 
any exposure type whose LEQ estimates exhibit material cyclicality, a 
bank must use the stress-condition LEQ for purposes of calculating EAD.
   In general, all available data should be used; particular 
observations or time periods should not be excluded from the data 
sample. Any adjustments a bank makes to the estimation results should 
be justified and fully documented. The analysis should be refreshed 
periodically as new data become available, and a bank should have a 
process in place to ensure that advances in analytical techniques and 
industry practice are considered as they emerge and are incorporated as 
appropriate. LEQ estimates should be updated at least annually. 
Detailed documentation, ongoing validation, and adequate oversight are 
fundamental controls that support a sound estimation process.
Mapping
   If the same variables that drive exposure in the reference data are 
also available for facilities in the portfolio, mapping may be 
relatively easy. However, the bank must still review the definitions to 
ensure that variables that seem to be the same actually are. If the 
relevant variables are not available in a bank's current portfolio 
information system, the bank will encounter the same mapping 
complexities that it does when mapping for PD and LGD in similar 
circumstances. A bank should have well-documented policies that govern 
the mapping. Any exceptions to mapping policy should be reviewed, 
justified and fully documented. Mapping may be done for each exposure 
or for broad categories of exposure; the latter would be analogous to 
the ``grade mapping'' discussed earlier in this chapter.
Application
   In the application stage, the estimated relationship between 
drivers and LEQ is applied to the bank's actual portfolio. To ensure 
that estimated EAD is at least as large as the currently drawn amount 
for all exposures, LEQs must not be negative. Multiple reference data 
sets may be used for LEQ estimation and combined at the application 
stage; those combinations should be rigorously developed, approved, and 
documented. Any smoothing or use of expert judgment to adjust the 
results should be well-justified and clearly documented. This includes 
any adjustment for definitions of default that do not meet the 
supervisory standards. The less robust the process, the more 
conservative the result should be.
   Some facility types may be treated as exceptions, and assigned an 
LEQ that does not vary with characteristics such as line of business or 
risk rating. Such exceptional treatment should be clearly justified, 
and the justification should be fully documented.
   EAD may be particularly sensitive to changes in the way banks 
manage individual credits. For example, a change in policy regarding 
covenants may have a significant impact on LEQ. When such changes take 
place, the bank should consider them when making its estimates--and it 
should do so from a conservative point of view. Policy changes likely 
to significantly increase LEQ should prompt immediate increases in LEQ 
estimates. If a bank's policy changes seem likely to reduce LEQ, 
estimates should be reduced only after the bank accumulates a 
significant amount of actual experience under the new policy to support 
the reductions.

E. Maturity (M)

   A bank must assign a value of effective remaining maturity (M) to 
each credit exposure in its portfolio. In general, M is the weighted-
average number of years to receipt of the cash flows the bank expects 
under the contractual terms of the exposure, where the weights are 
equal to the fraction of the total undiscounted cash flow to be 
received at each date. Mathematically, M is given by:
M = Σt (wt x t)

where wt is the fraction of the total cash flow received at 
time t, that is:
wt = Ct / Σs Cs

Ct is the undiscounted cash flow received at time t, with t 
measured in years from the date of the calculation of M.
   Effective maturity, sometimes referred to as ``average life,'' need 
not be a whole number, and often is not. For example, if 33 percent of 
the cash flow is expected at the end of one year (t=1) and the other 67 
percent two years from today (t=2), then M is calculated as:

M = (1x0.33) + (2x0.67) = 1.67

for an effective maturity of 1.67 years. This value of M would be used 
in the IRB capital calculation.
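   The calculation can be sketched as follows (Python; the cash flow 
schedule is hypothetical, and the upper and lower limits on M discussed 
below would be applied afterward):

def effective_maturity(cash_flows):
    """M = sum over t of (wt x t), with wt = Ct / total undiscounted cash flow.

    cash_flows -- list of (t_in_years, undiscounted_cash_flow_Ct)
    """
    total = sum(c for _, c in cash_flows)
    return sum(t * c / total for t, c in cash_flows)

# The two-payment example from the text: 33 percent at t=1, 67 percent at t=2.
print(effective_maturity([(1.0, 33.0), (2.0, 67.0)]))  # 1.67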
   The relevant cash flows are the future payments the bank expects to 
receive from the obligor, regardless of form; they may include payments 
of interest or fees, principal repayments, or other types of payments 
depending on the structure of the transaction. For exposures whose cash 
flow schedule is virtually predetermined unless the obligor defaults 
(fixed-rate loans, for example), the calculation of the weighted-
average remaining maturity is straightforward, using the scheduled 
timing and amounts of the individual undiscounted cash flows. These 
cash flows should be the contractually expected payments; the bank 
should not take into account the possibility of delayed or reduced cash 
flows due to potential future default.
   Cash flows associated with other types of credit exposures may be 
somewhat less certain. In such cases, the bank must establish a method 
of projecting expected cash flows. In general, the method used for any 
exposure should be the same as the one used by the bank for purposes of 
valuation or risk management. The method must be well-documented and 
subject to independent review and approval. A bank must demonstrate 
that the method used is standard industry practice, that it is widely 
used within the bank for purposes other than regulatory capital 
calculations, or both.
   To be conservative, a bank may set M equal to the maximum number of 
years the obligor could take to fully discharge the contractual 
obligation (provided that the maximum is not longer than five years, as 
noted below). In many cases, this maximum will correspond to the stated 
or nominal maturity of the instrument. Banks must make this 
conservative choice (maximum nominal maturity) if the timing and 
amounts of

[[Page 45968]]

the cash flows on the exposure cannot be projected with a reasonable 
degree of confidence.
   Certain over-the-counter derivatives contracts and repurchase 
transactions may be subject to master netting agreements. In such 
cases, the bank may compute a single value of M for the transactions as 
a group by weighting each individual transaction's effective maturity 
by that transaction's share of the total notional value subject to the 
netting agreement, and summing the result across all of the 
transactions.
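   For illustration (Python; the transactions are hypothetical), the 
single value of M for a netting set is the notional-weighted average of 
the individual effective maturities:

netting_set = [
    # (effective_maturity_in_years, notional_amount)
    (0.5, 10_000_000),
    (3.0, 5_000_000),
]

total_notional = sum(n for _, n in netting_set)
m_netting_set = sum(m * n / total_notional for m, n in netting_set)
print(m_netting_set)  # approximately 1.33 years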
   For IRB capital calculations, the value of M for any exposure is 
subject to certain upper and lower limits, regardless of the actual 
effective maturity of the exposure. In all cases, the value of M should 
be no greater than 5 years. If an exposure clearly has an effective 
maturity that exceeds this upper limit, the bank may simply use a value 
of M=5 rather than calculating the actual effective maturity.
   For most exposures, the value of M must be no less than one year. 
For certain short-term exposures (repo-style transactions, money market 
transactions, trade finance-related transactions, and exposures arising 
from payment and settlement processes) that are not part of a bank's 
ongoing financing of a borrower and that have an original maturity of 
less than three months, M may be set as low as one day. For over-the-
counter derivative and repurchase-style transactions subject to a 
master netting agreement, weighted average maturity must be set at no 
less than five days.

F. Validation

   Values of PD, LGD, and EAD are estimates with implications for 
credit risk and the future performance of a bank's credit portfolio 
under IRB; in essence, they are forecasts. ``Validation'' of these 
estimates describes the full range of activities used to assess their 
quality as forecasts of default rates, loss severity rates, and 
exposures at default. Chapter 1 discusses validation of IRB systems in 
general; this section focuses specifically on ratings quantification, 
which includes the assignment of PD to obligor grades and the 
assignment of LGD, EAD, and M to exposures.
   S. A validation process must cover all aspects of IRB 
quantification.
   Banks must have a process for validating IRB quantification; their 
policies must state who is accountable for validation, and describe the 
actions that will proceed from the different possible results. 
Validation should focus on the three estimated IRB parameters (PD, LGD, 
and EAD). Although the established validation process should result in 
an overall assessment of IRB quantification for each parameter, it also 
must cover each of the four stages of the quantification process as 
described in preceding sections of this chapter (data, estimation, 
mapping, and application). The validation process must be fully 
documented, and must be approved by appropriate levels of the bank's 
senior management. The process must be updated periodically to 
incorporate new developments in validation practices and to ensure that 
validation methods remain appropriate; documentation must be updated 
whenever validation methods change.
   Banks should use a variety of validation approaches or tools; no 
single validation tool can completely and conclusively assess IRB 
quantification. Three broad types of tools that are useful in this 
regard are evaluation of the conceptual soundness of the approach to 
quantification (evaluation of logic), comparison to other sources of 
data or estimates (benchmarking), and comparisons of actual outcomes to 
predictions (back-testing). Each of these types of tools has a role to 
play in validation, although the role varies across the four stages of 
quantification.
   Evaluation of logic is essential in validating all stages of the 
quantification process. The quantification process requires banks to 
adopt methods, choose variables, and make adjustments; each of these 
actions requires an exercise of judgment. Validation should ensure that 
these judgments are plausible and informed.
   A bank should also validate estimates by comparing them with 
relevant external sources, a process broadly described as benchmarking. 
``External'' in this context refers to anything other than the specific 
reference data, estimation approach, or mapping under consideration. 
Reference data can be compared with other data sources; choices of 
variables can be compared with similar choices made by others; 
estimation results can be compared with the results of alternative 
estimation methods using the same reference data. Other data sources 
may show that default and severity rates across the economy or the 
banking system are high or low relative to other periods, or may reveal 
unusual effects in parts of the quality spectrum.
   Effective validation must compare actual results with predictions. 
Such comparisons, often referred to as ``back-testing,'' are valuable 
comprehensive tests of the rating system and its quantification. 
However, they are only one element of the broader validation regime, 
and should not be a bank's only method of validation. Because they test 
the results of the rating system as a whole, they are unlikely to 
identify specific reasons for any divergence between expectations and 
realizations. Rather, they will indicate only that further investigation 
is necessary.
   By applying back-testing to the reference data set as it is updated 
with new data, a bank can improve the estimation process. To further 
improve the process, a bank must regularly compare realized default 
rates, loss severities, and exposure-at-default experience from its 
portfolio with the PD, LGD, and EAD estimates on which capital 
calculations are based. Realizations should be compared with expected 
ranges based on the estimates. These expected ranges should take into 
account the bank's rating philosophy (the relative weight given to 
current and stress conditions in assigning ratings). Depending on that 
philosophy, year-by-year realized default rates and loss severities may 
be expected to differ significantly from the long-run average. If a 
bank adjusts final estimates to be conservative, it should likely do 
its back-testing on the unadjusted estimates.
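   As one hedged illustration of such a comparison (Python; the figures 
are hypothetical, the normal approximation assumes independent defaults, 
and an actual validation framework would also account for default 
correlation and the bank's rating philosophy):

import math

def expected_range(pd, n_obligors, z=2.0):
    """Approximate bounds on a grade's realized one-year default rate."""
    se = math.sqrt(pd * (1 - pd) / n_obligors)
    return max(pd - z * se, 0.0), min(pd + z * se, 1.0)

pd_grade, n, realized = 0.01, 400, 9 / 400
low, high = expected_range(pd_grade, n)
print(low <= realized <= high)  # False here, flagging further investigation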
   A bank's quantitative testing methods and other validation 
techniques should be robust to economic cycles. A sound validation 
process should take business cycles into account, and any adjustments 
for stages of the cycle should be clearly specified in advance and 
fully documented as part of the validation policy. The fact that a year 
has been ``unusual'' should not be taken as a reason to abandon the 
bank's standard validation practices.
   S. A bank must comprehensively validate parameter estimates at 
least annually, must document the results, and must report these 
results to senior management.
   A full and comprehensive annual validation is a minimum for 
effective risk management under IRB. More frequent validation may be 
appropriate for certain parts of the IRB system and in certain 
circumstances; for example, during high-default periods, banks should 
compute realized default and loss severity rates more frequently, 
perhaps quarterly. They must document the results of validation, and 
must report them to appropriate levels of senior risk management.
   S. The validation policy must outline appropriate remedial 
responses to the results of parameter validation.
   The goal of validation should be to continually improve the rating 
process and its quantification. To this end, the bank should establish 
thresholds or accuracy tolerances for validation results. Results that 
breach thresholds

[[Page 45969]]

should bring an appropriate response; that response should depend on 
the results and should not necessarily be to adjust the parameter 
estimates. When realized default, severity, or exposure rates diverge 
from expected ranges, those divergences may point to issues in the 
estimation or mapping elements of quantification. They may also 
indicate potential problems in other parts of the ratings assignment 
process. The bank's validation policy must describe (at least in broad 
terms) the types of responses that should be considered when relevant 
action thresholds are crossed.

Appendix to Part III: Illustrations of the Quantification Process

   This appendix provides examples to show how the logical 
framework described in this guidance, with its four stages (data, 
estimation, mapping, and application), applies when analyzing 
typical current bank practices. The framework is broadly 
applicable--for PD or LGD or EAD; using internal, external, or 
pooled reference data; for simple or complex estimation methods--
although the issues and concerns that arise at each stage depend on 
a bank's approach. These examples are intended only to illustrate 
the logic of the four-stage IRB quantification framework, and should 
not be taken to endorse the particular techniques presented in the 
examples. In fact, certain aspects of the examples are not 
consistent with the standards outlined in this guidance.

Example 1: PD Estimation From Bond Data

   [sbull] A bank establishes a correspondence between its internal 
grades and external rating agency grades; the bank has determined 
that its Grade 4 is equivalent to 3/4 BB and 1/4 B on the 
Standard and Poor's scale.
   [sbull] The bank regularly obtains published estimates of mean 
default frequencies for publicly rated BB and B obligors in North 
America from 1970 through 2002.
   [sbull] The BB and B historical default frequencies are weighted 
75/25, and the result is a preliminary PD for the bank's internal 
Grade 4 credits.
   [sbull] However, the bank then increases the PD by 10 percent to 
account for the fact that the S&P definition of default is more 
lenient than the IRB definition.
   [sbull] The bank makes a further adjustment to ensure that the 
resulting grade PD is greater than the PD attributed to Grade 3 and 
less than the PD attributed to Grade 5.
   [sbull] The result is the final PD estimate for Grade 4.
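   The arithmetic in this example might be sketched as follows (Python; 
the default frequencies and adjacent-grade PDs are hypothetical 
stand-ins, not published agency figures):

pd_bb, pd_b = 0.012, 0.055                # long-run mean default frequencies
preliminary = 0.75 * pd_bb + 0.25 * pd_b  # 75/25 grade mapping
estimated = preliminary * 1.10            # +10 percent for the default definition
pd_grade3, pd_grade5 = 0.010, 0.060       # PDs of the adjacent grades

# Application-stage adjustment: keep Grade 4 between its neighbors.
pd_grade4 = min(max(estimated, pd_grade3), pd_grade5)
print(pd_grade4)  # 0.025025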

Process Analysis for Example 1

   Data--The reference data set consists of issuers of publicly 
rated debt in North America over the period 1970 through 2002. The 
data description is very basic: each issuer in the reference data is 
described only by its rating (such as AAA, AA, A, BBB, and so on).
   Estimation--The bank could have estimated default rates itself 
using a database purchased from Standard and Poor's, but since these 
estimates would just be the mean default rates per year for each 
grade, the bank could just as well (and in this example does) use 
the published historical default rates from S&P; in essence, the 
estimation step has been outsourced to S&P. The 10 percent 
adjustment of PD is part of the estimation process in this case 
because the adjustment was made prior to the application of the 
agency default rates to the internal portfolio data.
   Mapping--The bank's mapping is an example of a grade mapping; 
internal Grade 4 is linked to the 75/25 mix of BB and B. Based on 
the limited information presented in the example, this step should 
be explored further. Specifically, how did the bank determine the 
75/25 mix?
   Application--Although the application step is relatively 
straightforward in this case, the bank does make the adjustment of 
the Grade 4 PD estimate to give it the desired relationship to the 
adjacent grades. This adjustment is part of the application stage 
because it is made after the adjusted agency default rates are 
applied to the internal grades.

Example 2: PD Estimation Using a Merton-Type Equity-Based Model

   [sbull] A bank obtains a 20-year database of North American 
firms with publicly traded equity, some of which defaulted during 
the 20-year period.
   [sbull] The bank uses the Merton approach to modeling equity in 
these firms as a contingent claim, constructing an estimate of each 
firm's distance-to-default at the start of each year in the 
database. The bank then ranks the firm-years within the database by 
distance-to-default, divides the ordered observations into 20 equal 
groups or buckets, and computes a mean historical one-year default 
frequency for each bucket. That default frequency is taken as an 
estimate of the applicable PD for any obligor within the range of 
distance-to-default values represented by each of the 20 buckets.
   [sbull] The bank next looks at all obligors with publicly traded 
shares within each of its internal grades, applies the same Merton-
type model to compute distance-to-default at quarter-end, sorts 
these observations into the 20 buckets from the previous step, and 
assigns the corresponding PD estimate.
   [sbull] For each internal grade, the bank computes the mean of 
the individual obligor default probabilities and uses that average 
as the grade PD.
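   The bucketing and averaging steps might be sketched as follows 
(Python; the distance-to-default values and default flags are simulated, 
whereas a real implementation would derive distance-to-default from a 
Merton-type model of traded equity):

import math
import random

random.seed(0)
firm_years = []
for _ in range(20_000):
    dd = random.uniform(0.0, 10.0)                      # simulated distance-to-default
    defaulted = random.random() < 0.15 * math.exp(-dd)  # default more likely at low DD
    firm_years.append((dd, defaulted))

firm_years.sort(key=lambda fy: fy[0])  # rank firm-years by distance-to-default
n_buckets = 20
size = len(firm_years) // n_buckets
bucket_pds = []
for i in range(n_buckets):
    bucket = firm_years[i * size:(i + 1) * size]
    bucket_pds.append(sum(d for _, d in bucket) / len(bucket))
# bucket_pds[i] is the PD estimate applied to any obligor whose
# distance-to-default falls within bucket i.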

Process Analysis for Example 2

   Data--The reference data set consists of the North American 
firms with publicly traded equity in the acquired database. The 
reference data are described in this case by a single variable, 
specifically an identifier of the specific distance-to-default range 
from the Merton model (one of the 20 possible in this case) into 
which a firm falls in any year.
   Estimation--The estimation step is simple: the average default 
rate is calculated for each distance-to-default bucket. Since the 
data cover 20 years and a wide range of economic conditions, the 
resulting estimates satisfy the long-run average requirement.
   Mapping--The bank maps selected portfolio obligors to the 
reference data set using the distance-to-default generated by the 
Merton model. However, not all obligors can be mapped, since not all 
have traded equity. This introduces an element of uncertainty into 
the mapping that requires additional analysis by the bank: were the 
mapped obligors representative of other obligors in the same grade? 
The bank would need to demonstrate comparability between the 
publicly traded portfolio obligors and those not publicly traded. It 
may be appropriate for the bank to make conservative adjustments to 
its ultimate PD estimates to compensate for the uncertainty in the 
mapping. The bank also would need further analysis to demonstrate 
that the implied distance-to-default for each internal grade 
represented long-run expectations for obligors assigned to that 
grade; this could involve computing the Merton model for portfolio 
obligors over several years of relevant history that span a wide 
range of credit conditions.
   Application--The final step is aggregation of individual 
obligors to the grade level through calculation of the mean for each 
grade, and application of this grade PD to all obligors in the 
grade. The bank might also choose to modify PD assignments further 
at this stage, combining PD estimates derived from other sources, 
applying adjustments for cyclicality, introducing an appropriate 
degree of conservatism, or making other adjustments.

Example 3: LGD Estimation From Internal Default Data

   [sbull] For each loan in its portfolio, a bank records 
collateral coverage as a percentage, as well as which of four types 
of collateral applies.
   [sbull] A bank has retained data on all defaulted loans since 
1995. For each defaulted loan in the database, the bank has a record 
of the collateral type within the same four broad categories. 
However, collateral coverage is only recorded at three levels (low, 
moderate, or high, depending on the ratio of collateral to exposure 
at default).
   [sbull] The bank also records the timing and discounted value of 
recoveries net of workout costs for each defaulted loan in the 
database. Cash flows are tracked from the date of default to a 
``resolution date,'' defined as the point at which the remaining 
balance is less than 5 percent of the exposure at the time of 
default. A recovery percentage is computed, equal to the value of 
recoveries discounted to the date of default, divided by the 
exposure at default.
   [sbull] For each cell (each of the 12 combinations of collateral 
type and coverage), the bank computes a simple mean LGD percentage 
as the mean of one minus the recovery percentage. One of the 
categories has a mean LGD of less than zero (recoveries have 
exceeded exposure on average), so the bank sets the LGD at zero to 
be conservative.
   [sbull] The bank assigns an estimate of expected LGD to each 
loan in the current portfolio by using collateral information to 
slot it into one of the 12 cells. The bank then applies the mean 
historical LGD for that cell and adjusts the result upward by 10 
percent to compensate for the fact that the loss data come from a 
period believed to be one of unusually good economic performance.
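   The cell-mean calculation might be sketched as follows (Python; data 
are hypothetical, and, as the process analysis below notes, the zero 
floor shown here does not satisfy the requirement that LGD exceed zero):

from collections import defaultdict

# (collateral_type, coverage_level) -> observed recovery percentages
defaults = [
    (("receivables", "high"), 1.05),  # recoveries exceeded exposure
    (("receivables", "high"), 0.98),
    (("real_estate", "low"), 0.40),
    (("real_estate", "low"), 0.55),
]

cells = defaultdict(list)
for cell, recovery_pct in defaults:
    cells[cell].append(1.0 - recovery_pct)  # LGD = 1 - recovery percentage

lgd_by_cell = {}
for cell, losses in cells.items():
    mean_lgd = max(sum(losses) / len(losses), 0.0)  # the bank's zero floor
    lgd_by_cell[cell] = mean_lgd * 1.10             # +10 percent for the benign period
print(lgd_by_cell)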

[[Page 45970]]

Process Analysis for Example 3

   Data--The reference data is the collection of historical 
defaults with the loss amounts from the bank's historical portfolio. 
The reference data are described by the two categorical variables 
(levels of collateral coverage and types of collateral). It would be 
important to determine whether the defaults over the past few years 
are comparable to defaults from the current portfolio. One would 
also want to ask why the bank ignores potentially valuable 
information by converting the continuous data on collateral coverage 
into a three-level categorical variable.
   Estimation--Conceptually, the bank is using a ``loss severity 
model'' in which 12 binary variables, one for each loan coverage/
type combination, explain the percentage loss. The coefficients on 
the variables are just the mean loss figures from the reference 
data.
   Mapping--Mapping in this case is fairly straightforward, since 
all of the relevant characteristics of the reference data are also 
in the loan system for the current portfolio. However, the bank 
should determine whether the variables are being recorded in the 
same way (for example, the same definitions of collateral types), 
otherwise some adjustment might be needed.
   Application--The bank is able to apply the loss model by simply 
plugging in the relevant values for the current portfolio (or what 
amounts to the same thing, looking up the cell mean). The bank's 
assignment of zero LGD for one of the cells merits special 
attention; while the bank represented this assignment as 
conservative, the adjustment does not satisfy the supervisory 
requirement that LGD must exceed zero. A larger upward adjustment is 
necessary. Finally, the upward adjustment of the LGD numbers to 
account for the benign environment in which the reference data were 
generated presents one additional wrinkle. The bank must provide a 
well-documented, empirically based analysis of why a 10 percent 
upward adjustment is sufficient.

IV. Data Maintenance

A. Overview

   Institutions using the IRB approach for regulatory capital purposes 
will need advanced data management practices to produce credible and 
reliable risk estimates. The guiding principle governing an IRB data 
maintenance system is that it must support the requirements for the 
quantification, validation, control and oversight mechanisms described 
in this guidance, as well as the institution's broader risk management 
and reporting needs. The precise data elements to be collected will be 
dictated by the features and methodology of the IRB system employed by 
the institution. The necessary data elements will therefore vary by 
institution and even among business lines within an institution.
   Institutions will have latitude in managing their data, subject to 
the following key data maintenance standards:
   Life Cycle Tracking--institutions must collect, maintain, and 
analyze essential data for obligors and facilities throughout the life 
and disposition of the credit exposure.
   Rating Assignment Data--institutions must capture all significant 
quantitative and qualitative factors used to assign the obligor and 
loss severity ratings.
   Support of IRB System--data collected by institutions must be of 
sufficient depth, scope, and reliability to:
   [sbull] Validate IRB system processes,
   [sbull] Validate parameters,
   [sbull] Refine the IRB system,
   [sbull] Develop internal parameter estimates,
   [sbull] Apply improvements historically,
   [sbull] Calculate capital ratios,
   [sbull] Produce internal and public reports, and
   [sbull] Support risk management.
   This chapter covers the requirements for maintaining internal data. 
Reference data sets used for estimating IRB parameters are discussed in 
Chapter 2.

B. Data Maintenance Framework

Life Cycle Tracking
   S. Institutions must collect, maintain, and analyze essential data 
for obligors and facilities throughout the life and disposition of the 
credit exposure.
   Using a life cycle or ``cradle to grave'' concept for each obligor 
and facility supports front-end validation, back-testing, system 
refinements and risk parameter estimates. A depiction of life-cycle 
tracking follows:
[Figure: life-cycle tracking of obligor and facility data, from 
origination through periodic rating reviews to final disposition]

   Data elements must be recorded at origination and whenever the 
rating is reviewed, regardless of whether the rating is actually 
changed. Data elements associated with current and past ratings must be 
retained and include the following:
   [sbull] Key borrower and facility characteristics,
   [sbull] Ratings for obligor and loss severity grades,
   [sbull] Key factors used to assign the ratings,
   [sbull] Person or model responsible for assigning the rating,
   [sbull] Date rating assigned, and
   [sbull] Overrides to the rating and authorizing individual.
   At disposition, data elements must include:
   [sbull] Nature of disposition: renewal, repayment, loan sale, 
default, restructuring,
   [sbull] For defaults: exposure, actual recoveries, source of 
recoveries, costs of workouts and timing,
   [sbull] Guarantor support,
   [sbull] Sale price for loans sold, and
   [sbull] Other key elements that the bank deems necessary.

[[Page 45971]]

Rating Assignment Data
   S. Institutions must capture all significant quantitative and 
qualitative factors used to assign the obligor and loss severity 
rating.
   Assigning a rating to an obligor requires the systematic collection 
of various borrower characteristics, as these factors are critical to 
validating the rating system. Obligors are rated using various methods, 
as discussed in Chapter 1. Each of these methods presents different 
challenges for input collection. For example, in judgmental rating 
systems, the factors used in the ratings decision have not 
traditionally been explicitly recorded. For purposes of an IRB 
approach, institutions that use expert and constrained judgment must 
record these factors and deliver them to the data warehouse.
   For loss severity estimates, institutions must record the basic 
structural characteristics of facilities and the factors used in 
developing the facility rating or LGD estimate. These often include the 
seniority of the credit, the amount and type of collateral, the most 
recent collateral valuation date and its fair value.
   Institutions must also track any overrides of the obligor or loss 
severity rating. Tracking overrides separately allows risk managers to 
identify whether the outcome of such overrides suggests either problems 
with rating criteria, or an improper level of discretion in adjusting 
the ratings.
Example Data Elements
   For illustrative purposes, the following section provides examples 
of the kinds of data elements institutions will collect under an IRB 
data maintenance framework.
General descriptive obligor and facility data
   The data below could be contained within a loan record or derived 
from various sources within the data warehouse. Guarantor data 
requirements are the same as for the obligor.
Obligor/Guarantor Data
   [sbull] General data: name, address, industry
   [sbull] ID number (unique for all related parent/sub relationships)
   [sbull] Rating, date, and rater
   [sbull] PD percentage corresponding to rating
General Facility Characteristics
   [sbull] Facility amounts: committed, outstanding
   [sbull] Facility type: Term, revolver, bullet, amortizing, etc.
   [sbull] Purpose: acquisition, expansion, liquidity, inventory, 
working capital
   [sbull] Covenants
   [sbull] Facility ID number
   [sbull] Origination and maturity dates
   [sbull] Last renewal date
   [sbull] Obligor ID link
   [sbull] Rating, date and rater
   [sbull] LGD dollar amount or percentage
   [sbull] EAD dollar amount or percentage
Rating Assignment Data
   The data below provide an example of the categories and types of 
data that institutions must retain in order to continually validate and 
improve rating systems. These data items should tie directly to the 
documented criteria that the institution employs in assigning ratings, 
both qualitative and quantitative. For example, rating criteria often 
include ranges of leverage or cash flow for a particular obligor 
rating. In addition, qualitative factors, such as management 
effectiveness can be recorded in numeric form. For example, a 1 may 
equate to exceptionally strong management, and a 5 to very weak. The 
rating data elements collected should be complete enough so that others 
can review the relevant factors driving the rating decisions.
Quantitative Factors in Obligor Ratings
   [sbull] Asset and sales size
   [sbull] Key ratios used within rating criteria:
--profitability,
--cash flow,
--leverage,
--liquidity, and
--other relevant factors.
Qualitative Factors in Obligor Ratings
   [sbull] Quality of earnings and cash flow
   [sbull] Management effectiveness, reliability
   [sbull] Strategic direction, industry outlook, position
   [sbull] Country factors and political risk
   [sbull] Other relevant factors
External Factors in Obligor Ratings
   [sbull] Public debt rating and trend
   [sbull] External credit model score and trend
Rating Notations
   [sbull] Flag for overrides or exceptions
   [sbull] Authorized individual for changing rating
Key Facility Factors in LGD Ratings
   [sbull] Seniority
   [sbull] Collateral type: (cash, marketable securities, AR, stock, 
RE, etc.)
   [sbull] Collateral value and valuation date
   [sbull] Advance rates, LTV
   [sbull] Industry
   [sbull] Geography
Rating Notations
   [sbull] Flag for overrides or exceptions
   [sbull] Authorized individual for changing rating
Final Disposition Data
   Only recently have institutions begun to collect more complete data 
about a loan's disposition. Many institutions maintain subsidiary 
systems for their problem credits with details recorded, at times 
manually, on systems that were not linked with the institution's 
central loan or risk management systems. The unlinked data are a 
significant hindrance in developing reliable PD, LGD, and EAD 
estimates.
   In advanced systems, the ``grave'' portion of cradle-to-grave obligor 
and exposure tracking is an essential component for producing and 
validating risk
estimates and is an important feedback mechanism for adjusting and 
improving risk estimates over time. Essential data elements are 
outlined below.
Obligor/Guarantor
   [sbull] Default date
   [sbull] Circumstances of default (for example, nonaccrual, 
bankruptcy chapters 7-11, nonpayment)
Facility
   [sbull] Outstandings at default
   [sbull] Amounts undrawn and outstanding plus time series prior to 
and through default
Disposition
   [sbull] Amounts recovered and dates (including source: cash, 
collateral, guarantor, etc.)
   [sbull] Collection cost and dates
   [sbull] Discount factors to determine the economic cost of collection 
(see the sketch after this list)
   [sbull] Final disposition (for example, restructuring or sale)
   [sbull] Sales price, if applicable
   [sbull] Accounting items (charge-offs to date, purchased discounts)
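   As a hedged illustration of how the disposition elements above 
combine, the following sketch computes a realized loss severity (LGD) by 
discounting recoveries and collection costs back to the default date. 
The 10 percent discount rate and the cash-flow layout are assumptions 
for illustration, not supervisory requirements.

# Minimal sketch: computing a realized LGD from disposition data.
# The 10% annual discount rate and the cash-flow convention are
# assumptions for illustration, not supervisory requirements.

def realized_lgd(ead, recoveries, costs, annual_rate=0.10):
    """recoveries/costs: lists of (years_after_default, amount)."""
    pv_recoveries = sum(amt / (1 + annual_rate) ** t for t, amt in recoveries)
    pv_costs = sum(amt / (1 + annual_rate) ** t for t, amt in costs)
    return (ead - pv_recoveries + pv_costs) / ead

# Example: $1,000,000 outstanding at default; $600,000 recovered from
# collateral after one year, $150,000 from a guarantor after two years;
# $50,000 of collection costs paid after one year.
lgd = realized_lgd(
    ead=1_000_000,
    recoveries=[(1, 600_000), (2, 150_000)],
    costs=[(1, 50_000)],
)
print(f"Realized LGD: {lgd:.1%}")  # roughly 38%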

C. Data Element Functions

   S. Data elements must be of sufficient depth, scope, and 
reliability to:
   [sbull] Validate IRB system processes,
   [sbull] Validate parameters,
   [sbull] Refine the IRB system,
   [sbull] Develop internal parameter estimates,
   [sbull] Apply improvements historically,
   [sbull] Calculate capital ratios,
   [sbull] Produce internal and public reports, and
   [sbull] Support risk management.
Validation and Refinement
   The data elements collected by institutions must be capable of 
meeting

[[Page 45972]]

the validation requirements described in Chapters 1 and 2. These 
requirements include validating the institution's IRB system processes, 
including the ``front end'' aspects such as assigning ratings so that 
any issues can be identified early. The data must support efforts to 
identify whether raters and models are following rating criteria and 
policies and whether ratings are consistent across portfolios. In 
addition, data must support the validation of parameters, particularly 
the comparison of realized outcomes with estimates. Thorough data on 
default and disposition characteristics are of paramount importance for 
parameter back-testing.
   A rich source of data for validation efforts provides insights on 
the performance of the IRB system, and contributes to a learning 
environment in which refinements can be made to the system. These 
potential refinements include enhancements to rating assignment 
controls, processes, criteria or model coefficients, rating system 
architecture, and parameter estimates.
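   As a simple illustration of the parameter back-testing described 
above, the sketch below compares realized one-year default frequencies 
with the assigned PDs by grade. The input layout is an assumption; in 
practice these figures would be drawn from the data warehouse.

# Minimal back-testing sketch: comparing realized default frequency
# with the assigned PD for each grade. The input layout is an
# assumption for illustration.

def backtest_pds(exposures):
    """exposures: list of (grade, assigned_pd, defaulted_flag)."""
    by_grade = {}
    for grade, pd_est, defaulted in exposures:
        n, d, pd_sum = by_grade.get(grade, (0, 0, 0.0))
        by_grade[grade] = (n + 1, d + int(defaulted), pd_sum + pd_est)
    for grade in sorted(by_grade):
        n, d, pd_sum = by_grade[grade]
        print(f"Grade {grade}: assigned PD {pd_sum / n:.2%}, "
              f"realized default rate {d / n:.2%} ({d}/{n})")

# Toy sample only; real back-tests use full annual rating cohorts.
backtest_pds([
    ("4", 0.010, False), ("4", 0.010, True), ("4", 0.010, False),
    ("5", 0.030, False), ("5", 0.030, True),
])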
Developing Parameter Estimates
   As detailed in Chapter 2, institutions will be developing their PD, 
LGD, and EAD parameter estimates using reference data sets composed of 
internal, pooled, and external data. Institutions are expected to work 
toward eventually using as much of their own experience as possible in 
their reference data sets.
Applying Rating System Improvements Historically
   To maintain a consistent series of information for credit risk 
monitoring and validation purposes, institutions need to be able to 
apply the improvements they make to their rating systems to prior 
periods. In
the example below, a bank experiences unexpected and rapid migrations 
and defaults in its grade 4 category during 2006. Analysis of the 
actual financial condition of borrowers that defaulted compared with 
those that did not suggests the debt-to-EBITDA range for its expert 
judgment criteria of 3.0 to 5.5 is too broad. Research indicates that 
grade 4 should be redefined to include only borrowers with debt-to-
EBITDA ratios of 3.0-4.5 and grade 5 as 4.5-6.5. In 2007, the change is 
initiated, but prior years' numbers are not recast (see Exhibit A). 
Consequently, a break in the series prevents the bank from evaluating 
credit quality changes over several years and from identifying whether 
applying the new rating criteria historically provides reasonable 
results.
[GRAPHIC] [TIFF OMITTED] TN04AU03.007 (Exhibit A)

   Recognizing the need to provide senior managers and board members 
with a consistent risk trend, the new criteria are applied historically 
to obligors in grades 4 and 5 as reflected in Exhibit B. The original 
ratings assigned to the grades are maintained along with notations 
describing what the grade would be under the new rating criteria. If 
the precise weight an expert has given one of the redefined criteria is 
unknown, institutions are expected to make estimates on a best efforts 
basis. After the retroactive reallocation process, the bank observes 
that the mix of obligors in grade 5 declined somewhat over the past 
several years while the mix in grade 4 increased slightly. This 
contrasts with the trend identified before the retroactive 
reallocation. The result is that the multiyear transition statistics 
for grades 4 and 5 provide risk managers a clearer picture of risk.

[[Page 45973]]

[GRAPHIC] [TIFF OMITTED] TN04AU03.002 (Exhibit B)

   This example is based on applying ratings historically using data 
already collected by the bank. However, for some rating system 
refinements, institutions may identify in the future drivers of default 
or loss that might not have been collected for borrowers or facilities 
in the past. That is why institutions are encouraged to collect data 
that they believe may serve as a stronger predictor of default in the 
future. For example, certain elements of a borrower's cash flow might 
currently be suspected of overstating actual operating health in a 
particular industry. Should an institution later decide to deduct this 
item from cash flow, with a resulting downgrade of many obligor 
ratings, an institution that had collected these data could apply the 
rating change to prior years. This would provide a consistent picture 
of risk over time and also present opportunities to validate the new 
criteria using historical data. 
Recognizing that institutions will not be able to anticipate fully the 
data they might find useful in the future, institutions are expected to 
reallocate grades on a best efforts basis when practical.
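   A minimal sketch of the retroactive reallocation in the Exhibit A/B 
example appears below. It assumes the bank stored each obligor's debt-
to-EBITDA ratio, retains the original grade alongside the recast grade 
as described above, and flags observations falling outside both 
redefined ranges for manual, best-efforts review.

# Minimal sketch of the retroactive reallocation described above:
# grade 4 is narrowed to debt-to-EBITDA of 3.0-4.5 and grade 5 is
# redefined as 4.5-6.5. The record layout is an assumption; the
# original grade is retained alongside the recast grade.

def recast_grade(original_grade, debt_to_ebitda):
    if original_grade not in ("4", "5"):
        return original_grade            # criteria for other grades unchanged
    if 3.0 <= debt_to_ebitda < 4.5:
        return "4"
    if 4.5 <= debt_to_ebitda <= 6.5:
        return "5"
    return original_grade                # outside both ranges: flag for review

history = [
    {"obligor_id": "A1", "year": 2005, "grade": "4", "debt_to_ebitda": 5.1},
    {"obligor_id": "B2", "year": 2005, "grade": "4", "debt_to_ebitda": 3.8},
]
for record in history:
    record["recast_grade"] = recast_grade(record["grade"],
                                          record["debt_to_ebitda"])
    print(record)   # A1 is recast to grade 5; B2 remains grade 4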
Calculating Capital Ratios and Reporting to the Public
   Data retained by the bank will be essential for regulatory risk-
based capital calculations and public reporting under the Pillar 3 
disclosures. These uses underscore the need for a well-defined data 
maintenance framework and strong controls over data integrity. Control 
processes and data elements themselves should also be subject to 
periodic verification and testing by internal and external auditors. 
Supervisors will rely on these processes and also perform testing as 
circumstances warrant.
Supporting Risk Management
   The information that can be gleaned from more extensive data 
collection will support a broad range of risk management activities. 
Risk management functions will rely on accurate and timely data to 
track credit quality, make informed portfolio risk mitigation 
decisions, and perform portfolio stress tests. Trends developed from 
obligor and facility risk rating data will be used to support internal 
capital allocation models, pricing models, ALLL calculations, and 
performance management measures, among others. Summaries of these are 
included in reports to institutions' boards of directors and 
regulators, as well as in public disclosures.

D. Managing Data Quality and Integrity

   Because data are collected at so many different stages involving a 
variety of groups and individuals, there are numerous challenges to 
ensuring the quality of the data. For example:
   [sbull] Data will be retained over long timeframes,
   [sbull] Qualitative risk-rating variables will have subjective 
elements and will be open to interpretation, and
   [sbull] Exposures will be acquired through mergers and purchases, 
but without an adequate and easily retrievable institutional rating 
history.
Documentation and Definitions
   S. Institutions must document the process for delivering, retaining 
and updating inputs to the data warehouse and ensuring data integrity.
   Given the many challenges presented by data for an IRB system, the 
management of data must be formalized. Fully documenting how the 
institution's flow of data is managed provides a means for evaluating 
whether the data maintenance framework is functioning as intended. 
Moreover, institutions must be able to communicate to individuals 
developing or delivering various data the precise definition of the 
items intended to be collected. Consequently, a ``data dictionary'' is 
necessary to ensure consistent inputs from individuals and data vendors 
and to allow third parties (such as the rating system review function, 
auditors, or bank supervisors) to evaluate data quality and integrity.
   S. Institutions must develop comprehensive definitions for the data 
elements used within each credit group or business line (a ``data 
dictionary'').
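   For illustration, a single data dictionary entry might look like the 
following sketch. The attributes shown are assumptions about what a 
useful entry could capture, not a prescribed format.

# Illustrative data dictionary entry. The attributes shown are
# assumptions about what a useful entry might capture, not a
# prescribed schema.
DATA_DICTIONARY = {
    "debt_to_ebitda": {
        "definition": "Total debt divided by trailing-12-month EBITDA",
        "type": "decimal",
        "valid_range": (0.0, 50.0),
        "source_system": "financial spreading system",
        "owner": "corporate credit risk",
        "used_in": ["obligor rating criteria", "grade 4/5 boundaries"],
    },
}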
Electronic Storage
   S. Institutions must store data in electronic format to allow 
timely retrieval for analysis, validation of risk rating systems, and 
required disclosures.
   To meet the significant data management challenges presented by the 
validation and control features of an IRB system, institutions will 
need to store their data electronically. Institutions will have a 
variety of storage techniques and potentially a variety of systems to 
create their data

[[Page 45974]]

warehouses. IRB data requirements can be met by melding together 
existing accounting, servicing, processing, workout and risk management 
systems, provided the linkages among these systems are well documented 
and include sufficient edit and integrity checks to ensure the data can 
be used reliably.
   Institutions without electronic databases would need to resort to 
manual reviews of paper files for ongoing back-testing and ad hoc 
``forensic'' data mining and would be unable to perform that work in 
the timely and comprehensive manner required of IRB systems. Forensic 
mining of paper files to build an initial data warehouse from the 
institution's credit history is encouraged. In some instances, paper 
research may be necessary to identify data elements or factors not 
originally considered significant in estimating the risk of a 
particular class of obligor or facility.
Data Gaps
   Rating histories are often lost or are irretrievable for loans 
acquired through mergers, acquisitions, or portfolio purchases. 
Institutions are encouraged wherever practical to collect any missing 
historical rating assignment driver data and to re-grade the acquired 
obligors and facilities for prior periods. In cases where retrieving 
historical data is not practical, institutions may attempt to create a 
rating history through a careful mapping of the legacy system and the 
new rating structure. Mapped ratings should be reviewed thoroughly for 
accuracy. The level of effort placed on filling data gaps should be 
commensurate with the size of the exposures newly incorporated into the 
institution's IRB system.
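   The following sketch illustrates one simple form such a legacy-to-new 
mapping could take. The mapping shown is hypothetical, and, as noted 
above, mapped ratings would need thorough review for accuracy.

# Minimal sketch of mapping an acquired portfolio's legacy grades onto
# the institution's rating structure. The mapping itself is a
# hypothetical example, not a recommended correspondence.
LEGACY_TO_NEW = {"A": "2", "B": "4", "C": "5", "D": "7"}

def map_legacy_grade(legacy_grade):
    new_grade = LEGACY_TO_NEW.get(legacy_grade)
    if new_grade is None:
        raise ValueError(f"Unmapped legacy grade: {legacy_grade!r}")
    return new_grade

print(map_legacy_grade("B"))  # prints "4"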

V. Control and Oversight Mechanisms

A. Overview

   Banks' internal rating systems are the foundation for credit-risk 
management practices and play an important role in pricing, reserving, 
portfolio management, performance measurement, economic capital 
modeling, and long-term capital planning. Banks adopting the IRB 
approach will also use their credit-risk ratings to determine 
regulatory capital levels. The pivotal and varied uses of such risk 
ratings put enormous, sometimes conflicting, pressure on banks' 
internal rating systems. The consequences of inaccurate ratings and 
their associated estimates are significant, particularly as they affect 
minimum regulatory capital requirements.
   As risk ratings and their related parameters become better 
integrated in institutions' decision making, conflicting incentives 
arise that, if not well managed, can lead to overly optimistic or 
biased ratings. For example, sales and marketing staff (relationship 
managers or RMs) are typically compensated according to the volume of 
business they generate. That may predispose the RMs to assign more 
favorable ratings in order to achieve rate-of-return and sales 
objectives. More favorable ratings may create the appearance of higher 
risk-adjusted returns and business line profitability. Banks need to be 
aware of the full range of incentive conflicts that arise, and must 
develop effective controls to keep these incentive conflicts in check.
   Banks will have latitude in designing and implementing their 
control structures subject to the following principle:
   IRB institutions must implement a system of controls that includes 
the following elements: independence, transparency, accountability, use 
of ratings, rating system review, internal audit, and board and senior 
management oversight. While banks will have flexibility in how these 
elements are combined, they must incorporate sufficient checks and 
balances to ensure that the credit risk management system is 
functioning properly.
   Banks additionally will want to embody the following more generic 
principles in their control system: separation of duties, balancing 
incentives, and layers of review. Table 4.1 lists the key components of 
an IRB control and oversight system. How these control mechanisms can 
best be combined to reinforce one another is a key challenge for banks 
implementing IRB systems:

Table 4.1 Control and Oversight Mechanisms
[GRAPHIC] [TIFF OMITTED] TN04AU03.003


[[Page 45975]]


   As the following examples indicate, how a bank conducts its 
business will influence how it designs its control structure. A bank 
using an expert-judgment system will likely establish a different set 
of controls than a bank using mainly models. Recognizing that its 
expert-judgment system is less than fully transparent, a bank could 
offset this vulnerability by opting for complete independence in the 
rating approval process and an enhanced rating system review.
   Other considerations would influence the choice of controls when 
banks use models to assign ratings. While the ratings produced by 
models are transparent, a model's performance depends on how well the 
model was developed, the model's logic, and the quality of the data 
used to implement the model. Banks that use models to assign ratings 
must implement a system of controls that addresses model development, 
testing and implementation, data integrity and overrides. These 
activities would be covered by a comprehensive and independent rating 
system review and by ongoing spot checks on the accuracy of model 
inputs. Other control mechanisms such as accountability and audit would 
also be required.

B. Independence in the Rating Approval Process

   An independent rating process is one in which the parties 
responsible for approving ratings and transactions are separate from 
sales and marketing and in which the persons approving ratings are 
principally compensated on risk-rating accuracy. As relative 
independence increases, the likelihood of accurate rating assignments 
grows markedly.
   S. Ratings must be subject to independent approval or review.
   One way institutions can better achieve objective and accurate risk 
ratings is by ensuring that their rating approval processes are 
independent. 
Institutions that firmly separate sales/marketing from credit are 
better able to manage the conflict between the goal of high sales 
volume and the need for good credit quality. An institution whose 
rating process is less independent must compensate by strengthening 
other control and oversight mechanisms. A significant factor in the 
evaluation of the rating system will be the assessment of whether such 
compensating controls are sufficient to offset a less-than-independent 
ratings process. While the overriding objective is to achieve 
independence in the rating approval process, in some instances, the 
relative materiality of a portfolio and cost/benefit trade-offs may 
support a less rigorous control process.
   The degree of independence achieved in the rating process depends 
on how an institution is organized and how it conducts its lending 
activities.
Rating Approval Processes
   Responsibility for recommending and approving ratings varies by 
institution and, quite often, by portfolio.\7\ At some institutions, 
ratings are assigned and approved by relationship managers (RMs); at 
others, deal teams assign ratings that are later approved by credit 
officers. Still other institutions have independent credit officers 
assign and approve ratings. The culture of an institution and its 
business mix generally determine whether the business line or credit 
function is ultimately responsible for ratings.
---------------------------------------------------------------------------

   \7\ Rating processes vary by institution but generally involve 
an ``assignor'' and an ``approver.'' For instance, at many 
organizations the rating assignor is the person who ``owns'' the 
relationship (such as a ``relationship manager'') and the rating 
approver is an individual with credit authority (a ``credit risk 
manager''). In some cases, the rating assignor and approver are the 
same. Banks that separate the rating assignment and approval 
processes do so in order to minimize potential conflicts of interest 
and the potential for rating errors.
---------------------------------------------------------------------------

   The subsections that follow describe various rating assignment and 
approval structures used by banking organizations and the challenges 
that emerge in ensuring objective and consistent ratings. Any of the 
following structures can work as long as ratings are subject to an 
independent approval or review process, and are not unduly influenced 
by the line of business:
   Relationship Managers. As noted earlier, relationship managers are 
primarily responsible for marketing the bank's products and services, 
and their compensation is tied to the volume of business they generate. 
When RMs also have responsibility for assigning and approving ratings, 
there is an inherent conflict of interest. Credit quality and the 
ability to produce timely and accurate risk ratings are generally not 
major factors in an RM's compensation, even when he or she has 
responsibility for assigning and approving ratings. In addition, RMs 
may become too close to the borrower to maintain objectivity and remain 
unbiased. When banks delegate rating responsibility to RMs, 
they must offset the lack of independence with rigorous controls to 
prevent bias from affecting the rating process. Such controls must 
operate in practice, not just on paper, and would include, at a 
minimum, a comprehensive, independent post-closing review of ratings by 
a rating system review function.
   Deal Team. Some major banks employ a ``deal-team'' structure for 
credit origination and rating assignment. Using this approach, all 
members of the team--credit officers, investment bankers, underwriters, 
and others--contribute to analyzing creditworthiness, underwriting the 
deal, and assigning ratings.
   On the one hand, deal teams increase the access of credit officers 
to information on obligors and transactions early in the underwriting 
process, enabling them to make more informed credit decisions and to 
influence facility structure to address obligors' weaknesses. On the 
other hand, participation in the deal team could compromise the credit 
officer's objectivity. While credit officers typically report to an 
independent credit-risk-management function, they also have allegiance 
to the deal team that reports to executives within the sales and 
marketing line of business. In addition, credit officers may defer to 
the members of the team whose compensation is based on the revenue and 
sales volume they generate for the bank. Banks that maintain deal teams 
must ensure that the credit officer's independence is safeguarded 
through independent reporting lines and well-defined performance 
measures (e.g., adherence to policy, rating accuracy and timeliness).
   Credit Officers. Some banks give sole responsibility for assigning 
and approving ratings to credit officers who report to an independent 
credit function. In addition to assigning and approving 
initial ratings, credit officers regularly monitor the condition of 
obligors and refresh ratings as necessary. The potential downside of 
this structure is that these credit officers may have limited access to 
borrower information. Those credit officers that have a separate 
reporting line and whose compensation is principally based on their 
risk-rating accuracy are typically more independent than RMs or deal 
teams.
   Models. At some institutions, models assign ratings directly; at 
other institutions, models and judgment are combined to rate credits. 
Models introduce a high degree of independence to the rating process, 
but they too require human oversight and controls. Banks that use 
models must incorporate an independent judgmental review of the rating 
assignments to ensure that all relevant information is considered and 
to identify potential rating errors. Judgmental reviews are also needed 
when model outputs are

[[Page 45976]]

overridden. In addition, controls are needed to ensure accuracy of data 
inputs. When a bank uses a model to assign risk ratings, an individual 
obligor's rating is ``transparent.'' However, the model itself is not 
``transparent'' without a great deal of effort to document how the 
model functions.

C. Transparency

   Transparency is the ability of a third party, such as rating system 
reviewers, auditors or bank supervisors, to observe how the rating 
system operates and to understand the pertinent characteristics of 
individual ratings.
   S. IRB institutions must have a transparent rating system.
   Transparency in a rating system is achieved through documentation 
that covers the following:
   [sbull] The rating system's design, purpose, performance horizon, 
and performance standards;
   [sbull] The rating assignment process, including procedures for 
adjustments and overrides;
   [sbull] Rating definitions and criteria, scorecard criteria, and 
model specifications;
   [sbull] Parameter estimates and the process for their estimation;
   [sbull] Definition of the data elements to be warehoused to support 
controls, oversight, validation, and parameter estimation; and
   [sbull] Specific responsibilities of, and performance standards 
for, individuals and units involved in the rating system and its 
oversight.
   Transparency allows third parties (such as rating system review, 
auditors, or supervisors) to evaluate whether the rating system is 
performing as intended. Without transparency, it is difficult to hold 
people accountable for ratings errors and to validate the performance 
of the system.
   S. Rating criteria must be clear and specific and must include 
qualitative and quantitative factors.
   To produce transparent individual ratings, a bank's policies must 
contain clear, detailed ratings definitions. Banks should specify 
criteria for each factor that raters must consider, which may require 
unique rating definitions for certain industries. Banks should consider 
criteria for factors such as liquidity, sales and profitability, debt 
service and fixed charge coverage, minimum equity support, position 
within the industry, and strength of management. A rating system with vague 
criteria or one merely defined by PDs or LGDs is not transparent. For 
example, the following rating definitions are not transparent because 
they require the rater to do too much interpreting:
   Borrower exhibits satisfactory quality and demonstrates acceptable 
principal and interest repayment capacity in the near term.
   Lower tier company in a cyclical industry. Unbalanced position with 
tight liquidity and high leverage. Declining or erratic profitability 
and marginal debt service capacity. Management is untested.

D. Accountability

   ``Accountability'' is holding people responsible for their actions 
and establishing adverse consequences for inaccurate ratings.
   S. Policies must identify the parties responsible for rating 
accuracy and rating system performance.
   For accountability to be effective, it should be both observable 
and ingrained in the culture. Persons who assign and approve ratings, 
derive parameter estimates, or oversee rating systems must be 
held accountable for complying with rating system policies and ensuring 
that aspects of the rating system within their control are as unbiased 
and accurate as possible. These persons must have the tools and 
resources necessary to carry out their responsibilities, and their 
performance should be evaluated against clear and specific objectives 
documented in policy.
Responsibility for Assigning Ratings
   S. Individuals must be held accountable for complying with rating 
system policies and for assigning accurate ratings, and their 
performance and compensation must be linked to well-defined measurable 
performance standards.
   Responsibilities of raters should be clear, and performance should 
be measured against specific objectives. Performance evaluation and 
incentive compensation should be tied to performance goals. Examples of 
performance measures include:
   [sbull] Number and frequency of rating errors,
   [sbull] Significance of errors (for example, multiple downgrades), 
and
   [sbull] Proper and consistent application of criteria, including 
override criteria.
Responsibility for Rating System Performance
   Just as individuals will be held accountable for the accuracy of 
ratings, an individual must be held responsible for the overall 
performance of the rating system. This individual must ensure that the 
rating system and all of its component parts--rating assignments, 
parameter estimation, data collection, control and oversight 
mechanisms--are functioning as intended. While these components often 
are housed within separate units of the organization, an individual 
must be responsible for ensuring that the parts work together 
effectively and efficiently.

E. Use of Ratings

   S. Ratings used for regulatory capital must be the same ratings 
used to guide day-to-day credit risk management activities.
   The different uses and applications of the risk-rating system's 
outputs should promote greater accuracy and consistency of credit-risk 
evaluations across an organization. Ratings and the associated default, 
loss, and EAD estimates need to be incorporated within the credit-risk 
management, internal capital allocation, and corporate governance 
functions of IRB banks.
   S. Banks that use parameter estimates for risk management that are 
different from those used for regulatory capital must provide a well-
documented rationale for the differences.
   PD and LGD parameters used for regulatory capital purposes may not 
be appropriate for other uses. For example, PD estimates used 
to estimate reserve needs could reflect current economic conditions 
that are different from the longer term view appropriate to 
calculations of regulatory capital. When banks employ different 
estimates, those parameters must be defensible and supported by the 
following:
   [sbull] Qualitative and quantitative analysis of the logic and 
rationale for the difference(s); and
   [sbull] Senior management approval of the difference(s).

F. Rating System Review (RSR)

   S. Banks must have a comprehensive, coordinated, independent review 
process to ensure that ratings are accurate and that the rating system 
is performing as intended.
   Rating system review (RSR) ensures that the rating system as a 
whole is functioning as intended. A broad range of responsibilities 
come under RSR's purview, as outlined in Table 4.2:

         Table 4.2.--Responsibilities of Rating System Review
------------------------------------------------------------------------

-------------------------------------------------------------------------
Scope of Review:
 Design of the rating system.
 Compliance with policies and procedures, including application of
  criteria.
 Check of all risk-rating grades for accuracy.
 Consistency across industries/portfolios/geographies.

[[Page 45977]]


 Model development.
 Model use, including inputs and outputs.
 Overrides and policy exceptions.
 Quantification process.
 Back-testing (perform or review).
 Actual and predicted ratings transitions.
 Benchmarking against third-party data sources (perform or review).
 Adequacy of data maintenance.
Analysis and Reporting:
 Identify errors and flaws.
 Recommend corrective action.
------------------------------------------------------------------------

   For each of these responsibilities, RSR is largely checking and 
confirming the work of others and ensuring that the rating system's 
components work well together. RSR's testing and review should identify 
current and potential weaknesses and should lead to recommendations and 
corrective action such as
   [sbull] Adjusting policies and procedures,
   [sbull] Requiring additional training of staff,
   [sbull] Investing in infrastructure improvements,
   [sbull] Adjusting rating criteria, and
   [sbull] Adjusting parameter estimates.
   S. Rating system review must report significant findings to senior 
management and the board quarterly.
   RSR's role is to identify issues and areas of concern and report 
findings to the area that is accountable. When issues are systematic, 
RSR should bring them to the attention of senior management and the 
board.
   The activities of this function could be distributed across 
multiple areas or housed within one unit. Organizations will choose a 
structure that fits within their management and oversight framework. 
These units must always have high standing within the organization and 
should be staffed by individuals possessing the requisite stature, 
skills, and experience.
   Like internal audit, RSR must be independent from all in-house 
designers and developers (that is, system and model designers) and 
raters (that is, ratings and parameter assigners) in the risk-rating 
process. RSR's independence eliminates potential conflicts of interest 
and gives the group credibility when it reports findings and 
conclusions to the board and senior management.

G. Internal Audit

   S. An independent internal audit function must determine whether 
rating system controls function as intended.
   S. Internal audit must evaluate annually whether the bank is in 
compliance with the risk-based capital regulation and supervisory 
guidance.
   Internal audit determines whether the bank's system of controls 
over internal ratings and the related parameters is robust. In its 
evaluation of controls, internal audit must consider any trade-offs 
made between the various mechanisms and confirm their continued 
appropriateness and relevance. As part of its review of control 
mechanisms, audit will evaluate the depth, scope, and quality of RSR's 
work and will conduct limited testing to ensure that RSR's conclusions 
are well founded. The amount of testing will depend on whether audit is 
the primary or secondary reviewer of that work.
   Internal audit will report to the board and management on whether 
the bank is in compliance with the IRB standards. This report will 
allow the board and management to disclose that the bank's rating 
processes and the controls surrounding these processes are in compliance with the 
IRB standards. This will be critical for public disclosure and ongoing 
work of supervisors.
External Audit
   As part of the process of certifying financial statements, external 
auditors will confirm that the institution's capital position is fairly 
presented. To verify that actual capital exceeds regulatory minimums 
and to confirm compliance with the IRB rules, the external auditors 
must ascertain that the IRB system is rating credit risk appropriately 
and linking these ratings to appropriate estimates. Auditors must 
evaluate the bank's internal control functions and its compliance with 
the risk-based capital regulation and supervisory guidance.

H. Corporate Oversight

   S. The full board or a committee of the board must approve key 
elements of the IRB system.
   Consistent with sound practice, bank management must ensure that a 
corporate culture exists in which institutional needs are readily 
identified and appropriate resources are brought to bear to rectify 
shortcomings. In the IRB context, senior management and the board of 
directors must ensure the objectivity and accuracy of the bank's 
credit-risk management systems and approach.
   Either the full board or a committee of the board should approve 
key elements of the risk-rating system. Information provided to the 
board should be sufficiently detailed to allow directors to confirm the 
continuing appropriateness of the institution's rating approach and to 
verify the adequacy of the controls supporting the rating system.
   S. Senior management must ensure that all components of the IRB 
system, including controls, are functioning as intended and comply with 
the risk-based capital regulation and supervisory guidance.
   Senior management's oversight should be even more active than that 
of the board of directors. Senior management should articulate what it 
expects of the technical and operational units of the risk-rating 
system, as well as what it expects of the units that manage the 
system's controls. To oversee the risk-rating system, senior management 
must have an extensive understanding of credit policies, underwriting 
standards, lending practices, and collection and recovery practices, 
and must be able to understand how these factors affect default and 
loss estimates. Senior management should not only oversee the controls 
process (its traditional role) but also should periodically meet with 
raters and validators to discuss the rating system's performance, areas 
needing improvement, and the status of efforts to improve previously 
identified deficiencies.
   The depth and frequency of information provided to the board and 
senior management must be commensurate with their oversight 
responsibilities and the condition of the institution. These reports 
should include the following information:
   [sbull] Risk profile by grade,
   [sbull] Risk rating migration across grades with emphasis on 
unexpected results,
   [sbull] Changes in parameter estimates by grade,
   [sbull] Comparison of realized PD, LGD, and EAD rates against 
expectations,
   [sbull] Reports measuring changes in regulatory and economic 
capital,
   [sbull] Results of capital stress testing, and
   [sbull] Reports generated by rating system review, audit, and other 
control units.
   Although all of an institution's controls must function smoothly, 
independently, and in concert with the others, the direction and 
oversight provided by the board and senior management are perhaps most 
important to ensure that the IRB system is functioning properly.

Document 2: Draft Supervisory Guidance on Operational Risk Advanced 
Measurement Approaches for Regulatory Capital

Table of Contents

I. Purpose
II. Background
III. Definitions
IV. Banking Activities and Operational Risk
V. Corporate Governance
   A. Board and Management Oversight

[[Page 45978]]

   B. Independent Firm-wide Risk Management Function
   C. Line of Business Management
VI. Operational Risk Management Elements
   A. Operational Risk Policies and Procedures
   B. Identification and Measurement of Operational Risk
   C. Monitoring and Reporting
   D. Internal Control Environment
VII. Elements of an AMA Framework
   A. Internal Operational Risk Loss Event Data
   B. External Data
   C. Business Environment and Internal Control Factor Assessments
   D. Scenario Analysis
VIII. Risk Quantification
   A. Analytical Framework
   B. Accounting for Dependence
IX. Risk Mitigation
X. Data Maintenance
XI. Testing and Verification
   Appendix A: Supervisory Standards for the AMA

I. Purpose

   The purpose of this guidance is to set forth the expectations of 
the U.S. banking agencies for banking institutions that use Advanced 
Measurement Approaches (AMA) for calculating the operational risk 
capital charge under the new capital regulation. Institutions using the 
AMA will have considerable flexibility to develop operational risk 
measurement systems appropriate to the nature of their activities, 
business environment, and internal controls. An institution's 
operational risk regulatory capital requirement will be calculated as 
the amount needed to cover its operational risk at a level of 
confidence determined by the supervisors, as discussed below. Use of an 
AMA is subject to supervisory approval.
   This draft guidance should be considered with the advance notice of 
proposed rulemaking (ANPR) on revisions to the risk-based capital 
standard published elsewhere in today's Federal Register. As with the 
ANPR, the Agencies are seeking industry comment on this draft guidance. 
In addition to seeking comment on all specific aspects of this 
supervisory guidance, the Agencies are seeking comment on the extent to 
which the supervisory guidance strikes the appropriate balance between 
flexibility and specificity. Likewise, the Agencies are seeking comment 
on whether an appropriate balance has been struck between the 
regulatory requirements set forth in the ANPR and the supervisory 
standards set forth in this guidance.

II. Background

   Effective management of operational risk is integral to the 
business of banking and to institutions' roles as financial 
intermediaries. Although operational risk is not a new risk, 
deregulation and globalization of financial services, together with the 
growing sophistication of financial technology, new business activities 
and delivery channels, are making institutions' operational risk 
profiles (i.e., the level of operational risk across an institution's 
activities and risk categories) more complex.
   This guidance identifies the supervisory standards (S) that 
institutions must meet and maintain to use an AMA for the regulatory 
capital charge for operational risk. The purpose of the standards is to 
provide the foundation for a sound operational risk framework, while 
allowing institutions to identify the most appropriate mechanisms to 
meet AMA requirements. Each institution will need to consider its 
complexity, range of products and services, organizational structure, 
and risk management culture as it develops its AMA. Operational risk 
governance processes need to be established on a firm-wide basis to 
identify, measure, monitor, and control operational risk in a manner 
comparable with the treatment of credit, interest rate, and market 
risks.
   Institutions will be expected to develop a framework that measures 
and quantifies operational risk for regulatory capital purposes. To do 
this, institutions will need a systematic process for collecting 
operational risk loss data, assessing the risks within the institution, 
and adopting an analytical framework that translates the data and risk 
assessments into an operational risk exposure (see definition below). 
The analytical framework must incorporate a degree of conservatism that 
is appropriate for the overall robustness of the quantification 
process. Because institutions will be permitted to calculate their 
minimum regulatory capital on the basis of internal processes, the 
requirements for data capture, risk assessment, and the analytical 
framework described below are detailed and specific.
   Effective operational risk measurement systems are built on both 
quantitative and qualitative risk assessment techniques. While the 
output of the regulatory framework for operational risk is a measure of 
exposure resulting in a capital number, the integrity of that estimate 
depends not only on the soundness of the measurement model, but also on 
the robustness of the institution's underlying risk management 
processes. In addition, supervisors view the introduction of the AMA as 
an important tool to further promote improvements in operational risk 
management and controls at large banking institutions.
   This document provides both AMA supervisory standards and a 
discussion of how those standards should be incorporated into an 
operational risk framework. The relevant supervisory standards are 
listed at the beginning of each section and a full compilation of the 
standards is provided in Appendix A. Not every section has specific 
supervisory standards. When a standard spans more than one section, it 
is listed only once.
   Institutions will be required to meet, and remain in compliance 
with, all the supervisory standards to use an AMA framework. However, 
evaluating an institution's qualification with each of the individual 
supervisory standards will not be sufficient to determine an 
institution's overall readiness for AMA. Instead, supervisors and 
institutions must also evaluate how well the various components of an 
institution's AMA framework complement and reinforce one another to 
achieve the overall objectives of an accurate measure and effective 
management of operational risk. In performing their evaluation, 
supervisors will exercise considerable supervisory judgment, both in 
evaluating the individual components and the overall operational risk 
framework.
   An institution's AMA methodology will be assessed as part of the 
ongoing supervision process. This will allow supervisors to incorporate 
existing supervisory efforts as much as possible into the AMA 
assessments. Some elements of operational risk (e.g., internal controls 
and information technology) have long been subject to examination by 
supervisors. Where this is the case, supervisors will make every effort 
to leverage these examination activities to assess the 
effectiveness of the AMA process. Substantive weaknesses identified in 
an examination will be factored into the AMA qualification process.

III. Definitions

   There are important definitions that institutions must incorporate 
into an AMA framework. They are:
   [sbull] Operational risk: The risk of loss resulting from 
inadequate or failed internal processes, people and systems, or from 
external events. The definition includes legal risk, which is the risk 
of loss resulting from failure to comply with laws as well as prudent 
ethical standards and contractual obligations. It also includes the 
exposure to litigation from all aspects of an institution's

[[Page 45979]]

activities. The definition does not include strategic or reputational 
risks.\8\
---------------------------------------------------------------------------

   \8\ An institution's definition of risk may encompass other risk 
elements as long as the supervisory definition is met.
---------------------------------------------------------------------------

   [sbull] Operational risk loss: The financial impact associated with 
an operational event that is recorded in the institution's financial 
statements consistent with Generally Accepted Accounting Principles 
(GAAP). Financial impact includes all out-of-pocket expenses associated 
with an operational event but does not include opportunity costs, 
foregone revenue, or costs related to investment programs implemented 
to prevent subsequent operational risk losses. Operational risk losses 
are characterized by seven event factors associated with:
   i. Internal fraud: An act of a type intended to defraud, 
misappropriate property or circumvent regulations, the law or company 
policy, excluding diversity/discrimination events, which involve at 
least one internal party.
   ii. External fraud: An act of a type intended to defraud, 
misappropriate property or circumvent the law, by a third party.
   iii. Employment practices and workplace safety: An act inconsistent 
with employment, health or safety laws or agreements, from payment of 
personal injury claims, or from diversity/discrimination events.
   iv. Clients, products, and business practices: An unintentional or 
negligent failure to meet a professional obligation to specific clients 
(including fiduciary and suitability requirements), or from the nature 
or design of a product.
   v. Damage to physical assets: The loss or damage to physical assets 
from natural disaster or other events.
   vi. Business disruption and system failures: Disruption of business 
or system failures.
   vii. Execution, delivery, and process management: Failed 
transaction processing or process management, from relations with trade 
counterparties and vendors.
   [sbull] Operational risk exposure: An estimate of the potential 
operational losses that the banking institution faces at a soundness 
standard consistent with a 99.9 percent confidence level over a one-
year period. The institution will multiply the exposure by 12.5 to 
obtain risk-weighted assets for operational risk; this is added to the 
risk-weighted assets for credit and market risk to arrive at the 
denominator of the regulatory capital ratio (a worked sketch of this 
arithmetic follows these definitions).
   [sbull] Business environment and internal control factor 
assessments: The range of tools that provide a meaningful assessment of 
the level and trends in operational risk across the institution. While 
the institution may use multiple tools in an AMA framework, they must 
all have the same objective of identifying key risks. There are a 
number of existing tools, such as audit scores and performance 
indicators that may be acceptable under this definition.
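   The worked sketch below illustrates the capital-ratio arithmetic in 
the operational risk exposure definition above. All dollar figures are 
hypothetical; multiplying by 12.5, the reciprocal of the 8 percent 
minimum capital ratio, converts the exposure into risk-weighted assets.

# Worked sketch of the capital-ratio arithmetic described above. All
# dollar figures are hypothetical. Multiplying by 12.5 (the reciprocal
# of the 8% minimum ratio) converts an exposure into risk-weighted
# assets.
op_risk_exposure = 400_000_000          # hypothetical 99.9%/1-year estimate
op_risk_rwa = op_risk_exposure * 12.5   # = 5,000,000,000

credit_rwa = 60_000_000_000             # hypothetical
market_rwa = 5_000_000_000              # hypothetical
total_rwa = credit_rwa + market_rwa + op_risk_rwa

total_capital = 7_500_000_000           # hypothetical qualifying capital
print(f"Total capital ratio: {total_capital / total_rwa:.2%}")  # 10.71%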

IV. Banking Activities and Operational Risk

   The above definition of operational risk gives a sense of the 
breadth of exposure to operational risk that exists in banking today as 
well as the many interdependencies among risk factors that may result 
in an operational risk loss. Indeed, operational risk can occur in any 
activity, function, or unit of the institution.
   The definition of operational risk incorporates the risks stemming 
from people, processes, systems and external events. People risk refers 
to the risk of failures in management, organizational structure, or 
other human resources. These risks may be exacerbated by poor 
training, inadequate controls, poor staffing resources, or other 
factors. The risk from processes stems from breakdowns in established 
processes, failure to follow processes, or inadequate process mapping 
within business lines. System risk covers instances of both disruption 
and outright system failures in both internal and outsourced 
operations. Finally, external events can include natural disasters, 
terrorism, and vandalism.
   There are a number of areas where operational risks are emerging. 
These include:
   [sbull] Greater use of automated technology has the potential to 
transform risks from manual processing errors to system failure risks, 
as greater reliance is placed on globally integrated systems;
   [sbull] Proliferation of new and highly complex products;
   [sbull] Growth of e-banking transactions and related business 
applications exposes an institution to potential new risks (e.g., 
internal and external fraud and system security issues);
   [sbull] Large-scale acquisitions, mergers, and consolidations test 
the viability of new or newly integrated systems;
   [sbull] Emergence of institutions acting as large-volume service 
providers creates the need for continual maintenance of high-grade 
internal controls and back-up systems;
   [sbull] Development and use of risk mitigation techniques (e.g., 
collateral, insurance, credit derivatives, netting arrangements and 
asset securitizations) optimize an institution's exposure to market 
risk and credit risk, but potentially create other forms of risk (e.g., 
legal risk); and
   [sbull] Greater use of outsourcing arrangements and participation 
in clearing and settlement systems mitigate some risks while increasing 
others.
   The range of banking activities and areas affected by operational 
risk must be fully identified and considered in the development of the 
institution's risk management and measurement plans. Since operational 
risk is not confined to particular business lines \9\, product types, 
or organizational units, it should be managed in a consistent and 
comprehensive manner across the institution. Consequently, risk 
management mechanisms must encompass the full range of risks, as well 
as strategies that help to identify, measure, monitor and control those 
risks.
---------------------------------------------------------------------------

   \9\ Throughout this guidance, terms such as ``business units'' 
and ``business lines'' are used interchangeably and refer not only 
to an institution's revenue-generating businesses, but also to 
corporate staff functions such as human resources or information 
technology.
---------------------------------------------------------------------------

V. Corporate Governance

Supervisory Standards
   S 1. The institution's operational risk framework must include an 
independent firm-wide operational risk management function, line of 
business management oversight, and independent testing and verification 
functions.
   The management structure underlying an AMA operational risk 
framework may vary between institutions. However, within all AMA 
institutions, there are three key components that must be evident--the 
firm-wide operational risk management function, lines of business 
management, and the testing and verification function. These three 
elements are functionally independent \10\ organizational components, 
but should work in cooperation to ensure a robust operational risk 
framework.
---------------------------------------------------------------------------

   \10\ For the purposes of AMA, ``functional independence'' is 
defined as the ability to carry out work freely and objectively and 
render impartial and unbiased judgments. There should be appropriate 
independence between the firm-wide operational risk management 
functions, line of business management and staff and the testing/
verification functions. Supervisory assessments of independence 
issues will rely upon existing regulatory guidance (e.g. audit, 
internal control systems, board of directors/management, etc.)
---------------------------------------------------------------------------

A. Board and Management Oversight

Supervisory Standards
   S 2. The board of directors must oversee the development of the 
firm-wide operational risk framework, as

[[Page 45980]]

well as major changes to the framework. Management roles and 
accountability must be clearly established.
   S 3. The board of directors and management must ensure that 
appropriate resources are allocated to support the operational risk 
framework.
   The board is responsible for overseeing the establishment of the 
operational risk framework, but may delegate the responsibility for 
implementing the framework to management with the authority necessary 
to allow for its effective implementation. Other key responsibilities 
of the board include:
   [sbull] Ensuring appropriate management responsibility, 
accountability and reporting;
   [sbull] Understanding the major aspects of the institution's 
operational risk as a distinct risk category that should be managed;
   [sbull] Reviewing periodic high-level reports on the institution's 
overall operational risk profile, which identify material risks and 
strategic implications for the institution;
   [sbull] Overseeing significant changes to the operational risk 
framework; and
   [sbull] Ensuring compliance with regulatory disclosure 
requirements.
   Effective board and management oversight forms the cornerstone of 
an effective operational risk management process. The board and 
management have several broad responsibilities with respect to 
operational risk:
   [sbull] To establish a framework for assessing operational risk 
exposure and identify the institution's tolerance for operational risk;
   [sbull] To identify the senior managers who have the authority for 
managing operational risk;
   [sbull] To monitor the institution's performance and overall 
operational risk profile, ensuring that it is maintained at prudent 
levels and is supported by adequate capital;
   [sbull] To implement sound fundamental risk governance principles 
that facilitate the identification, measurement, monitoring, and 
control of operational risk;
   [sbull] To devote adequate human and technical resources to 
operational risk management; and
   [sbull] To institute remuneration policies that are consistent with 
the institution's appetite for risk and are sufficient to attract 
qualified operational risk management and staff.
   Management should translate the operational risk management 
framework into specific policies, processes and procedures that can be 
implemented and verified within the institution's different business 
units. Communication of these elements will be essential to the 
understanding and consistent treatment of operational risk across the 
institution. While each level of management is responsible for 
effectively implementing the policies and procedures within its 
purview, senior management should clearly assign authority, 
responsibilities, and reporting relationships to encourage and maintain 
this accountability and ensure that the necessary resources are 
available to manage operational risk. Moreover, management should 
assess the appropriateness of the operational risk management oversight 
process in light of the risks inherent in a business unit's activities. 
The testing and verification function is responsible for completing 
timely and comprehensive assessments of the effectiveness of 
implementation of the institution's operational risk framework at the 
line of business and firm-wide levels.
   Management collectively is also responsible for ensuring that the 
institution has qualified staff and sufficient resources to carry out 
the operational risk functions outlined in the operational risk 
framework. Additionally, management must communicate operational risk 
issues to appropriate staff who may not be directly involved in its 
management. Key management responsibilities include ensuring that:
   [sbull] Operational risk management activities are conducted by 
qualified staff with the necessary experience, technical capabilities 
and access to adequate resources;
   [sbull] Sufficient resources have been allocated to operational 
risk management, in the business lines as well as the independent firm-
wide operational risk management function and verification areas, so as 
to sufficiently monitor and enforce compliance with the institution's 
operational risk policy and procedures; and
   [sbull] Operational risk issues are effectively communicated with 
staff responsible for managing credit, market and other risks, as well 
as those responsible for purchasing insurance and managing third-party 
outsourcing arrangements.

B. Independent Firm-Wide Risk Management Function

Supervisory Standards
   S 4. The institution must have an independent operational risk 
management function that is responsible for overseeing the operational 
risk framework at the firm level to ensure the development and 
consistent application of operational risk policies, processes, and 
procedures throughout the institution.
   S 5. The firm-wide operational risk management function must ensure 
appropriate reporting of operational risk exposures and loss data to 
the board of directors and senior management.
   The institution must have an independent firm-wide operational risk 
management function. The roles and responsibilities of the function 
will vary between institutions, but must be clearly documented. The 
independent firm-wide operational risk function should have 
organizational stature commensurate with the institution's operational 
risk profile, while remaining independent of the lines of business and 
the testing and verification function. At a minimum, the institution's 
independent firm-wide operational risk management function should 
ensure the development of policies, processes, and procedures that 
explicitly manage operational risk as a distinct risk to the 
institution's safety and soundness. These policies, processes and 
procedures should include principles for how operational risk is to be 
identified, measured, monitored, and controlled across the 
organization. Additionally, they should provide for the collection of 
the data needed to calculate the institution's operational risk 
exposure.
   Additional responsibilities of the independent firm-wide 
operational risk management function include:
   [sbull] Assisting in the implementation of the overall firm-wide 
operational risk framework;
   [sbull] Reviewing the institution's progress towards stated 
operational risk objectives, goals and risk tolerances;
   [sbull] Periodically reviewing the institution's operational risk 
framework to consider the loss experience, effects of external market 
changes, other environmental factors, and the potential for new or 
changing operational risks associated with new products, activities or 
systems. This review process should include an assessment of industry 
best practices for the institution's activities, systems and processes;
   [sbull] Reviewing and analyzing operational risk data and reports; 
and
   [sbull] Ensuring appropriate reporting to senior management and the 
board.

C. Line of Business Management

Supervisory Standards
   S 6. Line of business management is responsible for the day-to-day 
management of operational risk within each business unit.
   S 7. Line of business management must ensure that internal controls 
and

[[Page 45981]]

practices within their line of business are consistent with firm-wide 
policies and procedures to support the management and measurement of 
the institution's operational risk.
   Line of business management is responsible for both managing 
operational risk within the business lines and ensuring that policies 
and procedures are consistent with and support the firm-wide 
operational risk framework. Management should ensure that business-
specific policies, processes, procedures and staff are in place to 
manage operational risk for all material products, activities, and 
processes. Implementation of the operational risk framework within each 
line of business should reflect the scope of that business and its 
inherent operational complexity and operational risk profile. Line of 
business management must be independent of both the firm-wide 
operational risk management and the testing and verification functions.

VI. Operational Risk Management Elements

   The operational risk management framework provides the overall 
operational risk strategic direction and ensures that an effective 
operational risk management and measurement process is adopted 
throughout the institution. The framework should provide for the 
consistent application of operational risk policies and procedures 
throughout the institution and address the roles of both the 
independent firm-wide operational risk management function and the 
lines of business. The framework should also provide for the consistent 
and comprehensive capture of data elements needed to measure and verify 
the institution's operational risk exposure, as well as appropriate 
operational risk analytical frameworks, reporting systems, and 
mitigation strategies. The framework must also include independent 
testing and verification to assess the effectiveness of implementation 
of the institution's operational risk framework, including compliance 
with policies, processes, and procedures.
   In practice, an institution's operational risk framework must 
reflect the scope and complexity of business lines, as well as the 
corporate organizational structure. Each institution's operational risk 
profile is unique and requires a tailored risk management approach 
appropriate for the scale and materiality of the risks present, and the 
size of the institution. No single framework will suit every 
institution. In fact, many operational risk management techniques
continue to evolve rapidly to keep pace with new technologies, business 
models and applications.
   The key elements in the operational risk management process 
include:
   [sbull] Appropriate policies and procedures;
   [sbull] Efforts to identify and measure operational risk;
   [sbull] Effective monitoring and reporting;
   [sbull] A sound system of internal controls; and
   [sbull] Appropriate testing and verification of the operational 
risk framework.

A. Operational Risk Policies and Procedures

Supervisory Standards
   S 8. The institution must have policies and procedures that clearly 
describe the major elements of the operational risk management 
framework, including identifying, measuring, monitoring, and 
controlling operational risk.
   Operational risk management policies, processes, and procedures 
should be documented and communicated to appropriate staff. The 
policies and procedures should outline all aspects of the institution's 
operational risk management framework, including:
   [sbull] The roles and responsibilities of the independent firm-wide 
operational risk management function and line of business management;
   [sbull] A definition for operational risk, including the loss event 
types that will be monitored;
   [sbull] The capture and use of internal and external operational 
risk loss data, including large potential events (including the use of 
scenario analysis);
   [sbull] The development and incorporation of business environment 
and internal control factor assessments into the operational risk 
framework;
   [sbull] A description of the internally derived analytical 
framework that quantifies the operational risk exposure of the 
institution;
   [sbull] An outline of the reporting framework and the type of data/
information to be included in line of business and firm-wide reporting;
   [sbull] A discussion of qualitative factors and risk mitigants and 
how they are incorporated into the operational risk framework;
   [sbull] A discussion of the testing and verification processes and 
procedures;
   [sbull] A discussion of other factors that affect the measurement 
of operational risk; and
   [sbull] Provisions for the review and approval of significant 
policy and procedural exceptions.

B. Identification and Measurement of Operational Risk

   The result of a comprehensive program to identify and measure 
operational risk is an assessment of the institution's operational risk 
exposure. Management must establish a process that identifies the 
nature and types of operational risk and their causes and resulting 
effects on the institution. Proper operational risk identification 
supports the reporting and maintenance of capital for operational risk 
exposure and events, facilitates the establishment of mechanisms to 
mitigate or control the risks, and ensures that management is fully 
aware of the sources of emerging operational risk loss events.

C. Monitoring and Reporting

Supervisory Standards
   S 9. Operational risk management reports must address both firm-
wide and line of business results. These reports must summarize 
operational risk exposure, loss experience, relevant business 
environment and internal control assessments, and must be produced no 
less often than quarterly.
   S 10. Operational risk reports must also be provided periodically 
to senior management and the board of directors, summarizing relevant 
firm-wide operational risk information.
   Ongoing monitoring of operational risk exposures is a key aspect of 
an effective operational risk framework. To facilitate monitoring of 
operational risk, results from the measurement system should be 
summarized in reports that can be used by the firm-wide operational 
risk and line of business management functions to understand, manage, 
and control operational risk and losses. These reports should serve as 
a basis for assessing operational risk and related mitigation 
strategies and creating incentives to improve operational risk 
management throughout the institution.
   Operational risk management reports should summarize:
   [sbull] Operational risk loss experience on an institution, line of 
business, and event-type basis;
   [sbull] Operational risk exposure;
   [sbull] Changes in relevant risk and control assessments;
   [sbull] Management assessment of early warning factors signaling an 
increased risk of future losses;
   [sbull] Trend analysis, allowing line of business and independent 
firm-wide operational risk management to assess

[[Page 45982]]

and manage operational risk exposures, systemic line of business risk 
issues, and other corporate risk issues;
   [sbull] Exception reporting; and
   [sbull] To the extent developed, operational risk causal factors.
   High-level operational risk reports must also be produced 
periodically for the board and senior management. These reports must 
provide information regarding the operational risk profile of the 
institution, including the sources of material risk from both a firm-
wide and a line of business perspective, measured against established 
management expectations.

D. Internal Control Environment

Supervisory Standards
   S 11. An institution's internal control structure must meet or 
exceed minimum regulatory standards established by the Agencies.
   Sound internal controls are essential to an institution's 
management of operational risk and are one of the foundations of safe 
and sound banking. When properly designed and consistently enforced, a 
sound system of internal controls will help management safeguard the 
institution's resources, produce reliable financial reports, and comply 
with laws and regulations. Sound internal controls will also reduce the 
possibility of significant human errors and irregularities in internal 
processes and systems, and will assist in their timely detection when 
they do occur.
   The Agencies are not introducing any new internal control 
standards, but rather emphasizing the importance of meeting existing 
standards. There is a recognition that internal control systems will 
differ among institutions due to the nature and complexity of an 
institution's products and services, organizational structure, and risk 
management culture. The AMA standards allow for these differences, 
while also establishing a baseline standard for the quality of the 
internal control structure. Institutions will be expected to at least 
meet the minimum interagency standards\11\ relating to internal 
controls as a criterion for AMA qualification.
---------------------------------------------------------------------------

   \11\ There are a number of interagency standards that cover 
topics relevant to the internal control structure. These include, 
for example, the Interagency Policy Statement on the Internal Audit 
Function and Its Outsourcing (March 2003), the Federal Financial 
Institutions Examination Council's (FFIEC's) Business Continuity 
Planning Booklet (May 2003), and the FFIEC's Information Security 
Booklet (January 2003). In addition, each Agency has extensive 
guidance on corporate governance, internal controls, and monitoring 
and reporting in its respective examination policies and procedures.
---------------------------------------------------------------------------

   The extent to which an institution meets or exceeds the minimum 
standards will primarily be assessed through current and ongoing 
supervisory processes. As noted earlier, the Agencies will leverage 
existing examination processes to avoid duplication in assessing an 
institution's implementation of an AMA framework. Assessing the 
internal control environment is clearly an area where the supervisory 
authorities already focus considerable attention.

VII. Elements of an AMA Framework

Supervisory Standards
   S 12. The institution must demonstrate that it has appropriate 
internal loss event data, relevant external loss event data, 
assessments of business environment and internal control factors, and 
results from scenario analysis to support its operational risk 
management and measurement framework.
   S 13. The institution must include the regulatory definition of 
operational risk as the baseline for capturing the elements of the AMA 
framework and determining its operational risk exposure.
   S 14. The institution must have clear standards for the collection 
and modification of the elements of the operational risk AMA framework.
   Operational risk inputs play a significant role in both the 
management and measurement of operational risk. Necessary elements of 
an institution's AMA framework include internal loss event data, 
relevant external loss event data, results of scenario analysis, and 
assessments of the institution's business environment and internal 
controls. Operational risk inputs aid the institution in identifying 
the level and trend of operational risk, determining the effectiveness 
of risk management and control efforts, highlighting opportunities to 
better mitigate operational risk, and assessing operational risk on a 
forward-looking basis.
   To use its AMA framework, an institution must demonstrate that it 
has established a consistent and comprehensive process for the capture 
of all elements of the AMA framework. The institution must also 
demonstrate that it has clear standards for the collection and 
modification of all AMA inputs. While the analytical framework will 
generally combine these inputs to develop the operational risk 
exposure, supervisors must have the capacity to review the individual 
inputs as well. Specifically, supervisors will need to distinguish the 
loss information supplied to the analytical framework by internal loss 
event data from the information supplied by external loss event data 
capture, scenario analysis, or the assessments of the business 
environment and internal control factors.
   The capture systems must cover all material business lines, 
business activities and corporate functions that could generate 
operational risk. The institution must have a defined process that 
establishes responsibilities over the systems developed to capture the 
AMA elements. In particular, the issue of overriding the data capture 
systems must be addressed. Any overrides should be tracked separately 
and documented. Tracking overrides separately allows management and 
supervisors to identify the nature of, and rationale for, each 
override, including whether it stems from a simple input error or, more 
importantly, from the exclusion of a loss event judged not pertinent to 
the quantitative measurement. Management should have clear standards 
for addressing 
overrides and should clearly delineate who has authority to override 
the data systems and under what circumstances.
   As noted earlier, for AMA qualification purposes, an institution's 
operational risk framework must, at a minimum, use the definition of 
operational risk that is provided in paragraph 10 when capturing the 
elements of the AMA framework. Institutions may use an expanded 
definition if considered more appropriate for risk management and 
measurement efforts. However, for the quantification of operational 
risk exposure for regulatory capital purposes, an institution must 
demonstrate that the AMA elements are captured so as to meet the 
baseline definition.

A. Internal Operational Risk Loss Event Data

Supervisory Standards
   S 15. The institution must have at least five years of internal 
operational risk loss data \12\ captured across all material business 
lines, events, product types, and geographic locations.
---------------------------------------------------------------------------

   \12\ With supervisory approval, a shorter initial historical 
observation period is acceptable for banks newly authorized to use 
an AMA methodology.
---------------------------------------------------------------------------

   S 16. The institution must be able to map internal operational risk 
losses to the seven loss-event type categories.
   S 17. The institution must have a policy that identifies when an 
operational risk loss becomes a loss event and must be added to the 
loss

[[Page 45983]]

event database. The policy must provide for consistent treatment across 
the institution.
   S 18. The institution must establish appropriate operational risk 
data thresholds.
   S 19. Losses that have any characteristics of credit risk, 
including fraud-related credit losses, must be treated as credit risk 
for regulatory capital purposes. The institution must have a clear 
policy that allows for the consistent treatment of loss event 
classifications (e.g., credit, market, or operational risk) across the 
organization.
   The key to internal data integrity is the consistency and 
completeness with which loss event data capture processes are 
implemented across the institution. Management must ensure that 
operational risk loss event information captured is consistent across 
the business lines and incorporates any corporate functions that may 
also experience operational risk events. Policies and procedures should 
be communicated to the appropriate staff to ensure a satisfactory 
understanding of operational risk and of the data capture 
requirements under the operational risk framework. Further, the 
independent operational risk management function must ensure that the 
loss data is captured across all material business lines, product 
types, event types, and from all significant geographic locations. The 
institution must be able to capture and aggregate internal losses that 
cross multiple business lines or event types. If data is not captured 
across all business lines or from all geographic locations, the 
institution must document and explain the exceptions.
   AMA institutions must be able to map operational risk losses into 
the seven loss event categories defined in paragraph 10. Institutions 
will not be required to produce reports or perform analysis for 
internal purposes on the basis of the loss event categories, but will 
be expected to use the information about the event-type categories as a 
check on the comprehensiveness of the institution's data set.
   The institution must have five years of internal loss data, 
although a shorter historical observation period may be allowed, subject to 
supervisory approval. The extent to which an institution collects 
operational risk loss event data will, in part, be dependent upon the 
data thresholds that the institution establishes. There are a number of 
standards that an institution may use to establish the thresholds. They 
may be based on product types, business lines, geographic location, or 
other appropriate factors. The Agencies will allow flexibility in this 
area, provided the institution can demonstrate that the thresholds are 
reasonable, do not exclude important loss events, and capture a 
significant proportion of the institution's operational risk losses.
   The institution must capture comprehensive data on all loss events 
above its established threshold level. Aside from information on the 
gross loss amount, the institution should collect information about the 
date of the event, any recoveries, and descriptive information about 
the drivers or causes of the loss event. The level of detail of any 
descriptive information should be commensurate with the size of the 
gross loss amount. Examples of the type of information collected 
include:
   [sbull] Loss amount;
   [sbull] Description of loss event;
   [sbull] Where the loss is reported and expensed;
   [sbull] Loss event type category;
   [sbull] Date of the loss;
   [sbull] Discovery date of the loss;
   [sbull] Event end date;
   [sbull] Management actions;
   [sbull] Insurance recoveries;
   [sbull] Other recoveries; and
   [sbull] Adjustments to the loss estimate.
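   To illustrate how these data elements might be captured, the sketch 
below models a loss event record and a simple data threshold filter. 
The field names and threshold mechanics are illustrative assumptions, 
not requirements of this guidance; the event-type labels are the 
commonly used Basel names, assumed here to correspond to the seven 
categories referenced in S 16.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum
from typing import List, Optional

class EventType(Enum):
    """The seven loss-event type categories (common Basel labels)."""
    INTERNAL_FRAUD = 1
    EXTERNAL_FRAUD = 2
    EMPLOYMENT_PRACTICES_WORKPLACE_SAFETY = 3
    CLIENTS_PRODUCTS_BUSINESS_PRACTICES = 4
    DAMAGE_TO_PHYSICAL_ASSETS = 5
    BUSINESS_DISRUPTION_SYSTEM_FAILURES = 6
    EXECUTION_DELIVERY_PROCESS_MANAGEMENT = 7

@dataclass
class LossEvent:
    gross_loss: float                # loss amount
    description: str                 # description of the loss event
    business_line: str               # where the loss is reported and expensed
    event_type: EventType            # loss event type category
    event_date: date                 # date of the loss
    discovery_date: date             # discovery date of the loss
    end_date: Optional[date] = None  # event end date, if resolved
    management_actions: str = ""
    insurance_recoveries: float = 0.0
    other_recoveries: float = 0.0
    adjustments: List[float] = field(default_factory=list)  # estimate revisions

    def net_loss(self) -> float:
        """Gross loss net of recoveries, plus any estimate adjustments."""
        return (self.gross_loss - self.insurance_recoveries
                - self.other_recoveries + sum(self.adjustments))

def above_threshold(events: List[LossEvent], threshold: float) -> List[LossEvent]:
    """Retain only events at or above the institution's data threshold."""
    return [e for e in events if e.gross_loss >= threshold]
```
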
   There are a number of additional data elements that may be 
captured. It may be appropriate, for example, to capture data on ``near 
miss'' events, where no financial loss was incurred. These near misses 
will not factor into the regulatory capital calculation, but may be 
useful for the operational risk management process.
   Institutions will also be permitted and encouraged to capture loss 
events in their operational risk databases that are treated as credit 
risk for regulatory capital purposes, but have an underlying element of 
operational risk failure. These types of events, while not incorporated 
into the regulatory capital calculation, may have implications for 
operational risk management. It will be essential for institutions that 
capture loss events that are treated differently for regulatory capital 
and management purposes to demonstrate that (1) loss events are being 
captured consistently across the institution; (2) the data systems are 
sufficiently advanced to allow for this differential treatment of loss 
events; and (3) credit, market, and operational risk losses are being 
allocated correctly for regulatory capital purposes.
   The Agencies have established a clear boundary between credit and 
operational risks for regulatory capital purposes. If a loss event has 
any element of credit risk, it must be treated as credit risk for 
regulatory capital purposes. This would include all credit-related 
fraud losses. In addition, operational risk losses with credit risk 
characteristics that have historically been included in institutions' 
credit risk databases will continue to be treated as credit risk for 
the purposes of calculating minimum regulatory capital.
   The accounting guidance for credit losses provides that creditors 
recognize credit losses when it is probable that they will be unable to 
collect all amounts due according to the contractual terms of a loan 
agreement. Credit losses may result from the creditor's own 
underwriting, processing, servicing or administrative activities along 
with the borrower's failure to pay according to the terms of the loan 
agreement. While the creditor's personnel, systems, policies or 
procedures may affect the timing or magnitude of a credit loss, they do 
not change its character from credit to operational risk loss for 
regulatory capital purposes. Losses that arise from a contractual 
relationship between a creditor and a borrower are credit losses 
whereas losses that arise outside of a relationship between a creditor 
and a borrower are operational losses.

B. External Data

Supervisory Standards
   S 20. The institution must have policies and procedures that 
provide for the use of external loss data in the operational risk 
framework.
   S 21. Management must systematically review external data to ensure 
an understanding of industry experience.
   External data may serve a number of different purposes in the 
operational risk framework. Where internal loss data is limited, 
external data may be a useful input in determining the institution's 
level of operational risk exposure. Even where external loss data is 
not an explicit input to an institution's data set, such data provides 
a means for the institution to understand industry experience, and in 
turn, provides a means for assessing the adequacy of its internal data. 
External data may also prove useful to inform scenario analysis, fit 
severity distributions, or benchmark the overall operational risk 
exposure results.
   To incorporate external loss information into an institution's 
framework, the institution should collect the following information:
   [sbull] External loss amount;
   [sbull] External loss description;
   [sbull] Loss event type category;
   [sbull] External loss event date;
   [sbull] Adjustments to the loss amount (i.e., recoveries, insurance 
settlements,

[[Page 45984]]

etc.) to the extent that they are known; and 
   [sbull] Sufficient information about the reporting institution to 
facilitate comparison to its own organization.
   Institutions may obtain external loss data in any reasonable 
manner. There are many ways to do so; some institutions are using data 
acquired through membership with industry consortia while other 
institutions are using data obtained from vendor databases or public 
sources such as court records or media reports. In all cases, 
management will need to evaluate each data source carefully to satisfy 
itself that the information being reported is relevant and reasonably 
accurate.

C. Business Environment and Internal Control Factor Assessments

Supervisory Standards
   S 22. The institution must have a system to identify and assess 
business environment and internal control factors.
   S 23. Management must periodically compare the results of its 
business environment and internal control factor assessments against 
actual operational risk loss experience.
   While internal and external loss data provide a historical 
perspective on operational risk, it is also important that institutions 
incorporate a forward-looking element to the operational risk measure. 
In principle, an institution with strong internal controls in a stable 
business environment will have less exposure to operational risk than 
an institution with internal control weaknesses that is growing rapidly 
or introducing new products. In this regard, institutions will be 
required to identify the level of, and trends in, operational 
risk. These assessments must be current, comprehensive across 
the institution, and identify the critical operational risks facing the 
institution.
   The business environment and internal control factor assessments 
should reflect both the positive and negative trends in risk management 
within the institution as well as changes in an institution's business 
activities that increase or decrease risk. Because the results of the 
risk assessment are part of the capital methodology, management must 
ensure that the risk assessments are done appropriately and reflect the 
risks of the institution. Periodic comparisons should be made between 
actual loss exposure and the assessment results.
   The framework established to maintain the risk assessments must be 
sufficiently flexible to encompass an institution's increased 
complexity of activities, new activities, changes in internal control 
systems, or an increased volume of information.

D. Scenario Analysis

Supervisory Standards
   S 24. Management must have policies and procedures that identify 
how scenario analysis will be incorporated into the operational risk 
framework.
   Scenario analysis is a systematic process of obtaining expert 
opinions from business managers and risk management experts to derive 
reasoned assessments of the likelihood and impact of plausible 
operational losses consistent with the regulatory soundness standard. 
Within an institution's operational risk framework, scenario analysis 
may be used as an input or may, as discussed below, form the basis of 
an operational risk analytical framework.
   As an input to the institution's framework, scenario analysis is 
especially relevant for business lines or loss event types where 
internal data, external data, and assessments of the business 
environment and internal control factors do not provide a sufficiently 
robust estimate of the institution's exposure to operational risk. In 
some cases, an institution's internal loss history may be sufficient to 
provide a reasonable estimate of exposure to future operational losses. 
In other cases, the use of well-reasoned, scaled external data may 
itself be a form of scenario analysis.
   The institution must have policies and procedures that define 
scenario analysis and identify its role in the operational risk 
framework. The policy should cover key elements of scenario analysis, 
such as the manner in which the scenarios are generated, the frequency 
with which they are updated, and the scope and coverage of operational 
loss events they are intended to reflect.

VIII. Risk Quantification

A. Analytical Framework

Supervisory Standards
   S 25. The institution must have a comprehensive operational risk 
analytical framework that provides an estimate of the institution's 
operational risk exposure, which is the aggregate operational loss that 
it faces over a one-year period at a soundness standard consistent with 
a 99.9 percent confidence level.
   S 26. Management must document the rationale for all assumptions 
underpinning its chosen analytical framework, including the choice of 
inputs, distributional assumptions, and the weighting across 
qualitative and quantitative elements. Management must also document 
and justify any subsequent changes to these assumptions.
   S 27. The institution's operational risk analytical framework must 
use a combination of internal operational loss event data, relevant 
external operational loss event data, business environment and internal 
control factor assessments, and scenario analysis. The institution must 
combine these elements in a manner that most effectively enables it to 
quantify its operational risk exposure. The institution can choose the 
analytical framework that is most appropriate to its business model.
   S 28. The institution's capital requirement for operational risk 
will be the sum of expected and unexpected losses unless the 
institution can demonstrate, consistent with supervisory standards, the 
expected loss offset.
   The industry has made significant progress in recent years in 
developing analytical frameworks to quantify operational risk. The 
analytical frameworks, which are a part of the overall operational risk 
framework, are based on various combinations of an institution's own 
operational loss experience, the industry's operational loss 
experience, the size and scope of the institution's activities, the 
quality of the institution's control environment, and management's 
expert judgment. Because these models capture specific characteristics 
of each institution, such models yield unique risk-sensitive estimates 
of the institutions' operational risk exposures.
   While the Agencies are not specifying the exact methodology that an 
institution should use to determine its operational risk exposure, 
minimum supervisory standards for acceptable approaches have been 
developed. These standards have been set so as to assure that the 
regulation can accommodate continued evolution of operational risk 
quantification techniques, yet remain amenable to consistent 
application and enforcement across institutions. The Agencies will 
require that the institution have a comprehensive analytical framework 
that provides an estimate of the aggregate operational loss that it 
faces over a one-year period at a soundness standard consistent with a 
99.9 percent confidence level, referred to as the institution's 
operational risk exposure. The institution will multiply the exposure 
estimate by 12.5 to obtain risk-weighted assets for operational risk, 

[[Page 45985]]

and add this figure to risk-weighted assets for credit and market risk 
to obtain total risk-weighted assets. The final minimum regulatory 
capital number will be 8 percent of total risk-weighted assets.
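   To make this arithmetic concrete, the calculation can be written as 
follows, where ORE denotes the institution's estimated operational risk 
exposure (the dollar figures below are hypothetical):

```latex
RWA_{\mathrm{op}} = 12.5 \times \mathrm{ORE}, \qquad
K_{\min} = 0.08 \times \left( RWA_{\mathrm{credit}} + RWA_{\mathrm{market}}
         + RWA_{\mathrm{op}} \right)
```

   Because 0.08 multiplied by 12.5 equals 1, the operational risk term 
adds exactly the exposure estimate to minimum capital: a hypothetical 
ORE of $400 million would add $5 billion of risk-weighted assets and 
$400 million of minimum required capital.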
   The Agencies expect that there will be significant variation in 
analytical frameworks across institutions, with each institution 
tailoring its framework to leverage existing technology platforms and 
risk management procedures. These approaches may be used only if they 
meet the supervisory standards and include, as inputs, internal 
operational loss event data, relevant external operational loss event 
data, assessments of business environment and internal control factors, 
and scenario analysis. The Agencies do expect that there will be some 
uncertainty and potential error in the analytical frameworks because of 
the evolving nature of operational risk measurement and data capture. 
Therefore, a degree of conservatism will need to be built into the 
analytical frameworks to reflect the evolving state of operational risk 
measurement and its impact on data capture and analytical modeling.
   A diversity of analytical approaches is emerging in the industry, 
combining and weighting these inputs in different ways. Most current 
approaches seek to estimate loss frequency and loss severity to arrive 
at an aggregate loss distribution. Institutions then use the aggregate 
loss distribution to determine the appropriate amount of capital to 
hold for a given soundness standard. Scenario analysis is also being 
used by many institutions, albeit to significantly varying degrees. 
Some institutions are using scenario analysis as the basis for their 
analytical framework, while others are incorporating scenarios as a 
means for considering the possible impact of significant operational 
losses on their overall operational risk exposure.
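   Most frequency-severity approaches of this kind can be illustrated 
with a short Monte Carlo sketch. The example below assumes Poisson 
frequency and lognormal severity; the distributional choices and 
parameter values are illustrative only and are not prescribed by this 
guidance:

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative parameters; an institution would estimate these from its
# internal loss event data, supplemented by the other AMA elements.
LAMBDA = 25.0           # mean number of loss events per year (Poisson)
MU, SIGMA = 11.0, 2.0   # lognormal severity parameters (log-dollar scale)
N_YEARS = 100_000       # number of simulated one-year horizons

annual_losses = np.empty(N_YEARS)
for i in range(N_YEARS):
    n_events = rng.poisson(LAMBDA)                    # loss frequency
    severities = rng.lognormal(MU, SIGMA, n_events)   # loss severities
    annual_losses[i] = severities.sum()               # aggregate annual loss

# Operational risk exposure at the 99.9 percent soundness standard.
ore = np.quantile(annual_losses, 0.999)
print(f"Estimated operational risk exposure: ${ore:,.0f}")
```
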
   The primary differences among approaches being used today relate to 
the weight that institutions place on each input. For example, 
institutions with comprehensive internal data may place less emphasis 
on external data or scenario analysis. Another example is that some 
institutions estimate a unique loss distribution for each business 
line/loss type combination (bottom-up approach) while others estimate a 
loss distribution on a firm-wide basis and then use an allocation 
methodology to assign capital to business lines (top-down approach).
   The Agencies expect internal loss event data to play an important 
role in the institution's analytical framework, hence the requirement 
for five years of internal operational risk loss data. However, as 
footnote 12 makes clear, five years of data is not always required for 
the analytical framework. For example, if a bank exited a business 
line, the institution would not be expected to make use of that 
business unit's loss experience unless it had relevance for other 
activities of the institution. Another example would be where a bank 
has made a recent acquisition where the acquired firm does not have 
internal loss event data. In these cases, the Agencies expect the 
institution to make use of the loss data available at the acquired 
institution and any internal loss data from operations similar to that 
of the acquired firm, but the institution will likely have to place 
more weight on relevant external loss event data, results from scenario 
analysis, and factors reflecting assessments of the business 
environment and internal controls.
   Whatever analytical approach an institution chooses, it must 
document and provide the rationale for all assumptions embedded in its 
chosen analytical framework, including the choice of inputs, 
distributional assumptions, and the weighting of qualitative and 
quantitative elements. Management must also document and justify any 
subsequent changes to these assumptions. This documentation should:
   [sbull] Clearly identify how the different inputs are combined and 
weighted to arrive at the overall operational risk exposure so that the 
analytical framework is transparent. The documentation should 
demonstrate that the analytical framework is comprehensive and 
internally consistent. Comprehensiveness means that all required inputs 
are incorporated and appropriately weighted. At the same time, there 
should not be overlaps or double counting.
   [sbull] Clearly identify the quantitative assumptions embedded in 
the methodology and provide explanation for the choice of these 
assumptions. Examples of quantitative assumptions include 
distributional assumptions about frequency and severity, the 
methodology for combining frequency and severity to arrive at the 
overall loss distribution, and dependence assumptions between 
operational losses across and within business lines.
   [sbull] Clearly identify the qualitative assumptions embedded in 
the methodology and provide explanations for the choice of these 
assumptions. Examples of qualitative assumptions include the use of 
business environment and control factors as well as scenario analysis 
in the approach.
   [sbull] Where feasible, provide results based purely on 
quantitative methods separately from results that incorporate 
qualitative factors. This will provide a transparent means of 
determining the relative importance of quantitative versus qualitative 
inputs.
   [sbull] Where feasible, provide results based on alternative 
quantitative and qualitative assumptions to gauge the overall model's 
sensitivity to these assumptions.
   [sbull] Provide a comparison of the operational risk exposure 
estimate generated by the analytical framework with actual loss 
experience over time, to assess the reasonableness of the framework's 
outputs.
   [sbull] Clearly identify all changes to assumptions, and provide 
explanations for such changes.
   [sbull] Clearly identify the results of an independent verification 
of the analytical framework.
   The regulatory capital charge for operational risk will include 
both expected losses (EL) and unexpected losses (UL). The Agencies have 
considered two approaches that might allow for some recognition of EL; 
these approaches are reserving and budgeting. However, both approaches 
raise questions about their ability to act as an EL offset for 
regulatory capital purposes. The current U.S. GAAP treatment for 
reserves (or liabilities) is based on an incurred-loss (liability) 
model. Because EL looks beyond current losses to losses that 
will be incurred in the future, establishing a reserve for operational 
risk EL is not likely to meet U.S. accounting standards. While reserves 
are specific allocations for incurred losses, budgeting is a process of 
generally allocating future income for loss contingencies, including 
losses resulting from operational risk. Institutions will be required 
to demonstrate that budgeted funds are sufficiently capital-like and 
remain available to cover EL over the next year. In addition, an 
institution will not be permitted to recognize EL offsets on budgeted 
loss contingencies that fall below the established data thresholds; 
this is relevant because many institutions currently budget for low-
severity, high-frequency events that are more likely to fall below most 
institutions' thresholds.
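   Continuing the earlier Monte Carlo sketch (and reusing its 
annual_losses array), the split between EL and UL at the 99.9 percent 
soundness standard could be computed as follows; absent a demonstrated 
offset, the capital requirement is their sum, i.e., the full 99.9th 
percentile of the aggregate loss distribution:

```python
expected_loss = annual_losses.mean()            # EL: mean aggregate loss
total_999 = np.quantile(annual_losses, 0.999)   # 99.9th percentile
unexpected_loss = total_999 - expected_loss     # UL: excess over EL

# Without a qualifying EL offset, the charge covers EL + UL in full.
capital_requirement = expected_loss + unexpected_loss
assert abs(capital_requirement - total_999) < 1e-6
```
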
   An institution's analytical framework complements but does not 
substitute for prudent controls. Rather, with improved risk 
measurement, institutions are finding that they can make better-
informed strategic decisions regarding enhancements to controls and 
processes, the desired scale and scope of the operations, and how 
insurance and

[[Page 45986]]

other risk mitigation tools can be used to offset operational risk 
exposure.

B. Accounting for Dependence

Supervisory Standards
   S 29. Management must document how its chosen analytical framework 
accounts for dependence (e.g., correlations) among operational losses 
across and within business lines. The institution must demonstrate that 
its explicit and embedded dependence assumptions are appropriate, and 
where dependence assumptions are uncertain, the institution must use 
conservative estimates.
   Management must document how its chosen analytical framework 
accounts for dependence (e.g., correlation) between operational losses 
across and within business lines. The issue of dependence is closely 
related to the choice between a bottom-up or a top-down modeling 
approach. Under a bottom-up approach, explicit assumptions regarding 
cross-event dependence are required to estimate operational risk 
exposure at the firm-wide level. Management must demonstrate that these 
assumptions are appropriate and reflect the institution's current 
environment. If the dependence assumptions are uncertain, the 
institution must choose conservative estimates. In so doing, the 
institution should consider the possibility that cross-event dependence 
may not be constant, and may increase during stress environments.
   Under a top-down approach, an explicit assumption regarding 
dependence is not required. However, a parametric distribution for loss 
severity may be more difficult to specify under the top-down approach, 
as it is a statistical mixture of (potentially) heterogeneous business 
line and event type distributions. Institutions must carefully consider 
the conditions necessary for the validity of top-down approaches, and 
whether these conditions are met in their particular circumstances. 
Similar to bottom-up approaches, institutions using top-down approaches 
must ensure that implicit dependence assumptions are appropriate and 
reflect the institution's current environment. If historic dependence 
assumptions embedded in top-down approaches are uncertain, the 
institution must be conservative and implement a qualitative adjustment 
to the analysis.
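   One way a bottom-up framework might introduce explicit cross-event 
dependence is through a copula over business line losses. The sketch 
below uses a Gaussian copula with an assumed correlation of 0.5 across 
two hypothetical business lines; all distributions and parameters are 
illustrative assumptions:

```python
import numpy as np
from scipy.stats import norm, lognorm

rng = np.random.default_rng(seed=7)
N = 100_000
RHO = 0.5   # assumed cross-line dependence; be conservative when uncertain

# Correlated standard normals define a Gaussian copula.
cov = [[1.0, RHO], [RHO, 1.0]]
z = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=N)
u = norm.cdf(z)   # uniform marginals preserving the dependence structure

# Illustrative lognormal marginals for each line's annual aggregate loss.
line_a = lognorm.ppf(u[:, 0], s=1.5, scale=np.exp(16.0))
line_b = lognorm.ppf(u[:, 1], s=1.2, scale=np.exp(15.5))

firmwide = line_a + line_b
print(f"Firm-wide 99.9th percentile: ${np.quantile(firmwide, 0.999):,.0f}")

# Perfect dependence would simply add the stand-alone percentiles; the
# gap between the two figures reflects the dependence assumption chosen.
standalone = np.quantile(line_a, 0.999) + np.quantile(line_b, 0.999)
print(f"Sum of stand-alone percentiles:  ${standalone:,.0f}")
```
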

IX. Risk Mitigation

Supervisory Standards
   S 30. Institutions may reduce their operational risk exposure 
results by no more than 20% to reflect the impact of risk mitigants. 
Institutions must demonstrate that mitigation products are sufficiently 
capital-like to warrant inclusion in the adjustment to the operational 
risk exposure.
   There are many mechanisms to manage operational risk, including 
risk transfer through risk mitigation products. Because risk mitigation 
can be an important element in limiting or reducing operational risk 
exposure in an institution, an adjustment is being permitted that will 
directly impact the amount of regulatory capital that is held for 
operational risk. The adjustment is limited to 20% of the overall 
operational risk exposure result determined by the institution using 
its loss data, qualitative factors, and quantitative framework.
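   The effect of the ceiling reduces to a simple calculation; the 
function and figures below are a hypothetical sketch:

```python
def mitigated_exposure(gross_ore: float, mitigation_benefit: float) -> float:
    """Apply the risk mitigation adjustment, capped at 20% of gross exposure."""
    adjustment = min(mitigation_benefit, 0.20 * gross_ore)
    return gross_ore - adjustment

# $120M of recognized insurance benefit against $400M of gross exposure
# is still bound by the 20% ceiling ($80M), leaving $320M of exposure.
print(mitigated_exposure(400e6, 120e6))   # 320000000.0
```
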
   Currently, the primary risk mitigant used for operational risk is 
insurance. There has been discussion that some securities products may 
be developed to provide risk mitigation benefits; however, to date, no 
specific products have emerged that have characteristics sufficient to 
be considered a capital replacement for operational risk. As a result, 
securities products and other capital market instruments may not be 
factored into the regulatory capital risk mitigation adjustment at 
this time.
   For an institution that wishes to adjust its regulatory capital 
requirement as a result of the risk mitigating impact of insurance, 
management must demonstrate that the insurance policy is sufficiently 
capital-like to provide the cushion that is necessary. A product that 
would fall in this category must have the following characteristics:
   [sbull] The policy is provided through a third party \13\ that has 
a minimum claims paying ability rating of A; \14\
---------------------------------------------------------------------------

   \13\ Where operational risk is transferred to a captive or an 
affiliated insurer such that risk is retained within the group 
structure, recognition of such risk transfer will only be allowed 
for regulatory capital purposes where the risk has been transferred 
to a third party (e.g., an unaffiliated reinsurer) that meets the 
standards set forth in this section.
   \14\ Rating agencies may use slightly different rating 
scales. For the purpose of this supervisory guidance, the insurer 
must have a rating that is at least the equivalent of A under 
Standard and Poor's Insurer Financial Strength Ratings or an A2 
under Moody's Insurance Financial Strength Ratings.
---------------------------------------------------------------------------

   [sbull] The policy has an initial term of one year; \15\
---------------------------------------------------------------------------

   \15\ Institutions must decrease the amount of the adjustment if 
the remaining term is less than one year. The institution must have 
a clear policy in place that links the remaining term to the 
adjustment factor.
---------------------------------------------------------------------------

   [sbull] The policy has no exclusions or limitations based upon 
regulatory action or for the receiver or liquidator of a failed bank;
   [sbull] The policy has clear cancellation and non-renewal notice 
periods; and
   [sbull] The policy coverage has been explicitly mapped to actual 
operational risk exposure of the institution.
   Insurance policies that meet these standards may be incorporated 
into an institution's adjustment for risk mitigation. An institution 
should be conservative in its recognition of such policies; for 
example, the institution must also demonstrate that insurance policies 
used as the basis for the adjustment have a history of timely payouts. 
If claims have not been paid on a timely basis, the institution must 
exclude that policy from the operational risk capital adjustment. In 
addition, the institution must be able to show that the policy would 
actually be used in the event of a loss situation; that is, the 
deductible may not be set so high that no loss would ever conceivably 
exceed the deductible threshold.
   The Agencies will not specify how institutions should calculate the 
risk mitigation adjustment. Nevertheless, institutions are expected to 
use conservative assumptions when calculating adjustments. An 
institution should discount (i.e., apply its own estimates of haircuts) 
the impact of insurance coverage to take into account factors that may 
limit the likelihood or size of claims payouts. Among these factors 
are the remaining terms of a policy, especially when it is less than a 
year, the willingness and ability of the insurer to pay on a claim in a 
timely manner, the legal risk that a claim may be disputed, and the 
possibility that a policy can be cancelled before the contractual 
expiration.
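   By way of illustration only, a haircut scheme consistent with the 
factors listed above might look like the following; the specific factor 
values are assumptions made for the sketch, not supervisory parameters:

```python
def haircut_insurance_benefit(face_benefit: float,
                              remaining_term_years: float,
                              payout_certainty: float,
                              cancellable: bool) -> float:
    """Discount an insurance benefit for factors that may limit the
    likelihood or size of claims payouts (illustrative scheme)."""
    # Scale the benefit down when the remaining term is under one year,
    # consistent with footnote 15's requirement to reduce the adjustment.
    term_factor = min(max(remaining_term_years, 0.0), 1.0)
    # payout_certainty in (0, 1]: management's own estimate of timely
    # payment, net of legal and dispute risk.
    benefit = face_benefit * term_factor * payout_certainty
    if cancellable:
        benefit *= 0.9   # illustrative extra haircut for cancellation risk
    return benefit

# A $50M policy with nine months remaining, 85% payout certainty, and a
# cancellation clause yields roughly $28.7M of recognizable benefit.
print(haircut_insurance_benefit(50e6, 0.75, 0.85, True))
```
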

X. Data Maintenance

Supervisory Standards
   S 31. Institutions using the AMA approach for regulatory capital 
purposes must use advanced data management practices to produce 
credible and reliable operational risk estimates.
   Data maintenance is a critical factor in an institution's 
operational risk framework. Institutions with advanced data management 
practices should be able to track operational risk loss events from 
initial discovery through final resolution. These institutions should 
also be able to make appropriate adjustments to the data and use the 
data to identify trends, track problem areas, and identify areas of 
future risk. Such data should include not only operational risk loss 
event information, but also information on risk assessments, which are 
factored into the operational risk exposure calculation. In general, 
institutions using the AMA

[[Page 45987]]

should have the same data maintenance standards for operational risk as 
those set forth for A-IRB institutions under the credit risk guidance.
   Operational risk data elements captured by the institution must be 
of sufficient depth, scope, and reliability to:
   [sbull] Track and identify operational risk loss events across all 
business lines, including when a loss event impacts multiple business 
lines.
   [sbull] Calculate capital ratios based on operational risk exposure 
results. The institution must also be able to factor in adjustments 
related to risk mitigation, correlations, and risk assessments.
   [sbull] Produce internal and public reports on operational risk 
measurement and management results, including trends revealed by loss 
data and/or risk assessments. The institution must also have sufficient 
data to produce exception reports for management.
   [sbull] Support risk management activities.
   The data warehouse \16\ must contain the key data elements 
needed for operational risk measurement, management, and verification. 
The precise data elements may vary by institution and also among 
business lines within an institution. An important element of ensuring 
consistent reporting of the data elements is to develop comprehensive 
definitions for each data element used by the institution for reporting 
operational risk loss events or for the risk assessment inputs. The 
data must be stored in an electronic format to allow for timely 
retrieval for analysis, verification and testing of the operational 
risk framework, and required disclosures.
---------------------------------------------------------------------------

   \16\ In this document, the terms ``database'' and ``data 
warehouse'' are used interchangeably to refer to a collection of 
data arranged for easy retrieval using computer technology.
---------------------------------------------------------------------------
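
   As one illustration of electronic storage supporting timely 
retrieval, the sketch below uses an in-memory SQLite table; the schema 
and field names are hypothetical, not prescribed by this guidance:

```python
import sqlite3

# In-memory database for illustration; a production data warehouse would
# be a durable, managed store with access controls and edit checks.
con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE loss_events (
        event_id        INTEGER PRIMARY KEY,
        business_line   TEXT NOT NULL,
        event_type      TEXT NOT NULL,    -- one of the seven categories
        event_date      TEXT NOT NULL,    -- ISO-8601 date strings
        discovery_date  TEXT NOT NULL,
        gross_loss      REAL NOT NULL,
        recoveries      REAL DEFAULT 0.0
    )
""")
con.execute(
    "INSERT INTO loss_events VALUES (1, 'Retail Banking', 'External Fraud',"
    " '2003-03-14', '2003-03-20', 250000.0, 40000.0)"
)

# Timely retrieval for analysis: net losses by business line and event type.
for row in con.execute("""
        SELECT business_line, event_type,
               SUM(gross_loss - recoveries) AS net_loss
        FROM loss_events
        GROUP BY business_line, event_type"""):
    print(row)
```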

   Management will need to identify those responsible for maintaining 
the data warehouse. In particular, policies and processes will need to 
be developed for delivering, storing, retaining, and updating the data 
warehouse. Policies and procedures must also cover the edit checks for 
data input functions, as well as the requirements for the testing and 
verification function to verify data integrity. Like other areas of the 
operational risk framework, it is critical that management ensure 
accountability for ongoing data maintenance, as this will impact 
operational risk management and measurement efforts.

XI. Testing and Verification

Supervisory Standards
   S 32. The institution must test and verify the accuracy and 
appropriateness of the operational risk framework and results.
   S 33. Testing and verification must be done independently of the 
firm-wide operational risk management function and the institution's 
lines of business.
   The operational risk framework must provide for regular and 
independent testing and verification of operational risk management 
policies, processes and measurement systems, as well as operational 
risk data capture systems. For most institutions, operational risk 
verification and testing will primarily be done by the audit function. 
Internal and external audits can provide an independent assessment of 
the quality and effectiveness of the control systems' design and 
performance. However, institutions may use other independent internal 
units (e.g., quality assurance) or third parties. The testing and 
verification function, whether internally or externally performed, 
should be staffed by qualified individuals who are independent from the 
firm-wide operational risk management function and the institution's 
lines of business.
   The verification of the operational risk measurement system should 
include the testing of:
   [sbull] Key operational risk processes and systems;
   [sbull] Data feeds and processes associated with the operational 
risk measurement system;
   [sbull] Adjustments to empirical operational risk capital 
estimates, including operational risk exposure;
   [sbull] Periodic certification of operational risk models used and 
their underlying assumptions; and
   [sbull] Assumptions underlying operational risk exposure, data 
decision models, and operational risk capital charge.
   The operational risk reporting processes should be periodically 
reviewed for scope and effectiveness. The institution should have 
independent verification processes to ensure the timeliness, accuracy, 
and comprehensiveness of operational risk reporting systems, both at 
the firm-wide and the line of business levels.
   Independent verification and testing should be done to ensure the 
integrity and applicability of the operational risk framework, 
operational risk exposure/loss data, and the underlying assumptions 
driving the regulatory capital measurement process. Appropriate 
reports, summarizing operational risk verification and testing findings 
for both the independent firm-wide risk management function and lines 
of business, should be provided to appropriate management and the board 
of directors or a designated board committee.

Appendix A: Supervisory Standards for the AMA

   S 1. The institution's operational risk framework must include 
an independent firm-wide operational risk management function, line 
of business management oversight, and independent testing and 
verification functions.
   S 2. The board of directors must oversee the development of the 
firm-wide operational risk framework, as well as major changes to 
the framework. Management roles and accountability must be clearly 
established.
   S 3. The board of directors and management must ensure that 
appropriate resources are allocated to support the operational risk 
framework.
   S 4. The institution must have an independent operational risk 
management function that is responsible for overseeing the 
operational risk framework at the firm level to ensure the 
development and consistent application of operational risk policies, 
processes, and procedures throughout the institution.
   S 5. The firm-wide operational risk management function must 
ensure appropriate reporting of operational risk exposures and loss 
data to the board of directors and senior management.
   S 6. Line of business management is responsible for the day-to-
day management of operational risk within each business unit.
   S 7. Line of business management must ensure that internal 
controls and practices within their line of business are consistent 
with firm-wide policies and procedures to support the management and 
measurement of the institution's operational risk.
   S 8. The institution must have policies and procedures that 
clearly describe the major elements of the operational risk 
management framework, including identifying, measuring, monitoring, 
and controlling operational risk.
   S 9. Operational risk management reports must address both firm-
wide and line of business results. These reports must summarize 
operational risk exposure, loss experience, relevant business 
environment and internal control assessments, and must be produced 
no less often than quarterly.
   S 10. Operational risk reports must also be provided 
periodically to senior management and the board of directors, 
summarizing relevant firm-wide operational risk information.
   S 11. An institution's internal control structure must meet or 
exceed minimum regulatory standards established by the Agencies.
   S 12. The institution must demonstrate that it has appropriate 
internal loss event data, relevant external loss event data, 
assessments of business environment and internal control factors, 
and results from scenario analysis to support its operational risk 
management and measurement framework.
   S 13. The institution must include the regulatory definition of 
operational risk as the baseline for capturing the elements of the

[[Page 45988]]

AMA framework and determining its operational risk exposure.
   S 14. The institution must have clear standards for the 
collection and modification of the elements of the operational risk 
AMA framework.
   S 15. The institution must have at least five years of internal 
operational risk loss data \17\ captured across all material 
business lines, events, product types, and geographic locations.
---------------------------------------------------------------------------

   \17\ With supervisory approval, a shorter initial historical 
observation period is acceptable for banks newly authorized to use 
an AMA methodology.
---------------------------------------------------------------------------

   S 16. The institution must be able to map internal operational 
risk losses to the seven loss-event type categories.
   S 17. The institution must have a policy that identifies when an 
operational risk loss becomes a loss event and must be added to the 
loss event database. The policy must provide for consistent 
treatment across the institution.
   S 18. The institution must establish appropriate operational 
risk data thresholds.
   S 19. Losses that have any characteristics of credit risk, 
including fraud-related credit losses, must be treated as credit 
risk for regulatory capital purposes. The institution must have a 
clear policy that allows for the consistent treatment of loss event 
classifications (e.g., credit, market, or operational risk) across 
the organization.
   S 20. The institution must have policies and procedures that 
provide for the use of external loss data in the operational risk 
framework.
   S 21. Management must systematically review external data to 
ensure an understanding of industry experience.
   S 22. The institution must have a system to identify and assess 
business environment and internal control factors.
   S 23. Management must periodically compare the results of its 
business environment and internal control factor assessments against 
actual operational risk loss experience.
   S 24. Management must have policies and procedures that identify 
how scenario analysis will be incorporated into the operational risk 
framework.
   S 25. The institution must have a comprehensive operational risk 
analytical framework that provides an estimate of the institution's 
operational risk exposure, which is the aggregate operational loss 
that it faces over a one-year period at a soundness standard 
consistent with a 99.9 percent confidence level.
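   The standards do not prescribe a particular analytical framework. 
The sketch below assumes one common choice, a loss distribution 
approach with Poisson frequency and lognormal severity, and uses 
invented parameters purely to show how a one-year, 99.9th-percentile 
exposure estimate can be produced by simulation.

import numpy as np

rng = np.random.default_rng(seed=0)
n_years = 100_000              # simulated one-year periods
freq_mean = 25.0               # assumed mean number of losses per year
sev_mu, sev_sigma = 10.0, 2.0  # assumed lognormal severity parameters

# Aggregate the simulated losses within each one-year period.
counts = rng.poisson(freq_mean, size=n_years)
annual_losses = np.array([
    rng.lognormal(sev_mu, sev_sigma, size=n).sum() for n in counts
])

# Operational risk exposure: the 99.9th percentile of annual aggregate loss.
exposure_999 = np.quantile(annual_losses, 0.999)
print(f"99.9th percentile exposure: {exposure_999:,.0f}")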
   S 26. Management must document the rationale for all assumptions 
underpinning its chosen analytical framework, including the choice 
of inputs, distributional assumptions, and the weighting across 
qualitative and quantitative elements. Management must also document 
and justify any subsequent changes to these assumptions.
   S 27. The institution's operational risk analytical framework 
must use a combination of internal operational loss event data, 
relevant external operational loss event data, business environment 
and internal control factor assessments, and scenario analysis. The 
institution must combine these elements in a manner that most 
effectively enables it to quantify its operational risk exposure. 
The institution can choose the analytical framework that is most 
appropriate to its business model.
   S 28. The institution's capital requirement for operational risk 
will be the sum of expected and unexpected losses unless the 
institution can demonstrate, consistent with supervisory standards, 
that it qualifies for the expected loss offset.
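   Continuing the simulation sketch under S 25, and again for 
illustration only, the expected and unexpected components could be 
read off the simulated distribution as follows.

# EL is the mean of the simulated annual aggregate loss distribution;
# UL is the 99.9th percentile less the mean.
expected_loss = annual_losses.mean()
unexpected_loss = exposure_999 - expected_loss

capital_without_offset = expected_loss + unexpected_loss  # equals exposure_999
capital_with_offset = unexpected_loss  # only if the offset is demonstrated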
   S 29. Management must document how its chosen analytical 
framework accounts for dependence (e.g., correlations) among 
operational losses across and within business lines. The institution 
must demonstrate that its explicit and embedded dependence 
assumptions are appropriate, and where dependence assumptions are 
uncertain, the institution must use conservative estimates.
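   The sketch below, illustrative only and with invented parameters, 
shows why the dependence assumption matters: the 99.9th-percentile 
exposure of two business lines differs materially between 
independence and perfect positive dependence, the conservative upper 
bound.

import numpy as np

rng = np.random.default_rng(seed=1)
line_a = rng.lognormal(10.0, 1.5, size=100_000)  # simulated annual losses, line A
line_b = rng.lognormal(9.5, 1.8, size=100_000)   # simulated annual losses, line B

# Independence: pair the two lines' simulated years at random.
indep = np.quantile(line_a + rng.permutation(line_b), 0.999)

# Perfect positive dependence (comonotonicity): pair sorted outcomes,
# a conservative choice when the true dependence structure is uncertain.
comon = np.quantile(np.sort(line_a) + np.sort(line_b), 0.999)

print(f"independent: {indep:,.0f}  comonotonic: {comon:,.0f}")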
   S 30. Institutions may reduce their operational risk exposure 
results by no more than 20% to reflect the impact of risk mitigants. 
Institutions must demonstrate that mitigation products are 
sufficiently capital-like to warrant inclusion in the adjustment to 
the operational risk exposure.
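   As a purely arithmetic illustration of the 20 percent cap, with 
invented dollar amounts:

def apply_mitigation_cap(exposure: float, mitigation_benefit: float) -> float:
    # The recognized benefit of risk mitigants (e.g., insurance) cannot
    # exceed 20 percent of the unadjusted operational risk exposure.
    recognized = min(mitigation_benefit, 0.20 * exposure)
    return exposure - recognized

# A $150 million claimed benefit against a $500 million exposure is
# capped at $100 million, leaving $400 million.
print(apply_mitigation_cap(500_000_000.0, 150_000_000.0))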
   S 31. Institutions using an AMA for regulatory capital purposes 
must use advanced data management practices to produce credible and 
reliable operational risk estimates.
   S 32. The institution must test and verify the accuracy and 
appropriateness of the operational risk framework and results.
   S 33. Testing and verification must be done independently of the 
firm-wide operational risk management function and the institution's 
lines of business.

   Dated: July 17, 2003.
John D. Hawke, Jr.,
Comptroller of the Currency.
   By order of the Board of Governors of the Federal Reserve 
System, July 21, 2003.
Jennifer J. Johnson,
Secretary of the Board.
   Dated at Washington, DC, this 11th day of July, 2003.
   By order of the Board of Directors.

   Federal Deposit Insurance Corporation.
Robert E. Feldman,
Executive Secretary.
   Dated: July 18, 2003.

   By the Office of Thrift Supervision.
James E. Gilleran,
Director.

[FR Doc. 03-18976 Filed 8-1-03; 8:45 am]

BILLING CODE 4810-33-P
