RiskMetrics
from Wikipedia

The RiskMetrics variance model (also known as exponential smoother) was first established in 1989, when Sir Dennis Weatherstone, the new chairman of J.P. Morgan, asked for a daily report measuring and explaining the risks of his firm. Nearly four years later in 1992, J.P. Morgan launched the RiskMetrics methodology to the marketplace, making the substantive research and analysis that satisfied Sir Dennis Weatherstone's request freely available to all market participants.

The RiskMetrics technical document was revised in 1996, and again in 2001 as Return to RiskMetrics. In 1998, as client demand for the group's risk management expertise exceeded the firm's internal risk management resources, the Corporate Risk Management Department was spun off from J.P. Morgan as RiskMetrics Group with 23 founding employees. In 2006, a new method for modeling risk factor returns was introduced (RM2006). On 25 January 2008, RiskMetrics Group listed on the New York Stock Exchange (NYSE: RISK). In June 2010, RiskMetrics was acquired by MSCI for $1.55 billion.[1]

Risk measurement process

Portfolio risk measurement can be broken down into steps. The first is modeling the market that drives changes in the portfolio's value. The market model must be sufficiently specified so that the portfolio can be revalued using information from the market model. The risk measurements are then extracted from the probability distribution of the changes in portfolio value. The change in value of the portfolio is typically referred to by portfolio managers as profit and loss, or P&L.

Risk factors

Risk management systems are based on models that describe potential changes in the factors affecting portfolio value. These risk factors are the building blocks for all pricing functions. In general, the factors driving the prices of financial securities are equity prices, foreign exchange rates, commodity prices, interest rates, correlation and volatility. By generating future scenarios for each risk factor, we can infer changes in portfolio value and reprice the portfolio for different "states of the world".

Portfolio risk measures

Standard deviation

The first widely used portfolio risk measure was the standard deviation of portfolio value, as described by Harry Markowitz. While comparatively easy to calculate, standard deviation is not an ideal risk measure since it penalizes profits as well as losses.
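
A minimal sketch of this point, using invented P&L numbers: the sample standard deviation treats a large gain exactly like a large loss, whereas a downside-only measure does not.

    import numpy as np

    # Hypothetical daily P&L observations (in millions)
    pnl = np.array([1.2, -0.8, 0.5, -2.1, 3.0, -0.4, 0.9, -1.5])

    # Standard deviation penalizes the +3.0 gain as much as a -3.0 loss would
    print(np.std(pnl, ddof=1))
    # A crude downside-only measure for comparison: dispersion of losses alone
    print(np.std(np.minimum(pnl, 0.0), ddof=1))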

Value at risk

The 1994 technical document popularized VaR as the risk measure of choice among investment banks seeking to measure portfolio risk for the benefit of banking regulators. VaR is a downside risk measure, meaning that it typically focuses on losses.
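
As an illustration (not the RiskMetrics implementation itself), a one-day 95% VaR can be read off a P&L distribution as the loss exceeded on only 5% of outcomes; the normal P&L sample below is hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)
    pnl = rng.normal(0.0, 1.0e6, size=10_000)  # hypothetical daily P&L draws

    # 95% VaR: the loss exceeded on only 5% of days, reported as a positive number
    var_95 = -np.percentile(pnl, 5)
    print(f"1-day 95% VaR: {var_95:,.0f}")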

Expected shortfall

A third commonly used risk measure is expected shortfall, also known variously as expected tail loss, XLoss, conditional VaR, or CVaR.
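
A sketch of how the two measures relate, on the same hypothetical P&L sample as above: expected shortfall averages the losses beyond the VaR threshold, so it is always at least as large as VaR.

    import numpy as np

    rng = np.random.default_rng(0)
    pnl = rng.normal(0.0, 1.0e6, size=10_000)  # hypothetical daily P&L draws

    var_95 = -np.percentile(pnl, 5)
    # Expected shortfall: the average loss on the days when the loss exceeds VaR
    es_95 = -pnl[pnl <= -var_95].mean()
    print(f"95% VaR: {var_95:,.0f}  95% ES: {es_95:,.0f}")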

Marginal VaR

The Marginal VaR of a position with respect to a portfolio can be thought of as the amount of risk that the position is adding to the portfolio. It can be formally defined as the difference between the VaR of the total portfolio and the VaR of the portfolio without the position.

To measure the effect of changing positions on portfolio risk, individual VaRs are insufficient. Volatility measures the uncertainty in the return of an asset, taken in isolation. When this asset belongs to a portfolio, however, what matters is the contribution to portfolio risk.

— Philippe Jorion (2007)
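
Following the formal definition above, a minimal sketch: marginal VaR is obtained by computing portfolio VaR with and without the position. The per-position scenario P&L and its covariance matrix are invented for illustration.

    import numpy as np

    def var(pnl, alpha=0.95):
        # Loss quantile, reported as a positive number
        return -np.percentile(pnl, 100 * (1 - alpha))

    rng = np.random.default_rng(1)
    # Hypothetical scenario P&L per position (rows: scenarios, columns: positions)
    pnl = rng.multivariate_normal(mean=[0.0, 0.0, 0.0],
                                  cov=[[4.0, 1.0, 0.0],
                                       [1.0, 9.0, 2.0],
                                       [0.0, 2.0, 1.0]],
                                  size=50_000)

    # Marginal VaR of position 0: portfolio VaR minus VaR without the position
    marginal_var_0 = var(pnl.sum(axis=1)) - var(pnl[:, 1:].sum(axis=1))
    print(marginal_var_0)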

Incremental risk

Incremental risk statistics provide information regarding the sensitivity of portfolio risk to changes in the position holding sizes in the portfolio.

An important property of incremental risk is additivity: the sum of the incremental risks of the positions in a portfolio equals the total risk of the portfolio. This property has important applications in the allocation of risk to different units, where the goal is to keep the sum of the risks equal to the total risk.

Since there are three risk measures covered by RiskMetrics, there are three incremental risk measures: Incremental VaR (IVaR), Incremental Expected Shortfall (IES), and Incremental Standard Deviation (ISD).

Incremental statistics also have applications to portfolio optimization. A portfolio with minimum risk will have incremental risk equal to zero for all positions. Conversely, if the incremental risk is zero for all positions, the portfolio is guaranteed to have minimum risk only if the risk measure is subadditive.
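
For incremental standard deviation, the additivity property can be checked directly with the standard Euler allocation, under which position i contributes w_i * (cov @ w)_i / sigma_p; the weights and covariance matrix below are hypothetical.

    import numpy as np

    # Hypothetical position sizes and risk factor covariance matrix
    w = np.array([2.0, -1.0, 3.0])
    cov = np.array([[4.0, 1.0, 0.5],
                    [1.0, 9.0, 2.0],
                    [0.5, 2.0, 1.0]])

    sigma_p = np.sqrt(w @ cov @ w)        # total portfolio standard deviation
    contrib = w * (cov @ w) / sigma_p     # Euler contribution of each position
    assert np.isclose(contrib.sum(), sigma_p)  # contributions add up to the total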

Coherent risk measures

A coherent risk measure satisfies the following four properties:

1. Subadditivity

A risk measure is subadditive if for any portfolios A and B, the risk of A+B is never greater than the risk of A plus the risk of B. In other words, the risk of the sum of subportfolios is smaller than or equal to the sum of their individual risks.

Standard deviation and expected shortfall are subadditive, while VaR is not; a numerical illustration follows the list below.

Subadditivity is required in connection with aggregation of risks across desks, business units, accounts, or subsidiary companies. This property is important when different business units calculate their risks independently and we want to get an idea of the total risk involved. Lack of subadditivity could also be a matter of concern for regulators, where firms might be motivated to break up into affiliates to satisfy capital requirements.

2. Translation invariance

Adding cash to the portfolio decreases its risk by the same amount.

3. Positive homogeneity of degree 1

If we double the size of every position in a portfolio, the risk of the portfolio will be twice as large.

4. Monotonicity

If losses in portfolio A are larger than losses in portfolio B for all possible risk factor return scenarios, then the risk of portfolio A is higher than the risk of portfolio B.
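
To see why VaR can fail subadditivity, consider a textbook-style illustration (all numbers invented): two independent loans each default with 4% probability, so each has no loss at the 95% level, yet the combined portfolio breaches it.

    import numpy as np

    def var(pnl, alpha=0.95):
        return -np.percentile(pnl, 100 * (1 - alpha))

    rng = np.random.default_rng(2)
    n = 200_000
    # Two independent loans: each loses 100 with probability 4%, else gains 1
    pnl_a = np.where(rng.random(n) < 0.04, -100.0, 1.0)
    pnl_b = np.where(rng.random(n) < 0.04, -100.0, 1.0)

    print(var(pnl_a), var(pnl_b))   # ~ -1 each: no loss at the 95% level
    print(var(pnl_a + pnl_b))       # ~ 99: exceeds var(a) + var(b), not subadditive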

Assessing risk measures

The estimate of any risk measure can be off by a considerable margin. If the imprecise estimate gives no good sense of what the true value could be, it is virtually worthless. Good risk measurement practice is therefore to supplement any estimated risk measure with an indicator of its precision, or of the size of its error.

There are various ways to quantify the error of some estimates. One approach is to estimate a confidence interval of the risk measurement.
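
One sketch of that approach: bootstrap resampling of a hypothetical P&L history gives a rough confidence interval around the VaR estimate.

    import numpy as np

    rng = np.random.default_rng(3)
    pnl = rng.normal(0.0, 1.0, size=500)   # hypothetical one-day P&L history

    def var(sample, alpha=0.95):
        return -np.percentile(sample, 100 * (1 - alpha))

    # Bootstrap: re-estimate VaR on resampled histories to gauge estimation error
    boot = np.array([var(rng.choice(pnl, size=pnl.size, replace=True))
                     for _ in range(2_000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"VaR estimate {var(pnl):.2f}, 95% confidence interval [{lo:.2f}, {hi:.2f}]")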

Market models

RiskMetrics describes three models of the risk factors that define financial markets.

Covariance approach

The first is very similar to the mean-covariance approach of Markowitz. Markowitz assumed that the asset covariance matrix can be observed. The covariance matrix can be used to compute portfolio variance. RiskMetrics assumes that the market is driven by risk factors with observable covariance. The risk factors are represented by time series of prices or levels of stocks, currencies, commodities, and interest rates. Instruments are evaluated from these risk factors via various pricing models. The portfolio itself is assumed to be some linear combination of these instruments.
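
A minimal sketch of the covariance approach under the stated linearity assumption: with hypothetical dollar sensitivities w and a daily return covariance matrix, portfolio volatility is sqrt(w' * cov * w), and a normal 95% VaR is 1.645 (the standard normal 95% quantile) times that.

    import numpy as np

    # Hypothetical linear (delta) exposures to three risk factors, in dollars
    w = np.array([1.0e6, -0.5e6, 2.0e6])
    # Hypothetical covariance matrix of daily risk factor returns
    cov = np.array([[1.0e-4, 2.0e-5, 0.0],
                    [2.0e-5, 4.0e-4, 5.0e-5],
                    [0.0,    5.0e-5, 9.0e-5]])

    sigma_p = np.sqrt(w @ cov @ w)   # one-day portfolio P&L volatility
    var_95 = 1.645 * sigma_p         # 1.645 = standard normal 95% quantile
    print(f"sigma: {sigma_p:,.0f}  parametric 95% VaR: {var_95:,.0f}")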

Historical simulation

The second market model assumes that the market only has finitely many possible changes, drawn from a risk factor return sample of a defined historical period. Typically one performs a historical simulation by sampling from past day-on-day risk factor changes, and applying them to the current level of the risk factors to obtain risk factor price scenarios. These perturbed risk factor price scenarios are used to generate a profit (loss) distribution for the portfolio.

This method has the advantage of simplicity, but as a model, it is slow to adapt to changing market conditions. It also suffers from simulation error, as the number of simulations is limited by the historical period (typically between 250 and 500 business days).
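
A sketch of a historical simulation under these assumptions, with an invented factor history and linear holdings: past day-on-day changes are applied to current levels, and the portfolio is repriced in each scenario.

    import numpy as np

    rng = np.random.default_rng(4)
    # Invented 251-day history of two risk factor levels (random walk)
    levels = 100.0 * np.exp(np.cumsum(rng.normal(0, 0.01, size=(251, 2)), axis=0))

    current = levels[-1]
    day_on_day = levels[1:] / levels[:-1]   # 250 historical one-day changes
    scenarios = current * day_on_day        # perturbed risk factor levels

    units = np.array([1_000.0, -500.0])     # hypothetical linear holdings
    pnl = (scenarios - current) @ units     # repriced P&L in each scenario
    print(f"historical-simulation 95% VaR: {-np.percentile(pnl, 5):,.2f}")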

Monte Carlo simulation

The third market model assumes that the logarithm of the return, or log-return, of any risk factor typically follows a normal distribution. Collectively, the log-returns of the risk factors are multivariate normal. A Monte Carlo simulation generates random market scenarios drawn from that multivariate normal distribution. For each scenario, the profit (loss) of the portfolio is computed. This collection of profit (loss) scenarios provides a sampling of the profit (loss) distribution from which one can compute the risk measures of choice.
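
A sketch under the stated multivariate normal log-return assumption, with hypothetical covariances and holdings: scenarios are drawn, the portfolio is repriced, and VaR and expected shortfall are read from the simulated P&L.

    import numpy as np

    rng = np.random.default_rng(5)
    # Hypothetical covariance of daily log-returns for two risk factors
    cov = np.array([[1.0e-4, 3.0e-5],
                    [3.0e-5, 2.5e-4]])
    current = np.array([100.0, 50.0])       # current risk factor levels
    units = np.array([1_000.0, 2_000.0])    # hypothetical holdings

    # Draw multivariate normal log-returns and reprice the (linear) portfolio
    log_ret = rng.multivariate_normal(np.zeros(2), cov, size=100_000)
    pnl = (current * np.exp(log_ret) - current) @ units

    var_95 = -np.percentile(pnl, 5)
    es_95 = -pnl[pnl <= -var_95].mean()
    print(f"Monte Carlo 95% VaR: {var_95:,.2f}  ES: {es_95:,.2f}")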

Criticism

Nassim Taleb in his book The Black Swan (2007) wrote:

Banks are now more vulnerable to the Black Swan than ever before with "scientists" among their staff taking care of exposures. The giant firm J. P. Morgan put the entire world at risk by introducing in the nineties RiskMetrics, a phony method aiming at managing people’s risks. A related method called “Value-at-Risk,” which relies on the quantitative measurement of risk, has been spreading.[2]

from Grokipedia
RiskMetrics Group, Inc. was a New York-based firm specializing in risk analytics, corporate governance, and related solutions for institutional investors, asset managers, hedge funds, and corporations. Founded as a spin-off from J.P. Morgan in 1998, it commercialized the bank's proprietary RiskMetrics methodology—a variance-covariance approach to measuring portfolio value at risk (VaR) using historical data and exponential weighting—which had originated internally in the early 1990s to quantify market risks across asset classes. The company expanded through acquisitions, notably Institutional Shareholder Services (ISS) in 2007 for proxy advisory and governance ratings, went public in January 2008, and was acquired by MSCI Inc. in 2010 for $1.55 billion in a cash-and-stock deal, integrating its tools into MSCI's broader index and analytics platform. While instrumental in standardizing quantitative risk management in global finance, the methodology drew scrutiny for relying on Gaussian assumptions that potentially underestimated extreme events, as evidenced by its limitations during the 2007–2008 financial crisis.

History and Development

Origins at J.P. Morgan

RiskMetrics originated as an internal value-at-risk (VaR) system developed by J.P. Morgan & Co. in the late 1980s to quantify and manage firm-wide market risk amid rising volatility, leverage, and derivatives exposure. The system modeled several hundred key risk factors—including equity prices, foreign exchange rates, commodity prices, and interest rates—using a covariance matrix constructed from historical data, initially updated quarterly. Architected by Till Guldimann, who chaired the firm's market risk committee and had prior experience in asset-liability analysis, the methodology aggregated daily position deltas (reported via email) into a linear representation of the portfolio for risk computation. The core approach assumed normally distributed logarithmic returns and focused on one-day 95% VaR in U.S. dollars, shifting from traditional notional exposure limits to probabilistic risk measures.

Under Chairman Dennis Weatherstone, who emphasized comprehensive risk reporting following events like the 1987 stock market crash and contributions to the Group of Thirty's derivatives study, VaR metrics were integrated into daily 4:15 p.m. meetings by 1990, replacing notional limits with standardized VaR-based thresholds. This internal evolution addressed the limitations of fragmented risk silos, enabling aggregation across trading desks and portfolios through variance-covariance techniques. By 1993, demonstrations of the system at a J.P. Morgan-hosted conference generated external interest, prompting the firm to refine its methodology for broader applicability. The internal framework, while proprietary, formed the foundation for RiskMetrics, which J.P. Morgan's risk group publicly disclosed in October 1994 via a 50-page technical document and freely distributed daily data covering approximately 20 markets, marking a deliberate effort to standardize risk practices industry-wide.

Public Release and Standardization

In October 1994, J.P. Morgan publicly released RiskMetrics, disclosing its internal variance-covariance-based methodology for market risk measurement along with freely available daily datasets covering major markets. The initiative aimed to promote transparency in risk management practices, which lacked a common benchmark at the time, by providing institutions with standardized tools for calculating metrics like value at risk (VaR). This release included initial broad market coverage, with subsequent refinements documented in technical manuals issued between 1994 and 1996. The methodology's emphasis on empirical, data-driven covariance matrices and exponential weighting for volatility forecasting facilitated consistent cross-institutional comparisons, rapidly establishing RiskMetrics as an industry benchmark. By making core components openly accessible without proprietary restrictions, J.P. Morgan encouraged adoption, leading to widespread use among banks, asset managers, and regulators seeking uniform frameworks. This standardization effort addressed fragmentation in pre-1994 practices, where varying assumptions hindered reliable risk aggregation across portfolios. Over the following years, the framework's influence extended through iterative updates, such as the 1996 fourth edition of the RiskMetrics Technical Document, which solidified its role as a standard for integrating historical data into forward-looking risk models. Adoption metrics from the era indicate that by the late 1990s, RiskMetrics underpinned risk reporting for a significant portion of global financial institutions, though critics later noted limitations in assuming normal distributions and historical representativeness.

Corporate Evolution and Acquisition

RiskMetrics Group emerged as an independent entity following its spin-off from J.P. Morgan in September 1998, transitioning from an internal toolset to a standalone provider of commercial risk data, analytics, and software solutions. This separation enabled focused expansion into multi-asset class risk management, with the company reporting compound annual growth rates exceeding 65% in subsequent years. In June 2004, RiskMetrics secured $122 million in funding from investors including General Atlantic, Spectrum Equity Investors, and Technology Crossover Ventures, supporting product development and acquisitions. To diversify beyond traditional market and credit risk, the firm acquired Institutional Shareholder Services (ISS) on January 11, 2007, incorporating corporate governance research, proxy advisory, and voting analytics into its portfolio. RiskMetrics conducted its initial public offering in January 2008, enhancing its capital base for further innovation in risk modeling and enterprise solutions. On March 1, 2010, MSCI Inc. announced a $1.55 billion cash-and-stock acquisition of RiskMetrics, aiming to combine its risk management expertise with MSCI's indexing and analytics platforms. The deal closed on June 1, 2010, after shareholder approval and regulatory clearance, marking the integration of RiskMetrics' methodologies into MSCI's broader ecosystem while preserving key brands and technologies.

Core Components

Risk Measurement Process

The RiskMetrics risk measurement process follows a parametric variance-covariance approach to estimate the distribution of portfolio returns and derive risk metrics such as value at risk (VaR). It begins with the specification of key parameters: the holding period $T$ (typically 1 day, scaled to longer horizons) and the confidence level $\alpha$ (commonly 95% or 99%). Historical daily market data, covering asset classes including interest rates, equities, foreign exchange, and commodities across over 30 countries, serve as inputs.

Portfolio positions are decomposed into cash flows or sensitivities exposed to standardized risk factors, known as RiskMetrics vertices—such as interest rate buckets (e.g., 1-month, 3-month, up to 30-year), equity country indices, FX rates, and commodity prices. Linear instruments map directly via deltas, while nonlinear ones like options use delta-gamma approximations or Monte Carlo revaluations to account for convexity, generating scenarios based on spot prices, strike ratios (0.90–1.12), and time to expiration (1 day to 1 year). Exposures are represented as a vector $\mathbf{w}$ of weighted sensitivities (e.g., betas for equities).

The covariance matrix $\Sigma$ of 1-day risk factor returns is estimated from 1–5 years of historical log returns, assuming conditional multivariate normality with zero mean and no autocorrelation. An exponentially weighted moving average (EWMA) model updates forecasts recursively: for variances, $\sigma_{t,1}^2 = \lambda \sigma_{t-1,1}^2 + (1-\lambda) r_{t-1}^2$; for covariances, $\sigma_{ij,t,1}^2 = \lambda \sigma_{ij,t-1,1}^2 + (1-\lambda) r_{i,t-1} r_{j,t-1}$, using a decay factor $\lambda = 0.94$ for daily horizons (0.97 for monthly) to weight recent data more heavily, equivalent to about 75 effective daily observations. Portfolio variance is then $\mathbf{w}^\top \Sigma \mathbf{w}$, yielding the standard deviation $\sigma_p = \sqrt{\mathbf{w}^\top \Sigma \mathbf{w}}$, from which the VaR at confidence level $\alpha$ follows as the corresponding standard normal quantile multiplied by $\sigma_p$.
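
A compact sketch of the EWMA recursion with the daily decay factor lambda = 0.94; the return series below is simulated rather than market data.

    import numpy as np

    lam = 0.94                            # RiskMetrics decay factor for daily data
    rng = np.random.default_rng(6)
    r = rng.normal(0.0, 0.01, size=500)   # simulated daily log-returns

    sigma2 = r[:30].var()                 # seed the recursion from an initial window
    for ret in r[30:]:
        # EWMA update: sigma_t^2 = lam * sigma_{t-1}^2 + (1 - lam) * r_{t-1}^2
        sigma2 = lam * sigma2 + (1 - lam) * ret ** 2

    print(f"EWMA one-day volatility forecast: {np.sqrt(sigma2):.4%}")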