Lessons from a $9 Billion Spreadsheet
Mar 14, 2013
Quantitative modeling has long been used across a broad range of applications in the financial sector, including asset valuation, forecasting risk exposures, and estimating returns to a particular portfolio. More recently, developments in computing power, along with the proliferation of statistical methods and the availability of data resources, have allowed for increased use of more complex models. However, as data-driven modeling methods have become more sophisticated, they have also become increasingly opaque.
Financial models take on many shapes and sizes, from simple spreadsheets to proprietary enterprise-level software. It is common practice for quality assurance testing and validation to be part of the development process for complex software applications, and users are trained in their implementation. However, while spreadsheets remain a critical part of firms’ analytical and monitoring practices, the same testing, validation, and control processes are often overlooked in their development. In fact, all models, including simple and complex spreadsheets, are subject to model risk. This type of risk is inherent because models are simplifications of the workings of real-life markets, summarizing complex interactions into discrete quantifiable metrics, which, by definition, will produce imperfect results.
A spreadsheet may be used simply to discount cash flows to determine an asset’s value. However, even a basic discounted cash flow model must use a cost of capital measure, which requires a number of assumptions. In other cases, models may comprise a system of spreadsheets that pull in data from a number of sources (potentially public and proprietary), perform complex manipulations and calculations, and present the user with a suite of customizable options. More complex models, which rely on multiple data sources and use advanced statistical methods, may yield even greater risk exposure because the spreadsheet must incorporate a greater number of assumptions and could be prone to human error.
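Even the "simple" case above embeds assumptions. A minimal sketch of a discounted cash flow valuation in Python makes this concrete; the 8% cost of capital and the flat $100 cash flows here are illustrative assumptions, not figures from any real firm:

```python
# A minimal sketch of a discounted cash flow (DCF) valuation.
# The cost-of-capital figure below is an illustrative assumption.

def present_value(cash_flows, cost_of_capital):
    """Discount a series of annual cash flows back to today's value."""
    return sum(cf / (1 + cost_of_capital) ** t
               for t, cf in enumerate(cash_flows, start=1))

# Five years of $100 cash flows, discounted at an assumed 8% cost of capital.
value = present_value([100] * 5, 0.08)
print(round(value, 2))  # 399.27
```

Note how sensitive the result is to the cost-of-capital assumption: changing 8% to 10% drops the value to about 379, a roughly 5% swing from a single input.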
This type of scenario unfolded last year in a large-scale example of realized model risk, widely publicized as the "London Whale" trading scandal. JPMorgan carried out a series of trades that resulted in an estimated loss of $9 billion. The model JPMorgan used to estimate value-at-risk (VaR), a measure of potential loss on an investment, was based on “a series of Excel spreadsheets, which had to be completed manually, by a process of copying and pasting data from one spreadsheet to another.” The complexity of the model, combined with the manual effort required to update the data inputs, introduced substantial room for human error. Indeed, a post-mortem review identified an error in the calculation of the relative changes in hazard rates and correlation estimates. Specifically, the model divided the difference between the old and new rates by their sum rather than their average, as the modeler had intended. This error had the effect of muting volatility by a factor of two and thus lowering the VaR, which exposed the firm to substantially more risk than anticipated.
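The arithmetic behind the reported bug is easy to see. A toy illustration (not JPMorgan's actual code, and using hypothetical rate values) shows why dividing by the sum instead of the average halves every computed relative change:

```python
# Toy illustration of the reported bug: dividing a rate change by the
# sum of the old and new rates, instead of by their average, produces
# exactly half the intended value, muting volatility by a factor of two.

def relative_change_intended(old, new):
    # Intended: divide the difference by the average of the two rates.
    return (new - old) / ((old + new) / 2)

def relative_change_buggy(old, new):
    # Bug as reported: divide by the sum instead of the average.
    return (new - old) / (old + new)

old_rate, new_rate = 0.020, 0.025  # hypothetical hazard rates
print(relative_change_intended(old_rate, new_rate))  # ~0.2222
print(relative_change_buggy(old_rate, new_rate))     # ~0.1111, half as large
```

Because the sum is exactly twice the average, the buggy formula understates every relative change by a factor of two, and a volatility series built from those changes is muted by the same factor.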
The JPMorgan experience, while extreme, provides an important lesson: improperly designed quantitative models, even spreadsheets, can lead to disastrous results. While it is imperative that developers have the requisite expertise, any financial institution relying on a model in its decision-making must also ensure the model goes through rigorous, independent review prior to implementation. Likewise, the integrity of the data and the model structure should not be sacrificed in favor of complexity.
To read more about the outcome of the JPMorgan trading scandal, see the Senate report and hearings on the incident.