When the COVID-19 outbreak became a global pandemic, financial-markets volatility hit its highest level in more than a decade, amid pervasive uncertainty over the long-term economic impact. Calm has returned to markets in recent months, but volatility continues to trend above its long-term average. Amid persistent uncertainty, financial institutions are seeking to develop more advanced quantitative capabilities to support faster and more accurate decision making.
As financial markets gyrated in recent months, banks faced particular problems calculating value at risk (VAR) across asset classes. Many institutions experienced elevated levels of VAR back-testing exceptions, leading to higher regulatory-capital multipliers. Increases of as much as 30 percent were reported, prompting regulators to apply exemptions in some cases. There were also challenges with valuation adjustments, as derivatives faced snowballing collateral calls and increasing funding costs. Where credit-value-adjustment (CVA) risks were excluded from market risk models, CVA hedges sat “naked” on the balance sheet, leading to significant uplifts in exposures, and therefore in risk-weighted assets (RWAs). One large US dealer was hit with a loss of $950 million stemming from a valuation adjustment (XVA) in the first quarter of 2020. Elsewhere, rising gap risk in illiquid securities catalyzed painful fair-value losses—as high as $200 million in the case of a major Europe-based bank.
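The capital impact of back-testing exceptions follows a well-known mechanism: under the Basel "traffic light" framework, the regulatory multiplier rises with the number of exceptions observed over 250 trading days. A minimal sketch in Python (the zone thresholds and add-ons follow the standard Basel table; the function itself is purely illustrative):

```python
# Illustrative sketch of the Basel "traffic light" mapping from VAR
# back-testing exceptions (over 250 trading days) to the regulatory-capital
# multiplier: base 3.0 plus a zone-dependent add-on.

def capital_multiplier(exceptions: int) -> float:
    """Return the capital multiplier implied by the exception count."""
    add_ons = {5: 0.40, 6: 0.50, 7: 0.65, 8: 0.75, 9: 0.85}
    if exceptions <= 4:        # green zone: no add-on
        return 3.0
    if exceptions >= 10:       # red zone: maximum add-on
        return 4.0
    return 3.0 + add_ons[exceptions]  # yellow zone
```

A move from the green zone to the red zone takes the multiplier from 3.0 to 4.0, an increase of roughly 33 percent, in line with the uplifts reported above.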
In an unpredictable environment, financial modelers were required to come up with solutions but were often stymied by inadequate models or by the need for huge computational power that was not always available. Given the speed of response required, some models were rendered unusable. The inevitable result was increased risk exposure and greater opacity in valuations, sometimes in the absolute values produced and sometimes in the rationale behind specific model outputs.
An imperative to act
With forecasting institutions expecting the global economy to have contracted by about 5 percent in 2020, banks should aim to optimize their trading books and risk positions. This ambition requires more accurate and timely valuations. With those priorities in mind, more advanced models and sufficient computational power are imperatives. Indeed, speed is of the essence.
In response, some leading institutions have started to incorporate advanced techniques into their quantitative armories. In pricing, an area that has experienced a spike in recent activity, several banks are applying machine learning (ML) to enhance traditional models—for example, by calibrating parameters more efficiently. In particular, banks have used neural networks, a type of ML focused on nonlinear and complex data relationships. Advanced machine-learning techniques can do the following:
- speed up calculations, reducing operational costs and allowing real-time risk management of complex products
- animate more complex models that may currently be unusable in practice, and unlock more accurate valuations
- generate high volumes of synthetic but market-consistent data, helping, for example, to offset the disruptive impact of COVID-19-related market moves
One way to implement neural networks is to apply them to pricing, where they can “learn” how to price vanilla calibration instruments under a given (possibly complex) model and then act as pricing engines for new model calibrations. The approach sidesteps one of the most significant challenges associated with ML: parameter interpretability. In this case there is no interpretability issue, because the network works with the original model’s parameters. There is thus no ML “black box,” and the key calibrated parameters can still be interpreted in the original model’s context.
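To make the two-stage workflow concrete, the sketch below trains a small neural network offline to reproduce a toy pricer, then calibrates the model parameter by searching for the value at which the surrogate best matches a set of market quotes. Everything here is a hypothetical stand-in, not a production setup: the exponential "pricing formula," the network size, the training scheme, and the parameter grid are all illustrative.

```python
import math, random

random.seed(0)

# --- Stage 1 (offline): teach the network the pricer ---------------------
# Toy "model": price of a vanilla instrument with strike k under a single
# volatility-like parameter sigma (hypothetical closed form, illustration only).
def true_price(sigma, k):
    return math.exp(-sigma * k)

# Training set: (sigma, strike) pairs spanning the calibration range.
data = [(s / 100.0, k / 10.0)
        for s in range(10, 51, 2) for k in range(5, 21, 3)]
targets = [true_price(s, k) for s, k in data]

H = 8  # hidden units in a one-layer tanh network
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(H)]
b1 = [0.0] * H
w2 = [random.uniform(-1, 1) for _ in range(H)]
b2 = 0.0

def forward(s, k):
    """Network output and hidden activations for inputs (sigma, strike)."""
    h = [math.tanh(w1[i][0] * s + w1[i][1] * k + b1[i]) for i in range(H)]
    return sum(w2[i] * h[i] for i in range(H)) + b2, h

# Plain stochastic gradient descent on squared error.
lr = 0.05
for epoch in range(3000):
    for (s, k), y in zip(data, targets):
        out, h = forward(s, k)
        err = out - y
        for i in range(H):
            grad_h = err * w2[i] * (1 - h[i] ** 2)
            w2[i] -= lr * err * h[i]
            w1[i][0] -= lr * grad_h * s
            w1[i][1] -= lr * grad_h * k
            b1[i] -= lr * grad_h
        b2 -= lr * err

# --- Stage 2 (online): calibrate sigma through the fast surrogate --------
# Market quotes generated here from sigma = 0.30 for the sake of the example.
market_quotes = [(k / 10.0, true_price(0.30, k / 10.0)) for k in range(5, 21, 3)]

def calibration_loss(sigma):
    return sum((forward(sigma, k)[0] - q) ** 2 for k, q in market_quotes)

sigma_hat = min((s / 1000.0 for s in range(100, 501)), key=calibration_loss)
```

Because the search runs over the original model's parameter sigma, the calibrated value retains its meaning in that model; the network is only a fast substitute for the pricer.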
Neural networks can also support future exposure modeling for valuation adjustments (Exhibit 1).
The network can be trained on established samples, such as those relating to the evolution of risk factors and the corresponding cash flows of the products being modeled. The additional efficiency provided by the network makes for improved accuracy and faster processing (Exhibit 2), sparing banks from time-consuming nested Monte Carlo approaches and from less accurate analytical approximations or least-squares-style regressions.
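The exposure workflow can be sketched in the same spirit. In the toy example below, outer paths simulate the risk factor, and a fast pricer replaces the inner simulation that a nested Monte Carlo would otherwise require at every date. The setup is assumed for illustration: a single forward contract under geometric Brownian motion, with the forward's closed-form value standing in for a trained network.

```python
import math, random

random.seed(1)

# Hypothetical setup: one forward contract on an underlying following
# geometric Brownian motion (all parameters illustrative).
S0, K, r, vol, T = 100.0, 100.0, 0.02, 0.25, 1.0
n_paths, n_steps = 500, 12
dt = T / n_steps

def step(s):
    """Advance the underlying by one time step under GBM."""
    z = random.gauss(0.0, 1.0)
    return s * math.exp((r - 0.5 * vol ** 2) * dt + vol * math.sqrt(dt) * z)

# "Fast pricer": closed-form value of the forward at time t, standing in
# for a network that has learned the inner valuation.
def fast_value(s, t):
    return s - K * math.exp(-r * (T - t))

# The slow inner valuation the surrogate replaces: a nested Monte Carlo
# estimate of the same quantity.
def nested_value(s, t, n_inner=200):
    tau = T - t
    total = 0.0
    for _ in range(n_inner):
        z = random.gauss(0.0, 1.0)
        st = s * math.exp((r - 0.5 * vol ** 2) * tau + vol * math.sqrt(tau) * z)
        total += st - K
    return math.exp(-r * tau) * total / n_inner

# Outer simulation: expected positive exposure at each future date.
def epe_profile(value_fn):
    exposures = [0.0] * n_steps
    for _ in range(n_paths):
        s = S0
        for i in range(n_steps):
            s = step(s)
            exposures[i] += max(value_fn(s, (i + 1) * dt), 0.0)
    return [e / n_paths for e in exposures]

profile = epe_profile(fast_value)  # one pricer call per path and date
```

Swapping `fast_value` for `nested_value` in `epe_profile` reproduces the nested Monte Carlo, at the cost of an inner simulation per path and date; the surrogate removes that inner loop entirely.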
There are equally promising applications in real-time portfolio valuation, risk assessment, and margining.
Three steps to deepening ML engagement
Machine learning offers a significant enhancement to conventional quantitative approaches through its ability to interpolate across large data sets and streamline model calibration. Banks would benefit from deepening their ML engagement and testing new use cases, and the uncertain macroeconomic environment should act as a catalyst for doing so. The emphasis initially should be on discrete applications rather than wholesale transformation; use cases can later be extended and expanded across the business.
There is no blueprint for model development, and individual businesses must solve for their own pressing needs. However, the experience of early movers suggests that reliable options for establishing a track record encompass three key steps:
1. Identify quick wins
While ML can help improve numerous calculation processes, it is more useful in some contexts than others. The task for decision makers is to identify potentially winning applications that will help create a positive track record. Likely candidates are models that consume large amounts of time or computing power; ML can both speed up these models and lay the groundwork for scaling their application. Among the applications that have begun to attract attention are valuations of level-3 assets, XVA calculations, profit-and-loss attributions (“P&L explains”), adaptations for the Fundamental Review of the Trading Book (FRTB), and stress testing.
A “discovery phase” of an ML transformation could proceed as follows:
- Identify concrete cases based on accepted criteria, such as the complexity of models, exposure in books, or computational bottlenecks. For example, complex, hard-to-value derivatives such as structured callable trades could be good targets.
- Size the estimated impact and align various stakeholder groups.
- Create an action plan, including the effort and time required for implementing the identified use cases.
2. Build capabilities to embrace a culture enabled by machine learning
Machine learning has the potential to create significant efficiencies in a range of activities. However, financial institutions cannot maximize the ML opportunity without acquiring the necessary capabilities to build, maintain, and apply ML-enabled models. They must also take steps to help employees understand and exploit potential benefits so that ML is embedded in the culture of the organization.
This could be achieved by following through with the earlier approach and establishing and executing pilot programs to implement prioritized use cases. During these pilots, the following practices can be applied:
- build capabilities via learning on the job
- understand typical challenges and pitfalls and how to solve them
- acquire continuous feedback on how new applications can fit into the wider organization
3. Roll out at scale
Over time, sprints, prototypes, and quick wins will have accumulated sufficiently to create the conditions for a more sustained machine-learning rollout. Assuming a critical mass of use cases, quant teams should move to integrate ML into a wider range of activities. They may begin with the front office and extend into risk, finance, compliance, and research.
A plan to scale up the machine-learning program could include the following activities:
- strategic execution of identified priority use cases
- continuous exploration of additional areas where ML could be relevant, such as anti–money laundering, know your customer, or cybersecurity
- updating risk-management practices, such as model governance and risk assessment, to monitor and control new risks introduced by ML
Machine learning has the potential to enable institutions to do more in capital markets, to move faster, and to move with greater accuracy. The working conditions created during the pandemic have accelerated reliance on digital access and the data-driven environment. Given these factors, machine learning could easily begin to migrate into mainstream operations. With this in mind, firms must not delay in building their capabilities. They must experiment, develop use cases, and move quickly to the production of machine-learning-enhanced models. Those that create and execute a sensible implementation strategy are likely to emerge from the current crisis stronger, more assured of risk exposures, and better prepared for what lies ahead.