What Is Value at Risk?
Value at Risk (VaR) estimates the maximum expected loss on a portfolio over a specific time period at a given confidence level. It compresses the entire downside risk of a position into a single number - making it the most widely used risk metric in finance.
Here's what that means concretely. If someone tells you the 95% daily VaR of a portfolio is £1 million, they're saying: "On 95% of trading days, we expect to lose less than £1 million. On the remaining 5% of days, losses could exceed that figure." It's a threshold, not a ceiling.
VaR became the standard risk metric in the 1990s after JPMorgan published its RiskMetrics methodology and made it freely available. Regulators quickly adopted it for bank capital requirements, and by 2026 it remains the foundation of risk management across banks, hedge funds, asset managers, and insurance companies. Every trading desk has VaR limits. Every risk report includes VaR numbers. Every regulatory framework references it.
Three inputs define a VaR calculation:
- Confidence level - typically 95% or 99%
- Time horizon - typically one day or ten days
- Portfolio composition - the assets, positions, and their sensitivities
The output is a single number expressed in currency terms: the loss threshold that won't be exceeded with the stated probability over the stated time period.
How to Interpret VaR
VaR tells you the boundary between normal losses and tail losses at your chosen confidence level. It does not tell you how large losses can get once you're past that boundary - and this distinction is critical for anyone working with the metric.
What the confidence level means. A 95% daily VaR of £500,000 means that on 95 out of every 100 trading days, you expect losses to be smaller than £500,000. On the remaining 5 days, losses will exceed £500,000. A 99% VaR is a stricter measure - it looks further into the tail. A 99% daily VaR of £1.2 million means you expect to breach that level roughly 2-3 times per year (since there are about 252 trading days).
| Confidence Level | Expected Breaches Per Year (Daily VaR) | Typical Use |
|---|---|---|
| 90% | ~25 days | Internal stress testing |
| 95% | ~13 days | Internal risk limits |
| 99% | ~2.5 days | Regulatory capital |
| 99.9% | ~0.25 days | Economic capital models |
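The breach counts in the table follow directly from the confidence level. A quick check, assuming roughly 252 trading days per year:

```python
trading_days = 252

for confidence in (0.90, 0.95, 0.99, 0.999):
    # Expected number of VaR breaches = trading days * tail probability
    expected_breaches = trading_days * (1 - confidence)
    print(f"{confidence:.1%} VaR: ~{expected_breaches:.1f} breaches per year")
```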
What the time horizon means. Daily VaR is the standard for trading desks because positions are marked to market daily and can usually be adjusted within a day. Ten-day VaR is used for regulatory capital calculations under Basel frameworks because it approximates the time needed to liquidate a trading book in stressed conditions. For a portfolio of liquid equities, one day makes sense. For illiquid credit positions, longer horizons are more appropriate.
A common approximation converts daily VaR to a different horizon using the square-root-of-time rule:
VaR_T = VaR_1 * sqrt(T)
So a daily VaR of £1 million scales to a 10-day VaR of approximately £1 million * sqrt(10) = £3.16 million. This rule assumes returns are independent and identically distributed - a simplification that breaks down during market crises when serial correlation and volatility clustering appear.
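As a quick check of that arithmetic, here's the same scaling in Python (a minimal sketch using the £1 million example above):

```python
import numpy as np

daily_var = 1_000_000  # £1 million daily VaR
horizon_days = 10

# Square-root-of-time rule: assumes i.i.d. returns
var_10d = daily_var * np.sqrt(horizon_days)
print(f"10-day VaR: £{var_10d:,.0f}")  # ~£3,162,278
```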
What VaR does not tell you. This is where many people get tripped up. VaR is silent on the severity of tail losses. If your 99% daily VaR is £2 million, you know you'll breach that level roughly 2-3 times per year. But will those breaches be £2.1 million or £20 million? VaR doesn't say. This limitation was exposed dramatically during the 2008 financial crisis when institutions that relied on VaR were blindsided by losses many multiples of their reported VaR numbers.
VaR also isn't a worst-case measure. It's a statistical statement about a specific quantile of the loss distribution, not a bound on how bad things can get.
The Three Methods of Calculating VaR
There are three standard approaches to calculating VaR - parametric (variance-covariance), historical simulation, and Monte Carlo simulation. Each makes different trade-offs between simplicity, speed, and the ability to capture complex risk.
Parametric (variance-covariance) VaR assumes returns follow a known distribution, usually normal, and calculates VaR directly from the portfolio's mean and standard deviation. It's fast and simple but breaks down for non-linear portfolios and fat-tailed distributions.
Historical simulation VaR uses actual historical returns to build the loss distribution empirically. It makes no assumptions about the shape of the distribution - whatever happened in the past is the model. The trade-off is that it depends entirely on the historical period being representative of future risk.
Monte Carlo VaR generates thousands of simulated portfolio returns using stochastic models, then extracts VaR from the simulated distribution. It's the most flexible approach - it can handle non-linear instruments, fat tails, and complex correlations - but it's also the most computationally expensive.
| Method | Distributional Assumption | Handles Non-Linearity | Speed | Complexity |
|---|---|---|---|---|
| Parametric | Normal (or specified) | No | Very fast | Low |
| Historical | None (uses actual data) | Partially | Fast | Low |
| Monte Carlo | User-specified models | Yes | Slow | High |
Most institutions use more than one method. A trading desk might run parametric VaR for intraday monitoring because it's instantaneous, historical simulation for end-of-day reporting, and Monte Carlo for portfolios with significant options exposure. The choice depends on the portfolio's complexity and the computational budget available.
Parametric (Variance-Covariance) VaR
Parametric VaR calculates the loss threshold directly from the portfolio's return distribution, typically assuming normality. Given a confidence level, you look up the corresponding z-score, multiply by the portfolio's standard deviation, and that's your VaR.
The formula for a single-asset parametric VaR is:
VaR = (z * sigma - mu) * P
where mu is the expected return (often set to zero for short horizons), z is the z-score for the chosen confidence level (1.645 for 95%, 2.326 for 99%), sigma is the standard deviation of returns, and P is the portfolio value.
For a portfolio of multiple assets, you need to account for correlations. The portfolio variance is:
sigma_p^2 = w^T * C * w
where w is the vector of portfolio weights and C is the covariance matrix of asset returns. The portfolio VaR is then:
VaR = z * sigma_p * P
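As a worked sketch of these two formulas, here's a two-asset portfolio VaR computed from an illustrative covariance matrix (the weights and covariances below are assumptions chosen for the example, not market data):

```python
import numpy as np
from scipy import stats

# Illustrative inputs - assumptions for the example, not market data
weights = np.array([0.6, 0.4])
cov_matrix = np.array([
    [0.0004, 0.0001],
    [0.0001, 0.0003],
])
portfolio_value = 1_000_000  # GBP

# Portfolio variance: w^T * C * w
sigma_p = np.sqrt(weights @ cov_matrix @ weights)

# 99% parametric VaR, taking mu = 0 over a one-day horizon
z = stats.norm.ppf(0.99)  # 2.326
var_99 = z * sigma_p * portfolio_value

print(f"Portfolio daily sigma: {sigma_p:.4%}")
print(f"99% parametric VaR: £{var_99:,.0f}")
```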
Strengths. Parametric VaR is extremely fast. Once you have the covariance matrix, the calculation is a single matrix multiplication. This makes it ideal for real-time monitoring and for large portfolios where simulation would be slow. It's also easy to decompose - you can calculate each asset's contribution to total VaR, which is valuable for portfolio construction and risk budgeting.
Weaknesses. The normal distribution assumption is the core problem. Real financial returns have fatter tails than a normal distribution predicts, meaning parametric VaR systematically underestimates the frequency and magnitude of extreme losses. It also assumes linear relationships between risk factors and portfolio value, which fails for portfolios containing options or other instruments with non-linear payoffs. A portfolio of deep out-of-the-money options has a highly asymmetric return distribution that a normal model simply cannot represent.
Despite these limitations, parametric VaR remains widely used in 2026 because of its speed and interpretability. For portfolios of stocks, bonds, and currencies without significant options exposure, it provides a reasonable first-order approximation of risk.
Historical Simulation VaR
Historical simulation VaR applies actual past returns to your current portfolio to build a loss distribution, then reads VaR directly from the appropriate percentile. It makes no assumptions about the shape of the return distribution - whatever the historical data shows is what you get.
The method works in four steps:
- Collect a window of historical returns for each asset in the portfolio (e.g. the last 500 trading days)
- For each historical day, calculate what the portfolio's return would have been given today's weights and positions
- Sort these hypothetical portfolio returns from worst to best
- The VaR at confidence level alpha is the loss at the (1 - alpha) percentile
For a 95% VaR with 500 historical observations, you'd sort all 500 portfolio returns and take the 25th worst return (since 5% of 500 is 25). That return, applied to the current portfolio value, is your VaR.
Strengths. The most appealing feature is that it captures whatever distributional features exist in the data - fat tails, skewness, and non-normality are all reflected automatically. It also captures historical correlations between assets as they actually occurred, including correlation breakdowns during stress periods. No need to estimate a covariance matrix or assume a distributional form.
Weaknesses. The method assumes the past is a reasonable guide to the future. If your historical window doesn't include a period of market stress, your VaR estimate will be optimistically low. If it includes an unusually volatile period, estimates may be too conservative. The choice of window length creates a trade-off: longer windows provide more data but may include regimes that are no longer relevant; shorter windows are more responsive but produce noisier estimates.
Historical simulation also suffers from a lack of smoothness. Because VaR depends on a single order statistic, adding or removing one day from the window can cause the estimate to jump. And the method can't generate scenarios that haven't already occurred - it can only rearrange history, not extrapolate beyond it.
In practice, many firms use weighted historical simulation, where recent observations receive higher weights than older ones. This addresses the staleness problem while retaining the distribution-free nature of the approach.
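A minimal sketch of the exponentially weighted variant (the decay factor of 0.995 is an illustrative assumption, and `returns` is assumed to be ordered oldest-first):

```python
import numpy as np

def weighted_historical_var(returns, confidence=0.95, decay=0.995,
                            portfolio_value=1_000_000):
    """Exponentially weighted historical VaR - a sketch of the idea.

    Recent observations get higher weight; `decay` controls how quickly
    older days fade. Assumes `returns` is ordered oldest-first.
    """
    returns = np.asarray(returns)
    n = len(returns)

    # Most recent day has age 0 and hence the largest weight
    ages = np.arange(n)[::-1]
    weights = decay ** ages
    weights /= weights.sum()

    # Sort returns ascending (worst first), carrying the weights along
    order = np.argsort(returns)
    sorted_returns = returns[order]
    sorted_weights = weights[order]

    # VaR is the return where cumulative weight first reaches 1 - confidence
    cum_weights = np.cumsum(sorted_weights)
    idx = np.searchsorted(cum_weights, 1 - confidence)
    var_pct = -sorted_returns[idx]
    return var_pct, var_pct * portfolio_value
```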
Monte Carlo VaR
Monte Carlo VaR simulates thousands of possible future portfolio returns using stochastic models, then extracts VaR from the simulated loss distribution. It's the most flexible of the three methods and can handle virtually any combination of assets, non-linearities, and distributional assumptions.
The method follows this structure:
- Specify stochastic models for each risk factor (stock prices, interest rates, volatilities, exchange rates)
- Calibrate the models to current market data, including correlations between risk factors
- Generate a large number of simulated scenarios (typically 10,000 to 1,000,000)
- For each scenario, revalue the entire portfolio
- Sort the resulting profit-and-loss distribution and read off the VaR at the desired percentile
The key advantage is flexibility. Need to model a portfolio with barrier options, convertible bonds, and mortgage-backed securities? Monte Carlo handles it. Want to use a fat-tailed distribution instead of normal? Just change the random number generator. Want to include stochastic volatility or jumps? Add them to the simulation model.
Strengths. Monte Carlo VaR can handle any portfolio complexity, any distributional assumption, and any dependency structure. It naturally captures the non-linear behaviour of options and structured products. You can incorporate probability models that match observed market dynamics far better than the normality assumption. It also produces the full distribution of portfolio outcomes, not just VaR - so you can compute Expected Shortfall, stress scenarios, and other risk measures from the same simulation run.
Weaknesses. Computational cost is the primary drawback. Full portfolio revaluation across 100,000 scenarios requires pricing every instrument 100,000 times. For complex portfolios with exotic derivatives, this can take minutes or hours even on modern hardware in 2026. Statistical noise is another issue - Monte Carlo estimates have sampling error that decreases only as the square root of the number of simulations. Variance reduction techniques (antithetic variates, importance sampling, control variates) help but add implementation complexity.
There's also model risk. Monte Carlo VaR is only as good as the stochastic models driving the simulation. If you simulate stock prices using geometric Brownian motion but real returns exhibit fat tails and jumps, your VaR will be misestimated regardless of how many paths you run.
Calculating VaR in Python
Here are implementations of all three VaR methods using numpy, pandas, and scipy. We'll work with a simple two-asset portfolio to keep the examples clear.
Setting Up the Data
```python
import numpy as np
import pandas as pd

np.random.seed(42)

n_days = 1000
portfolio_value = 1_000_000  # GBP

# Simulate two correlated asset return series
mu = np.array([0.0005, 0.0003])  # daily expected returns
cov_matrix = np.array([
    [0.0004, 0.0001],
    [0.0001, 0.0003]
])
returns = np.random.multivariate_normal(mu, cov_matrix, n_days)
weights = np.array([0.6, 0.4])

# Portfolio daily returns
portfolio_returns = returns @ weights

print(f"Mean daily return: {portfolio_returns.mean():.6f}")
print(f"Std daily return: {portfolio_returns.std():.6f}")
print(f"Sample size: {len(portfolio_returns)} days")
```
Parametric VaR
```python
from scipy import stats

def parametric_var(returns, confidence=0.95, portfolio_value=1_000_000):
    """Calculate parametric (variance-covariance) VaR assuming normal returns."""
    mu = np.mean(returns)
    sigma = np.std(returns)
    z = stats.norm.ppf(1 - confidence)  # negative quantile, e.g. -1.645 at 95%
    var_pct = -(mu + z * sigma)  # loss threshold as a positive fraction
    var_gbp = var_pct * portfolio_value
    return var_pct, var_gbp

var_95_pct, var_95_gbp = parametric_var(portfolio_returns, confidence=0.95)
var_99_pct, var_99_gbp = parametric_var(portfolio_returns, confidence=0.99)

print(f"95% Parametric VaR: {var_95_pct:.4%} = £{var_95_gbp:,.0f}")
print(f"99% Parametric VaR: {var_99_pct:.4%} = £{var_99_gbp:,.0f}")
```
Historical Simulation VaR
```python
def historical_var(returns, confidence=0.95, portfolio_value=1_000_000):
    """Calculate VaR using historical simulation."""
    sorted_returns = np.sort(returns)  # ascending: worst losses first
    index = int((1 - confidence) * len(sorted_returns))
    var_pct = -sorted_returns[index]
    var_gbp = var_pct * portfolio_value
    return var_pct, var_gbp

hist_var_95_pct, hist_var_95_gbp = historical_var(portfolio_returns, confidence=0.95)
hist_var_99_pct, hist_var_99_gbp = historical_var(portfolio_returns, confidence=0.99)

print(f"95% Historical VaR: {hist_var_95_pct:.4%} = £{hist_var_95_gbp:,.0f}")
print(f"99% Historical VaR: {hist_var_99_pct:.4%} = £{hist_var_99_gbp:,.0f}")
```
Monte Carlo VaR
```python
def monte_carlo_var(mu, cov_matrix, weights, n_simulations=100_000,
                    confidence=0.95, portfolio_value=1_000_000):
    """Calculate VaR using Monte Carlo simulation."""
    simulated_returns = np.random.multivariate_normal(mu, cov_matrix, n_simulations)
    portfolio_sims = simulated_returns @ weights
    sorted_returns = np.sort(portfolio_sims)
    index = int((1 - confidence) * n_simulations)
    var_pct = -sorted_returns[index]
    var_gbp = var_pct * portfolio_value
    return var_pct, var_gbp

mc_var_95_pct, mc_var_95_gbp = monte_carlo_var(
    mu, cov_matrix, weights, n_simulations=100_000, confidence=0.95
)
mc_var_99_pct, mc_var_99_gbp = monte_carlo_var(
    mu, cov_matrix, weights, n_simulations=100_000, confidence=0.99
)

print(f"95% Monte Carlo VaR: {mc_var_95_pct:.4%} = £{mc_var_95_gbp:,.0f}")
print(f"99% Monte Carlo VaR: {mc_var_99_pct:.4%} = £{mc_var_99_gbp:,.0f}")
```
Comparing All Three Methods
```python
results = pd.DataFrame({
    "Method": ["Parametric", "Historical", "Monte Carlo"],
    "95% VaR (GBP)": [f"£{var_95_gbp:,.0f}", f"£{hist_var_95_gbp:,.0f}", f"£{mc_var_95_gbp:,.0f}"],
    "99% VaR (GBP)": [f"£{var_99_gbp:,.0f}", f"£{hist_var_99_gbp:,.0f}", f"£{mc_var_99_gbp:,.0f}"],
})
print(results.to_string(index=False))
```
For this simple normally distributed example, all three methods produce similar numbers. The differences become more pronounced with real market data - where returns are fat-tailed and skewed - and with portfolios containing options or other non-linear instruments. That's precisely when the choice of method matters most.
Limitations of VaR
VaR has well-documented shortcomings that every risk manager should understand. Treating VaR as a complete picture of portfolio risk is dangerous - it was designed to answer a narrow question, and it answers only that question.
VaR doesn't tell you how bad losses can get. This is the most important limitation. A 99% daily VaR of £5 million tells you that on roughly 2-3 days per year, losses will exceed £5 million. But the average loss on those days might be £8 million, or £15 million, or £50 million. VaR is completely silent on the severity of tail losses, which is exactly when severity matters most. During the 2008 crisis, several firms experienced losses exceeding 10 times their reported VaR. The metric told them where the cliff edge was but nothing about how far the drop.
The normality assumption in parametric VaR understates tail risk. Financial returns consistently exhibit fat tails - extreme moves occur more often than a normal distribution predicts. The October 1987 crash, the 2008 financial crisis, and the March 2020 COVID sell-off were all events that a normal model would place at probabilities indistinguishable from zero. Using parametric VaR without adjusting for fat tails guarantees systematic underestimation of extreme risk.
VaR is not sub-additive. A mathematically "coherent" risk measure should satisfy the property that diversification doesn't increase risk: the VaR of a combined portfolio should be no larger than the sum of the individual VaRs. VaR can violate this property. In certain cases - the classic examples involve concentrated positions in defaultable bonds or short out-of-the-money options - combining two portfolios can produce a higher VaR than the sum of their individual VaRs. This means a risk aggregation system based on VaR can penalise diversification - the opposite of what you'd want.
VaR is procyclical. In calm markets, volatility is low, historical windows are benign, and VaR estimates shrink. This encourages firms to increase positions and take more risk. When a crisis hits, volatility spikes, VaR estimates surge, and firms are forced to cut positions simultaneously - amplifying the sell-off. VaR naturally runs in the same direction as market cycles, making it a destabilising force at the worst possible time.
Model risk and the illusion of precision. VaR produces a single number with spurious precision. Reporting "the 99% daily VaR is £4,237,891" implies a level of accuracy that the underlying models simply cannot deliver. The estimate depends on distributional assumptions, correlation structures, lookback windows, and data quality - all of which introduce uncertainty. Different reasonable assumptions can produce materially different VaR numbers for the same portfolio.
It's backward-looking. Whether you're using historical simulation (literally replaying history) or parametric VaR (estimated from historical returns), the method assumes the future resembles the past. Regime changes, structural market shifts, and unprecedented events aren't captured. The models that said subprime mortgage portfolios had minimal risk in 2006 were calibrated to a period where national house prices hadn't declined simultaneously.
Expected Shortfall (CVaR)
Expected Shortfall - also called Conditional VaR (CVaR) or Expected Tail Loss - measures the average loss in the worst cases beyond the VaR threshold. Where VaR says "losses won't exceed X with probability alpha", Expected Shortfall says "when losses do exceed X, the average loss is Y".
This directly addresses VaR's biggest weakness. If your 99% daily VaR is £2 million, Expected Shortfall tells you that on those roughly 2-3 worst days per year, you expect to lose an average of, say, £3.5 million. It tells you something about the shape of the tail, not just its location.
Mathematically, for a continuous loss distribution:
ES_alpha = E[L | L > VaR_alpha]
which is the expected value of losses conditional on exceeding the VaR threshold. For historical data, you can calculate it by averaging all losses that exceed the VaR:
ES_alpha = (1 / (n * (1 - alpha))) * sum of all losses exceeding VaR_alpha
Expected Shortfall has several properties that make it theoretically superior to VaR:
It is sub-additive. The Expected Shortfall of a combined portfolio is always less than or equal to the sum of the individual Expected Shortfalls. This means it always rewards diversification and never penalises it - making it a coherent risk measure in the mathematical sense.
It captures tail severity. Two portfolios with identical VaR numbers can have vastly different Expected Shortfalls. A portfolio with thin tails (losses rarely exceed VaR, but when they do, the excess is small) will have a low ES. A portfolio with fat tails (rare but catastrophic losses) will have a high ES. This distinction matters enormously.
Basel III prefers it. The Basel Committee on Banking Supervision shifted from VaR to Expected Shortfall in its Fundamental Review of the Trading Book (FRTB) framework. Under the revised market risk standards being implemented across major jurisdictions in 2026, banks must use 97.5% Expected Shortfall rather than 99% VaR for their internal models approach. The regulatory world has recognised VaR's limitations and moved toward a metric that better captures tail risk.
Here's a Python implementation:
```python
def expected_shortfall(returns, confidence=0.95, portfolio_value=1_000_000):
    """Calculate Expected Shortfall (CVaR) from historical returns."""
    sorted_returns = np.sort(returns)
    cutoff_index = int((1 - confidence) * len(sorted_returns))
    tail_losses = sorted_returns[:cutoff_index]  # the worst (1 - confidence) of days
    es_pct = -np.mean(tail_losses)
    es_gbp = es_pct * portfolio_value
    return es_pct, es_gbp

es_95_pct, es_95_gbp = expected_shortfall(portfolio_returns, confidence=0.95)
es_99_pct, es_99_gbp = expected_shortfall(portfolio_returns, confidence=0.99)

print(f"95% Expected Shortfall: {es_95_pct:.4%} = £{es_95_gbp:,.0f}")
print(f"99% Expected Shortfall: {es_99_pct:.4%} = £{es_99_gbp:,.0f}")
print()
print(f"95% VaR: {var_95_pct:.4%} = £{var_95_gbp:,.0f}")
print(f"99% VaR: {var_99_pct:.4%} = £{var_99_gbp:,.0f}")
```
Expected Shortfall is always at least as large as VaR at the same confidence level. The ratio between them reveals how heavy-tailed the loss distribution is - for a normal distribution, the 99% ES is about 1.15 times the 99% VaR. For real market data with fat tails, the ratio is typically larger, meaning the average tail loss is substantially worse than the VaR threshold.
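The normal-distribution ratio quoted above can be checked analytically: for a mean-zero normal, ES is phi(z) / (1 - alpha) times sigma while VaR is z times sigma, where phi is the standard normal density and z is the alpha-quantile.

```python
from scipy import stats

alpha = 0.99
z = stats.norm.ppf(alpha)  # 2.326

# ES / VaR ratio for a mean-zero normal distribution
ratio = stats.norm.pdf(z) / ((1 - alpha) * z)
print(f"Normal {alpha:.0%} ES/VaR ratio: {ratio:.3f}")  # ~1.146
```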
VaR in Practice: How Banks and Funds Use It
Banks, hedge funds, and asset managers use VaR as the backbone of their day-to-day risk management. The metric appears at every level - from individual trader limits through desk-level aggregation to firm-wide reporting to regulators and boards.
Regulatory capital. Under Basel frameworks, banks must hold capital proportional to their trading book risk. The internal models approach allows banks to use their own VaR models (subject to supervisory approval) to determine capital requirements. The capital charge is typically a multiple of the VaR estimate - for example, three times the 10-day 99% VaR, with the multiplier increasing if backtesting reveals the model is underperforming. The shift to FRTB in 2026 is moving this toward Expected Shortfall, but VaR remains embedded in many transitional and standardised approaches.
Internal risk limits. Trading desks operate within VaR limits set by risk management. A desk might have a daily VaR limit of £10 million at the 99% level. If positions grow and VaR approaches the limit, the desk either reduces risk or requests a temporary limit increase. These limits cascade upward - individual trader limits roll up into desk limits, which roll up into business unit limits, which aggregate to firm-wide VaR.
Desk-level risk management. Risk managers use VaR decomposition to understand where risk concentrates. Component VaR shows how much each position contributes to total portfolio VaR, accounting for diversification benefits. Marginal VaR measures how much VaR changes when you add a small amount of a given asset. These tools help desk managers optimise their risk budget - directing it toward their highest-conviction trades.
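Under the parametric model these decompositions have closed forms: marginal VaR is proportional to (C w)_i / sigma_p, and component VaR is the weight times the marginal. A sketch using illustrative two-asset inputs (the numbers below are assumptions, not market data):

```python
import numpy as np
from scipy import stats

# Illustrative inputs - assumptions, not market data
weights = np.array([0.6, 0.4])
cov = np.array([[0.0004, 0.0001],
                [0.0001, 0.0003]])
portfolio_value = 1_000_000
z = stats.norm.ppf(0.99)

sigma_p = np.sqrt(weights @ cov @ weights)
total_var = z * sigma_p * portfolio_value

# Marginal VaR: change in VaR per small increase in each asset's weight
marginal_var = z * (cov @ weights) / sigma_p * portfolio_value

# Component VaR: each position's contribution; components sum to total VaR
component_var = weights * marginal_var

print(f"Total 99% VaR: £{total_var:,.0f}")
for i, cv in enumerate(component_var):
    print(f"Asset {i + 1} component VaR: £{cv:,.0f}")
assert np.isclose(component_var.sum(), total_var)
```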
Backtesting. VaR models must be validated against reality. The standard approach counts how often actual losses exceed the VaR prediction. For a 99% daily VaR model over 250 trading days, you'd expect roughly 2-3 breaches per year. Significantly more breaches suggest the model is underestimating risk (too many exceptions); significantly fewer suggest it's too conservative (which ties up capital unnecessarily). The Basel traffic light system classifies models into green (0-4 exceptions), yellow (5-9 exceptions), and red (10+ exceptions) zones, with capital multipliers increasing for worse performance.
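A minimal sketch of breach counting and the traffic light classification just described (the function names and inputs are illustrative):

```python
import numpy as np

def count_breaches(daily_pnl, var_forecasts):
    """Count days where the realised loss exceeded the VaR forecast.

    daily_pnl: realised P&L per day (negative values are losses)
    var_forecasts: the positive VaR number forecast for each day
    """
    losses = -np.asarray(daily_pnl)
    return int(np.sum(losses > np.asarray(var_forecasts)))

def basel_traffic_light(breaches):
    """Basel zones for a 99% daily VaR model over 250 trading days."""
    if breaches <= 4:
        return "green"
    if breaches <= 9:
        return "yellow"
    return "red"
```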
Stress testing. VaR is complemented by stress testing, which asks "what happens if?" rather than relying on statistical models. Scenarios might include a 2008-style credit crisis, a sudden interest rate shock, or a geopolitical event. Stress testing fills the gap that VaR leaves - it explores specific tail scenarios rather than relying on historical patterns or distributional assumptions. In 2026, regulators require both VaR-based measures and comprehensive stress testing programmes.
Performance attribution. Some funds use VaR to normalise returns by risk. Return-on-VaR metrics show how much profit a desk generates per unit of risk taken, helping allocate capital to the most efficient risk-takers.
The overarching lesson from decades of VaR usage is clear: VaR is a useful tool within a broader risk framework, but it's a dangerous metric if treated as the sole measure of risk. Major trading losses - from Long-Term Capital Management to the subprime crisis - have repeatedly occurred while reported VaR numbers appeared manageable. The number must be combined with stress testing, Expected Shortfall, position-level analysis, and qualitative judgement to produce a genuine understanding of risk.
Frequently Asked Questions
What is a good Value at Risk number?
There's no universally "good" VaR number - it depends entirely on the portfolio's mandate, the firm's risk appetite, and the confidence level and time horizon used. A high-frequency market-making desk might run a daily VaR of £500,000, while a macro hedge fund might accept £50 million. What matters is that VaR is proportionate to the expected return and consistent with the firm's ability to absorb losses. A VaR that's too high relative to capital is reckless; one that's too low suggests the portfolio isn't taking enough risk to meet its return objectives. The key is understanding what the number means in context, not whether it's "high" or "low" in absolute terms.
What is the difference between VaR and Expected Shortfall?
VaR tells you the threshold loss at a given confidence level - the point where normal losses end and tail losses begin. Expected Shortfall (CVaR) tells you the average loss when you're in the tail - how bad things typically get once VaR is breached. For a 99% confidence level, VaR says "losses won't exceed this amount 99% of the time." Expected Shortfall says "in the worst 1% of cases, the average loss is this amount." Expected Shortfall is always greater than or equal to VaR and provides more information about tail risk. The Basel III FRTB framework has endorsed Expected Shortfall as the preferred regulatory risk metric precisely because it captures what VaR misses.
Can VaR be negative?
Yes, but it depends on convention. If a portfolio has a strong positive expected return, the VaR at lower confidence levels can be negative, meaning even in the specified worst case, the portfolio is still expected to gain money. This typically happens only at low confidence levels (like 90%) for portfolios with high expected returns and low volatility. At 99% confidence, it's rare for VaR to be negative. The sign convention also matters - some practitioners define VaR as a positive number representing a loss, while others use the negative of the return quantile. Make sure you're clear about which convention is in use before interpreting any VaR figure.
How often should VaR models be backtested?
Regulatory standards require ongoing backtesting, typically daily. Each trading day, you compare the actual portfolio profit or loss against the previous day's VaR prediction and record whether a breach occurred. Most institutions review breach counts quarterly and annually against the expected frequency. A 99% daily VaR model should produce roughly 2-3 exceptions per year. If you're seeing significantly more, the model is underestimating risk and needs recalibration. If you're seeing far fewer, the model may be too conservative, tying up excess capital. The Basel traffic light system formalises this assessment by mapping exception counts over 250 days to capital multiplier zones.
Is VaR still relevant in 2026?
Yes, VaR remains deeply embedded in financial risk management in 2026, despite its well-known limitations. It serves as a common language for communicating risk across trading desks, risk committees, and regulators. The regulatory shift toward Expected Shortfall under FRTB doesn't eliminate VaR - it supplements it. Banks still compute VaR for internal limits, backtesting, and legacy regulatory requirements. Hedge funds and asset managers continue to use VaR for position sizing and risk budgeting. The metric has survived because it's simple, intuitive, and provides a useful baseline even if it shouldn't be the only risk measure in the toolkit.
What is the difference between parametric, historical, and Monte Carlo VaR?
Parametric VaR assumes returns follow a normal distribution and calculates VaR analytically from the mean and standard deviation - it's the fastest but least realistic for fat-tailed returns. Historical VaR uses actual past returns without any distributional assumption, making it simple and distribution-free but dependent on the historical window being representative. Monte Carlo VaR simulates thousands of scenarios using stochastic models and can handle any distributional form, non-linear instruments, and complex correlations - but it's computationally expensive. The choice depends on portfolio complexity: parametric works for simple linear portfolios, historical suits most equity and fixed-income books, and Monte Carlo is essential for portfolios with significant options or structured product exposure.