Predicting stock price movements remains one of the central challenges in quantitative finance. While perfect prediction is impossible, volatility provides a statistical framework for understanding the probable range of future prices. By measuring how much a stock's returns have historically fluctuated, we can construct probability cones that visualize where prices might reasonably land.
This article implements a complete Monte Carlo simulation engine in Python that projects future stock prices using Geometric Brownian Motion. We'll fetch real market data, calculate historical volatility, generate thousands of simulated price paths, and visualize the results as probability cones with configurable confidence intervals.
Most algo trading content gives you theory.
This gives you the code. 3 Python strategies. Fully backtested. Colab notebook included.
Plus a free ebook with 5 more strategies the moment you subscribe. 5,000 quant traders already run these:
Subscribe | AlgoEdge Insights
This article covers:
- Section 1: Understanding volatility as a measure of uncertainty and the mathematical foundation behind volatility cones
- Section 2: Complete Python implementation including data fetching, volatility calculation, Monte Carlo simulation, and visualization
- Section 3: Interpreting the simulation results and understanding what the probability cones reveal
- Section 4: Practical use cases for traders, risk managers, and portfolio analysts
- Section 5: Limitations of the GBM model and important caveats for real-world application
1. Volatility and the Mathematics of Uncertainty
Volatility quantifies the degree of variation in a stock's returns over time. Think of it as a measure of "wiggliness" — a highly volatile stock experiences large swings in both directions, while a low-volatility stock moves in smaller, more predictable increments. This statistical property forms the foundation for options pricing, risk management, and the probability cones we'll build.
Mathematically, we measure volatility as the standard deviation of logarithmic returns:
σ = √[Σ(rᵢ - r̄)² / (N-1)]
Where σ represents volatility, rᵢ is the return on day i, r̄ is the mean return, and N is the number of observations. We use logarithmic returns rather than simple percentage changes because they're additive across time and symmetric for gains and losses — properties that make the math cleaner.
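Both properties are easy to verify numerically. The snippet below is a minimal check: summing daily log returns recovers the total log return over the period, and an up-move followed by the reverse down-move cancels exactly.

```python
import numpy as np

prices = np.array([100.0, 110.0, 99.0])

# Additive: daily log returns sum to the total log return over the window
log_returns = np.log(prices[1:] / prices[:-1])
total_return = np.log(prices[-1] / prices[0])
print(np.isclose(log_returns.sum(), total_return))  # True

# Symmetric: 100 -> 110 -> 100 gives log returns that cancel exactly,
# whereas simple returns (+10.00%, then -9.09%) do not sum to zero
up = np.log(110 / 100)
down = np.log(100 / 110)
print(np.isclose(up + down, 0.0))  # True
```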
The volatility cone emerges from a key insight: if we know today's price and the stock's historical volatility, we can project a range of probable future prices. The cone widens as we look further into the future because uncertainty compounds. A stock might be 2% higher or lower tomorrow, but over 30 days, those daily fluctuations accumulate into a much wider range of outcomes.
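The compounding of uncertainty can be made concrete. Assuming independent daily moves with a 2% daily standard deviation (an illustrative figure), the standard deviation of the cumulative return grows with the square root of the horizon:

```python
import numpy as np

daily_sigma = 0.02  # assumed 2% daily standard deviation

# Under independent daily moves, cumulative uncertainty scales with sqrt(t),
# not linearly: 30 days gives ~11%, not 30 * 2% = 60%
for days in [1, 5, 30]:
    horizon_sigma = daily_sigma * np.sqrt(days)
    print(f"{days:>2} days: ±{horizon_sigma:.1%} (one standard deviation)")
```

This square-root scaling is exactly what gives the cone its shape: fast widening at first, then a gradual flattening.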
To generate these projections, we employ Geometric Brownian Motion (GBM), which models stock prices as following the stochastic differential equation: dS = μS dt + σS dW. Here, S is the stock price, μ is the drift (expected return), σ is volatility, and dW represents random Brownian increments. This model assumes returns are normally distributed and independent across time — assumptions we'll revisit in the limitations section.
2. Python Implementation
2.1 Setup and Parameters
Our implementation requires several configurable parameters that control the simulation behavior. The ticker symbol determines which stock to analyze, the lookback period defines how much historical data to use for volatility calculation, and the projection window sets how far into the future we simulate. The number of simulations controls the granularity of our probability estimates — more simulations yield smoother probability distributions but require more computation.
import numpy as np
import pandas as pd
import yfinance as yf
import matplotlib.pyplot as plt
from datetime import datetime, timedelta
# Configuration parameters
TICKER = "AAPL"
LOOKBACK_DAYS = 252 # Trading days for volatility calculation
PROJECTION_DAYS = 60 # Days to project forward
NUM_SIMULATIONS = 10000 # Monte Carlo paths
TRADING_DAYS_PER_YEAR = 252
CONFIDENCE_INTERVALS = [0.5, 0.75, 0.95] # Probability bands
# Volatility multipliers for scenario analysis
VOLATILITY_MULTIPLIERS = {
    'low': 0.75,
    'base': 1.0,
    'high': 1.5
}
2.2 Data Fetching and Volatility Calculation
We fetch historical price data using yfinance and calculate both the annualized volatility and the drift term. The drift represents the expected daily return, while volatility captures the dispersion around that expectation. We annualize the daily volatility by multiplying by the square root of trading days per year.
def fetch_stock_data(ticker: str, lookback_days: int) -> pd.DataFrame:
    """Fetch historical stock data and calculate returns."""
    end_date = datetime.now()
    start_date = end_date - timedelta(days=int(lookback_days * 1.5))
    stock_data = yf.download(ticker, start=start_date, end=end_date, progress=False)
    stock_data = stock_data.tail(lookback_days + 1)
    # Calculate log returns for GBM compatibility
    stock_data['Log_Returns'] = np.log(stock_data['Close'] / stock_data['Close'].shift(1))
    stock_data = stock_data.dropna()
    return stock_data
def calculate_volatility_metrics(stock_data: pd.DataFrame) -> dict:
    """Calculate volatility and drift from historical data."""
    log_returns = stock_data['Log_Returns'].values
    daily_volatility = np.std(log_returns, ddof=1)
    annual_volatility = daily_volatility * np.sqrt(TRADING_DAYS_PER_YEAR)
    daily_drift = np.mean(log_returns)
    return {
        'daily_volatility': daily_volatility,
        'annual_volatility': annual_volatility,
        'daily_drift': daily_drift,
        'current_price': stock_data['Close'].iloc[-1].item()
    }
# Fetch and process data
stock_data = fetch_stock_data(TICKER, LOOKBACK_DAYS)
metrics = calculate_volatility_metrics(stock_data)
print(f"Current Price: ${metrics['current_price']:.2f}")
print(f"Annual Volatility: {metrics['annual_volatility']:.1%}")
print(f"Daily Drift: {metrics['daily_drift']:.4%}")
2.3 Monte Carlo Simulation Engine
The simulation engine generates thousands of possible price paths using the GBM model. Each path represents one possible future, with daily price changes driven by both the expected drift and random volatility shocks. By running many simulations, we build a distribution of outcomes at each future time point.
def run_monte_carlo_simulation(
    current_price: float,
    daily_drift: float,
    daily_volatility: float,
    projection_days: int,
    num_simulations: int,
    volatility_multiplier: float = 1.0
) -> np.ndarray:
    """
    Generate Monte Carlo price paths using Geometric Brownian Motion.
    Returns array of shape (num_simulations, projection_days + 1)
    """
    adjusted_volatility = daily_volatility * volatility_multiplier
    # Pre-allocate price path array
    price_paths = np.zeros((num_simulations, projection_days + 1))
    price_paths[:, 0] = current_price
    # Generate random shocks for all paths and days at once
    random_shocks = np.random.standard_normal((num_simulations, projection_days))
    # GBM discrete step: S(t+1) = S(t) * exp((μ - σ²/2)dt + σ√dt * Z), with dt = 1 day.
    # Note: daily_drift was estimated as the mean LOG return, which already equals
    # μ - σ²/2 under GBM, so we use it directly rather than subtracting σ²/2 again.
    drift_term = daily_drift
    for day in range(projection_days):
        price_paths[:, day + 1] = price_paths[:, day] * np.exp(
            drift_term + adjusted_volatility * random_shocks[:, day]
        )
    return price_paths
def calculate_percentile_bands(
    price_paths: np.ndarray,
    confidence_intervals: list
) -> dict:
    """Calculate percentile bands from simulation results."""
    bands = {}
    for ci in confidence_intervals:
        lower_pct = (1 - ci) / 2 * 100
        upper_pct = (1 + ci) / 2 * 100
        bands[ci] = {
            'lower': np.percentile(price_paths, lower_pct, axis=0),
            'upper': np.percentile(price_paths, upper_pct, axis=0),
            'median': np.percentile(price_paths, 50, axis=0)
        }
    return bands
# Run simulations for each volatility scenario
simulation_results = {}
for scenario, multiplier in VOLATILITY_MULTIPLIERS.items():
    price_paths = run_monte_carlo_simulation(
        current_price=metrics['current_price'],
        daily_drift=metrics['daily_drift'],
        daily_volatility=metrics['daily_volatility'],
        projection_days=PROJECTION_DAYS,
        num_simulations=NUM_SIMULATIONS,
        volatility_multiplier=multiplier
    )
    simulation_results[scenario] = {
        'paths': price_paths,
        'bands': calculate_percentile_bands(price_paths, CONFIDENCE_INTERVALS)
    }
2.4 Visualization
The visualization displays the volatility cone with nested confidence bands. Wider bands represent higher confidence intervals — the 95% band captures more extreme outcomes than the 50% band. We also overlay a sample of individual simulation paths to illustrate the stochastic nature of the projections.
plt.style.use('dark_background')
fig, axes = plt.subplots(1, 3, figsize=(16, 6))
colors = {'0.5': '#3498db', '0.75': '#9b59b6', '0.95': '#e74c3c'}
scenario_titles = {'low': 'Low Volatility (0.75x)', 'base': 'Base Case (1.0x)', 'high': 'High Volatility (1.5x)'}
days = np.arange(PROJECTION_DAYS + 1)
for idx, (scenario, results) in enumerate(simulation_results.items()):
    ax = axes[idx]
    bands = results['bands']
    paths = results['paths']
    # Plot sample paths (faint)
    sample_indices = np.random.choice(NUM_SIMULATIONS, 100, replace=False)
    for i in sample_indices:
        ax.plot(days, paths[i], alpha=0.03, color='white', linewidth=0.5)
    # Plot confidence bands (widest first for proper layering)
    for ci in sorted(CONFIDENCE_INTERVALS, reverse=True):
        ax.fill_between(
            days,
            bands[ci]['lower'],
            bands[ci]['upper'],
            alpha=0.3,
            color=colors[str(ci)],
            label=f'{int(ci*100)}% CI'
        )
    # Plot median path
    ax.plot(days, bands[0.5]['median'], color='#2ecc71', linewidth=2, label='Median')
    # Mark current price
    ax.axhline(y=metrics['current_price'], color='yellow', linestyle='--',
               alpha=0.5, linewidth=1, label='Current Price')
    ax.set_title(f"{TICKER} - {scenario_titles[scenario]}", fontsize=12, fontweight='bold')
    ax.set_xlabel('Days Forward')
    ax.set_ylabel('Price ($)')
    ax.legend(loc='upper left', fontsize=8)
    ax.grid(True, alpha=0.2)
plt.suptitle(f'Monte Carlo Price Projection ({NUM_SIMULATIONS:,} simulations)',
             fontsize=14, fontweight='bold', y=1.02)
plt.tight_layout()
plt.savefig('volatility_cone.png', dpi=150, bbox_inches='tight',
            facecolor='#1a1a2e', edgecolor='none')
plt.show()
# Print summary statistics (horizon follows PROJECTION_DAYS rather than a hardcoded 60)
print(f"\n{'='*60}")
print(f"{PROJECTION_DAYS}-Day Price Projections for {TICKER}")
print(f"{'='*60}")
for scenario, results in simulation_results.items():
    final_prices = results['paths'][:, -1]
    print(f"\n{scenario.upper()} SCENARIO:")
    print(f"  5th percentile:  ${np.percentile(final_prices, 5):.2f}")
    print(f"  Median:          ${np.percentile(final_prices, 50):.2f}")
    print(f"  95th percentile: ${np.percentile(final_prices, 95):.2f}")
Figure 1. Monte Carlo volatility cones showing 50%, 75%, and 95% confidence intervals across three volatility scenarios, with sample price paths illustrating individual simulation trajectories.
Enjoying this strategy so far? This is only a taste of what's possible.
Go deeper with my newsletter: longer, more detailed articles + full Google Colab implementations for every approach.
Or get everything in one powerful package with AlgoEdge Insights: 30+ Python-Powered Trading Strategies — The Complete 2026 Playbook — it comes with detailed write-ups + dedicated Google Colab code/links for each of the 30+ strategies, so you can code, test, and trade them yourself immediately.
Exclusive for readers: 20% off the book with code MEDIUM20. Join the newsletter for free or Claim Your Discounted Book and take your trading to the next level!
3. Interpreting the Results
The visualization reveals how volatility assumptions dramatically affect projected price ranges. In the base case scenario, a stock with 25% annualized volatility might show a 95% confidence interval spanning roughly ±20% from the current price over 60 days. The high volatility scenario (1.5x multiplier) expands this range substantially, while the low volatility scenario compresses it.
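Because GBM terminal prices are lognormally distributed, the simulated bands can be sanity-checked against closed-form quantiles. A quick check, assuming 25% annualized volatility, zero drift, and a 60-day horizon (all illustrative inputs):

```python
import numpy as np

annual_vol = 0.25        # assumed annualized volatility
t = 60 / 252             # 60 trading days expressed in years
sigma_t = annual_vol * np.sqrt(t)

# With zero drift, the central 95% interval for S_T / S_0 under GBM is
# exp(-0.5*sigma_t^2 ± 1.96*sigma_t)
lower = np.exp(-0.5 * sigma_t**2 - 1.96 * sigma_t)
upper = np.exp(-0.5 * sigma_t**2 + 1.96 * sigma_t)
print(f"95% interval: {lower - 1:+.1%} to {upper - 1:+.1%}")
# Roughly -22% to +26% for these inputs: close to the ±20% rule of thumb,
# with the upside slightly wider because the lognormal is right-skewed
```

If the Monte Carlo bands drift far from these analytic values, that usually signals a bug in the simulation rather than anything about the market.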
The cone's characteristic shape emerges from the square-root-of-time scaling inherent in Brownian motion. Uncertainty grows with the square root of time rather than linearly — doubling the projection window doesn't double the expected price range, it multiplies it by √2. This mathematical property explains why the cone expands rapidly at first, then gradually flattens.
The median path typically follows a slight upward trajectory reflecting the drift term. However, the drift's effect is often dwarfed by volatility over short time horizons. For most stocks, daily drift is on the order of 0.03-0.05%, while daily volatility might be 1-2%. This asymmetry explains why short-term stock movements appear nearly random despite positive long-term expected returns.
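The signal-to-noise argument can be checked with simple arithmetic. Using illustrative figures of 0.04% daily drift and 1.5% daily volatility, cumulative drift grows linearly while noise grows with √t, so drift only matches one standard deviation of noise after (σ/μ)² days:

```python
import numpy as np

daily_drift = 0.0004   # assumed ~10% annual expected return
daily_vol = 0.015      # assumed ~24% annualized volatility

for days in [1, 30, 252]:
    signal = daily_drift * days          # drift accumulates linearly
    noise = daily_vol * np.sqrt(days)    # noise grows with sqrt(t)
    print(f"{days:>3} days: drift {signal:.2%} vs 1-sigma noise {noise:.2%}")

# Drift equals one standard deviation of noise only after (vol/drift)^2 days
breakeven = (daily_vol / daily_drift) ** 2
print(f"Break-even horizon: ~{breakeven:.0f} trading days (about 5.6 years)")
```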
4. Use Cases
Options Pricing Validation: Compare the simulation's probability distribution against implied volatility from options markets. Significant divergence might indicate mispriced options or market expectations of upcoming volatility changes.
Position Sizing: Use the confidence intervals to determine appropriate position sizes. If the 95% downside scenario would breach your risk tolerance, the position may be too large relative to your portfolio.
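One way this could be wired up is shown below. It is a sketch, not a sizing methodology: the portfolio value, 2% risk budget, and synthetic terminal prices (standing in for `simulation_results[...]['paths'][:, -1]`) are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative stand-in for the final simulated prices from the article's engine
current_price = 100.0
final_prices = current_price * np.exp(rng.normal(0.0, 0.12, size=10_000))

# Worst-case move at the 2.5th percentile (lower edge of the 95% band)
downside_price = np.percentile(final_prices, 2.5)
worst_case_return = downside_price / current_price - 1

# Cap the position so the downside scenario stays within the risk budget
portfolio_value = 500_000      # illustrative
max_portfolio_loss = 0.02      # risk no more than 2% of the portfolio
max_position = portfolio_value * max_portfolio_loss / abs(worst_case_return)
print(f"Downside return: {worst_case_return:+.1%}, max position: ${max_position:,.0f}")
```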
Scenario Planning: The volatility multipliers enable stress testing. Before earnings announcements or major economic events, running high-volatility scenarios helps prepare for expanded price ranges.
Client Communication: The visual nature of volatility cones makes them effective for explaining risk to non-technical stakeholders. The nested bands intuitively convey that wider outcomes are possible but less probable.
5. Limitations and Edge Cases
Normality assumption failure: GBM assumes returns are normally distributed, but real markets exhibit fat tails — extreme moves occur more frequently than the model predicts. The 2008 financial crisis and 2020 COVID crash both produced moves that GBM would classify as virtually impossible.
Constant volatility assumption: The model uses historical volatility as a fixed parameter, but real volatility clusters and mean-reverts. Periods of high volatility tend to persist, and volatility itself is stochastic. GARCH models or stochastic volatility models address this limitation.
No regime changes: GBM cannot capture structural breaks — a company facing bankruptcy, an industry disruption, or a merger announcement fundamentally changes the distribution of future returns in ways historical data cannot predict.
Drift estimation noise: The expected return (drift) is notoriously difficult to estimate reliably. Small changes in the lookback period can produce meaningfully different drift estimates, and historical drift is a poor predictor of future drift.
Independence assumption: GBM assumes daily returns are independent, but markets exhibit momentum and mean-reversion effects at various time scales. Serial correlation in returns violates this assumption.
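A quick diagnostic for this assumption is the lag-1 autocorrelation of the log returns. The sketch below uses synthetic returns so it runs standalone; in practice you would pass in the `Log_Returns` column computed earlier:

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for stock_data['Log_Returns'].values
log_returns = rng.normal(0.0005, 0.015, size=252)

# GBM assumes this is ~0; persistently positive values suggest momentum,
# persistently negative values suggest mean reversion
lag1 = np.corrcoef(log_returns[:-1], log_returns[1:])[0, 1]
print(f"Lag-1 autocorrelation: {lag1:+.3f}")
```

For independent returns of this sample size, values much beyond roughly ±0.12 (two standard errors) would be worth investigating before trusting the cone.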
Concluding Thoughts
Volatility cones provide a principled framework for visualizing the range of probable future stock prices. By combining historical volatility measurement with Monte Carlo simulation, we transform abstract statistical concepts into actionable visual insights. The implementation demonstrates how a few hundred lines of Python can produce institutional-quality risk analysis.
The key insight is that volatility quantifies uncertainty, not direction. A stock can be highly volatile yet have positive expected returns — the cone simply communicates that the path to those returns will be bumpy. Understanding this distinction separates informed risk-taking from gambling.
For further exploration, consider implementing stochastic volatility models that allow volatility itself to vary randomly, or compare the simulation outputs against market-implied distributions from options prices. These extensions bridge the gap between historical analysis and forward-looking market expectations.