Your Backtest Said ₹4.2 Lakh Profit. The Market Took ₹80,000. Here's Exactly Why
You spent three weeks building the strategy. The backtest printed a beautiful equity curve — 73% win rate, Sharpe ratio above 1.5, max drawdown under 8%. You went live. Six weeks later, the account is down ₹80,000 and the strategy is paused.
This isn't bad luck. It's backtesting bias — and it quietly destroys more Indian trading accounts than any market crash ever will.
The Uncomfortable Truth About Every Backtest You've Ever Run
A backtest isn't a prediction. It's a story you're telling about the past, and the human brain is dangerously good at telling flattering stories.
The Indian algo trading market now sees 54% of all cash market turnover and 73% of stock futures volume driven by automated strategies. Most of those strategies were backtested before deployment. A significant number of them decay or fail within months of going live.
The problem isn't the strategy logic. It's the invisible assumptions baked into how you tested it.
Here are the six biases that are most likely rotting your backtest right now — and how to surgically remove each one.
Bias 1: Survivorship Bias — You're Only Testing Winners
What it is: When you backtest a stock-selection strategy using today's Nifty 500 or Nifty 50 constituents, you're only including companies that survived to make the current list. The companies that went bankrupt, got delisted, or shrank enough to be removed? Gone. They don't appear in your test.
What it does to your results: Your backtest only includes the stocks that didn't blow up. Of course the strategy looks profitable — you've already eliminated every way it could have lost.
A real Indian markets example: Test a "buy Nifty 50 momentum stocks" strategy using today's Nifty 50 components across 2015–2025. You'll include HDFC Bank, Infosys, Reliance — companies that became giants over this period. You won't include the companies that were in the Nifty 50 in 2015 but have since been removed due to poor performance. The backtest reflects hindsight, not foresight.
The fix: Use point-in-time index composition data. Test only on stocks that were in the index or universe at the time of each trade — not the current list. This is harder to source but is non-negotiable for any stock selection strategy.
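A point-in-time universe check can be sketched in a few lines. The membership table below is entirely hypothetical and stands in for real constituent history, which you would source from the exchange or a data vendor:

```python
from datetime import date

# Hypothetical membership records: (symbol, added_on, removed_on or None).
# Real point-in-time constituent data must come from the exchange or a vendor.
MEMBERSHIP = [
    ("RELIANCE", date(2010, 1, 1), None),
    ("OLDCO",    date(2012, 1, 1), date(2018, 6, 29)),  # dropped from the index in 2018
    ("NEWCO",    date(2021, 4, 1), None),
]

def universe_as_of(as_of: date) -> set:
    """Return only the symbols that were index members on `as_of`."""
    return {
        sym for sym, added, removed in MEMBERSHIP
        if added <= as_of and (removed is None or as_of < removed)
    }
```

The key property: asking for the 2015 universe returns the stock that was later removed, and does not return the stock that joined in 2021. A backtest loop should call this per trade date, never once with today's list.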
Bias 2: Look-Ahead Bias — Your Algorithm Knew the Future
What it is: Your strategy logic accidentally uses data that wouldn't have been available at the time of the trade decision.
What it does: Creates impossibly clean entry and exit signals that work perfectly in backtests and never replicate in live trading.
The most common Indian F&O version of this: You're backtesting an options strategy on daily closing prices. Your entry logic fires at 3:20 PM based on that day's closing price — but then you fill the position at the closing price. In reality, you can't know the closing price until the market closes, and you can't fill at the closing price unless you submit a market-on-close order in advance.
Another version: Earnings-based strategies that use adjusted EPS data. That data is often restated after the fact. Your backtest uses the restated number. The market at the time was trading on the original, unrevised figure. You didn't have the clean number. Your backtest did.
The fix: Apply a strict "time stamp" discipline to every data input. For each bar, ask: "Could my algorithm have known this number at the exact moment of the trade decision?" If the answer is "only in retrospect," remove it from the logic.
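The simplest mechanical enforcement of this discipline is a one-bar shift: any signal computed on bar t's close becomes tradable only on bar t+1. A minimal sketch with a toy close-above-previous-close rule (the rule itself is illustrative, not a recommendation):

```python
# Toy daily closes. A signal computed on day t's close can only be acted on
# from day t+1 onward -- shifting by one bar removes the look-ahead.
closes = [100, 102, 101, 105, 107]

def raw_signals(closes):
    """1 if today's close is above yesterday's close, else 0."""
    return [0] + [int(closes[i] > closes[i - 1]) for i in range(1, len(closes))]

def tradable_signals(closes):
    """Shift raw signals forward one bar: the close that generated the
    signal is known only after that bar has finished."""
    sig = raw_signals(closes)
    return [0] + sig[:-1]
```

Backtesting on `raw_signals` fills you at a price your algorithm could not have known; `tradable_signals` is the honest version.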
Bias 3: Overfitting — You Optimised for the Past, Not the Future
What it is: You test your strategy, it underperforms. So you tweak the parameters. Still underperforming. More tweaking. Eventually, you find a combination of settings that produces an excellent backtest. You declare success.
What actually happened: You didn't find edge. You memorised noise.
The Indian trader version: You're testing a moving average crossover on Nifty. 20/50 MA doesn't work. You try 23/57. Better. You try 21/55. Better still. After 40 combinations, you find that 26/63 with a 14-period RSI filter and a VIX threshold above 14.2 produces a 78% win rate. You deploy it. It immediately stops working.
Those parameters didn't capture a real market behaviour. They captured the specific texture of a particular period of Nifty data. The market changed. The edge evaporated.
Warning signs your strategy is overfitted:
- Win rate above 80% on a trend-following or momentum strategy
- The strategy needs very specific parameter values (e.g., 23-day MA, not 20 or 25)
- Performance collapses dramatically when parameters shift even slightly
- The equity curve is too smooth — real markets don't produce that
The fix — Walk-Forward Analysis: Split your historical data into thirds. Optimise your parameters on the first two-thirds (the in-sample period). Then test — without touching the parameters — on the final third (the out-of-sample period). If the strategy holds up, it has real edge. If it collapses, it was always curve-fitted. Then roll the window forward and repeat.
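The rolling version of that procedure fits in a short harness. Both `strategy_pnl` (a toy momentum rule) and the parameter grid are stand-ins for your own logic; only the optimise-then-freeze structure is the point:

```python
def strategy_pnl(returns, lookback):
    """Toy momentum rule: hold the next day's return whenever the mean of
    the last `lookback` returns is positive, stay flat otherwise."""
    pnl = 0.0
    for i in range(lookback, len(returns)):
        if sum(returns[i - lookback:i]) / lookback > 0:
            pnl += returns[i]
    return pnl

def walk_forward(returns, param_grid, n_windows=3):
    """For each window: optimise on all data up to the window (in-sample),
    then score the window with the chosen parameter frozen (out-of-sample)."""
    window = len(returns) // (n_windows + 1)
    oos_scores = []
    for k in range(n_windows):
        insample = returns[: (k + 1) * window]
        oos = returns[(k + 1) * window : (k + 2) * window]
        best = max(param_grid, key=lambda p: strategy_pnl(insample, p))
        oos_scores.append((best, strategy_pnl(oos, best)))
    return oos_scores
```

If the out-of-sample scores are consistently a small fraction of what the in-sample optimisation promised, you memorised noise.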
Bias 4: Slippage and Impact Blindness — Your Fill Prices Don't Exist
What it is: Your backtest assumes you fill every trade at the signal price. The live market fills you at a different price — sometimes meaningfully worse.
Why Indian markets make this especially brutal:
| Instrument | Backtest assumption | Live market reality |
|---|---|---|
| Nifty 50 large caps | Fill at signal price | Usually close — liquid |
| Mid-cap F&O | Fill at signal price | Bid-ask spread eats 0.2–0.5% per trade |
| Far OTM options mid-week | Fill at signal price | Wide spreads, partial fills, gap at open |
| Any strategy around 9:15 AM | Fill at yesterday's close + 0.05% | Open auction creates 0.3–2% gaps routinely |
The 9:20 AM short straddle problem: India's most popular retail options strategy — enter a short straddle at 9:20 AM on expiry day — has clean backtest numbers because tests assume fills at the 9:20 AM price. In live markets, the first 20 minutes of expiry day are chaotic. The bid-ask spreads are wide, volume is concentrated, and slippage on both legs combined can cost 30–60 points on Nifty — which is the entire expected theta profit for the trade.
The fix: Build a realistic slippage model into every backtest. For options strategies, use 50% of the average bid-ask spread as your expected slippage per leg. For intraday equity, use at least 0.05–0.1% per entry. For strategies that trigger at open, use an additional 0.2–0.5% open-auction impact. Your final backtest P&L number should survive these deductions before you consider a strategy viable.
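Those deductions are easy to encode as a single cost function applied to every simulated fill. The numbers below are the mid-range values from the rules above; treat them as starting assumptions to be calibrated against your own live fill data:

```python
def slippage_cost(instrument_type, price, bid_ask_spread=0.0, at_open=False):
    """Estimated slippage per fill, in price units.

    Assumed (uncalibrated) defaults: half the bid-ask spread per option leg,
    ~0.075% on liquid intraday equity, plus ~0.35% open-auction impact for
    strategies that trigger at the open.
    """
    if instrument_type == "option":
        cost = 0.5 * bid_ask_spread
    else:
        cost = price * 0.00075
    if at_open:
        cost += price * 0.0035
    return cost
```

Run the full backtest twice, with and without this deduction; if the edge disappears under the costed run, the strategy was never viable.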
Bias 5: Regime Blindness — Your Strategy Only Works in One Market Type
What it is: Your backtest covers a period dominated by one market regime. The strategy was optimised (consciously or not) for that regime. When the regime changes, the strategy fails.
The Indian markets version of this: Any options-selling strategy backtested primarily on 2021–2023 data. That period had specific characteristics — moderate volatility, regular theta decay, relatively predictable weekly range. Strategies built on this data were essentially optimised for a low-volatility trending market.
Then 2024–2025 brought sudden VIX spikes, global macro shocks, and expiry-day reversals that defied historical patterns. The short straddle stopped printing. Retail traders blamed "changed market conditions" — but the real failure was that the backtest only told a story about one kind of market.
The three Indian market regimes your strategy must survive:
- Trending bull market (e.g., 2020 recovery, 2023 rally) — momentum works, mean-reversion hurts
- Choppy/sideways market (e.g., mid-2022, early 2025) — theta strategies shine, trend-following fails
- Shock/crisis regime (e.g., March 2020, sudden VIX spikes) — everything gets stress-tested
The fix: Deliberately test your strategy across all three. If it only survives two out of three regimes, it's not a robust strategy — it's a regime bet. Position sizing should reflect this: smaller allocation to strategies that are regime-dependent.
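Regime tagging does not need to be sophisticated to be useful. A crude classifier over a window of daily returns, with illustrative (not calibrated) thresholds, is enough to split a backtest report into the three buckets above:

```python
def tag_regime(window_returns, vix_spike=False):
    """Crude regime label for a window of daily returns.

    Assumed thresholds (illustrative only): a VIX spike or any single-day
    drop beyond -5% marks a shock; a cumulative move more than 5x the
    average daily move marks a trend; everything else is sideways.
    """
    if vix_spike or min(window_returns) < -0.05:
        return "shock"
    total = sum(window_returns)
    avg_abs = sum(abs(r) for r in window_returns) / len(window_returns)
    if abs(total) > 5 * avg_abs:
        return "trending"
    return "sideways"
```

Tag each backtest month, then report P&L per regime separately; a strategy that earns everything in one bucket is a regime bet and should be sized like one.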
Bias 6: Strategy Decay — Your Edge Has an Expiry Date
What it is: Even a genuinely profitable strategy, free from all the above biases, can stop working. Not because your logic was wrong — but because the market learned.
How it works in Indian markets: A profitable pattern attracts capital. More traders discover the same signal, execute the same trade, and compete for the same edge. The crowding trades away the alpha. The pattern becomes less predictive. The strategy decays.
The 9:20 straddle is the perfect case study. It was genuinely profitable for years. It got written about. Documented. Shared in forums. Courses built around it. By 2025–2026, institutional desks and sophisticated retail traders all knew the pattern — and the expected returns on expiry morning became increasingly compressed as everyone front-ran the same setup.
> "As a profitable pattern becomes public knowledge, more capital chases the same edge. This crowding leads to strategy decay, where the alpha is traded away, making the strategy progressively less effective over time."
The fix: Build decay monitoring into your strategy lifecycle from day one. Every month, compare your rolling 30-day live performance against your backtest baseline. If live performance consistently runs 40%+ below the backtest expectation, the strategy is in decay. Have an exit criterion defined before you deploy, not after you're already in drawdown and emotionally compromised.
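That exit criterion can be a single predicate evaluated daily. The 40% threshold and 30-day window below come straight from the rule above; everything else is an assumed minimal implementation:

```python
def in_decay(live_daily_pnl, backtest_daily_expectation, threshold=0.4, window=30):
    """Flag decay when rolling 30-day live P&L runs `threshold` (40%) or
    more below the backtest expectation for the same window."""
    if len(live_daily_pnl) < window:
        return False  # not enough live history to judge yet
    recent = sum(live_daily_pnl[-window:])
    expected = backtest_daily_expectation * window
    return expected > 0 and recent < (1 - threshold) * expected
```

Wiring this into a daily job that pauses the strategy when it fires is what turns "decay monitoring" from a slogan into a control.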
What a Bias-Clean Backtest Actually Looks Like
After applying all six fixes, a real backtesting framework for Indian markets looks like this:
1. Data layer:
- Point-in-time index constituents (not current composition)
- Adjusted OHLC for corporate actions (splits, dividends, bonus issues)
- Tick-level or 1-minute data for intraday strategies — not daily bars
- Options chain data with real bid-ask spreads, not just last traded price
2. Execution model:
- 0.05–0.1% slippage on liquid equity intraday
- 50% of average bid-ask spread per leg on options
- 0.3–0.5% open-auction gap for market-open strategies
- Full brokerage, STT, exchange charges, GST — not "estimated" as 0.01%
3. Validation protocol:
- Walk-forward analysis across at least 3 non-overlapping out-of-sample windows
- Regime tagging: test performance separately in bull, sideways, and shock periods
- Sensitivity analysis: vary your parameters ±20% and check if the strategy still works
- Monte Carlo simulation: randomly shuffle your trade sequence 1,000 times to check if your results are luck or edge
4. Deployment standards:
- Only deploy strategies that pass out-of-sample testing
- Define a live decay threshold before deployment
- Start with 20–30% of planned allocation — scale up after 30–60 days of live validation
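The Monte Carlo step in the validation protocol deserves a concrete shape, because it is the cheapest of the four checks to implement. A minimal sketch, assuming only a list of per-trade P&Ls from the backtest:

```python
import random

def monte_carlo_drawdown_p95(trade_pnls, n_runs=1000, seed=42):
    """Shuffle the trade sequence `n_runs` times and return the
    95th-percentile max drawdown across the resampled equity curves.

    If your live risk limits cannot survive this number, the single
    ordering your backtest happened to produce was luck, not edge.
    """
    rng = random.Random(seed)  # seeded for reproducible reports
    drawdowns = []
    for _ in range(n_runs):
        seq = trade_pnls[:]
        rng.shuffle(seq)
        equity = peak = max_dd = 0.0
        for pnl in seq:
            equity += pnl
            peak = max(peak, equity)
            max_dd = max(max_dd, peak - equity)
        drawdowns.append(max_dd)
    drawdowns.sort()
    return drawdowns[int(0.95 * len(drawdowns))]
```

Compare the result against the single max drawdown your backtest reported; the gap between the two is how much your equity curve flattered you.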
The Honest Conversation About Backtesting
Here's the truth that most algo trading content avoids: a backtest cannot tell you if a strategy will be profitable. It can only tell you if the strategy had internal logical consistency with historical data.
That's still valuable. A strategy that fails badly in backtest under realistic assumptions isn't worth deploying. But a strategy that passes every bias check and produces clean out-of-sample results is still only a hypothesis — one worth testing live with controlled risk.
The goal isn't to find the backtest that promises the most profit. The goal is to find the strategy that survives enough stress tests that deploying it with real capital is a calculated risk rather than a blind bet.
That's what systematic trading actually is. Not the equity curve. The process.
🚀 Firefly by Fintrens is built around strategies that are designed to be tested rigorously, not just to look good in a backtest. Our infrastructure supports full audit trails, strategy monitoring, and live performance tracking so decay is visible before it becomes damage. See how Firefly works →