Look, I'll be upfront with you: I'm not a hedge fund quant. I don't have a Bloomberg terminal. I learned MQL5 the same way most of us learn things — by breaking stuff repeatedly at 2am until it started making sense.
This post isn't a "get rich from trading" pitch. It's about the specific engineering decisions that separate a bot that slowly drains your account from one that actually holds up under real market conditions. I want to talk about what I got wrong for way longer than I should have, and what eventually worked.
The mistake I kept making (and I bet you're making it too)
My first few EAs were what I now call "single-factor bots." RSI crosses 30 → buy. MACD histogram turns positive → buy. They backtested beautifully because they were essentially curve-fitted to historical data. Put them live and they'd enter trades during news events, during ranging markets, during sessions with 40-point spreads. They didn't care. They just fired.
The problem isn't that RSI or MACD is useless. The problem is that any single indicator is basically a coin flip without context. Markets are multi-dimensional. A rising RSI in a downtrend during the Asian session with a 35-point spread is a completely different situation than a rising RSI in an uptrend during London open with a 4-point spread. Treating them identically is where bots die.
The shift that changed everything: confluence scoring
After enough blown accounts (mostly demo, thankfully), I started thinking about entries differently. Instead of asking "did indicator X trigger?", I started asking "how many independent things agree right now?"
The idea is dead simple in code but surprisingly powerful in practice. You define N conditions. Each condition that's true adds 1 to a score. You only trade when the score hits a minimum threshold.
Here's a rough idea of what that looks like in MQL5:
```mql5
// Indicator values (EMAs, MACD, RSI, ADX, Bollinger midline) are assumed
// to have been copied from their buffers earlier in OnTick().
int buyScore = 0;
if(close > trendEMA50)   buyScore++;  // trend direction
if(close > h4EMA)        buyScore++;  // higher-timeframe agreement
if(fastEMA > slowEMA)    buyScore++;  // EMA stack
if(macdMain > macdSig)   buyScore++;  // momentum
if(rsi > 40 && rsi < 65) buyScore++;  // RSI zone (room to run, not overbought)
if(adx > 20)             buyScore++;  // trend strength (not ranging)
if(bullishCandle)        buyScore++;  // candle confirmation
if(close > bbMidline)    buyScore++;  // Bollinger Band positioning
if(buyScore >= 5) openBuy();          // threshold: 5 of 8 conditions
```
This approach has a few things going for it that aren't immediately obvious:
It degrades gracefully. If MACD is giving mixed signals but everything else agrees, you still get a trade — just with one less point. Compare that to a system where a single indicator veto blocks everything regardless.
It naturally filters session and volatility problems. During low-liquidity Asian hours on a ranging pair, your EMA stack condition, your trend condition, and your ADX condition all fail simultaneously. The score doesn't reach threshold. The bot sits on its hands. That's the behavior you want and you didn't have to write a single session filter to get it.
It's easy to tune without blowing up the logic. Want to be more conservative? Raise the threshold from 5 to 6. Want more entries? Lower it. It doesn't cascade into unexpected behavior the way nested if-statements do.
The stop-loss problem nobody talks about enough
Here's something that cost me real time to figure out: a good entry is maybe 30% of the battle. Stop-loss placement is where most retail bots actually lose money.
Fixed-pip stops are almost always wrong. If you set a 30-pip stop on EURUSD and then trade the same EA on GBPJPY or XAUUSD, you're not accounting for the fact that those instruments have completely different volatility profiles. A 30-pip move on gold happens in minutes. On EURUSD it might take hours.
ATR-based stops fix this. You measure the Average True Range over N periods and set your stop as a multiple of that. If ATR is currently 80 pips, your 1.5× ATR stop is 120 pips. If ATR is 20 pips (quiet market), it's 30 pips. The bot adapts automatically.
The same logic applies to take profit. I use a 2:1 ATR ratio minimum (TP = 2× ATR, SL = 1× ATR). Some of my configs use 3:1 for trending pairs. It makes a bigger difference than most indicator tweaking ever will.
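The arithmetic behind ATR-based levels is simple enough to sketch. The snippet below is illustrative only (plain C-style code that reads the same in MQL5 and C++, not the EA's actual source); `atr` is assumed to already be the current ATR in price units, and `slMult`/`tpMult` are hypothetical user-set multipliers.

```cpp
#include <cassert>
#include <cmath>

// Stop-loss level for a buy: widens automatically when volatility rises.
double buyStopLoss(double entry, double atr, double slMult)
{
    return entry - atr * slMult;
}

// Take-profit level for a buy: scales with the same volatility measure,
// so a tpMult of 2.0 against an slMult of 1.0 gives the 2:1 ratio above.
double buyTakeProfit(double entry, double atr, double tpMult)
{
    return entry + atr * tpMult;
}
```

With ATR at 0.0080 (80 pips on a 4-digit pair), a 1.5x stop lands 120 pips away; when ATR drops to 0.0020 in a quiet market, the same multiplier gives a 30-pip stop without touching any settings.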
One thing that genuinely surprised me
I added an ADX filter expecting it to just cut trade frequency. It did. But what I didn't expect was the effect on average winner size.
ADX above 20 means the market is in a trending state. When you filter for that, your winning trades aren't just more frequent — they tend to run further because you're catching moves with actual momentum behind them rather than random noise that briefly crossed your other conditions.
It felt like one of those "why didn't I do this earlier" moments. The bot wasn't missing good trades, it was skipping ones that were statistically more likely to reverse immediately.
What I ended up building
After going through all of this (and a lot of iterations I haven't mentioned), I ended up with a fairly complete EA that does most of what I described:
- 9-factor confluence scoring, configurable threshold
- ATR-based dynamic SL/TP with user-set multipliers
- Automatic breakeven once price moves 1 ATR in your favour
- Trailing stop based on ATR distance
- Daily loss limit and profit target (as % of balance)
- Weekly drawdown kill-switch
- Session filters (London, New York, Asia toggleable)
- Spread filter to avoid trading during news or low liquidity
- Proper margin checking before opening trades
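To give a flavour of the kind of logic behind one of those bullets, here's a hedged sketch of the breakeven rule for a long position (illustrative C-style code, not GlimTrader's source; `entry`, `currentSL`, and `atr` are assumed inputs, and the actual EA would modify the position via the trade API rather than return a value).

```cpp
#include <cassert>

// Decide whether a long position's stop should move to breakeven.
// trigger is the ATR multiple that arms the move (1.0 = price has
// travelled one full ATR in our favour). Returns the new stop level,
// or the current stop if the trigger distance hasn't been reached.
double maybeBreakeven(double entry, double currentPrice, double currentSL,
                      double atr, double trigger)
{
    bool triggered  = (currentPrice - entry) >= atr * trigger;
    bool notYetAtBE = currentSL < entry;  // don't move a stop backwards
    return (triggered && notYetAtBE) ? entry : currentSL;
}
```

The `notYetAtBE` guard matters: without it, a trailing stop that has already moved past entry would get yanked back to breakeven on every tick.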
I put it together as GlimTrader Pro v3.0 and made it available as a compiled .ex5 — you can grab it here if you want to skip the six months of headaches.
Stuff I'd tell myself on day one
Don't trust backtests with default settings. Optimize on one date range, validate on a completely different one. If performance falls apart, you've overfit.
Your broker's spread matters more than you think. An EA that works at a 1-pip spread will look completely different at 4 pips. Build a spread filter.
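A minimal spread gate can look something like this (a sketch in C-style code that mirrors MQL5 syntax; `maxSpreadPoints` is a hypothetical user input, and in a live EA `ask`/`bid`/`point` would come from the symbol info functions).

```cpp
#include <cassert>

// Skip the entry when the current spread, measured in points,
// exceeds a configurable ceiling.
bool spreadOK(double ask, double bid, double point, int maxSpreadPoints)
{
    int spreadPoints = (int)((ask - bid) / point + 0.5);  // round to nearest point
    return spreadPoints <= maxSpreadPoints;
}
```

The rounding step is deliberate: raw floating-point division of two prices can land a hair under the true point count and let a too-wide spread slip through.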
Lot sizing based on account % is not optional. Fixed lot sizing turns a drawdown into an account wipe. Risk 1% per trade max while you're learning the system.
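The percent-risk calculation itself is worth seeing once. This is a generic sketch (C-style, reads the same in MQL5 if you swap `std::floor` for `MathFloor`); `pointValuePerLot` is the account-currency value of one point per 1.0 lot, which is instrument-specific and assumed to be looked up elsewhere.

```cpp
#include <cassert>
#include <cmath>

// Lots to trade so that hitting the stop loses roughly riskPct of balance.
// Rounds DOWN to the broker's lot step so realized risk never exceeds
// the target.
double lotsForRisk(double balance, double riskPct, double stopPoints,
                   double pointValuePerLot, double lotStep)
{
    double riskMoney = balance * riskPct;                      // e.g. 1% of 10,000 = 100
    double rawLots   = riskMoney / (stopPoints * pointValuePerLot);
    return std::floor(rawLots / lotStep) * lotStep;            // snap to lot step
}
```

Note what falls out of this: because the ATR-based stop distance feeds into the denominator, a wider stop in volatile conditions automatically means a smaller position for the same dollar risk.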
Demo trade for at least 3 weeks before going live. Real execution has slippage, gaps, disconnects. The backtest doesn't.
If you're going through the same frustrations I went through, I hope this saves you some time. The confluence approach genuinely changed how I think about automated entries. Worth experimenting with even if you build your own version from scratch.
Happy to answer questions in the comments—particularly around the ATR logic or the scoring approach.