DEV Community

TildAlice

Posted on • Originally published at tildalice.io

LSTM Encoder-Decoder vs Seq2Seq Transformer: CMAPSS RUL Benchmark

The Encoder-Decoder Gap Nobody Talks About

Most RUL prediction tutorials slap a single LSTM on multivariate sensor data and call it a day. That's fine for toy problems, but real turbofan engines don't degrade in a straight line — they have operational regime shifts, maintenance events, and multi-phase degradation patterns that single-pass models fundamentally can't capture. After I ran both architectures through all four NASA CMAPSS subsets, the Transformer encoder-decoder beat the LSTM encoder-decoder by 18% RMSE on FD004 (the hardest subset, with six operating conditions and two fault modes), but it actually performed worse on FD001.
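For readers who want to reproduce the benchmark setup, the usual CMAPSS preprocessing is to slice each engine's sensor history into fixed-length windows and score predictions with RMSE. A minimal sketch (the `window=30` length and 14-channel toy data are illustrative assumptions, not the article's exact settings):

```python
import numpy as np

def make_windows(sensors, ruls, window=30):
    """Slice one engine's multivariate sensor history into fixed-length
    windows, each paired with the RUL at the window's last cycle.
    `sensors` is (cycles, features); `ruls` is (cycles,)."""
    X, y = [], []
    for end in range(window, len(sensors) + 1):
        X.append(sensors[end - window:end])
        y.append(ruls[end - 1])
    return np.stack(X), np.array(y)

def rmse(y_true, y_pred):
    """The metric used to compare the two architectures."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

# Toy engine: 100 cycles, 14 sensor channels, linearly decreasing RUL.
sensors = np.random.randn(100, 14)
ruls = np.arange(99, -1, -1)
X, y = make_windows(sensors, ruls, window=30)
print(X.shape, y.shape)  # (71, 30, 14) (71,)
```

Either architecture then consumes the same `(batch, window, features)` tensors, which keeps the comparison apples-to-apples.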

That result surprised me.

The standard narrative is "Transformers good, RNNs bad" — but the reality is messier. Encoder-decoder architectures specifically change the game because they force the model to compress sensor history into a latent representation before decoding the RUL trajectory. This compression bottleneck acts as implicit regularization, and LSTM's sequential inductive bias sometimes helps here rather than hurts.
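To make the compression bottleneck concrete, here is a toy numpy forward pass (not the article's actual model — weights are random and untrained, dimensions are illustrative): the encoder folds an entire sensor window into one fixed-size latent vector, and the decoder unrolls that vector into a short RUL trajectory rather than a single point estimate.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(seq, W_in, W_h):
    """Toy recurrent encoder: fold the whole sensor window into one
    fixed-size latent vector -- the compression bottleneck."""
    h = np.zeros(W_h.shape[0])
    for x_t in seq:                      # seq is (window, features)
        h = np.tanh(W_in @ x_t + W_h @ h)
    return h                             # latent summary, shape (hidden,)

def decode(h, W_dec, steps=5):
    """Toy decoder: unroll the latent vector into a multi-step RUL
    trajectory instead of a single scalar prediction."""
    preds = []
    for _ in range(steps):
        h = np.tanh(W_dec @ h)
        preds.append(h.sum())            # scalar readout per step
    return np.array(preds)

features, hidden = 14, 8
W_in = rng.normal(size=(hidden, features)) * 0.1
W_h = rng.normal(size=(hidden, hidden)) * 0.1
W_dec = rng.normal(size=(hidden, hidden)) * 0.1

window = rng.normal(size=(30, features))   # one 30-cycle sensor window
latent = encode(window, W_in, W_h)
trajectory = decode(latent, W_dec)
print(latent.shape, trajectory.shape)      # (8,) (5,)
```

The point of the sketch: whatever the decoder emits, it can only work from the 8-dimensional latent, so the model is forced to summarize the degradation history — the implicit regularization described above. A Transformer encoder-decoder relaxes this via cross-attention over all encoder states, which helps on messy subsets like FD004 but gives up some of that regularizing pressure.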




Continue reading the full article on TildAlice
