Auto-Lag Networks for Real-Valued Sequence to Sequence Prediction - ICANN2019

Gilles

The International Conference on Artificial Neural Networks (ICANN) is the annual flagship conference of the European Neural Network Society (ENNS). The ideal of ICANN is to bring together researchers from two worlds: information sciences and neurosciences. The scope is wide, ranging from machine learning algorithms to models of real nervous systems. The aim is to facilitate discussions and interactions in the effort towards developing more intelligent computational systems and increasing our understanding of neural and cognitive processes in the brain. The 28th edition of ICANN was held in Munich, Germany, from 17th to 19th September 2019. After a thorough review and scientific evaluation process by a panel of experts, our paper "Auto-Lag Networks for Real Valued Sequence to Sequence Prediction" was accepted for presentation within the time series session. The following section gives an abstract of the presented work as well as the links where the full paper can be found.

Auto-Lag Networks for Real Valued Sequence to Sequence Prediction


Many machine learning problems involve predicting a sequence of future values of a target variable. State-of-the-art approaches for such use cases rely on LSTM-based sequence-to-sequence models.

To improve their performance, these models generally use lagged values of the target variable as additional input features. An appropriate lag factor therefore has to be chosen during feature engineering, a choice that often requires business knowledge of the data. Furthermore, state-of-the-art sequence-to-sequence models are not designed to naturally handle hierarchical time series use cases.
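To make this concrete, below is a minimal sketch, in Python with pandas, of the kind of manual lag feature engineering the paper argues against. The column name y and the lag factors 1 and 7 are purely illustrative assumptions, not values taken from the paper.

import pandas as pd

# Hypothetical daily series of a real-valued target (column name is illustrative).
df = pd.DataFrame({"y": [12.0, 13.5, 11.8, 14.2, 15.0, 14.7, 13.9, 15.3, 14.1, 15.6]})

# Manually chosen lag factors: each one becomes an extra input feature.
# Picking 1 and 7 here assumes, e.g., weekly seasonality -- exactly the kind of
# business knowledge of the data that an automatic lag selection aims to remove.
for lag in (1, 7):
    df[f"y_lag_{lag}"] = df["y"].shift(lag)

# Rows without a full lag history are dropped before training.
df = df.dropna().reset_index(drop=True)
print(df)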

In this paper, we propose a novel architecture that naturally handles hierarchical time series. The contribution of this paper is thus twofold.

First, we show the limitations of classical sequence-to-sequence models on problems involving a real-valued target variable, namely the error accumulation problem, and we propose a novel LSTM-based approach to overcome those limitations.
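For intuition, the error accumulation problem can be illustrated with a short sketch: a one-step model is rolled forward over the forecast horizon, so each prediction is fed back as an input and small errors compound. The names autoregressive_forecast and biased_mean below are invented for the example; this is a generic illustration, not the decoding scheme proposed in the paper.

import numpy as np

def autoregressive_forecast(model, history, horizon):
    # `model` is any callable mapping a window of past values to the next value
    # (a stand-in for a trained one-step LSTM decoder). True future values are
    # unknown at inference time, so the prediction made at step t is reused as
    # an "observed" input at step t+1 and its error propagates to later steps.
    window = list(history)
    predictions = []
    for _ in range(horizon):
        y_hat = model(np.asarray(window))      # one-step-ahead prediction
        predictions.append(y_hat)
        window = window[1:] + [y_hat]          # feed the prediction back in
    return predictions

# Toy model with a constant bias of 0.1: the bias visibly accumulates over the horizon.
biased_mean = lambda w: float(np.mean(w)) + 0.1
print(autoregressive_forecast(biased_mean, [1.0, 1.0, 1.0, 1.0], horizon=5))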

Second, we highlight the limitations of manually selecting fixed lag values to improve the performance of a model. We then use an attention mechanism to introduce a dynamic and automatic lag factor selection that overcomes these limitations and requires no business knowledge of the data. We call this architecture Auto-Lag Network (AL-Net). We finally validate our Auto-Lag Net model against state-of-the-art results.
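As a rough sketch of the general idea of attention-based lag selection, the snippet below computes softmax weights over candidate lagged values instead of relying on a single hand-picked lag factor. The function soft_lag_selection, the key/query dimensions, and the random parameters are assumptions made for illustration; this is not the AL-Net architecture described in the paper.

import numpy as np

def soft_lag_selection(history, query, keys):
    # `history` holds past target values (most recent last), `query` is a
    # decoder state vector and `keys[i]` a learned key vector for lag i+1.
    # The softmax weights act as a dynamic, data-driven lag selection.
    lags = np.array([history[-(i + 1)] for i in range(len(keys))])  # lags 1..L
    scores = keys @ query                        # one score per candidate lag
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                     # attention weights over lags
    return float(weights @ lags), weights        # attended lagged value

rng = np.random.default_rng(0)
history = [11.8, 14.2, 15.0, 14.7, 13.9, 15.3]
query = rng.normal(size=4)                       # stand-in for a decoder state
keys = rng.normal(size=(3, 4))                   # keys for candidate lags 1..3
context, weights = soft_lag_selection(history, query, keys)
print(context, weights)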

Get our research paper

The full paper is available from Springer at the following link: https://link.springer.com/chapter/10.1007/978-3-030-30490-4_33.

However, to get a free copy of the paper, feel free to contact either of the authors:

  • gilles.madi@prevision.io

  • nicolas.gaude@prevision.io

To cite the paper, use the following BibTeX reference:

@inproceedings{wamba2019auto,
  title={Auto-Lag Networks for Real Valued Sequence to Sequence Prediction},
  author={Wamba, Gilles Madi and Gaude, Nicolas},
  booktitle={International Conference on Artificial Neural Networks},
  pages={412--425},
  year={2019},
  organization={Springer}
}
