Attention-based Speech Recognizer

The reference implementation for the papers

End-to-End Attention-based Large Vocabulary Speech Recognition. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philemon Brakel, Yoshua Bengio (arxiv draft, ICASSP 2016)


Task Loss Estimation for Sequence Prediction. Dzmitry Bahdanau, Dmitriy Serdyuk, Philémon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron Courville, Yoshua Bengio (arxiv draft, submitted to ICLR 2016).

This code is no longer maintained

This codebase is based on outdated technologies (Theano, Blocks, etc.) and is no longer maintained. We recommend looking for a more modern speech recognition implementation instead.

How to use

Please proceed to exp/wsj for instructions on how to replicate our results on the Wall Street Journal (WSJ) dataset (available from the Linguistic Data Consortium as LDC93S6B and LDC94S13B).


Provided that you have the dataset in HDF5 format, the models can be trained without Kaldi and PyFst.
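As a quick sanity check that your data is in a readable HDF5 form, a minimal sketch using h5py is shown below. The file name, dataset names, and shapes are illustrative assumptions, not the actual layout produced by the exp/wsj pipeline:

```python
import h5py
import numpy as np

# Build a toy stand-in for a WSJ-style HDF5 file (names and shapes here are
# assumptions for illustration; the real pipeline's layout may differ).
with h5py.File("toy_wsj.h5", "w") as f:
    f.create_dataset("features", data=np.zeros((3, 100, 40), dtype="float32"))
    f.create_dataset("targets", data=np.zeros((3, 20), dtype="int32"))

# Training only needs to read such a file; no Kaldi or PyFst is involved.
with h5py.File("toy_wsj.h5", "r") as f:
    shapes = {name: f[name].shape for name in f}
print(shapes)
```

Inspecting the file this way is a cheap way to confirm the conversion step succeeded before starting a long training run.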



The repository contains custom modified versions of Theano, Blocks, Fuel, picklable-itertools, and Blocks-extras as git subtrees. To ensure that these specific versions are used, we recommend uninstalling any regular installations of these packages you may have, in addition to sourcing the environment setup script provided in the repository.
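A sketch of that cleanup might look like the following; the package names come from the list above, while the setup script name is an assumption, so check the repository root for the actual file:

```shell
# Remove system-wide copies so the bundled subtree versions take precedence.
pip uninstall -y theano blocks fuel picklable-itertools blocks-extras

# Put the bundled subtrees on PYTHONPATH (script name is an assumption;
# look for the setup script shipped at the repository root).
source env.sh
```

Running the uninstall step first avoids silently importing an incompatible upstream version of Theano or Blocks instead of the patched subtree copies.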