The reference implementation for the papers:

- *End-to-End Attention-based Large Vocabulary Speech Recognition*. Dzmitry Bahdanau, Jan Chorowski, Dmitriy Serdyuk, Philémon Brakel, Yoshua Bengio (arXiv draft, ICASSP 2016).
- *Task Loss Estimation for Sequence Prediction*. Dzmitry Bahdanau, Dmitriy Serdyuk, Philémon Brakel, Nan Rosemary Ke, Jan Chorowski, Aaron Courville, Yoshua Bengio (arXiv draft, submitted to ICLR 2016).
This codebase is based on outdated technologies (Theano, Blocks, etc.) and is no longer maintained. We recommend looking for more modern speech recognition implementations (see e.g. https://github.com/Alexander-H-Liu/End-to-end-ASR-Pytorch).
Then, please proceed to exp/wsj for instructions on how to replicate our results on the Wall Street Journal (WSJ) dataset (available from the Linguistic Data Consortium as LDC93S6B and LDC94S13B).
If you already have the dataset in HDF5 format, the models can be trained without Kaldi and PyFst.
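For a quick sanity check of such a file, you can list its contents with h5py. This is a minimal sketch, not part of the original instructions; the file name `wsj.h5` is a placeholder for wherever your HDF5 dataset lives.

```python
# Minimal sketch: list every dataset in an HDF5 file with its shape and dtype.
# "wsj.h5" is a hypothetical file name; substitute the path to your dataset.
import h5py

with h5py.File("wsj.h5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```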
We need Kaldi to be compiled in shared mode in order to use kaldi-python, which means that Kaldi should be configured with the `--shared` option. We don't train anything with Kaldi, so there is no need to compile it with CUDA; if you have any problems with Kaldi+CUDA, feel free to turn it off:

```
./configure --shared --use-cuda=no
```
After this step you should have OpenFst installed in Kaldi's `tools/openfst` subdirectory.
Next, install the required Python packages:

```
pip install pykwalify toposort pyyaml numpy pandas pyfst
```

and build kaldi-python:

```
python setup.py install
```

kaldi-python will be compiled and installed on your system; you can check that everything went right by running:

```
python -c "import kaldi_io"
```
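As a slightly stronger smoke test, you can try iterating over a Kaldi feature archive. The sketch below assumes that kaldi-python exposes Kaldi's table readers under names like `SequentialBaseFloatMatrixReader` and that you have an archive `feats.ark` at hand; both the class name and the file name are assumptions, so adjust them to your installation.

```python
# Hypothetical smoke test: read utterances from a Kaldi feature archive.
# SequentialBaseFloatMatrixReader and "feats.ark" are assumptions; check
# the kaldi-python documentation for the exact reader names it provides.
import kaldi_io

reader = kaldi_io.SequentialBaseFloatMatrixReader("ark:feats.ark")
for utterance_id, features in reader:
    print(utterance_id, features.shape)  # features should be a numpy array
    break  # reading one utterance is enough for a smoke test
```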
The repository contains custom modified versions of Theano, Blocks, Fuel, picklable-itertools and Blocks-extras as [subtrees]. In order to ensure that these specific versions are used, we recommend uninstalling any regular installations of these packages you may have, in addition to sourcing `env.sh`.
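To double-check that Python really picks up the subtree versions rather than copies from site-packages, you can print where each package is imported from. A minimal sketch; the expectation that the printed paths point inside this repository assumes that sourcing `env.sh` puts the subtrees on your PYTHONPATH.

```python
# Print where each package is imported from. After sourcing env.sh, the
# paths should point into this repository's subtrees, not into
# site-packages (assumption: env.sh prepends the subtrees to PYTHONPATH).
import blocks
import fuel
import theano

for module in (theano, blocks, fuel):
    print(module.__name__, "->", module.__file__)
```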