A neural machine translation system implemented in Python 2 and TensorFlow.


Features

  1. Multi-Process Data Loading/Processing (known issues remain)
  2. Multi-GPU Training/Decoding
  3. Gradient Aggregation
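As an illustration of the gradient-aggregation feature above, the sketch below averages per-variable gradients computed by several replicas (e.g. one per GPU) before a single parameter update. It is framework-agnostic pseudocode in plain Python; the function name and data layout are illustrative, not this toolkit's actual API.

```python
def aggregate_gradients(replica_grads):
    """Average gradients across replicas.

    replica_grads: a list over replicas, where each replica entry is a
    list of per-variable gradients (one flat list of floats per
    variable; corresponding variables have the same shape).
    Returns one averaged gradient per variable.
    """
    n = len(replica_grads)
    aggregated = []
    # zip(*replica_grads) regroups the gradients by variable, so each
    # var_grads holds the n replica copies of one variable's gradient.
    for var_grads in zip(*replica_grads):
        summed = [sum(components) for components in zip(*var_grads)]
        aggregated.append([s / n for s in summed])
    return aggregated

# Two replicas, each holding gradients for two variables.
g0 = [[1.0, 2.0], [3.0]]
g1 = [[3.0, 4.0], [5.0]]
print(aggregate_gradients([g0, g1]))  # [[2.0, 3.0], [4.0]]
```

In a real multi-GPU setup the same averaging is done on device tensors rather than Python lists, but the bookkeeping (group by variable, sum, divide by the replica count) is identical.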


Supported Models



How to use this toolkit for machine translation?


TODO

  1. Organize the parameters and their interpretations in the config.
  2. Reformat and complete the code comments.
  3. Simplify the code and remove unnecessary parts.
  4. Improve the RNN models.
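The first item above (keeping each parameter together with its interpretation in the config) could look like the following hypothetical sketch. All names and values here are illustrative, not the toolkit's actual configuration.

```python
# Hypothetical config layout: each entry pairs a default value with a
# human-readable interpretation, so parameters and their documentation
# live in one place.
CONFIG = {
    # name: (default value, interpretation)
    "hidden_size":   (1000, "dimension of RNN hidden states"),
    "embed_size":    (620,  "dimension of word embeddings"),
    "batch_size":    (80,   "number of sentence pairs per training batch"),
    "learning_rate": (5e-4, "initial learning rate for the optimizer"),
}

def get(name):
    """Return the value of a configuration entry."""
    return CONFIG[name][0]

def describe(name):
    """Return the interpretation string of a configuration entry."""
    return CONFIG[name][1]

print(get("hidden_size"))      # 1000
print(describe("batch_size"))  # number of sentence pairs per training batch
```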


If you use the source code, please consider citing the following paper:

  @inproceedings{zhang-etal-2018-simplifying,
    author    = "Zhang, Biao and Xiong, Deyi and Su, Jinsong and Lin, Qian and Zhang, Huiji",
    title     = "Simplifying Neural Machine Translation with Addition-Subtraction Twin-Gated Recurrent Networks",
    booktitle = "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
    year      = "2018",
    publisher = "Association for Computational Linguistics",
    pages     = "4273--4283",
    location  = "Brussels, Belgium",
    url       = ""
  }

If you are interested in the CAEncoder model, please consider citing our TASLP paper:

  @article{zhang2017caencoder,
    author     = {Zhang, Biao and Xiong, Deyi and Su, Jinsong and Duan, Hong},
    title      = {A Context-Aware Recurrent Encoder for Neural Machine Translation},
    journal    = {IEEE/ACM Trans. Audio, Speech and Lang. Proc.},
    issue_date = {December 2017},
    volume     = {25},
    number     = {12},
    month      = dec,
    year       = {2017},
    issn       = {2329-9290},
    pages      = {2424--2432},
    numpages   = {9},
    url        = {},
    doi        = {10.1109/TASLP.2017.2751420},
    acmid      = {3180106},
    publisher  = {IEEE Press},
    address    = {Piscataway, NJ, USA}
  }


When developing this repository, I referred to the following projects:


For any questions or suggestions, please feel free to contact Biao Zhang.