SLQA

Multi-Granularity Hierarchical Attention Fusion Networks for Reading Comprehension and Question Answering

If anyone would like to test the performance on SQuAD, please tell me the final score. You only need to change coca_reader.py so that it reads the SQuAD dataset.
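As a sketch of that change: AllenNLP ships a registered "squad" dataset reader, so instead of editing coca_reader.py you may be able to swap the reader in the config. The indexer names below are illustrative assumptions and must match the model's embedders; the data URLs are the official SQuAD 1.1 files:

"dataset_reader": {
    "type": "squad",
    "token_indexers": {
        "tokens": {"type": "single_id", "lowercase_tokens": true},
        "elmo": {"type": "elmo_characters"}
    }
},
"train_data_path": "https://rajpurkar.github.io/SQuAD-explorer/dataset/train-v1.1.json",
"validation_data_path": "https://rajpurkar.github.io/SQuAD-explorer/dataset/dev-v1.1.json"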

Paper: http://www.aclweb.org/anthology/P18-1158
AllenNLP: https://github.com/allenai/allennlp

Tutorial

First, install AllenNLP and make sure you have downloaded ELMo and GloVe; you will find the version information in config/seperate_slqa.json. You can also load ELMo from a URL; see the AllenNLP tutorials for help.
mkdir elmo
mkdir glove
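For example, assuming the config expects the standard ELMo options/weights and the 840B.300d GloVe vectors (check config/seperate_slqa.json for the exact file names it points to), the download might look like:

wget -P elmo https://allennlp.s3.amazonaws.com/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_options.json
wget -P elmo https://allennlp.s3.amazonaws.com/elmo/2x4096_512_2048cnn_2xhighway/elmo_2x4096_512_2048cnn_2xhighway_weights.hdf5
wget -P glove http://nlp.stanford.edu/data/glove.840B.300d.zip
unzip glove/glove.840B.300d.zip -d glove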
Then, to train, run:
allennlp train config/seperate_slqa.json -s output_dir --include-package coca-qa
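Once training finishes, you can score the archived model with AllenNLP's built-in evaluate command (the dev-set path below is a placeholder for your own data file):

allennlp evaluate output_dir/model.tar.gz <path-to-dev-data.json> --include-package coca-qa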
To modify the model's parameters, edit config/seperate_slqa.json. I recommend learning how to use AllenNLP; it's easy and useful.
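For instance, training hyperparameters live in the config's trainer block; the values below are illustrative placeholders, not the repository's actual settings:

"trainer": {
    "num_epochs": 30,
    "patience": 10,
    "cuda_device": 0,
    "optimizer": {"type": "adam", "lr": 0.001}
}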

Note

Cleanup of the unused config files and further improvements won't be added until Sep. 2019.

For performance on SQuAD1.1 and further improvement, please see the issue page.

Updates:

update 12.6:

update 12.7:

update 12.9:

I think this version will be the final one, since I don't know how to reach the performance reported in the paper, where the model is better than BiDAF with self-attention and ELMo. The final F1 score on CoQA is 61.879, whereas BiDAF++ can reach 65. Also, I did not use any previous questions and answers; performance with that historical information may be good enough, but I have no time to test it now.

update 12.10:

F1 score while training: 63.81

TODO: