This repository contains the source code of the models submitted by the NTUA-SLP team to SemEval 2018 Tasks 1, 2 and 3.


Please follow the steps below to train our models:

1 - Install Requirements

pip install -r ./requirements.txt

2 - Download our pre-trained word embeddings

The models were trained on top of word2vec embeddings pre-trained on a large collection of Twitter messages. We collected a dataset of 550M English Twitter messages posted from 12/2014 to 06/2017. For training the word embeddings we used Gensim's implementation of word2vec, and for preprocessing the tweets we used ekphrasis. Finally, we used the following parameters for training the embeddings: window_size = 6, negative_sampling = 5 and min_count = 20.

We freely share our pre-trained embeddings:

After downloading, place the embeddings file in the /embeddings folder.
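If you want to inspect the downloaded file, embeddings in word2vec text format can be loaded with a few lines of plain Python. This is a hypothetical helper (the function name and the header-detection heuristic are ours, not part of the repository):

```python
def load_embeddings(path):
    """Load word2vec-format text embeddings into a dict of word -> vector."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        first = f.readline().split()
        # word2vec text files usually start with a "<vocab_size> <dim>" header;
        # if this file has none, rewind and treat the first line as a vector.
        if len(first) != 2 or not first[0].isdigit():
            f.seek(0)
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = [float(x) for x in parts[1:]]
    return vectors
```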

3 - Update model configs

Our model definitions are stored in a Python configuration file. Each config contains the model parameters and settings such as the batch size, number of epochs and embeddings file. You should update the embeddings_file parameter in the model's configuration in model/params.py.
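A config entry might look like the following sketch. The key names and values here are illustrative assumptions; check model/params.py for the actual parameter names used by each model.

```python
# Hypothetical config entry in the style of model/params.py.
# Only embeddings_file must point at the file you placed in /embeddings;
# the remaining values are example settings, not the repository's defaults.
SENTIMENT_2017 = {
    "name": "sentiment2017",
    "embeddings_file": "embeddings/your_embeddings_file.txt",  # update this
    "batch_size": 64,
    "epochs": 50,
}
```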

Example - Sentiment Analysis on SemEval 2017 Task 4A

You can verify that you have a working setup by training a sentiment analysis model on SemEval 2017 Task 4A, which is used as a source task for transfer learning in Task 1.

First, start the visdom server, which is needed for visualizing the training progress.

python -m visdom.server

Then just run the experiment.

python model/pretraining/sentiment2017.py



Project Structure

In order to make our codebase more accessible and easier to extend, we provide an overview of the structure of our project. The most important parts will be covered in greater detail.

Note: Full documentation of the source code will be posted soon.