SQuAD reading comprehension dataset (Stanford Question Answering Dataset)

What is SQuAD?
Stanford Question Answering Dataset (SQuAD) is a new reading comprehension dataset, consisting of questions posed by crowd workers on a set of Wikipedia articles, where the answer to every question is a segment of text, or span, from the corresponding reading passage. It has 100,000+ question-answer pairs on 500+ articles and is significantly larger than previous reading comprehension datasets.

Research paper

Rajpurkar, P., Zhang, J., Lopyrev, K., & Liang, P. (2016). SQuAD: 100,000+ Questions for Machine Comprehension of Text. arXiv preprint arXiv:1606.05250.

Resources
You can download a copy of the dataset (distributed under the CC BY-SA 4.0 license); a short sketch for loading the JSON follows the links below:

Training Set v1.1 (30 MB)

Dev Set v1.1 (5 MB)

Visual Explorer
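
The dataset is distributed as JSON: each article holds a list of paragraphs, each paragraph pairs a context passage with its question-answer pairs, and each answer records the answer text plus its character offset in the context. Here is a minimal sketch for loading and inspecting it, assuming the training file is saved as train-v1.1.json in the working directory:

```python
import json

# Load the downloaded training file (dev-v1.1.json has the same layout).
# The file name/path is an assumption; adjust it to wherever you saved the file.
with open("train-v1.1.json") as f:
    squad = json.load(f)

print(squad["version"])    # dataset version, e.g. "1.1"
print(len(squad["data"]))  # number of Wikipedia articles

# Walk the first article: each paragraph has a context passage and the
# question-answer pairs whose answers are spans of that context.
article = squad["data"][0]
print(article["title"])
for paragraph in article["paragraphs"][:1]:
    context = paragraph["context"]
    for qa in paragraph["qas"][:3]:
        answer = qa["answers"][0]
        print(qa["id"], "-", qa["question"])
        print("  answer:", answer["text"], "| answer_start:", answer["answer_start"])
```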

To evaluate your model, the SQuAD team also provides the official evaluation script, along with a sample prediction file in the format the script takes as input. To run the evaluation, use python evaluate-v1.1.py <path_to_dev-v1.1> <path_to_predictions> (see the sketch after the links below).

Evaluation Script v1.1

Sample Prediction File (on Dev v1.1)
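
For reference, the prediction file is a single JSON object mapping each question id from the dev set to the predicted answer string. A minimal sketch of producing one, where predict_answer is a hypothetical placeholder for your model's inference:

```python
import json

def predict_answer(question, context):
    # Hypothetical placeholder: a real model would return the predicted answer span.
    return context.split(".")[0]

with open("dev-v1.1.json") as f:
    dev = json.load(f)

# Map every question id in the dev set to a predicted answer string.
predictions = {}
for article in dev["data"]:
    for paragraph in article["paragraphs"]:
        for qa in paragraph["qas"]:
            predictions[qa["id"]] = predict_answer(qa["question"], paragraph["context"])

with open("predictions.json", "w") as f:
    json.dump(predictions, f)
```

Then run python evaluate-v1.1.py dev-v1.1.json predictions.json, which prints the exact-match and F1 scores used on the leaderboard.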

Once you have built a model that works to your expectations on the dev set, you can submit it to get official scores on the dev set and a hidden test set. To preserve the integrity of test results, the researchers do not release the test set to the public. Instead, they require you to submit your model so that they can run it on the test set for you. Here’s a tutorial walking you through official evaluation of your model:

Submission Tutorial

The current leaderboard, along with an explorer for SQuAD examples and model predictions, is available on the SQuAD website.

Share your model
If you have built a model with this dataset (or did so in the past), please share it in #projects and link to this topic in your post. That way, your post will automatically show up linked below this one and we can all see each other’s work.
