My notes on Visual Question Answering (VQA) papers
Updated Sep 12, 2017
A crowd-sourcing system for Visual Question Answering
Baselines and neural network models for the Visual Question Answering task
TensorFlow implementation of the CNN-LSTM, Relation Network and text-only baselines for the paper "FigureQA: An Annotated Figure Dataset for Visual Reasoning"
Implementation of the visual question answering model from the paper "Exploring Models and Data for Image Question Answering".
Yet Another Visual Question Answering in MXNet
Co-attending Regions and Detections for VQA.
Code to reproduce results in our ACL 2018 paper "Did the Model Understand the Question?"
Visual Question Answering
PyTorch VQA implementation that achieved top performances in the (ECCV18) VizWiz Grand Challenge: Answering Visual Questions from Blind People
PyTorch implementation of the winning entry from the VQA Challenge Workshop at CVPR'17
📷 ❓ Visual Question Answering Demo and Algorithmia API
A PyTorch implementation of the DMN+ model on the MSCOCO VQA dataset.
IPython Notebook showing a PyTorch implementation of Google DeepMind's Relation Network paper
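The Relation Network mentioned above works by applying a learned pairwise function g to every ordered pair of "objects" (e.g. CNN feature-map cells), summing the results, and feeding the sum to a final function f that produces the answer. A minimal stdlib-only sketch of that aggregation structure, where `g_theta` is a toy stand-in (the paper uses MLPs over concatenated pairs, conditioned on the question):

```python
from itertools import product

def g_theta(o_i, o_j):
    # Toy stand-in for the pairwise relation function; in the paper this is
    # an MLP over the concatenated object pair (plus the question embedding).
    return [a + b for a, b in zip(o_i, o_j)]

def relation_network(objects):
    # Core Relation Network idea: apply g to every ordered pair of objects
    # and sum the results; a final function f_phi (identity here) would map
    # this permutation-invariant sum to an answer distribution.
    total = [0.0] * len(objects[0])
    for o_i, o_j in product(objects, repeat=2):
        pair = g_theta(o_i, o_j)
        total = [t + p for t, p in zip(total, pair)]
    return total

# Two 2-dimensional "objects": the sum runs over all 4 ordered pairs.
print(relation_network([[1.0, 2.0], [3.0, 4.0]]))  # → [16.0, 24.0]
```

Because the sum ranges over all pairs, the output is invariant to object ordering, which is the property the paper exploits for relational reasoning.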