VQA Demo

An introduction to VQA and my demo for it at www.apoorvesinghal.com/vqa

VQA stands for visual question answering. Given an image, the system can answer questions about it.

For example, consider that we would like to ask a few questions about this image:

Input image - A photo of a man feeding an elephant.

Some questions that we can ask are -

  1. What is the colour of the person’s coat?
  2. What is the elephant doing?
  3. How many people are wearing green coats?
  4. What is the colour of the person’s hat?

to which the system would answer -

  1. green
  2. eating
  3. 1
  4. white

Interesting, isn’t it? But wait, there’s more!

We can also get an idea of the parts of the image that the system ‘looks’ at to answer a question. The scientific term for this is attention.

As an example, for the question “What is the colour of the person’s coat?”, we can expect the system to look at the person and his clothes. Let’s see what the model is looking at:

Attention map - The system is looking at the person's clothes.

Bingo! As we expected, the system is looking at the person and his coat to see what colour it is!
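Attention can be sketched in a few lines: each image region gets a relevance score for the question, and a softmax turns those scores into weights that sum to 1. The regions and scores below are invented for illustration, not taken from the demo's model:

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max())
    return e / e.sum()

regions = ["sky", "elephant", "person's coat", "grass"]

# Hypothetical relevance scores for "What is the colour of the person's coat?"
scores = np.array([0.1, 1.0, 4.0, 0.3])

weights = softmax(scores)
for region, w in zip(regions, weights):
    print(f"{region:>15}: {w:.2f}")

# The region with the highest weight is where the model "looks".
print("most attended:", regions[int(np.argmax(weights))])
```

The attention map shown above is just these weights painted over the image: bright where the weight is high, dark where it is low.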

If I have piqued your interest, I suggest you go to www.apoorvesinghal.com/vqa and play around with the demo yourself. It’s a lot of fun!

Code for the demo - https://github.com/apugoneappu/ask_me_anything


Kindly let me know if you liked (or disliked) the demo; it helps me improve :)