Subject
Explainable AI is an important topic in machine learning and deep learning. Before AI models are deployed in everyday life, explaining their decision-making process becomes critical for reasons such as trust, accountability, and understanding and correcting model bias. Can we deploy a self-driving car without knowing whether it stops because it recognizes a stop sign, or merely because it sees the color red? In fact, in one of the earlier object detection competitions it was discovered that models learned to look at a watermark when detecting elephants, simply because all elephant images in the dataset carried that watermark; when presented with new images outside the original dataset (without the watermark), the model made wrong predictions. Natural Language Explanations refer to explaining the decision-making process of an AI model with natural text rather than heatmaps. Such text is human-friendly, detailed, and fine-grained.
Kind of work
The student is required to develop new models for natural language explanations for tasks such as visual question answering, visual entailment, visual reasoning, visual recognition, etc. The student may have a look at examples in the following papers (a small illustrative model sketch follows the list):
- NLX-GPT: A Model for Natural Language Explanations in Vision and Vision-Language Tasks
- Multimodal Explanations: Justifying Decisions and Pointing to the Evidence
- Faithful Multimodal Explanation for Visual Question Answering
- e-ViL: A Dataset and Benchmark for Natural Language Explanations in Vision-Language Tasks
- CLEVR-X: https://explainableml.github.io/CLEVR-X/
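To make the task concrete, here is a minimal, hypothetical PyTorch sketch of the formulation commonly used in such work: a decoder attends over pre-extracted visual features and autoregressively generates an answer followed by a textual explanation (e.g. "<answer> because <explanation>"). All module choices, feature dimensions, and names below are illustrative assumptions, not the architecture of any of the cited papers.

```python
import torch
import torch.nn as nn


class NLEModel(nn.Module):
    """Toy answer-and-explanation generator: fuse visual features with the
    question tokens, then decode one token sequence of the form
    "<answer> because <explanation>". All dimensions are illustrative."""

    def __init__(self, vocab_size=30522, d_model=512, n_heads=8, n_layers=4, feat_dim=2048):
        super().__init__()
        self.visual_proj = nn.Linear(feat_dim, d_model)   # project image region/patch features
        self.token_emb = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerDecoderLayer(d_model, n_heads, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, image_feats, tokens):
        # image_feats: (B, R, feat_dim) pre-extracted visual features
        # tokens: (B, T) question tokens followed by answer+explanation tokens
        memory = self.visual_proj(image_feats)
        tgt = self.token_emb(tokens)
        mask = nn.Transformer.generate_square_subsequent_mask(tokens.size(1)).to(tokens.device)
        hidden = self.decoder(tgt, memory, tgt_mask=mask)
        return self.lm_head(hidden)  # next-token logits; train with cross-entropy


# Example usage with random inputs (shapes only, no real data)
model = NLEModel()
feats = torch.randn(2, 36, 2048)            # e.g. 36 region features per image
tokens = torch.randint(0, 30522, (2, 20))   # question + "<answer> because <explanation>"
logits = model(feats, tokens)               # (2, 20, 30522)
```

At inference time the same decoder would generate tokens greedily or with beam search, so the answer and its explanation are produced as a single sequence; the cited papers explore variants of this idea as well as separate prediction and explanation modules.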
Number of Students
1-2
Expected Student Profile
- Prior knowledge in Machine Learning
- Prior knowledge in Python and PyTorch