
Master theses

Current and past ideas and concepts for master theses.

Explaining Deep Neural Models for Natural Language Processing

Subject

Deep neural network models (e.g., RNNs, LSTMs, BERT) have achieved state-of-the-art performance across various natural language processing (NLP) tasks, such as machine translation, sentiment analysis, and named entity recognition. Despite this strong performance, the models themselves are black boxes, which makes it difficult to understand how they reach their decisions. Explaining deep neural models is therefore becoming increasingly important for improving their interpretability and trustworthiness in real-world applications, especially in critical domains such as healthcare.
Two key factors determine the prediction of a deep neural NLP model: (1) the input text and (2) the model parameters. Many proposed methods explain model behavior through these two factors. For example, saliency maps highlight the parts of the input text that are most influential for the model's prediction (a minimal sketch is given below), while influence functions trace a prediction back to the training data by estimating how the model would change if a single training example were modified or removed. However, there is as yet no standard for model interpretability. We would like to review the existing methodologies in order to propose a general benchmark as well as explainable models.
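
To illustrate the saliency-map idea, the following minimal sketch computes gradient-based saliency scores for a sentiment classifier. It assumes PyTorch and the Hugging Face transformers library; the checkpoint name and the example sentence are chosen purely for illustration, not prescribed by this proposal.

# A minimal gradient-saliency sketch, assuming PyTorch and Hugging Face
# transformers; the checkpoint below is illustrative.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("The movie was surprisingly good.", return_tensors="pt")

# Embed the tokens ourselves so we can take gradients w.r.t. the embeddings.
embeddings = model.get_input_embeddings()(inputs["input_ids"]).detach()
embeddings.requires_grad_(True)

logits = model(inputs_embeds=embeddings,
               attention_mask=inputs["attention_mask"]).logits
predicted_class = logits.argmax(dim=-1).item()

# Backpropagate the predicted-class logit to the input embeddings.
logits[0, predicted_class].backward()

# Saliency score per token: L2 norm of the gradient of its embedding.
saliency = embeddings.grad[0].norm(dim=-1)
tokens = tokenizer.convert_ids_to_tokens(inputs["input_ids"][0])
for token, score in zip(tokens, saliency):
    print(f"{token:>12s}  {score.item():.4f}")

Tokens with larger gradient norms contribute more strongly to the predicted class; variants of this idea (e.g., gradient times input, integrated gradients) are among the interpretation techniques the thesis would compare.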

Kind of work

In this thesis, we will work on different datasets to build explainable deep neural models (especially transformer-based models) for different NLP tasks (e.g., sentiment analysis, natural language inference). Specifically, we will focus on (i) exploring different interpretation techniques in NLP, (ii) identifying the limitations of the current state-of-the-art approaches, and (iii) addressing these limitations by proposing new model architectures with better interpretability; a minimal architecture sketch follows.
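
To give a flavor of an interpretable-by-design architecture, the sketch below follows the select-then-predict idea of Lei et al. (2016), listed in the framework references: a generator scores tokens for inclusion in a rationale and a predictor classifies using only the selected tokens. The model class, its dimensions, and the soft (rather than sampled) mask are simplifying assumptions made for illustration.

# A toy rationale-extraction sketch in the spirit of Lei et al. (2016).
import torch
import torch.nn as nn

class RationaleModel(nn.Module):
    def __init__(self, vocab_size=10000, embed_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.generator = nn.Linear(embed_dim, 1)          # token selection scores
        self.predictor = nn.Linear(embed_dim, num_classes)

    def forward(self, token_ids):
        x = self.embed(token_ids)                         # (batch, seq, dim)
        select_prob = torch.sigmoid(self.generator(x))    # (batch, seq, 1)
        # Soft mask for differentiability; Lei et al. sample a hard mask
        # and train the generator with REINFORCE, omitted in this sketch.
        masked = x * select_prob
        logits = self.predictor(masked.mean(dim=1))       # pool, then classify
        # Sparsity penalty term: encourages short, readable rationales.
        sparsity = select_prob.mean()
        return logits, select_prob.squeeze(-1), sparsity

model = RationaleModel()
token_ids = torch.randint(0, 10000, (1, 12))
logits, rationale_scores, sparsity = model(token_ids)
print(logits.shape, rationale_scores.shape, sparsity.item())

The per-token selection probabilities serve directly as the explanation, which is one way a new architecture can build interpretability in rather than bolting it on afterwards.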

Framework of the Thesis

Sun, X., Yang, D., Li, X., et al. (2021). Interpreting Deep Learning Models in Natural Language Processing: A Review. arXiv preprint arXiv:2110.10470.
Han, X., Wallace, B. C., & Tsvetkov, Y. (2020). Explaining Black Box Predictions and Unveiling Data Artifacts through Influence Functions. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 5553-5563.
Lei, T., Barzilay, R., & Jaakkola, T. (2016). Rationalizing Neural Predictions. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (EMNLP).

Number of Students

1 or 2 (preferably as a duo thesis)

Expected Student Profile

The student should have a background in machine learning and natural language processing, be proficient in the Python programming language, and be familiar with at least one deep learning framework (PyTorch or TensorFlow).

Promotor

Prof. Dr. Ir. Nikos Deligiannis

+32 (0)2 629 1683

ndeligia@etrovub.be


Supervisor

Mr. Xiangyu Yang

+32 (0)2 629 2930

xyanga@etrovub.be
