ETRO VUB

Master theses

Current and past ideas and concepts for Master Theses.

Explain Predictions made by GNNs

Subject

Graph neural networks (GNNs) are a class of deep neural networks (DNNs) that operate on data represented as graphs. GNNs exploit the correlations present in graph-structured data. However, like many DNNs, GNNs are not interpretable-by-design, which means that their predictions are not explainable. The problem of explainability in deep learning (i.e., explaining a neural network's decision) has recently gained importance, in part due to the GDPR regulation in the EU. Understanding GNN predictions lends credibility to these models and helps reduce bias. Hence, the general goal of an explainable GNN model (and of this master thesis) is to study and explain why the model outputs a specific prediction.
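To make the graph-based computation concrete, the sketch below shows one round of GNN-style message passing on a toy graph, written in plain Python without a deep-learning framework. The graph, the scalar node features, and the fixed weights are illustrative assumptions; a real GNN would learn the combination weights and use vector features.

```python
# One aggregation/update step of message passing on a toy graph.
# Each node averages its neighbours' features and combines the result
# with its own feature: h_i' = w_self * h_i + w_neigh * mean_j(h_j).
def message_passing_step(features, adjacency, w_self=0.5, w_neigh=0.5):
    updated = []
    for i, h_i in enumerate(features):
        neighbours = [features[j] for j in adjacency[i]]
        agg = sum(neighbours) / len(neighbours) if neighbours else 0.0
        updated.append(w_self * h_i + w_neigh * agg)
    return updated

# Toy path graph 0-1-2 (undirected) with scalar features.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
features = [1.0, 2.0, 3.0]
print(message_passing_step(features, adjacency))  # → [1.5, 2.0, 2.5]
```

Stacking several such steps lets information propagate over longer paths in the graph, which is exactly the structure that explanation methods later try to attribute predictions to.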

Kind of work

Although explainability techniques exist for classical machine learning models, few have been developed for deep learning, and even fewer for GNNs. We would like to review the existing methodologies to provide a general benchmark and, ideally, an improved model.
In this master thesis, we will focus on explainable DNN and GNN models and tools. Specifically, the focus will be on investigating existing techniques, developing novel explainability tools to be applied to GNNs, or designing a novel explainable-by-design GNN. Concretely, we will (i) investigate existing baselines for the task, (ii) identify issues in current state-of-the-art approaches, and (iii) alleviate those issues by developing new neural network architectures with better explainability.
In this thesis, we will work with different datasets to analyze and benchmark explanation techniques for deep graph models. For instance, these can be applied to weather forecasting (rain/sun, and why) or predicting crowd density in a street (busy/not busy, and why, e.g. for coronavirus measures), social media analysis, and many other tasks. Explainable GNNs can also be applied to regression tasks such as predicting natural phenomena (pollutant concentrations or temperature). The student can choose from our existing datasets or use a dataset of their choice.
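One simple family of explanation techniques mentioned above works by perturbation: score each node by how much masking it changes the model's prediction, in the spirit of (but much simpler than) GNNExplainer. The sketch below uses a stand-in "model" (a degree-weighted sum of node features) purely for illustration; any real GNN prediction function could be substituted.

```python
# Placeholder prediction function: degree-weighted sum of node features.
# Stands in for a trained GNN's scalar output.
def toy_model(features, adjacency):
    return sum(len(adjacency[i]) * h for i, h in enumerate(features))

# Perturbation-based node importance:
# importance(i) = |f(G) - f(G with node i's feature zeroed out)|.
def node_importance(features, adjacency, model):
    base = model(features, adjacency)
    scores = []
    for i in range(len(features)):
        masked = list(features)
        masked[i] = 0.0
        scores.append(abs(base - model(masked, adjacency)))
    return scores

# Toy path graph 0-1-2: the central node matters most to this model.
adjacency = {0: [1], 1: [0, 2], 2: [1]}
features = [1.0, 2.0, 3.0]
print(node_importance(features, adjacency, toy_model))  # → [1.0, 4.0, 3.0]
```

Gradient-based methods replace the masking loop with a single backward pass, and learned-mask methods such as GNNExplainer optimize a soft mask over nodes and edges instead of zeroing them one at a time.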

Framework of the Thesis

Graph Neural Networks (GNNs):
• "Graph Neural Networks: A Review of Methods and Applications", Jie Zhou et al.
• "A Gentle Introduction to Graph Neural Networks", Sanchez-Lengeling et al., Distill, 2021. (https://distill.pub/2021/gnn-intro/)
Explanation techniques for GNNs:
• "Higher-Order Explanations of Graph Neural Networks via Relevant Walks", Thomas Schnake et al.
• "Explainability Techniques for Graph Convolutional Networks", Federico Baldassarre and Hossein Azizpour, 2019.
• "Explainability Methods for Graph Convolutional Neural Networks", Phillip E. Pope et al., 2019.
• "GNNExplainer: Generating Explanations for Graph Neural Networks", Rex Ying et al., 2019.
• "Explainability in Graph Neural Networks: A Taxonomic Survey", Hao Yuan et al., 2021.

Number of Students

1

Expected Student Profile

Good knowledge of machine learning, AI, and data processing. Good programming skills in Python (particularly PyTorch).

Promotor

Prof. Dr. Ir. Nikos Deligiannis

+32 (0)2 629 1683

ndeligia@etrovub.be


Supervisor

Miss Esther Rodrigo

+32 (0)2 629 2930

erodrigo@etrovub.be


Contact

ETRO Department

info@etro.vub.ac.be

Tel: +32 2 629 29 30

©2024 • Vrije Universiteit Brussel • ETRO Dept. • Pleinlaan 2 • 1050 Brussels • Tel: +32 2 629 2930 (secretariat) • Fax: +32 2 629 2883