
Master theses

Current and past ideas and concepts for Master Theses.

Explaining Predictions made by GNNs

Subject

Deep neural networks (DNNs) have gained enormous interest in recent years thanks to their capability to learn latent representations and make accurate predictions with them. Graph neural networks (GNNs) are a special type of DNN designed to work with data that can be represented as graphs. This allows GNNs to exploit the additional correlations encoded in the graph structure of the data at hand. Understanding how DNNs (and more specifically GNNs) arrive at their predictions is essential to give credibility to these models. Moreover, the problem of explainability in machine learning (i.e., explaining a neural network's decision) has recently gained importance due to the GDPR regulation in the EU. Hence, the goal of an explainable deep GNN model in a classification task is to explain why the model chose a specific class as its output.
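To make the graph setting concrete, the sketch below implements a single graph-convolution layer in plain PyTorch, following the well-known propagation rule H' = act(D^-1/2 (A+I) D^-1/2 H W) of Kipf and Welling. This is a minimal illustration only; the class name SimpleGCNLayer and the toy graph are ours, not part of any existing thesis codebase.

import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    """One graph-convolution layer: H' = act(D^-1/2 (A+I) D^-1/2 H W).

    A minimal illustration of how a GNN mixes each node's features with
    those of its neighbours; not tied to any particular library.
    """
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # Add self-loops so every node also keeps its own features.
        a_hat = adj + torch.eye(adj.size(0), device=adj.device)
        # Symmetric degree normalisation: D^-1/2 A_hat D^-1/2.
        deg_inv_sqrt = a_hat.sum(dim=1).pow(-0.5)
        norm = deg_inv_sqrt.unsqueeze(1) * a_hat * deg_inv_sqrt.unsqueeze(0)
        # Aggregate neighbour features, then apply the learnable weights.
        return torch.relu(self.linear(norm @ x))

# Toy usage: 4 nodes with 3 features each on a small undirected graph.
adj = torch.tensor([[0., 1., 0., 0.],
                    [1., 0., 1., 1.],
                    [0., 1., 0., 0.],
                    [0., 1., 0., 0.]])
x = torch.randn(4, 3)
layer = SimpleGCNLayer(3, 8)
print(layer(x, adj).shape)  # torch.Size([4, 8])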
In this master thesis, we will focus on explainable DNN and GNN models and tools. Specifically, the focus will be on investigating existing techniques and on developing novel explainability tools for GNNs or explainable-by-design GNN architectures. Such methods can be applied, for instance, to weather forecasting (rain/sun, and why), to predicting people density in a street (busy/not busy, and why, e.g., for coronavirus measures), to social media analysis, and to many other tasks. Explainable GNNs can also be applied to regression tasks such as predicting natural phenomena (e.g., pollutant concentrations or temperature). Although explainability techniques have been developed for other machine learning methods, few exist for deep learning and even fewer for GNNs. We would like to review the existing methodologies to provide a general benchmark and an improved model.
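To give a flavour of such tools, one of the simplest post-hoc baselines covered by the references listed below is gradient-based saliency: the gradient of the predicted class score with respect to the node features indicates which nodes drove the decision. The following sketch continues the illustrative GCN example above; node_saliency is a hypothetical helper, and mean-pooling node outputs into a graph-level score is an assumption of ours.

def node_saliency(model: nn.Module, x: torch.Tensor, adj: torch.Tensor,
                  target_class: int) -> torch.Tensor:
    """Score each node's importance for one predicted class via gradients.

    Gradient saliency is only one baseline an explainability benchmark
    would include; GNNExplainer-style mask learning is another.
    """
    x = x.clone().requires_grad_(True)
    # Graph-level score: here simply mean-pool node outputs per class.
    score = model(x, adj).mean(dim=0)[target_class]
    score.backward()
    # L2 norm of the gradient per node = that node's saliency.
    return x.grad.norm(dim=1)

saliency = node_saliency(layer, x, adj, target_class=0)
print(saliency)  # higher values = nodes more influential for class 0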

Kind of work

In this thesis, we will work on different datasets for analyzing and benchmarking explainability techniques for deep graph models. Specifically, we will focus on (i) investigating existing baselines for the task, (ii) identifying issues present in current state-of-the-art approaches, and (iii) alleviating those issues by developing new neural network architectures with better explainability. The student can choose from our existing datasets or propose a dataset of their own choice.
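As an example of what benchmarking explanations can mean quantitatively, a standard proxy discussed in the survey by Yuan et al. (listed below) is fidelity: the drop in the model's class score when the nodes an explanation marks as important are removed. A minimal sketch, again reusing the illustrative model and saliency scores from above:

def fidelity(model: nn.Module, x: torch.Tensor, adj: torch.Tensor,
             saliency: torch.Tensor, target_class: int, k: int = 2) -> float:
    """Fidelity-style metric: how much the class score drops when the
    k most salient nodes have their features zeroed out.
    """
    with torch.no_grad():
        full = model(x, adj).mean(dim=0)[target_class].item()
        masked_x = x.clone()
        masked_x[saliency.topk(k).indices] = 0.0  # remove "important" nodes
        masked = model(masked_x, adj).mean(dim=0)[target_class].item()
    return full - masked

print(fidelity(layer, x, adj, saliency, target_class=0))

A larger drop suggests the explanation identified genuinely influential nodes; comparing such scores across methods and datasets is one way to build the intended benchmark.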

Framework of the Thesis

“Explainability Techniques for Graph Convolutional Networks,” Federico Baldassarre and Hossein Azizpour, 2019.
“Explainability Methods for Graph Convolutional Neural Networks,” Phillip E. Pope et al., 2019.
“GNNExplainer: Generating Explanations for Graph Neural Networks,” Rex Ying et al., 2019.
“Explainability in Graph Neural Networks: A Taxonomic Survey,” Hao Yuan et al., 2021.

Number of Students

1-2

Expected Student Profile

Proven programming experience (e.g., Python)
Background in machine learning
Prior experience with state-of-the-art machine learning frameworks (e.g., TensorFlow, PyTorch)

Promotor

Prof. Dr. Ir. Nikos Deligiannis

+32 (0)2 629 1683

ndeligia@etrovub.be


Supervisor

Ms. Esther Rodrigo

+32 (0)2 629 2930

erodrigo@etrovub.be

