Subject
Deepfakes, realistic AI-generated synthetic media (including images and videos), pose significant threats to security, privacy, and trust in digital information. Detecting deepfakes is crucial, but equally important is ensuring that detection systems provide transparent and comprehensible explanations for their decisions. Explainable deepfake detection aims to bridge the gap between high detection accuracy and the need for interpretability, making the technology more trustworthy for various stakeholders, including non-experts.
Kind of work
For this thesis, the student will learn state-of-the-art interpretable and explainable AI methods and apply them to assess how well users understand and trust the deepfake detection process. The thesis will leverage an extensive code base on explainable and interpretable AI developed at ETRO, as well as datasets already generated by the group in prior research within national and European projects.
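As a minimal illustration of the kind of visual explanation involved, the sketch below computes a gradient-based saliency map for a binary real/fake image classifier. It is only a sketch under stated assumptions, not the group's actual code base: the pretrained ResNet-18 is a hypothetical stand-in for a deepfake detector, and the input file name is invented.

```python
# Sketch: gradient-based saliency map for a real/fake classifier.
# The ResNet-18 below is a hypothetical stand-in for an actual detector.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

# Stand-in detector: ResNet-18 with a 2-way head (real vs. fake).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def saliency_map(image_path: str) -> torch.Tensor:
    """Return a per-pixel map of how strongly each pixel influences
    the detector's 'fake' score."""
    x = preprocess(Image.open(image_path).convert("RGB")).unsqueeze(0)
    x.requires_grad_(True)
    fake_score = model(x)[0, 1]   # logit of the 'fake' class
    fake_score.backward()         # gradients w.r.t. input pixels
    # Max absolute gradient over color channels -> one value per pixel.
    return x.grad.abs().max(dim=1)[0].squeeze(0)

# Usage (hypothetical file): heat = saliency_map("suspect_frame.png")
```

Gradient saliency is just one of many attribution methods; the point is that the explanation takes the form of an image-shaped heat map that users can inspect, which is exactly the kind of output whose comprehensibility and trustworthiness the thesis would evaluate with users.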
Framework of the Thesis
The thesis relates to national and European projects running in the research group. A large amount of user data has already been collected via the http://www.qxviz.ai website, which was developed by the research group in collaboration with KU Leuven and UAntwerp.
Some relevant publications are listed below:
- https://arxiv.org/pdf/1907.06831
- https://dare.uva.nl/search?identifier=a004a1c5-942b-40b7-9f29-1467265debfc
- Yang, B. Joukovsky, J. Oramas, T. Tuytelaars, N. Deligiannis, "SNIPPET: A Framework for Subjective Evaluation of Visual Explanations Applied to DeepFake Detection," ACM Transactions on Multimedia Computing, Communications, and Applications, 2024.
Number of Students
1
Expected Student Profile
Good knowledge of machine learning and Python programming. Experience in image/video processing is a strong plus.