Subject
The integration of AI and ML in manufacturing has revolutionized the industry but has also introduced new security challenges [1]. Adversarial attacks, which manipulate these models into making errors, are a significant threat, potentially leading to safety compromises or costly mistakes. A recent concern is the emergence of adversarial patches: patterns that, when seen by an AI or ML model, cause it to misinterpret its inputs. The ability to incorporate such patches into everyday items, such as t-shirts, presents a unique challenge [2]. These patches could, for instance, cause a security system to miss an unauthorized entry, or a quality-control system to mistakenly approve a defective product. They could also have serious safety implications: an AI system might fail to identify a worker who is not wearing proper safety gear, leading to unsafe conditions. These threats highlight the need for robust security measures in AI and ML applications in manufacturing. As the adoption of these technologies progresses, it is crucial that we stay alert in enhancing their robustness in the face of adversarial threats, striking a balance between technological advancement and safety.
Kind of work
The objective of this research is twofold. First, we aim to build a Transformer-based model that can effectively recognize safety equipment in manufacturing environments. Transformers, with their self-attention mechanism, have shown remarkable success in various fields, including natural language processing and computer vision [3]. Importantly, Vision Transformers have been shown to be more robust against adversarial attacks [4]. Second, we aim to design adversarial patches intended to deceive the Transformer model. This step is crucial, as it provides a means to assess the resilience of the Transformer model against such attacks. The algorithms will be implemented in Python, using open-source image processing and machine learning libraries. The project will involve:
- Literature study
- DL model development and performance evaluation
- Adversarial patch generation
- Thesis writing
Framework of the Thesis
This project relates to various industrial projects running within the ETRO department.
Below are some references that the interested student can consult:
[1] R. S. S. Kumar et al., "Adversarial machine learning - industry perspectives," in 2020 IEEE Security and Privacy Workshops (SPW), IEEE, 2020, pp. 69-75.
[2] K. Xu et al., "Adversarial t-shirt! Evading person detectors in a physical world," in Computer Vision - ECCV 2020: 16th European Conference, Glasgow, UK, August 23-28, 2020, Proceedings, Part V, Springer, 2020, pp. 665-681.
[3] S. Khan, M. Naseer, M. Hayat, S. W. Zamir, F. S. Khan, and M. Shah, "Transformers in vision: A survey," ACM Comput. Surv. (CSUR), vol. 54, no. 10s, pp. 1-41, 2022.
[4] K. Mahmood, R. Mahmood, and M. Van Dijk, "On the robustness of vision transformers to adversarial examples," in Proceedings of the IEEE/CVF International Conference on Computer Vision, 2021, pp. 7838-7847.
Number of Students
1
Expected Student Profile
Following an MSc in a field related to one or more of the following: Computer Science, Biomedical Engineering, Applied Computer Science.
- Strong programming skills (Python).
- Experience with image processing and DL.
- Interest/motivation in developing state-of-the-art DL methods and conducting experiments.
- Ability to write scientific reports and communicate research results in English.