Graph Neural Networks (GNNs) have demonstrated state-of-the-art performance on various graph representation learning tasks. Recently, however, studies have revealed their vulnerability to adversarial attacks.
In this work, we conduct a theoretical examination of the robustness of message-passing GNNs, aiming to bridge the gap between defense methods and robustness certificates, with a focus on Graph Convolutional Networks (GCNs). To this end, we first establish a formal definition of robustness in the context of graphs and use it to derive an upper bound on the robustness of GCNs. Building on these findings, we develop both new attacks and new defenses with theoretical guarantees. Our work covers both node and graph classification.
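For concreteness, one standard formulation of such a robustness notion for node classification (an illustrative sketch, not necessarily the exact definition adopted in this work; the perturbation set $\mathcal{P}_\Delta(G)$ and budget $\Delta$ are assumptions) certifies a node $v$ as robust if the classifier's prediction on $v$ is unchanged under every admissible perturbation of the graph:
\begin{equation*}
    \forall \, \tilde{G} \in \mathcal{P}_\Delta(G): \quad
    \arg\max_{c} f_\theta(\tilde{G})_{v,c} = \arg\max_{c} f_\theta(G)_{v,c},
\end{equation*}
where $f_\theta$ denotes the GCN, $G$ the clean graph, and $\mathcal{P}_\Delta(G)$ the set of graphs obtained from $G$ by at most $\Delta$ edge or feature modifications. An upper bound on robustness then corresponds to a budget $\Delta$ beyond which no classifier of the given family can satisfy this condition.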