Abstract: Most graph anomaly detection methods leverage graph neural networks (GNNs) to learn from relatively high-quality graph data. Unfortunately, such ideal scenarios are rare in real-world applications, where most data suffer from issues such as missing labels, dynamic changes, and structural incompleteness, collectively referred to as the dynamic incomplete graph (DIG) setting. To address the performance degradation of GNNs under these extreme conditions, this study proposes an explainable dynamic incomplete graph anomaly detection (EXDIG) method. Its core is a graph masked autoencoder framework optimized with a strong-weak mutual information (SWMI) loss. The framework simulates real-world DIG scenarios by masking graph structures (nodes and edges) and node features. Through the SWMI loss, it captures the relationship between structure and features while maintaining structural integrity, reducing overfitting, and improving generalization. Furthermore, EXDIG enhances the interpretability of anomaly detection in DIGs by applying masked perturbations to nodes, edges, and features, enabling the identification of key components and providing transparent, trustworthy explanations for anomaly detection results. This study evaluates EXDIG on nine real-world graph datasets, and the results demonstrate its superiority over state-of-the-art methods across different levels of DIG severity and across various downstream tasks and representation learning evaluations, both supervised and unsupervised. Specifically, on the Amazon anomaly detection dataset, EXDIG achieves improvements of over 13% and 15% in NMI and ARI, respectively, and it keeps F1-score fluctuations within 5% as the dynamic incompleteness ratio varies from 25% to 99%. Notably, EXDIG is the first method to enable node-level interpretability in dynamic incomplete graphs.
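To make the DIG simulation concrete, the sketch below shows one plausible way to mask nodes, edges, and feature entries of a graph, as the abstract describes. This is an illustrative reconstruction, not the paper's actual implementation; the function name, masking ratios, and dense-adjacency representation are all assumptions for exposition.

```python
import numpy as np

def mask_graph(adj, feats, node_ratio=0.3, edge_ratio=0.3, feat_ratio=0.3, seed=0):
    """Randomly mask nodes, edges, and feature entries to simulate a
    dynamic incomplete graph (DIG). Illustrative sketch only: the real
    EXDIG masking strategy may differ.

    adj:   (n, n) symmetric adjacency matrix (dense, 0/1)
    feats: (n, d) node feature matrix
    """
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    adj = adj.copy()
    feats = feats.copy()

    # Node masking: remove all edges incident to the masked nodes.
    masked_nodes = rng.random(n) < node_ratio
    adj[masked_nodes, :] = 0
    adj[:, masked_nodes] = 0

    # Edge masking: drop a fraction of the remaining undirected edges,
    # keeping the adjacency matrix symmetric.
    rows, cols = np.triu(adj, k=1).nonzero()
    drop = rng.random(rows.size) < edge_ratio
    adj[rows[drop], cols[drop]] = 0
    adj[cols[drop], rows[drop]] = 0

    # Feature masking: zero out individual feature entries.
    feat_mask = rng.random(feats.shape) < feat_ratio
    feats[feat_mask] = 0.0

    return adj, feats, masked_nodes, feat_mask
```

A masked autoencoder would then be trained to reconstruct the original `adj` and `feats` from these corrupted views, which is what lets the model cope with incompleteness ratios as high as those reported in the abstract.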