Abstract: Open-world continual learning (OWCL) models real-world scenarios in which tasks evolve continuously, categories change dynamically, and unseen samples are encountered. A well-designed OWCL model should not only retain knowledge of previously learned tasks while acquiring new ones, but also recognize unknown categories, thereby achieving continuous and robust knowledge accumulation and generalization. However, most existing continual learning methods are built on the closed-world assumption and cannot effectively cope with the category uncertainty and inter-task interference introduced by open categories; in particular, they show clear limitations in balancing knowledge stability and plasticity. Starting from a formal definition of the OWCL problem, this study proposes a task-aware prompt-driven mixture-of-experts model (TP-MoE), which dynamically models task semantics and efficiently schedules expert modules, thereby supporting both knowledge transfer and knowledge updating. Specifically, TP-MoE introduces a plug-and-play task-prompt aggregation mechanism and an improved gating strategy for expert routing, enabling continual integration of historical and current task knowledge as tasks arrive. In addition, an adaptive open-boundary thresholding strategy dynamically adjusts the decision boundaries of open categories according to the transfer between new and old knowledge, enhancing both open-category detection and known-category classification accuracy. Experiments show that TP-MoE achieves state-of-the-art performance across multiple metrics on the Split-CIFAR100 and Open-CORe50 benchmarks, exhibiting strong robustness and generalization. This study provides a scalable and transferable framework for knowledge modeling and task scheduling in open-world continual learning.
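The abstract mentions two mechanisms: prompt-conditioned expert routing and an open-boundary threshold for rejecting unknown samples. The paper's actual architecture is not reproduced here, but a minimal numpy sketch can illustrate the general pattern — a gate conditioned on features fused with a task prompt selects a sparse subset of experts, and a confidence threshold flags open-set inputs. All names, shapes, and the rejection rule below are illustrative assumptions, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
n_experts, d_feat, n_classes = 4, 16, 10

# Illustrative parameters: a task prompt vector, per-expert gating
# weights, and per-expert linear classifiers (randomly initialized
# stand-ins for trained modules).
task_prompt = rng.normal(size=d_feat)
W_gate = rng.normal(size=(n_experts, d_feat))
W_experts = rng.normal(size=(n_experts, n_classes, d_feat))

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def predict(x, tau=0.5, top_k=2):
    """Route a feature vector through prompt-conditioned experts.

    Returns a class index, or -1 ("unknown") when the maximum
    combined class probability falls below the open-set threshold tau.
    """
    # Condition the gate on the input fused with the task prompt.
    gate = softmax(W_gate @ (x + task_prompt))
    # Keep only the top-k experts (sparse routing) and renormalize.
    keep = np.argsort(gate)[-top_k:]
    weights = gate[keep] / gate[keep].sum()
    # Mix the selected experts' class distributions.
    probs = sum(w * softmax(W_experts[i] @ x) for w, i in zip(weights, keep))
    return int(np.argmax(probs)) if probs.max() >= tau else -1

x = rng.normal(size=d_feat)
print(predict(x, tau=0.0))   # tau=0: nothing is ever rejected
print(predict(x, tau=1.01))  # impossible threshold: forces "unknown" (-1)
```

In TP-MoE the threshold is described as adaptive, adjusted as knowledge transfers between old and new tasks; the fixed `tau` here is a simplification of that idea.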