Abstract: In the era of artificial intelligence, efficiently completing the pre-training of large language models while meeting requirements for scalability, performance, and reliability presents a critical challenge. These systems leverage accelerators and high-speed network interconnects to execute tensor computation and communication in parallel, significantly improving training efficiency. However, these advances also introduce a series of unresolved system design challenges. Starting from an analysis of the pre-training process, this study first outlines the training procedures and workload characteristics of large language models. It then reviews system technologies from the perspectives of scalability, performance, and reliability, covering their classifications, underlying principles, current research progress, and key challenges. Finally, this study provides an in-depth analysis of the broader challenges facing large language model pre-training systems and discusses potential directions for future development.