Supported by the Science and Technology Innovation 2030 "New Generation Artificial Intelligence" Major Project (2018AAA0100500); the National Natural Science Foundation of China (62272434); and the Anhui Provincial Key Research and Development Program, Standardization Special Project (No. 202104h04020039)
The development of deep learning has driven rapid progress in autonomous driving. Although the accuracy of deep-learning-based perception models has steadily improved, they still leave room for improvement in robustness and reliability; they therefore need to be thoroughly validated in a wide range of scenarios to ensure that they meet acceptable safety levels. Scenario-based simulation testing is a crucial aspect of the development and deployment of autonomous vehicles. One key challenge is creating diverse and realistic simulation scenarios that accurately represent the physical environment and the various challenges autonomous vehicles may encounter. Scenario description languages make it possible to describe and instantiate autonomous driving scenarios in virtual environments and to obtain simulation data. However, most existing scenario description languages cannot provide high-level abstractions and descriptions of a scene's road structure. In this paper, we present a road network property graph that represents the abstracted entities and their relationships within a road network. We also introduce SceneRoad, a language specifically designed to provide concise and expressive descriptions of the road structure in a scene. SceneRoad builds a road network feature query graph from the described road-structure features of a scene. In this way, searching for road structures in the road network is abstracted as a subgraph matching problem on the property graph, which can be solved with the VF2 algorithm. Additionally, we incorporate SceneRoad as an extension of the Scenic scenario description language. With this extended language, we randomly generate a diverse set of static scenes and build a simulation dataset. Statistical analysis of the dataset confirms the wide variety of generated scenes.
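To illustrate the subgraph-matching formulation, the following is a minimal sketch (not the paper's implementation) using the VF2 implementation in the `networkx` library. The miniature property graph, its node `type` labels, and the query graph are all hypothetical stand-ins for the road network property graph and the road-network feature query graph described above.

```python
import networkx as nx
from networkx.algorithms import isomorphism

# Hypothetical miniature road network property graph: nodes are road
# elements, each labeled with a "type" attribute (junction or lane).
road_net = nx.Graph()
road_net.add_nodes_from([
    (0, {"type": "junction"}),
    (1, {"type": "lane"}),
    (2, {"type": "lane"}),
    (3, {"type": "junction"}),
])
road_net.add_edges_from([(0, 1), (1, 3), (0, 2), (2, 3)])

# Hypothetical query graph describing a desired road-structure feature:
# a junction adjacent to a lane.
query = nx.Graph()
query.add_nodes_from([
    ("q0", {"type": "junction"}),
    ("q1", {"type": "lane"}),
])
query.add_edge("q0", "q1")

# VF2-based subgraph matching, requiring node "type" attributes to agree.
node_match = isomorphism.categorical_node_match("type", None)
gm = isomorphism.GraphMatcher(road_net, query, node_match=node_match)

# Each mapping sends a matching subgraph of road_net onto the query graph.
matches = list(gm.subgraph_isomorphisms_iter())
print(len(matches))  # -> 4: each of the four junction-lane edges matches
```

Here every junction-lane edge in the property graph yields one match, so the search returns all placements of the queried road-structure feature; in SceneRoad the matched subgraphs would anchor where a described scene can be instantiated.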
Training and testing various perception models on both real and simulated datasets shows that model performance on the two datasets is consistently correlated, indicating that evaluation on the simulated dataset aligns with performance in real-world scenarios. This is significant both for evaluating perception models and for research on improving their robustness and safety.