• Online First

    • Point Cloud Registration Method Based on Local-global Dynamic Graph Learning and Complementary Fusion

      Online: September 03,2025 DOI: 10.13328/j.cnki.jos.007416

      Abstract:Existing deep learning-based point cloud registration methods primarily focus on feature extraction and feature matching. However, the exploration of local and global graph structures during the feature extraction stage remains insufficient, and the investigation of difference information during the matching process is also limited. To address these issues, this study proposes a point cloud registration method based on local-global dynamic graph learning and complementary fusion. Specifically, the dynamic offset-based local graph learning module characterizes the underlying graph structure in the feature space by constructing proxy points that contain both geometric and semantic information, leading to more discriminative local features. In addition, a dynamic attention-based global graph learning module is designed, which adaptively adjusts attention weights based on the relationships between points, effectively capturing long-range dependencies in the point cloud. To further enhance the correspondence between the two point clouds, the attention-driven complementary fusion module utilizes the cross-attention mechanism to extract similar and distinctive information, while applying the self-attention mechanism to refine the relationships between features. Experimental results demonstrate that the proposed method achieves the best registration performance among the compared methods on public datasets while maintaining acceptable computational efficiency.
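
      As a rough illustration of the attention-based modules described above, the following PyTorch sketch shows a self-attention global graph layer and a cross-attention fusion step. The module names, dimensions, and residual connections are illustrative assumptions, not the authors' implementation.

        # Hypothetical sketch (not the paper's code): a global graph layer whose
        # attention weights are computed from pairwise point-feature relationships,
        # plus a cross-attention fusion step between the two point clouds.
        import torch
        import torch.nn as nn

        class GlobalGraphAttention(nn.Module):
            def __init__(self, dim: int, heads: int = 4):
                super().__init__()
                self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm = nn.LayerNorm(dim)

            def forward(self, feats: torch.Tensor) -> torch.Tensor:
                # feats: (B, N, dim) per-point features of one point cloud.
                # Self-attention lets every point aggregate long-range context.
                ctx, _ = self.attn(feats, feats, feats)
                return self.norm(feats + ctx)

        class ComplementaryFusion(nn.Module):
            def __init__(self, dim: int, heads: int = 4):
                super().__init__()
                self.cross = nn.MultiheadAttention(dim, heads, batch_first=True)

            def forward(self, src: torch.Tensor, tgt: torch.Tensor) -> torch.Tensor:
                # Source queries attend to target keys/values, pulling in similar
                # and distinctive information from the other point cloud.
                fused, _ = self.cross(src, tgt, tgt)
                return src + fused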

    • Survey on Test Case Generation Methods for Intelligent Software Systems

      Online: September 03,2025 DOI: 10.13328/j.cnki.jos.007450

      Abstract:With the rapid development of technologies such as deep learning and significant breakthroughs in areas including computer hardware and cloud computing, increasingly mature artificial intelligence (AI) technologies are being applied to software systems across various fields. Software systems that incorporate AI models as core components are collectively referred to as intelligent software systems. Based on the application fields of AI technologies, these systems can be categorized into image processing, natural language processing, speech processing, and other applications. Unlike traditional software systems, AI models adopt a data-driven programming paradigm in which all decision logic is learned from large-scale datasets. This paradigm shift renders traditional code-based test case generation methods ineffective for evaluating the quality of intelligent software systems. As a result, numerous testing methods tailored to intelligent software systems have been proposed in recent years, including novel approaches for test case generation and evaluation that address the unique characteristics of such systems. This study reviews 80 relevant publications, classifies existing methods according to the types of systems they target, and systematically summarizes test case generation methods for image processing, natural language processing, speech processing, point cloud processing, multimodal data processing, and deep learning models. Potential future directions for test case generation in intelligent software systems are also discussed to provide a reference for researchers in this field.

    • Survey on RPKI Measurement Research

      Online: September 03,2025 DOI: 10.13328/j.cnki.jos.007459

      Abstract:Resource public key infrastructure (RPKI) is a key technology for enhancing border gateway protocol (BGP) security, using cryptographic verification to prevent attacks such as prefix hijacking. Since its formal deployment in 2012, RPKI has grown to cover more than half of Internet prefixes. Ongoing research on RPKI measurement provides insights into deployment trends and helps identify security issues. This study reviews existing work on RPKI measurement from three perspectives: RPKI data object measurement, route origin validation (ROV) measurement, and RPKI infrastructure measurement. It analyzes RPKI data object and ROV coverage metrics, deployment trends, and the effectiveness of different measurement approaches. Moreover, key security vulnerabilities and data quality issues are identified, and recommendations to promote large-scale RPKI deployment are proposed.
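
      For readers unfamiliar with ROV, the sketch below illustrates the route origin validation outcome defined in RFC 6811 (valid, invalid, or not-found). The ROA entries and routes in the example are made up for illustration.

        # Minimal route origin validation (ROV) check in the sense of RFC 6811.
        import ipaddress

        def rov_state(prefix: str, origin_asn: int, roas) -> str:
            """roas: iterable of (roa_prefix, max_length, asn) tuples."""
            net = ipaddress.ip_network(prefix)
            covered = False
            for roa_prefix, max_length, asn in roas:
                roa_net = ipaddress.ip_network(roa_prefix)
                if net.version == roa_net.version and net.subnet_of(roa_net):
                    covered = True  # at least one ROA covers the announced prefix
                    if asn == origin_asn and net.prefixlen <= max_length:
                        return "valid"
            return "invalid" if covered else "not_found"

        # Hypothetical ROA authorizing AS64500 to originate 192.0.2.0/24 (up to /24).
        roas = [("192.0.2.0/24", 24, 64500)]
        print(rov_state("192.0.2.0/24", 64500, roas))     # valid
        print(rov_state("192.0.2.0/25", 64500, roas))     # invalid (too specific)
        print(rov_state("198.51.100.0/24", 64500, roas))  # not_found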

    • Enhancing Code Summarization with Dependency-aware Hierarchical Neural Networks

      Online: September 02,2025 DOI: 10.13328/j.cnki.jos.007504

      Abstract:As an emerging technique in software engineering, automatic source code summarization aims to generate natural language descriptions for given code snippets. State-of-the-art code summarization techniques utilize encoder-decoder neural models: the encoder extracts the semantic representation of the source code, while the decoder translates it into a human-readable summary. However, many existing approaches treat input code snippets as standalone functions, overlooking the context dependencies between the target function and its invoked subfunctions. Ignoring these dependencies can omit crucial semantic information and reduce the quality of the generated summary. To this end, we introduce DHCS, a dependency-aware hierarchical code summarization neural model that explicitly models the hierarchical dependencies between the target function and its subfunctions. Our approach employs a hierarchical encoder consisting of a subfunction encoder and a target function encoder, allowing both local and contextual semantic representations to be captured effectively. Meanwhile, we introduce a self-supervised task, masked subfunction prediction, to enhance the representation learning of subfunctions. Furthermore, we mine the topic distribution of subfunctions and incorporate it into the summary decoder through a topic-aware copy mechanism, which enables key information to be extracted directly from subfunctions and facilitates more effective summary generation for the target function. Finally, extensive experiments on three real-world datasets constructed for the Python, Java, and Go languages clearly validate the effectiveness of our approach.
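
      The hierarchical encoder described above can be pictured with the following PyTorch sketch, in which subfunctions are encoded separately and their vectors are exposed to the target-function encoder. Layer counts, pooling, and the fusion scheme are assumptions rather than the actual DHCS architecture.

        # Illustrative sketch of a dependency-aware hierarchical encoder.
        import torch
        import torch.nn as nn

        class HierarchicalCodeEncoder(nn.Module):
            def __init__(self, vocab_size: int, dim: int = 256, heads: int = 4):
                super().__init__()
                self.embed = nn.Embedding(vocab_size, dim)
                layer = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
                self.sub_encoder = nn.TransformerEncoder(layer, num_layers=2)
                self.tgt_encoder = nn.TransformerEncoder(layer, num_layers=2)

            def forward(self, target_tokens, subfunc_tokens):
                # target_tokens: (B, Lt) token ids of the target function
                # subfunc_tokens: (B, S, Ls) token ids of S invoked subfunctions
                B, S, Ls = subfunc_tokens.shape
                sub = self.embed(subfunc_tokens.reshape(B * S, Ls))
                sub = self.sub_encoder(sub).mean(dim=1).reshape(B, S, -1)
                # (During pretraining, masked subfunction prediction would be
                #  applied to these subfunction vectors.)
                tgt = self.embed(target_tokens)
                # Prepend subfunction vectors so target tokens can attend to them.
                return self.tgt_encoder(torch.cat([sub, tgt], dim=1))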

    • Empowering Relational Database Systems with AI: Standardization, Technologies and Challenges

      Online: September 02,2025 DOI: 10.13328/j.cnki.jos.007506

      Abstract:The advent of the big data era has introduced massive data applications characterized by four defining attributes—Volume, Variety, Velocity, and Value (4V)—posing revolutionary challenges to conventional data acquisition methods, management strategies, and database processing capabilities. Recent breakthroughs in artificial intelligence (AI), particularly in machine learning and deep learning, have demonstrated remarkable advancements in representation learning, computational efficiency, and model interpretability, thereby offering innovative solutions to these challenges. This convergence of AI and database systems has given rise to a new generation of intelligent database management systems, which integrate AI technologies across three core architectural layers: (1) natural language interfaces for user interaction, (2) automated database administration frameworks (including parameter tuning, index recommendation, diagnostics, and workload management), and (3) machine learning-based high-performance components (such as learned indexes, adaptive partitioning, query optimization, and scheduling). Furthermore, new intelligent component application programming interfaces (APIs) have lowered the integration barrier between AI and database systems. This work systematically investigates intelligent databases through an innovative standardization-centric framework, delineating common processing paradigms across core research themes—interaction paradigms, management architectures, and kernel design. By examining standardized processes, interfaces, and collaboration mechanisms, it uncovers the core logic enabling database self-optimization, synthesizes current research advancements, and critically assesses persistent technical challenges and prospects for future development.
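
      As a concrete illustration of one component mentioned above, the sketch below shows a toy learned index: a linear model predicts a key's position in a sorted array and a bounded local search corrects the prediction error. It is a generic example, not code from any system covered in this work.

        # Toy learned index over a sorted key array.
        import bisect
        import numpy as np

        class LinearLearnedIndex:
            def __init__(self, keys):
                self.keys = np.asarray(keys)               # keys must be sorted
                pos = np.arange(len(self.keys))
                self.slope, self.intercept = np.polyfit(self.keys, pos, 1)
                pred = self.slope * self.keys + self.intercept
                self.max_err = int(np.ceil(np.abs(pred - pos).max()))

            def lookup(self, key):
                guess = int(self.slope * key + self.intercept)
                lo = max(0, guess - self.max_err)
                hi = min(len(self.keys), guess + self.max_err + 1)
                i = lo + bisect.bisect_left(self.keys[lo:hi].tolist(), key)
                return i if i < len(self.keys) and self.keys[i] == key else None

        idx = LinearLearnedIndex(sorted(np.random.randint(0, 10**6, 10_000)))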

    • Cold Restart for CDCL Algorithms

      Online: September 02,2025 DOI: 10.13328/j.cnki.jos.007509

      Abstract:The CDCL algorithm for SAT solving is widely used in hardware and software verification, with restart being one of its core components. Mainstream CDCL solvers currently employ the "warm restart" technique, which restarts at a very high frequency while retaining key search information such as the variable order, assignment preferences, and learned clauses. Warm restarts therefore tend to make CDCL solvers revisit the search space explored before the restart, which may trap the search in an unfavorable local region for a long time and leave other regions unexplored. This paper first tests existing CDCL algorithms and confirms that, under different initial search settings, the runtime of mainstream CDCL solvers on the same instance fluctuates significantly. To leverage this observation, the paper proposes the "cold restart" technique, which forgets search information by periodically discarding the variable order, assignment preferences, and learned clauses. Experimental results demonstrate that this technique effectively improves mainstream CDCL algorithms. The paper further extends it to a parallel version in which each thread explores a different search space, enhancing the performance of the parallel algorithm. Moreover, the cold restart technique mainly improves the ability of sequential and parallel solvers to solve satisfiable instances, providing new insights for designing solvers oriented toward satisfiable instances. Specifically, the parallel cold restart technique improves the PAR2 score of Pakis on satisfiable instances by 41.84%. The parallel SAT solver ParKissat-RS, which incorporates the ideas in this paper, won the parallel track of the SAT Competition by a significant margin, being 24% faster.
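
      The sketch below outlines the cold-restart policy in Python-style pseudocode: at a low frequency, the solver discards variable activities, saved phases, and learned clauses instead of carrying them across restarts. The restart interval, the reset values, and the solver interface are illustrative assumptions.

        # Schematic cold-restart policy for a CDCL-style solver (assumed interface).
        import random

        class RestartPolicy:
            def __init__(self, n_vars, cold_interval=300_000):
                self.activity = [0.0] * (n_vars + 1)       # variable ordering scores
                self.saved_phase = [False] * (n_vars + 1)  # assignment preferences
                self.learned = []                          # learned clause database
                self.cold_interval = cold_interval

            def warm_restart(self):
                # Keep activities, phases, and learned clauses; only the trail
                # is backtracked (not modeled here).
                pass

            def cold_restart(self):
                # Forget the state a warm restart would keep, so the next descent
                # explores a genuinely different part of the search space.
                n = len(self.activity)
                self.activity = [random.random() * 1e-3 for _ in range(n)]
                self.saved_phase = [random.random() < 0.5 for _ in range(n)]
                self.learned.clear()

            def on_restart(self, conflicts):
                if conflicts and conflicts % self.cold_interval == 0:
                    self.cold_restart()
                else:
                    self.warm_restart()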

    • Cross Semi-supervised Semantic Segmentation Network Based on Differential Feature Extraction

      Online: August 27,2025 DOI: 10.13328/j.cnki.jos.007412

      Abstract:Semi-supervised semantic segmentation methods typically employ various data augmentation schemes to differentiate the inputs of network branches and thereby enable mutual self-supervision. While successful, this approach faces several issues: 1) insufficient diversity in feature extraction leads to the assimilation of feature signals during inference; 2) inadequate diversity in supervision signals results in the assimilation of loss learning. These issues cause the network branches to converge on similar solutions, degrading the functionality of multi-branch networks. To address them, a cross semi-supervised semantic segmentation method based on differential feature extraction is proposed. First, a differential feature extraction strategy ensures that the branches focus on distinct information, such as texture, semantics, and shape, reducing the reliance on data augmentation. Second, a cross-fusion pseudo-labeling method is introduced, in which the branches alternately generate neighboring-pixel fusion pseudo-labels, enhancing the diversity of supervision signals and guiding the branches toward different solutions. Experimental results demonstrate that this method achieves excellent performance on the Pascal VOC 2012 and Cityscapes validation sets, with scores of 80.2% and 76.8%, outperforming the latest methods by 0.3% and 1.3%, respectively.
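
      The cross-supervision idea can be sketched in PyTorch as follows, with each branch trained on the other branch's pseudo-labels; the differential encoders and the neighboring-pixel fusion step are only indicated by comments, and the loss form is an assumption rather than the paper's exact formulation.

        # Cross pseudo-supervision between two differentiated branches.
        import torch.nn.functional as F

        def cross_supervision_loss(logits_a, logits_b):
            # logits_*: (B, C, H, W) predictions of the two branches (which would
            # use different feature extractors) on the same unlabeled image.
            pseudo_a = logits_a.argmax(dim=1).detach()  # branch A's hard pseudo-labels
            pseudo_b = logits_b.argmax(dim=1).detach()
            # (The paper additionally fuses neighboring pixels into the pseudo-labels.)
            loss_a = F.cross_entropy(logits_a, pseudo_b)  # A learns from B
            loss_b = F.cross_entropy(logits_b, pseudo_a)  # B learns from A
            return loss_a + loss_b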

    • Semantic Matching-based Cross-platform Mobile App Test Script Record and Replay via Large Language Models

      Online: August 27,2025 DOI: 10.13328/j.cnki.jos.007414

      Abstract:GUI testing is one of the most important measures for ensuring mobile application (App) quality. With the continuous development of the mobile ecosystem, especially the strong rise of domestic mobile ecosystems such as HarmonyOS, GUI test script recording and replay has become one of the prominent challenges in GUI testing. GUI test scripts must be migrated from traditional mobile platforms to emerging ones to ensure reliable App quality and a consistent user experience across diverse platforms. However, differences in the underlying implementations of different platforms create substantial obstacles to the cross-platform migration of mobile App test scripts, a challenge that is particularly pronounced when migrating tests to emerging domestic mobile ecosystem platforms. Cross-platform test script recording and replay is therefore essential for maintaining consistency and a high-quality user experience across platforms and devices. Current state-of-the-art approaches only address “one-to-one” test event matching. However, due to inconsistencies in development practices across platforms, the replay of test events does not always map one-to-one; “many-to-many” mappings are common, meaning that some test events must be mapped to a different number of test events to fulfill the same business logic. To address these issues, this study proposes a cross-platform mobile App test script recording and replay method based on large language model semantic matching (LLMRR). LLMRR integrates image matching, text matching, and large language model semantic matching. During the recording phase, user operation information is captured using image segmentation algorithms and saved as recorded test scripts. During the replay phase, the corresponding widgets on the replay App page are located by the image matching and text matching modules to execute operations; when matching fails, the large language model semantic matching module is invoked, ensuring efficient operation across different platforms. This study presents the first exploration of testing for domestic HarmonyOS Apps, using 20 Apps and a total of 100 test scripts for migration testing across the iOS, Android, and HarmonyOS platforms. The effectiveness of LLMRR is compared with the current state-of-the-art cross-platform test script recording and replay approaches, LIRAT and MAPIT. The results demonstrate that LLMRR exhibits significant advantages in test script recording and replay.
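
      The replay-phase cascade described above can be sketched as follows; the matcher callables, the widget representation, and the prompt wording are hypothetical placeholders rather than the LLMRR implementation.

        # Schematic replay cascade: direct matchers first, LLM semantic matching last.
        def locate_widget(step, screen, matchers, llm_complete):
            """matchers: ordered callables (step, screen) -> widget dict or None,
            e.g. an image matcher followed by a text matcher."""
            for match in matchers:
                widget = match(step, screen)
                if widget is not None:
                    return widget
            # Fall back to LLM-based semantic matching when direct matching fails.
            prompt = (
                "Recorded action: " + step["description"] + "\n"
                "Widgets on the replay page:\n"
                + "\n".join(f"{i}: {w['text']}" for i, w in enumerate(screen["widgets"]))
                + "\nReply with the index of the widget that fulfils the same business logic."
            )
            index = int(llm_complete(prompt).strip())
            return screen["widgets"][index]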

    • Analysis of Development Trends and Core Technologies of Chinese Blockchain Software

      Online: August 27,2025 DOI: 10.13328/j.cnki.jos.007452

      Abstract:Blockchain, as a distributed ledger technology, ensures data security, transparency, and immutability through encryption and consensus mechanisms, offering transformative solutions across various industries. In China, blockchain-based software has attracted widespread attention and application, demonstrating considerable potential in fields such as cross-border payments, supply chain finance, and government services. These applications not only enhance the efficiency and transparency of business processes but also reduce trust costs and offer new approaches for the digital transformation of traditional industries. This study investigates the development trends and core technologies of Chinese blockchain software, focusing on key technological breakthroughs, promoting integration and innovation, and providing a foundation for the formulation of technical standards. The aim is to enhance the competitiveness of Chinese blockchain technologies, broaden application scenarios, and support the standardized development of the industry. Three core research questions are addressed: (1) What are the development trends of Chinese blockchain software? (2) What are the core technologies involved? (3) What are the differences in core technologies between Chinese and foreign blockchain software? To address these questions, 1268 blockchain software entries have been collected through three channels. Based on information regarding affiliated companies and chief technology officers (CTOs), 103 Chinese blockchain software entries are identified. A statistical analysis of basic software attributes is conducted, examining development trends from three perspectives: software development history, distribution, and interrelationships. Given the importance of technical and development documentation, 39 high-quality blockchain software entries containing detailed technical information are further selected. Subsequently, a statistical and analytical evaluation of the core technologies of these 39 software systems is conducted across six technical layers of blockchain architecture. Based on this analysis, differences in core technologies between Chinese and foreign blockchain software are compared. In total, 28 phenomena and 13 insights are identified. These findings provide researchers, developers, and practitioners with a comprehensive understanding of the current state of Chinese blockchain development and offer valuable references for future adoption and improvement of Chinese blockchain software.

    • SAC-based Ensemble Framework for Multi-view Workload Forecasting in Cloud Computing

      Online: August 20,2025 DOI: 10.13328/j.cnki.jos.007424

      Abstract:Accurate workload forecasting is essential for effective cloud resource management. However, existing models typically employ fixed architectures to extract sequential features from different perspectives, which limits the flexibility of combining various model structures to further improve forecasting performance. To address this limitation, a novel ensemble framework, SAC-MWF, is proposed based on the soft actor-critic (SAC) algorithm for multi-view workload forecasting. A set of feature sequence construction methods is developed to generate multi-view feature sequences at low computational cost from historical windows, enabling the model to focus on workload patterns from different perspectives. Subsequently, a base prediction model and several feature prediction models are trained on the historical windows and their corresponding feature sequences, respectively, to capture workload dynamics from different views. Finally, the SAC algorithm is employed to integrate these models and generate the final forecast. Experimental results on three datasets demonstrate that SAC-MWF delivers excellent forecasting accuracy and computational efficiency.
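
      The ensemble step can be pictured with the sketch below, in which the agent's action is a weight vector over the base and feature models' forecasts and the reward is the negative forecasting error. The interface and reward shape are assumptions, and the SAC training loop itself is omitted.

        # One environment step of an RL-weighted forecasting ensemble.
        import numpy as np

        def ensemble_step(predictions, action, actual):
            # predictions: (M,) forecasts from the base and feature-view models
            # action: (M,) raw weights produced by the policy network
            w = np.exp(action - action.max())
            w /= w.sum()                         # softmax -> convex combination
            forecast = float(np.dot(w, predictions))
            reward = -abs(forecast - actual)     # lower error -> higher reward
            return forecast, reward

        preds = np.array([0.52, 0.47, 0.50])     # e.g. base model + two feature views
        forecast, reward = ensemble_step(preds, np.array([0.1, -0.3, 0.2]), actual=0.49)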
