• Volume 34, Issue 7, 2023 Table of Contents
    • Special Issue's Articles
    • Preface

      2023, 34(7):2979-2980. DOI: 10.13328/j.cnki.jos.006864


    • Safe Neural Network Controller Synthesis and Verification for Hybrid Systems

      2023, 34(7):2981-3001. DOI: 10.13328/j.cnki.jos.006857

      Abstract:Controller synthesis is a fundamental problem in hybrid system control, and the synthesis of safe controllers is essential to the use of hybrid systems in safety-critical fields. This study proposes a novel approach to synthesizing neural network controllers with safety guarantees for hybrid systems. The safety of the controllers is guaranteed by barrier certificates, which are synthesized simultaneously with the controllers. To learn safe neural network controllers, the network structures of the controllers are first determined and the training datasets are constructed from the hybrid system. The loss function for network training is then encoded from the barrier certificate conditions that guarantee the safety of the controllers. When training completes, the learned controllers are safe on the training datasets but not necessarily on the whole hybrid system. To verify the safety of the learned controllers on the whole system, this study transforms the certification of the safety conditions into a group of mixed-integer programming problems and adopts a numerical optimization solver to obtain formally guaranteed results. The safe neural network controller synthesis tool SafeNC is implemented, and its performance is evaluated on 8 benchmark systems. SafeNC successfully synthesizes large controllers with up to 6 hidden layers and 1 804 neurons. The experimental results show that SafeNC can deal with more complex systems and is more effective and scalable than existing methods.
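      The loss encoding described above can be illustrated with a minimal sketch (plain Python, toy one-dimensional system; the function names and the margin constant are illustrative, not SafeNC's actual API): each of the three barrier-certificate conditions becomes a hinge penalty over sampled points, so a zero loss means every sampled condition holds.

```python
def barrier_loss(B, init_pts, unsafe_pts, flow_pairs, margin=0.01):
    """Hinge penalties encoding the barrier-certificate conditions:
    B(x) <= 0 on the initial set, B(x) > 0 on the unsafe set, and B
    non-increasing along sampled state transitions (x, x')."""
    loss = 0.0
    for x in init_pts:                      # condition 1: B(x) <= 0
        loss += max(0.0, B(x) + margin)
    for x in unsafe_pts:                    # condition 2: B(x) > 0
        loss += max(0.0, margin - B(x))
    for x, x_next in flow_pairs:            # condition 3: B non-increasing
        loss += max(0.0, B(x_next) - B(x) + margin)
    return loss

# Toy barrier B(x) = x^2 - 1: negative on the initial set inside the unit
# interval, positive on the unsafe states outside it.
B = lambda x: x * x - 1.0
print(barrier_loss(B, init_pts=[0.0, 0.5], unsafe_pts=[2.0, -2.0],
                   flow_pairs=[(0.5, 0.4)]))  # → 0.0 (all conditions hold)
```

In the paper's setting B is itself a neural network trained jointly with the controller; here it is a fixed toy function so the penalty structure is visible.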

    • Modeling and Verification Method of Intersection Test Scenario for Automated Driving

      2023, 34(7):3002-3021. DOI: 10.13328/j.cnki.jos.006855

      Abstract:Autonomous vehicles play an important role in easing traffic congestion and eliminating traffic accidents. To ensure their safety and reliability, autonomous vehicles must undergo all-around testing before they are deployed on public roads. Most existing test scenario data come from traffic accidents and traffic violations, yet the most fundamental safety requirement of an autopilot system is that the vehicle comply with traffic law; test scenarios built around traffic law, however, are severely lacking. Therefore, this study extracts the safety requirements of the autopilot system from the traffic-law perspective and proposes a Petri net modeling and formal verification method for intersection test scenarios. Firstly, traffic rules are classified according to the test scenarios of automated driving, and the rule text applicable to autonomous vehicles is extracted and semi-formalized. Secondly, aiming to cover both traffic law and the function-testing procedure of the test scenario, the interactive behavior of intersection scene elements is integrated, and typical scene elements are selected and combined to construct the intersection test scenarios. The test scenario based on traffic rules is then modeled as a Petri net, in which places describe the states of the autonomous vehicle and transitions represent the trigger conditions between states. Moreover, the clock constraint specification language (CCSL) is chosen as the intermediate semantic language, and a specific method is proposed for converting the Petri net into an intermediate semantic model that can be formally verified. Finally, Tina is used to verify the liveness, boundedness, and reachability of the traffic-law scenario model, and the experimental results demonstrate the validity of the model. Besides, the CCSL constraints are analyzed with the SMT-based analysis tool MyCCSL, and the consistency of the model is verified by LTL formulas.
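      The Petri net semantics underlying the scenario model can be sketched in a few lines (standard place/transition firing rules; the place and token names below are a hypothetical fragment of an intersection model, not the paper's actual net):

```python
def enabled(marking, pre):
    """A transition is enabled when every input place holds enough tokens."""
    return all(marking.get(p, 0) >= n for p, n in pre.items())

def fire(marking, pre, post):
    """Fire a transition: consume tokens from input places, produce tokens
    in output places, and return the new marking."""
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, n in pre.items():
        m[p] -= n
    for p, n in post.items():
        m[p] = m.get(p, 0) + n
    return m

# Hypothetical fragment: the ego vehicle moves from "approaching" to
# "waiting" when the signal place holds a "red" token (read, not consumed).
m0 = {"approaching": 1, "red": 1}
m1 = fire(m0, pre={"approaching": 1, "red": 1}, post={"waiting": 1, "red": 1})
print(m1)  # → {'approaching': 0, 'red': 1, 'waiting': 1}
```

Boundedness and reachability, the properties Tina checks for the scenario model, are statements about the set of markings reachable by repeatedly applying this firing rule.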

    • Automatic Parallelization Framework for Complex Nested Loops Based on LLVM Pass

      2023, 34(7):3022-3042. DOI: 10.13328/j.cnki.jos.006858

      Abstract:With the popularization of multi-core processors, automatic parallelization of serial code in embedded legacy systems has become a research hotspot, but automatic parallelization of complex nested loops with imperfectly nested structures and non-affine dependency characteristics remains technically challenging. This study proposes an automatic parallelization framework (CNLPF) for complex nested loops based on an LLVM Pass. Firstly, a representation model of complex nested loops, the loop structure tree, is proposed, and regular regions of nested loops are automatically converted into this representation. Then, data dependency analysis is carried out on the loop structure tree to construct intra-loop and inter-loop dependency relations. Finally, the parallel loop program is generated based on the OpenMP shared-memory programming model. For the 6 programs in the SPEC2006 suite containing nearly 500 complex nested loops, the proportion of complex nested loops is measured and parallel speedups are tested. The results show that the proposed framework can handle complex nested loops that LLVM Polly cannot optimize, enhancing the parallel compilation and optimization capabilities of LLVM, and that combining it with Polly improves the speedup of Polly alone by 9%-43%.
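      The core question behind the dependency analysis step is whether any loop-carried dependence crosses outer-loop iterations. A heavily simplified, distance-based version of that test (illustrative only; CNLPF's actual analysis on the loop structure tree handles far richer access patterns) can be sketched as:

```python
def outer_parallelizable(accesses):
    """A write and another access to the same array conflict across outer
    iterations when their outer-index offsets differ; any such loop-carried
    dependence forbids parallelizing the outer loop.  Each access is
    (array, outer_offset, is_write), e.g. a[i-1][j] -> ("a", -1, False)."""
    for arr1, off1, w1 in accesses:
        for arr2, off2, w2 in accesses:
            if arr1 == arr2 and (w1 or w2) and off1 != off2:
                return False
    return True

# b[i][j] = b[i][j] + c[i][j]  → no loop-carried dependence: parallel OK
print(outer_parallelizable([("b", 0, True), ("b", 0, False), ("c", 0, False)]))  # → True
# a[i][j] = a[i-1][j]          → dependence carried by the outer loop
print(outer_parallelizable([("a", 0, True), ("a", -1, False)]))  # → False
```

When the check succeeds, the outer loop can be emitted with an OpenMP `parallel for` directive, as the framework does.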

    • Target-oriented UAF Vulnerability Prediction Method of Multi-threaded Programs

      2023, 34(7):3043-3063. DOI: 10.13328/j.cnki.jos.006862

      Abstract:Use-after-free (UAF) vulnerability is a common concurrency defect in multi-threaded programs. Predictive UAF detection methods have attracted much attention because they balance false positives and misses. However, existing predictive UAF detection methods are not optimized for the target to be detected, which leads to unacceptable detection efficiency when the program is large or has complex behavior. To address this issue, this study proposes a target-oriented method to detect UAF vulnerabilities in multi-threaded programs. Firstly, a Petri net model of the program is mined from program traces. Then, for each potential pair of memory Free and Use operations that could constitute a UAF vulnerability, behavioral control structures that maintain the causal constraints and data consistency between operations are added to the Petri net model, with the goal of triggering the vulnerability. On this basis, a UAF detection method based on Petri net reverse unfolding is designed; it verifies the authenticity of only one potential UAF vulnerability at a time, thus ensuring detection efficiency. Meanwhile, to reduce the number of potential UAF vulnerabilities to be checked, a new vector clock is proposed to automatically identify the causal relationships between Free and Use operations and filter the potential vulnerabilities accordingly. The proposed method is experimentally evaluated on several program examples. The results show that it improves both the efficiency and the accuracy of detection compared with mainstream methods.
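      The paper's vector clock is its own construction, but the comparison it builds on is standard and can be sketched as follows (the filtering rule shown is a plausible reading: a Use that is causally ordered before its Free can never be reordered after it, so that pair needs no further checking):

```python
def happens_before(vc_a, vc_b):
    """vc_a happens-before vc_b iff it is componentwise <= and not equal."""
    return all(a <= b for a, b in zip(vc_a, vc_b)) and vc_a != vc_b

def is_candidate(vc_free, vc_use):
    """If the Use is causally ordered before the Free, no feasible
    reordering can move it after the Free, so the pair is filtered out;
    all other pairs remain candidates for the reverse-unfolding check."""
    return not happens_before(vc_use, vc_free)

print(is_candidate((2, 0), (0, 3)))  # → True  (concurrent: must be checked)
print(is_candidate((3, 1), (1, 0)))  # → False (Use precedes Free causally)
```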

    • Model Checking Tool for Concurrent Program Based on Constrained Dependency Graph

      2023, 34(7):3064-3079. DOI: 10.13328/j.cnki.jos.006856

      Abstract:Model checking is an automatic verification approach based on state-space exploration, which can effectively improve the quality of a program. However, due to the non-determinism of thread scheduling and the complexity of data synchronization, the state-space explosion problem is more serious in concurrent program verification. At present, partial-order reduction techniques based on independence analysis are widely applied to reduce the exploration space of concurrent program verification tasks. To address the problem that imprecise independence analysis significantly increases the number of equivalent trace classes to be explored, a concurrent program model checking tool, CDG4CPV, that refines the dependencies between thread transitions has been developed. Firstly, a specification automaton is constructed corresponding to the reachability property. Then, a constrained dependency graph is built according to the types of thread transition edges and the access information of shared variables. Finally, the constrained dependency graph is used to prune independent enabled branches while unwinding the control-flow graph. Experiments were carried out on the concurrency track of the SV-COMP 2022 benchmark suite, and the efficiency of the proposed tool was compared and analyzed. The results show that the tool can effectively improve the efficiency of model checking for concurrent programs. In particular, compared with the BDD-based program analysis algorithm, it reduces the number of explored states by 91.38% and the time and memory overheads by 86.25% and 69.80%, respectively.
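      The independence notion that partial-order reduction relies on is the classical read/write conflict test sketched below (a minimal version using shared-variable access sets; the tool's constrained dependency graph refines this with transition-edge types):

```python
def independent(t1, t2):
    """Two transitions are independent when neither writes a location the
    other reads or writes; independent pairs need not be explored in both
    interleaving orders."""
    conflict = (t1["writes"] & (t2["reads"] | t2["writes"]) or
                t2["writes"] & (t1["reads"] | t1["writes"]))
    return not conflict

a = {"reads": {"x"}, "writes": {"y"}}
b = {"reads": {"z"}, "writes": {"z"}}
c = {"reads": {"y"}, "writes": set()}
print(independent(a, b))  # → True  (disjoint variable accesses)
print(independent(a, c))  # → False (a writes y, which c reads)
```

Declaring a dependent pair independent is unsound, while the reverse merely wastes exploration; this is why imprecise independence analysis inflates the number of trace classes the checker must visit.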

    • SMT-based Formal Verification of Synchronous Reactive Model for Zone Controller

      2023, 34(7):3080-3098. DOI: 10.13328/j.cnki.jos.006861

      Abstract:Formal verification is a proven technique for improving product quality during the software development of safety-critical systems, but to be practical the verification must be complete. Data-flow models are a pervasive form of implementation-level software model, so environmental inputs, generic functions, higher-order iterative operators, and intermediate variables are crucial to the usability of a verification approach. When verifying a synchronous reactive model, engineers typically verify only the control-flow model (i.e., the safe state machine). Existing work shows that such approaches fall short of completely verifying the synchronous reactive models of industrial software and therefore fail to meet industrial requirements, which is a significant pain point for the adoption of formal verification in industry. This study builds on the insight that the synchronous reactive models of safety-critical systems should be verified completely, with the data-flow models taken into account. An automated, generic verification approach is presented that is tailored to verifying the integration of safe state machines and data-flow models. Furthermore, a synthesis-based approach is adopted: SCADE models describing functional requirements, safety requirements, and environmental inputs are translated, via program synthesis to a Lustre model, into a form that an SMT-based model checker can verify. The proposed technique promotes program synthesis as a general primitive for improving the integrity of formal verification. The approach is evaluated on an industrial rail-transit application of nearly two million lines of Lustre code, and the results show that it effectively sidesteps long-standing and complex verification issues in large-scale synchronous reactive models.

    • Efficient Approach for Solving Simple Preference in AI Planning

      2023, 34(7):3099-3115. DOI: 10.13328/j.cnki.jos.006859

      Abstract:AI planning, or planning for short, is an important branch of AI and is widely applied in many fields, e.g., job shop scheduling, transportation scheduling, robot motion planning, and aerospace mission planning. In traditional planning, a plan (a sequence of actions) must eventually achieve all goals; such goals are called hard goals. Nevertheless, many practical problems focus not only on achieving goals as soon as possible at the lowest possible cost but also on other factors, e.g., resource consumption or time constraints. To this end, the concept of simple preferences, also called soft goals, is introduced. In contrast to hard goals, a simple preference is allowed to be violated by a plan; in essence, simple preferences measure the quality of plans without affecting their existence. Current research on simple preferences has made limited progress, and the quality of the resulting plans is often unsatisfactory. This study proposes an efficient approach for solving simple preferences that models them as part of classical planning models. Moreover, a satisfiability modulo theories (SMT) solver is employed to recognize the mutual exclusion relations among simple preferences for the purpose of preference reduction, relieving the burden on planners. The major advantages of this approach are that, on one hand, the state space is largely reduced by pre-tailoring the simple preferences and, on the other hand, existing fast planners can be used without designing a specialized planning algorithm. Experimental results on benchmarks show that the proposed approach performs well in improving the quality of plans and is especially suited to situations where simple preferences are not independent of each other.
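      The mutual-exclusion idea can be shown with a purely syntactic sketch (the paper delegates the general case to an SMT solver; here two soft goals are declared exclusive only when one requires a fact the other requires to be false, and all names are hypothetical):

```python
from itertools import combinations

def mutex_pairs(preferences):
    """Return pairs of soft goals that can never be satisfied together
    under this simple syntactic criterion: one goal's required facts
    intersect the other's forbidden facts."""
    pairs = []
    for p, q in combinations(preferences, 2):
        if p["pos"] & q["neg"] or q["pos"] & p["neg"]:
            pairs.append((p["name"], q["name"]))
    return pairs

prefs = [
    {"name": "deliver_by_truck", "pos": {"at_depot"}, "neg": set()},
    {"name": "deliver_by_drone", "pos": set(), "neg": {"at_depot"}},
    {"name": "low_fuel_use",     "pos": {"short_route"}, "neg": set()},
]
print(mutex_pairs(prefs))  # → [('deliver_by_truck', 'deliver_by_drone')]
```

Once such a pair is known, at most one of the two preferences needs to be pursued, which is exactly the preference reduction that shrinks the planner's search space.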

    • Collaborative Verification Method of Uninterpreted Programs

      2023, 34(7):3116-3133. DOI: 10.13328/j.cnki.jos.006860

      Abstract:The verification of uninterpreted programs is undecidable in general. Recently, a decidable fragment (called coherent) of uninterpreted programs was discovered, for which verification is PSPACE-complete. Based on these results, a trace abstraction-based verification method in the CEGAR (counterexample-guided abstraction refinement) style has been proposed for general uninterpreted programs and is very effective. Even so, the verification of uninterpreted programs sometimes needs many refinements, and when multiple programs are verified with this method, their verifications are independent of each other, which leads to high overall complexity. However, it is observed that the abstract models of infeasible counterexample traces are reusable, and the verifications of similar programs can benefit from each other. In this work, a collaborative verification framework is proposed that accumulates the abstract models of infeasible traces during verification. When a new program is to be verified, its abstraction is first refined with the accumulated abstract models to prune the infeasible traces, improving verification efficiency. Besides, an optimized congruence-based trace abstraction method is proposed that compacts states during verification to enlarge the scope of the abstractions of infeasible traces. The collaborative verification framework and the optimized trace abstraction method have been implemented, achieving on average 2.70x and 1.49x speedups on two representative benchmarks.

    • Review Articles
    • Survey on Robustness Verification of Feedforward Neural Networks and Recurrent Neural Networks

      2023, 34(7):3134-3166. DOI: 10.13328/j.cnki.jos.006863

      Abstract:With the advent of the intelligent age, applications of intelligent systems equipped with deep neural networks (DNNs) have penetrated every aspect of our lives. However, due to their black-box and large-scale characteristics, the predictions of neural networks are hard to fully trust. When neural networks are applied to security-critical fields such as autonomous driving, how to ensure their security remains a great challenge for academia and industry. For this reason, academia has carried out much research on robustness, a special kind of neural network security, and proposed many algorithms for robustness analysis and verification. Verification algorithms for feedforward neural networks (FNNs), including both precise and approximate algorithms, have developed relatively well, while verification algorithms for other types of networks, such as recurrent neural networks (RNNs), are still in a primary stage. This study reviews the current development of DNNs and the challenges of deploying them in daily life. It exhaustively surveys the robustness verification algorithms of FNNs and RNNs and analyzes and compares the intrinsic connections among these algorithms. The security verification algorithms of RNNs in specific application scenarios are investigated, and future research directions in the robustness verification of neural networks are identified.
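      Among the approximate FNN verification algorithms the survey covers, interval bound propagation is the simplest to state; the sketch below (plain Python, one linear layer plus ReLU, toy weights) shows how an input perturbation box is pushed through a network to bound its outputs:

```python
def ibp_linear(lo, hi, W, b):
    """Propagate the input box [lo, hi] through y = Wx + b: each output
    bound picks, per weight, whichever input endpoint extremizes the term."""
    out_lo, out_hi = [], []
    for row, bias in zip(W, b):
        l = h = bias
        for w, xl, xh in zip(row, lo, hi):
            l += w * (xl if w >= 0 else xh)
            h += w * (xh if w >= 0 else xl)
        out_lo.append(l)
        out_hi.append(h)
    return out_lo, out_hi

def relu(lo, hi):
    """ReLU is monotone, so it maps interval bounds endpoint-wise."""
    return [max(0.0, v) for v in lo], [max(0.0, v) for v in hi]

# If the lower bound of the margin output stays positive over the whole
# input box, the decision cannot flip inside it: robustness is certified.
lo, hi = ibp_linear([-0.1, -0.1], [0.1, 0.1], W=[[1.0, -2.0]], b=[1.0])
lo, hi = relu(lo, hi)
print(lo[0] > 0)  # → True
```

Precise methods tighten these bounds (e.g., via MILP or SMT encodings) at higher cost; the survey's comparison of FNN algorithms is largely about this precision/scalability trade-off.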

    • Transparent Request Tracing and Sampling Method for Java-based Microservice System

      2023, 34(7):3167-3187. DOI: 10.13328/j.cnki.jos.006523

      Abstract:Microservice is becoming the mainstream architecture of cloud-based software systems because of its agile development and rapid deployment. However, the structure of a microservice system is complex: it often has hundreds of service instances, and the call relationships between services are extremely complicated. When an anomaly occurs in a microservice system, it is difficult to locate its root cause, so end-to-end request tracing has become standard equipment for microservice systems. However, current distributed request tracing methods are intrusive to applications, rely heavily on the developers' expertise in request tracing, and cannot start or stop the tracing functionality at runtime. These defects not only increase the burden on developers but also restrict the adoption of distributed request tracing in practice. This study designs and implements a transparent request tracing system named Trace++, which generates tracing code automatically and injects the generated code into the running application by dynamic code instrumentation. Trace++ is minimally intrusive to programs, transparent to developers, and can start or stop tracing flexibly. In addition, its adaptive sampling method effectively reduces the cost of request tracing. Experiments on the microservice system TrainTicket show that Trace++ can discover the dependencies between services accurately, that its performance cost when tracing is close to that of source code instrumentation, and that it incurs no performance cost when tracing is stopped. Moreover, the adaptive sampling method preserves representative trace data while reducing trace data by 89.4%.
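      The idea behind adaptive sampling can be conveyed with a minimal sketch (this decay rule and the `target` parameter are illustrative assumptions, not Trace++'s actual policy): keep every trace of a rarely seen request path, but thin out common paths as their counts grow.

```python
import random

class AdaptiveSampler:
    """Adaptive trace sampling sketch: the first `target` traces of each
    request path are always kept, after which the keep probability decays
    as target/n, so common paths are thinned while rare ones stay intact."""
    def __init__(self, target=100):
        self.counts = {}
        self.target = target

    def keep(self, path):
        n = self.counts[path] = self.counts.get(path, 0) + 1
        return random.random() < min(1.0, self.target / n)

sampler = AdaptiveSampler(target=100)
kept = sum(sampler.keep("GET /api/order") for _ in range(10000))
print(f"kept {kept} of 10000 traces")  # the vast majority are dropped
```

A rule of this shape keeps roughly `target * ln(N/target)` traces of an N-request path, which is how a sampler can discard most data while still preserving representative traces of every path.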

    • Malware Similarity Measurement Method Based on Multiplex Heterogeneous Graph

      2023, 34(7):3188-3205. DOI: 10.13328/j.cnki.jos.006538

      Abstract:Existing malware similarity measurement methods cannot cope with code obfuscation technology and lack the ability to model the complex relationships between malware samples. This study proposes a malware similarity measurement method based on a multiplex heterogeneous graph, called API relation graph enhanced multiplex heterogeneous ProxEmbed (RG-MHPE), to solve the above problems. The method first uses the dynamic and static features of malware to construct the multiplex heterogeneous graph and then proposes a relational-path-based enhanced proximity embedding method to solve the problem that proximity embedding cannot be applied to similarity measurement on multiplex heterogeneous graphs. In addition, this study extracts knowledge from the API documentation on the MSDN website to build an API relation graph and learn the similarity between Windows APIs, which effectively slows the aging of similarity measurement models. Finally, experimental results show that RG-MHPE performs best in both similarity measurement and model anti-aging ability.

    • Stable Learning via Sample Reweighting and Dual Classifiers

      2023, 34(7):3206-3225. DOI: 10.13328/j.cnki.jos.006511

      Abstract:Stable learning aims to leverage knowledge obtained from a single training dataset to learn a robust prediction model that accurately predicts labels of test data drawn from a different but related distribution. To achieve promising performance on test data with agnostic distributions, existing stable learning algorithms focus on eliminating the spurious correlations between the features and the class variable. However, these algorithms can only weaken part of those spurious correlations rather than eliminate them completely, and they may encounter overfitting when learning the prediction model. To tackle these issues, this study proposes a stable learning algorithm based on sample reweighting and dual classifiers, which jointly optimizes the weights of samples and the parameters of the dual classifiers to learn a robust prediction model. Specifically, to estimate the effects of all features on classification, the algorithm balances the distribution of confounders by learning global sample weights that remove the spurious correlations between the features and the class variable. To eliminate the spurious correlations between irrelevant features and the class variable and weaken the influence of irrelevant features on sample reweighting, the algorithm selects and removes some irrelevant features before reweighting. To further improve the generalization ability of the model, the algorithm constructs two classifiers and learns a prediction model with an optimal hyperplane by minimizing the parameter difference between the two classifiers. Experiments on synthetic and real-world datasets validate the effectiveness of the proposed algorithm.
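      The confounder-balancing step can be illustrated in its simplest form (a one-feature sketch with inverse-frequency weights; the paper learns global weights over all features jointly, so this is only the intuition, not the algorithm):

```python
def balancing_weights(samples, feature):
    """Inverse-frequency weights that equalize the total mass of the two
    groups induced by a binary feature, so that feature decorrelates from
    everything else in the weighted sample."""
    n = len(samples)
    n1 = sum(1 for s in samples if s[feature])
    n0 = n - n1
    return [n / (2 * n1) if s[feature] else n / (2 * n0) for s in samples]

samples = [{"f": True}, {"f": True}, {"f": True}, {"f": False}]
w = balancing_weights(samples, "f")
print(w)  # each group now carries half of the total weight
```

After reweighting, a feature that merely co-occurred with the class label no longer dominates the weighted loss, which is the mechanism by which spurious correlations are weakened.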

    • Event Extraction Method Based on Dual Attention Mechanism

      2023, 34(7):3226-3240. DOI: 10.13328/j.cnki.jos.006520

      Abstract:Given that syntactic relations are not fully utilized and argument roles are often missed in event extraction, an event extraction method based on a dual attention mechanism (EEDAM) is proposed to improve the precision and recall of event extraction. Firstly, sentences are encoded with four embedding vectors, and dependency relations are introduced to construct a dependency graph so that the deep neural network can make full use of syntactic relations. Then, a graph transformation attention network generates new dependency arcs and aggregates node information to capture long-range dependencies and potential interactions, and a weighted attention network is integrated to capture the key semantic information in sentences and extract sentence-level event arguments, improving the predictive ability of the model. Finally, key sentence detection and similarity ranking are used to fill in document-level arguments. Experimental results show that on the ACE2005 dataset, EEDAM improves precision, recall, and F1-score by 17.82%, 4.61%, and 9.80%, respectively, over the best baseline, the joint multiple Chinese event extractor (JMCEE); on a dataset of dam safety operation records, it improves them by 18.08%, 4.41%, and 9.93%, respectively.
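      Both attention networks in the model reduce to the same core operation, softmax-weighted pooling, sketched here in plain Python on toy scores (the values are illustrative; the model computes scores from learned projections):

```python
import math

def attention_pool(scores, values):
    """Softmax the alignment scores (with max-subtraction for numerical
    stability), then return the weights and the weighted sum of values."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    return weights, sum(w * v for w, v in zip(weights, values))

# The token with the highest score dominates the pooled representation.
weights, pooled = attention_pool([2.0, 0.0, 0.0], [1.0, 0.5, 0.5])
print(weights[0] > weights[1])  # → True
```

In EEDAM's setting the graph attention variant additionally restricts which positions may attend to each other via the dependency arcs, but the weighting-and-pooling step is the same.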

    • Optimization of Equivalence Query Algorithm in Active Automata Learning

      2023, 34(7):3241-3255. DOI: 10.13328/j.cnki.jos.006532

      Abstract:Model learning (a.k.a. automata learning) is an effective technique for obtaining black-box state-machine models of software systems and can be divided into active and passive learning. Given input and output alphabets, active learning obtains the minimal complete state machine of the target system in polynomial time through active interaction with the black-box system; however, the equivalence query algorithm remains a major obstacle to the development and application of active automata learning tools. This study discusses the influence of counterexamples on learning algorithms that use a discrimination tree, defines comparison rules for hypotheses, and proposes two principles for constructing test cases. Following these principles, the Wp-method equivalence query algorithm is improved to produce better hypotheses and effectively reduce the number of queries and symbols. Based on LearnLib, three kinds of automata are used as experimental subjects to verify the effectiveness of the principles and the improved algorithm.
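      An equivalence query asks whether a hypothesis automaton matches the target; when the target is available as a model (unlike the true black-box setting, where the Wp-method approximates this with test suites), the query can be answered exactly by a product-automaton search, sketched below with hypothetical two- and three-state DFAs:

```python
from collections import deque

def find_counterexample(d1, d2, alphabet):
    """Breadth-first search of the product automaton; returns a shortest
    word on which the two DFAs disagree, or None if they are equivalent.
    A DFA is (initial_state, transition_dict, accepting_set)."""
    (q1, t1, f1), (q2, t2, f2) = d1, d2
    seen, queue = {(q1, q2)}, deque([(q1, q2, "")])
    while queue:
        s1, s2, word = queue.popleft()
        if (s1 in f1) != (s2 in f2):
            return word
        for a in alphabet:
            nxt = (t1[s1, a], t2[s2, a])
            if nxt not in seen:
                seen.add(nxt)
                queue.append((*nxt, word + a))
    return None

# Target accepts non-empty words of a's; hypothesis accepts words ending in 'a'.
target = (0, {(0,'a'):1, (0,'b'):2, (1,'a'):1, (1,'b'):2, (2,'a'):2, (2,'b'):2}, {1})
hypo   = (0, {(0,'a'):1, (0,'b'):0, (1,'a'):1, (1,'b'):0}, {1})
print(find_counterexample(target, hypo, "ab"))  # → ba
```

The returned counterexample is fed back to the learner to refine the hypothesis; the paper's contribution is choosing test cases so that this refinement loop needs fewer queries and symbols.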

    • Heterogeneous Network Representation Learning Method Fusing Mutual Information and Multiple Meta-paths

      2023, 34(7):3256-3271. DOI: 10.13328/j.cnki.jos.006535

      Abstract:Heterogeneous information networks can model many real-world applications, and their representation learning has received extensive attention from scholars. Most representation learning methods extract structural and semantic information based on meta-paths, and their effectiveness in network analysis has been proved. However, these methods ignore nodes' internal information and the differing importance of meta-path instances, and they capture only local node information. Thus, this study proposes a heterogeneous network representation learning method fusing mutual information and multiple meta-paths. First, a meta-path internal encoding method called relational rotation encoding captures the structural and semantic information of the heterogeneous information network according to adjacent nodes and meta-path context nodes, using an attention mechanism to model the importance of each meta-path instance. Then, an unsupervised heterogeneous network representation learning method fusing mutual information maximization and multiple meta-paths is proposed, where mutual information captures both global and local information. Finally, experiments are conducted on two real datasets. Compared with current mainstream algorithms as well as some semi-supervised algorithms, the results show that the proposed method performs better on node classification and clustering.

    • Fog Computing-based Three-party Authentication and Key Agreement Protocol for Smart Healthcare

      2023, 34(7):3272-3291. DOI: 10.13328/j.cnki.jos.006514

      Abstract:In smart healthcare, cloud computing and the Internet of Things are combined to enable real-time access to large-scale data. However, uploading data to a remote cloud incurs additional communication cost and transmission delay, so fog computing has been introduced into smart healthcare: fog servers assist the cloud server in storing and accessing data locally, which contributes to low latency and high mobility. Since medical data are highly sensitive, designing authentication protocols for fog computing-based smart healthcare has become a research hotspot; if data are tampered with illegally, the consequences are catastrophic. Hence, an authentication protocol should be secure against various attacks and realize secure data transmission among users, fog nodes, and cloud servers. This study analyzes two schemes for smart healthcare and points out that the scheme of Hajian et al. cannot resist stolen-verifier, denial-of-service, impersonation, node capture, and session key disclosure attacks, while the scheme of Wu et al. cannot resist offline password guessing and impersonation attacks. Furthermore, a fog computing-based three-party authentication and key agreement protocol is proposed for smart healthcare. Its security is proved using the random oracle model, BAN logic, and heuristic analysis, showing that it is secure against known attacks. A performance comparison with related schemes shows that the proposed scheme is more suitable for fog computing-based smart healthcare.

    • Supervised Identity Privacy Protection Scheme in Blockchain

      2023, 34(7):3292-3312. DOI: 10.13328/j.cnki.jos.006517

      Abstract:One of the main challenges of blockchain technology is to protect the privacy of transaction identities under open ledgers and multi-party consensus. At present, identity privacy protection schemes based on anonymous authentication and transaction mixing in public blockchains are difficult to popularize in industry because they lack supervision. Based on the identity privacy protection scheme in Monero, this study introduces the role of a regulator and designs a supervised privacy protection scheme for the transaction receiver based on one-time address encryption and zero-knowledge proof. It also designs a linkable revocable ring signature scheme, built on linkable and revocable ring signatures, to implement a supervised privacy protection scheme for the transaction sender based on autonomous mixing. The scheme not only protects the identity privacy of the participants but also supports offline recovery of transaction identities by the regulator, achieving the regulatory goal of "controllable anonymity". Analysis and test results show that the algorithms in this scheme run in milliseconds, which meets the performance requirements of blockchains in non-high-frequency transaction scenarios.

    • Enhancement of Textual Adversarial Attack Ability Based on Sememe-level Sentence Dilution Algorithm

      2023, 34(7):3313-3328. DOI: 10.13328/j.cnki.jos.006525

      Abstract:With machine learning widely applied in the natural language processing (NLP) domain in recent years, the security of NLP tasks has received growing concern. Existing studies have found that small modifications to examples can lead to wrong machine learning predictions, which is known as an adversarial attack. Textual adversarial attacks can effectively reveal the vulnerabilities of NLP models and thereby improve them. Nevertheless, existing textual adversarial attack methods all focus on designing complex adversarial example generation strategies, which yield limited improvements in success rate, while their highly invasive modifications degrade textual quality. Thus, a simple and effective method producing high-quality adversarial examples is in demand. To solve this problem, the sememe-level sentence dilution algorithm (SSDA) and the dilution pool construction algorithm (DPCA) are proposed from the new perspective of improving the adversarial attack process. SSDA is a new process that can be freely embedded into a classical adversarial attack workflow: it first uses dilution pools constructed by DPCA to dilute the original examples and then generates adversarial examples from the diluted ones. It can improve the success rate of any adversarial attack method, without restrictions on datasets or victim models, while obtaining higher adversarial example quality than the original method. Experiments with different datasets, dilution pools, victim models, and textual adversarial attack methods verify the improvement of SSDA in success rate and show that dilution pools constructed by DPCA can further enhance SSDA's dilution ability. The results demonstrate that SSDA reveals more model vulnerabilities than classical methods and that DPCA helps SSDA improve the success rate with higher adversarial example quality.
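      The dilution step can be conveyed with a toy stand-in (this fixed-stride insertion and word-list pool are illustrative only; SSDA selects dilution tokens at the sememe level from pools that DPCA builds per dataset): filler tokens lower the density of decision-relevant words before the attack proper runs.

```python
def dilute(tokens, pool, stride=2):
    """Insert semantically light filler tokens every `stride` words,
    cycling through the dilution pool; a toy stand-in for sememe-level
    dilution of an input sentence."""
    out = []
    for i, tok in enumerate(tokens):
        out.append(tok)
        if (i + 1) % stride == 0:
            out.append(pool[(i // stride) % len(pool)])
    return out

print(dilute("the movie was great".split(), pool=["quite", "rather"]))
# → ['the', 'movie', 'quite', 'was', 'great', 'rather']
```

The diluted sentence is then handed to the unchanged downstream attack, which is why the technique composes with any existing textual adversarial attack method.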

    • File Hierarchy CP-ABE Scheme Supporting Graded User Access

      2023, 34(7):3329-3342. DOI: 10.13328/j.cnki.jos.006526

      Abstract:The file hierarchy ciphertext-policy attribute-based encryption (FH-CP-ABE) scheme encrypts multi-level files under a single access policy, which saves the computation cost of encryption and decryption and the storage cost of ciphertext. Nevertheless, existing file hierarchy CP-ABE schemes cannot support graded user access and suffer from unauthorized access. For this reason, a file hierarchy CP-ABE scheme supporting graded user access is proposed. In the proposed scheme, a graded user access tree is constructed, and the ciphertext subsections are reconstructed to support the access requirements of graded users, eliminating the possibility of unauthorized access. The security analysis shows that the proposed scheme resists selective chosen-plaintext attack. Theoretical and experimental analyses both show that the proposed scheme is more efficient in computation and storage than related schemes.

    • Secure Intersection-sum Computation

      2023, 34(7):3343-3353. DOI: 10.13328/j.cnki.jos.006529