LI Wei-Nan , E Yue-Peng , GE Jing-Guo , QIAN Hua-Lin
2006, 17(12):2403-2415.
Abstract:This paper surveys algorithms and hardware implementations for multi-pattern matching. First, two commonly used multi-pattern matching algorithms are introduced: Aho-Corasick's automaton-based algorithm and Wu-Manber's hash-based suffix-matching algorithm with skipping, together with several of their improvements. Next, the time and space complexity of these algorithms is analyzed, and experimental results illustrate their performance. Several hardware-based implementations are then taken as examples to demonstrate general methods and strategies for implementing multi-pattern matching in hardware. Development trends are discussed at the end.
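As a rough illustration of the automaton-based approach that opens the survey, the following is a minimal Python sketch of Aho-Corasick construction and matching; the pattern set is made up for the example, and this is not any implementation discussed in the paper.

```python
from collections import deque

def build_ac(patterns):
    """Build the goto, fail and output tables of an Aho-Corasick automaton."""
    goto, fail, out = [{}], [0], [set()]
    for pat in patterns:                       # phase 1: trie of all patterns
        state = 0
        for ch in pat:
            if ch not in goto[state]:
                goto.append({}); fail.append(0); out.append(set())
                goto[state][ch] = len(goto) - 1
            state = goto[state][ch]
        out[state].add(pat)
    queue = deque(goto[0].values())            # phase 2: failure links via BFS
    while queue:
        s = queue.popleft()
        for ch, t in goto[s].items():
            queue.append(t)
            f = fail[s]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[t] = goto[f].get(ch, 0)
            out[t] |= out[fail[t]]             # inherit matches ending here
    return goto, fail, out

def search(text, goto, fail, out):
    """Yield (end_position, pattern) for every match of any pattern in text."""
    state = 0
    for i, ch in enumerate(text):
        while state and ch not in goto[state]:
            state = fail[state]
        state = goto[state].get(ch, 0)
        for pat in out[state]:
            yield i, pat

tables = build_ac(["he", "she", "his", "hers"])
print(sorted(search("ushers", *tables)))       # [(3, 'he'), (3, 'she'), (5, 'hers')]
```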
ZHAO Feng , LI Qing-Hua , JIN Li
2006, 17(12):2416-2424.
Abstract:A parallel sequence stream analysis algorithm named FTPSA (proactive fault-tolerant parallel sequence stream analysis algorithm), based on proactive fault-tolerant knowledge learning, is proposed to handle adaptive analysis of sequence streams in noisy environments. The algorithm describes the sequence stream with a learning network stored as a 0-1 matrix, separates low-proportion from high-proportion noisy data, applies fault-tolerant and structure-optimizing learning methods, and uses global filtering to reduce memory and communication costs. Experimental results on real streams show that FTPSA is more fault-tolerant, scalable and accurate, and uses less memory.
2006, 17(12):2425-2437.
Abstract:In many reliability-critical applications, computers are required to deliver higher performance, lower power dissipation and fault tolerance at the same time. Traditional software fault tolerance uses a large number of branch instructions to detect errors and thus incurs considerable overhead in both performance and power dissipation. In this paper, an error flow model is proposed and used to explain the error flow compressing (EFC) algorithm, in which branch instructions are greatly reduced while the total number of instructions remains the same. Simulation results on Wattch for the FFT benchmark from the StreamIT project show that, compared with the traditional EDDI error-detection algorithm, EFC reduces branch instructions by over 24%, improves IPC by over 12%, and reduces power dissipation by nearly 5% at loop parameter n=225. Further reasoning shows that the reduction in branch instructions can exceed 43% when there are 8 store instructions in the innermost iteration.
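The abstract does not give the EFC algorithm itself; as a hedged illustration of the general idea of branch compression in duplicated-instruction error detection, the Python sketch below contrasts a per-instruction check (EDDI-style) with a single accumulated check after the loop. It is a toy model only, not the paper's instruction-level transformation.

```python
def eddi_style(xs):
    """EDDI-style checking: duplicate each computation and branch on every comparison."""
    total = 0
    for x in xs:
        a = x * 2          # primary computation
        b = x * 2          # duplicated computation
        if a != b:         # one check (branch) per duplicated instruction
            raise RuntimeError("soft error detected")
        total += a
    return total

def compressed_style(xs):
    """Compressed checking: fold per-instruction discrepancies into one accumulated
    value; a single branch after the loop replaces the per-instruction branches."""
    total, err = 0, 0
    for x in xs:
        a = x * 2
        b = x * 2
        err |= a ^ b       # accumulate any discrepancy without branching
        total += a
    if err:
        raise RuntimeError("soft error detected")
    return total

print(eddi_style(range(10)), compressed_style(range(10)))   # both 90 when no error is injected
```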
CHEN Tsong-Yueh , KUO Fei-Ching , SUN Chang-Ai
2006, 17(12):2438-2449.
Abstract:Adaptive random testing (ART) is an enhanced version of random testing (RT). It has been observed that the compactness of failure regions is one of the factors that affect the performance of ART. However, this relationship has only been verified with rectangular failure regions. This paper further investigates the relationship between the compactness of failure regions and the performance of ART through simulation experiments in which various regular and irregular failure regions are studied. The experimental results show that ART's performance improves as the compactness of failure regions increases. This study provides further insight into the conditions under which ART outperforms RT.
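For readers unfamiliar with ART, a minimal sketch of one common implementation, fixed-size-candidate-set ART over the unit square, is given below; the failure region, failure rate and parameters are made up and do not reproduce the paper's simulation setup.

```python
import random

def fscs_art(run_test, dim=2, k=10, max_tests=1000):
    """Fixed-size-candidate-set ART over the unit hypercube [0,1]^dim.
    run_test(point) returns True when the point reveals a failure."""
    executed = []
    while len(executed) < max_tests:
        if not executed:
            candidate = [random.random() for _ in range(dim)]
        else:
            candidates = [[random.random() for _ in range(dim)] for _ in range(k)]
            # pick the candidate farthest from its nearest already-executed test
            candidate = max(
                candidates,
                key=lambda c: min(
                    sum((a - b) ** 2 for a, b in zip(c, e)) for e in executed
                ),
            )
        if run_test(candidate):
            return len(executed) + 1       # F-measure: tests used to find the failure
        executed.append(candidate)
    return None

# A square failure region of failure rate 0.01 centred at (0.5, 0.5)
fails = lambda p: all(abs(x - 0.5) <= 0.05 for x in p)
print(fscs_art(fails))
```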
HUANG Meng , SHU Feng-Di , LI Ming-Shu
2006, 17(12):2450-2460.
Abstract:Requirements prioritization is the activity in which stakeholders decide the order in which requirements are to be implemented; it is the basis of the iteration plan. Existing prioritization methods provide inadequate support for stakeholder negotiation and for adjusting requirement priorities, and these shortcomings often lead to a rigid iteration plan that is difficult to adapt to changes in requirements and environment. In this paper, a risk-driven adaptive method for prioritizing requirements is put forward, which combines adaptive planning with a risk-driven methodology: requirements are prioritized adaptively, and risks serve as the basis of priority decisions. The method enhances the negotiability and adjustability of requirement priorities, which improves the developers' ability to control requirements and reduces software project failures caused by requirements.
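As a purely illustrative sketch of risk-driven, adjustable prioritization (not the paper's method), the snippet below orders hypothetical requirements by risk exposure and can simply be re-run after stakeholders revise their estimates.

```python
# Hypothetical requirement records: (name, business_value, risk_probability, risk_impact)
requirements = [
    ("online payment", 8, 0.7, 9),
    ("report export",  5, 0.2, 3),
    ("user profiles",  6, 0.4, 5),
]

def prioritize(reqs):
    """Order requirements by risk exposure (probability x impact), breaking
    ties by business value, highest risk first."""
    return sorted(reqs, key=lambda r: (r[2] * r[3], r[1]), reverse=True)

for name, value, prob, impact in prioritize(requirements):
    print(f"{name}: exposure={prob * impact:.1f}, value={value}")

# After an iteration, stakeholders revise the estimates and re-run prioritize()
# to obtain an adjusted plan.
```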
CHENG Shao-Wu , XU Xiao-Fei , WANG Gang , LI Quan-Long
2006, 17(12):2461-2470.
Abstract:To model loosely coupled inter-organizational workflows (LCIOWF), colored multi-dimension inter-organizational workflow (CMD/IOWF) nets are proposed on the basis of colored Petri nets, by introducing color sets, color functions, resource places, waiting places, busy places, begin transitions, end transitions, roles, organizations, time functions, resource functions and transition functions into inter-organizational workflow (IOWF) nets. Based on CMD/IOWF, the modeling of multiple workflow instances in an LCIOWF coupled through resource sharing and activity synchronization is discussed, as are the modeling of resource constraints and of the timing aspects relevant to simulation. By defining inputs/outputs, states, events, time-advancing functions and state-transfer functions, a simulation model of loosely coupled inter-organizational workflows is presented. From this simulation model, the key performance indexes of an LCIOWF can be obtained: the average execution time and average execution cost of the workflow instances of a project, their distributions among organizations, and the resource-utilization ratio of a project. Finally, a simulation case of loosely coupled inter-organizational workflows is demonstrated to verify the validity of the proposed model.
2006, 17(12):2471-2484.
Abstract:Due to line fragmentation, occlusion and the projection of conjoint coplanar straight lines in space, many “one-to-multiple” and even “multiple-to-multiple” mappings arise between the two feature sets during stereo matching, yet few reliable methods exist to handle these cases. In this paper, an algorithm based on feature grouping is proposed to solve these problems. Unlike existing approaches, feature grouping is performed on the feature set composed of the linear features extracted from the two images, and each feature group carries its associated matching relationships. Stereo matching thus becomes equivalent to extracting a set of mutually compatible feature groups from the two images. The matching process involves two major steps: first, as many putative feature groups as possible are constructed and their match measures are computed by exploiting viable geometric and photometric constraints; then a subset of feature groups is sought whose total match measure is maximal, subject to the condition that each extracted linear feature belongs to at most one selected feature group. To solve this integer optimization problem, a two-stage method is devised: the whole problem is first divided into many sub-problems, and a branch-and-bound method is then applied to each sub-problem to find its optimal solution. The proposed algorithm is applied to match straight lines extracted from many pairs of real stereo images, and satisfactory experimental results are obtained.
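The selection step described above is an integer optimization; the following is a generic branch-and-bound sketch for it (choose mutually compatible feature groups maximizing the total match measure, with each line feature used at most once), with made-up groups and measures rather than the paper's sub-problem decomposition.

```python
def select_groups(groups):
    """Choose a set of mutually compatible feature groups (no shared line
    feature) that maximizes the total match measure, by branch and bound.
    groups: list of (set_of_feature_ids, match_measure)."""
    order = sorted(range(len(groups)), key=lambda i: -groups[i][1])
    best = [0.0, []]                                # [best score, chosen indices]

    def dfs(idx, used, score, chosen):
        # bound: even taking every remaining group cannot beat the incumbent
        if score + sum(groups[i][1] for i in order[idx:]) <= best[0]:
            return
        if idx == len(order):
            best[0], best[1] = score, chosen[:]
            return
        i = order[idx]
        feats, measure = groups[i]
        if not (feats & used):                      # branch 1: take group i
            chosen.append(i)
            dfs(idx + 1, used | feats, score + measure, chosen)
            chosen.pop()
        dfs(idx + 1, used, score, chosen)           # branch 2: skip group i

    dfs(0, frozenset(), 0.0, [])
    return best[0], best[1]

# Toy groups over line features a..e with made-up match measures
groups = [({"a", "b"}, 0.9), ({"b", "c"}, 0.8), ({"c", "d"}, 0.7), ({"e"}, 0.4)]
print(select_groups(groups))   # -> (2.0, [0, 2, 3])
```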
LUO Xiang-Yu , SU Kai-Le , YANG Jin-Ji
2006, 17(12):2485-2498.
Abstract:This paper presents an approach to verifying temporal epistemic logic in synchronous multi-Agent systems via bounded model checking (BMC). By incorporating epistemic modalities into the temporal logic CTL*, a temporal epistemic logic ECKLn is introduced and interpreted under the semantics of synchronous interpreted systems. The temporal epistemic expressive power of ECKLn is greater than that of Penczek and Lomuscio's logic CTLK. Agents' knowledge under the synchronous semantics is obtained through a state position function, which avoids extending the encoding of the states and the transition relation of the plain temporal epistemic model with a time domain. The technical details and the correctness of the BMC method for the logic AECKLn/EECKLn (the universal and existential fragments of ECKLn) are given. A case study of a train controller system illustrates the BMC procedure.
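For background, the standard existential BMC unrolling is sketched below in LaTeX; it is not the paper's full AECKLn/EECKLn translation, which additionally encodes the epistemic modalities.

```latex
% Standard existential BMC unrolling (background only, not the paper's full
% AECKL_n/EECKL_n encoding): an initial-state constraint, a k-step path
% constraint, and the bounded translation of the formula.
\[
  [\![ M, \varphi ]\!]_k \;:=\;
  I(s_0) \;\wedge\; \bigwedge_{i=0}^{k-1} T(s_i, s_{i+1}) \;\wedge\; [\![ \varphi ]\!]_k
\]
% Under the synchronous semantics, the existential epistemic modality of agent
% $a$ at the $i$-th point of a path is witnessed by the $i$-th point of some
% other unrolled path whose local state of $a$ coincides -- the role played by
% the state position function mentioned in the abstract.
```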
MEI Zheng , LI Jin-Tao , NING Hua
2006, 17(12):2499-2507.
Abstract:Fine-granularity-scalability (FGS) coding provides flexibility and good performance for video streaming and has thus been adopted in MPEG-4 and H.26L. An FGS bitstream can be truncated anywhere to adapt to changes in network bandwidth; however, the existing simple truncation methods lead to fluctuations in the quality of the video sequence, whereas users generally prefer smooth quality. After a discussion of related work, this paper presents two equal-quality FGS video streaming algorithms, for the lossless and the lossy case, in non-real-time streaming scenarios. Both algorithms are based on a piecewise-linear R-D model and a sliding-window protocol. In the lossless case, the bisection method is used to allocate the network bandwidth among the frames in the current window so as to achieve equal-quality FGS video streaming; in the lossy case, an adaptive heuristic algorithm and forward error correction (FEC) are used for the same purpose. Experimental results demonstrate that the algorithms perform well in both cases and yield smoother streaming quality for FGS video sequences.
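A minimal sketch of the lossless-case idea, bisection on a common quality target under a window budget, is given below; the R-D curves and budget are made up, and the paper's piecewise-linear model and sliding-window protocol are more elaborate.

```python
def equal_quality_allocation(rate_for_quality, budget, lo=0.0, hi=1.0, tol=1e-4):
    """Bisection on a common quality level q for all frames in the window:
    find the largest q whose total rate demand fits the window budget.
    rate_for_quality: list of functions r_i(q) -> bits needed by frame i,
    each monotonically non-decreasing in q (e.g. from a piecewise-linear R-D model)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(r(mid) for r in rate_for_quality) <= budget:
            lo = mid            # feasible: try a higher common quality
        else:
            hi = mid            # infeasible: lower the target quality
    return lo, [r(lo) for r in rate_for_quality]

# Made-up linear R-D curves for a 3-frame window (bits vs. quality q in [0,1])
curves = [
    lambda q: 2000 + 6000 * q,
    lambda q: 1500 + 9000 * q,
    lambda q: 2500 + 5000 * q,
]
q, rates = equal_quality_allocation(curves, budget=18000)
print(round(q, 3), [round(r) for r in rates])   # about 0.6 and [5600, 6900, 5500]
```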
ZHANG Wen-Chao , SHAN Shi-Guang , ZHANG Hong-Ming , CHEN Jie , CHEN Xi-Lin , GAO Wen
2006, 17(12):2508-2517.
Abstract:In this paper, a method for face description and recognition is proposed that extracts a histogram sequence of local Gabor binary patterns (HSLGBP) from the magnitudes of Gabor coefficients; Gabor features are robust to illumination and expression variations and have been used successfully in face recognition. First, the normalized face image is convolved with multi-scale, multi-orientation Gabor filters to extract the corresponding Gabor magnitude maps (GMMs). Then, the local binary pattern (LBP) operator is applied to each GMM to extract local neighborhood patterns. Finally, the input face image is described by the histogram sequence extracted from all these regional patterns. By combining the Gabor transform, LBP and spatial histograms, the method is robust to illumination, expression and misalignment. In addition, this face modeling method needs no training set for statistical learning and thus avoids the generalizability problem. Moreover, the combination of statistical methods in the classification stage is discussed, and a statistical Fisher-weighted HSLGBP matching method is proposed. Comparisons with published results on the FERET face database under changes in illumination, expression and aging verify the validity of the proposed method.
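A simplified numpy sketch of the descriptor pipeline (a basic 8-neighbour LBP applied to each Gabor magnitude map, followed by concatenated regional histograms) is shown below; the Gabor filtering itself is omitted, and the LBP variant, block layout and map count are assumptions rather than the paper's exact settings.

```python
import numpy as np

def lbp8(img):
    """Basic 8-neighbour LBP code for each interior pixel of a 2-D array."""
    c = img[1:-1, 1:-1]
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    # each neighbour contributes one bit: 1 if it is >= the centre pixel
    return sum(((n >= c).astype(int) << bit) for bit, n in enumerate(neighbours))

def histogram_sequence(gabor_magnitude_maps, grid=(4, 4)):
    """Concatenate regional LBP histograms over every Gabor magnitude map,
    yielding one long descriptor (an HSLGBP-style representation)."""
    hists = []
    for gmm in gabor_magnitude_maps:
        codes = lbp8(gmm)
        for block in np.array_split(codes, grid[0], axis=0):
            for cell in np.array_split(block, grid[1], axis=1):
                h, _ = np.histogram(cell, bins=256, range=(0, 256))
                hists.append(h / max(h.sum(), 1))
    return np.concatenate(hists)

# Toy stand-in for 40 Gabor magnitude maps (5 scales x 8 orientations) of a 64x64 face
maps = [np.random.rand(64, 64) for _ in range(40)]
print(histogram_sequence(maps).shape)   # (40 * 16 * 256,) = (163840,)
```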
ZHANG Zhi-Zheng , ZHAI Yu-Qing , XING Han-Cheng
2006, 17(12):2518-2528.
Abstract:In this paper, logical chains are introduced as a uniform preference representation for iterative reasoning in the qualitative preference description language LPD (logic preference description). The main contributions are: (1) a new general framework covering several common decision-making settings is proposed, and LPD is introduced within this framework; (2) LPD is translated into logical chains, the chains being constructed instead of applying preference operators. Consequently, preference reasoning in LPD is reduced to several methods of constructing logical chains, so that the complex preferences in LPD can be reused in subsequent processing. Conclusions and prospects are presented at the end.
HUANG Hua , FAN Xin , QI Chun , ZHU Shi-Hua
2006, 17(12):2529-2536.
Abstract:Super-resolution (SR) reconstruction is posed as Bayesian estimation of the location and appearance parameters of a face model. Image registration and image fusion, the two steps of SR reconstruction, are combined into one unified probabilistic framework, in which prior information about facial appearance and gray levels from the face model is incorporated into both steps. In addition, a particle-filter-based algorithm is proposed to perform the estimation, i.e., the SR reconstruction. The proposed approach avoids the inherent dilemma of most traditional methods, which demand a high-resolution image to obtain an accurate and robust estimate of the motion field, while reconstructing a high-resolution image in turn requires an accurate and robust estimate of the motion field. Experiments on synthesized frontal face sequences show that the proposed approach attains superior performance in both registration and reconstruction.
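To illustrate the estimation machinery the paper builds on, here is a minimal bootstrap particle filter for a generic 1-D state; in the paper the state is the face model's location and appearance parameters and the likelihood comes from the low-resolution observations, so this is only a hedged, simplified analogue.

```python
import numpy as np

def bootstrap_particle_filter(observations, n_particles=500,
                              process_std=0.5, obs_std=1.0):
    """Generic bootstrap particle filter for a 1-D random-walk state observed
    in Gaussian noise: predict, weight by likelihood, resample, estimate."""
    rng = np.random.default_rng(0)
    particles = rng.normal(0.0, 5.0, n_particles)                 # diffuse prior
    estimates = []
    for z in observations:
        particles += rng.normal(0.0, process_std, n_particles)    # predict
        weights = np.exp(-0.5 * ((z - particles) / obs_std) ** 2) # likelihood
        weights /= weights.sum()
        idx = rng.choice(n_particles, n_particles, p=weights)     # resample
        particles = particles[idx]
        estimates.append(particles.mean())                        # point estimate
    return estimates

true_state = np.cumsum(np.random.default_rng(1).normal(0, 0.5, 50))
obs = true_state + np.random.default_rng(2).normal(0, 1.0, 50)
est = bootstrap_particle_filter(obs)
print(float(np.mean(np.abs(est - true_state))))   # mean tracking error, typically below the noise std
```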
WANG Li-Ming , HUANG Hou-Kuan , CHAI Yu-Mei
2006, 17(12):2537-2546.
Abstract:Multi-issue negotiation between Agents is a complicated process in which the negotiating Agents exchange offers. Choosing a seller before negotiation has important practical value in e-commerce, and this paper addresses that problem in order to improve the accuracy of multi-issue negotiation and the buying Agent's utility. To fully utilize the negotiation history and to trade off exploration against exploitation, seller selection is transformed into a K-armed bandit problem. A model for measuring trust and reputation is presented, together with several improved algorithms that learn the reward distributions and combine this learning with K-armed bandit techniques. The combination of the improved algorithms with trust and reputation improves the accuracy and practicality of choosing a selling Agent. Several experiments demonstrate the validity of the work in application.
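The paper couples learned reward distributions with trust and reputation; as background only, the sketch below shows a generic UCB1-style treatment of seller selection as a K-armed bandit, with made-up sellers and utilities.

```python
import math, random

def choose_seller_ucb(sellers, negotiate, rounds=200):
    """UCB1-style seller selection: balance exploring unfamiliar sellers with
    exploiting those whose past negotiations yielded high utility."""
    counts = {s: 0 for s in sellers}
    totals = {s: 0.0 for s in sellers}
    for t in range(1, rounds + 1):
        untried = [s for s in sellers if counts[s] == 0]
        if untried:
            pick = untried[0]                 # try every seller at least once
        else:
            pick = max(sellers, key=lambda s: totals[s] / counts[s]
                       + math.sqrt(2 * math.log(t) / counts[s]))
        reward = negotiate(pick)              # utility of this negotiation in [0, 1]
        counts[pick] += 1
        totals[pick] += reward
    return max(sellers, key=lambda s: totals[s] / counts[s])

# Hypothetical sellers whose negotiations yield noisy utilities around fixed means
true_mean = {"seller_a": 0.55, "seller_b": 0.70, "seller_c": 0.40}
simulate = lambda s: min(1.0, max(0.0, random.gauss(true_mean[s], 0.1)))
print(choose_seller_ucb(list(true_mean), simulate))   # usually "seller_b"
```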
YAN Bin-Feng , ZHU Xiao-Yan , ZHANG Zhi-Jiang , ZHANG Fan
2006, 17(12):2547-2553.
Abstract:In this paper, an approach is presented for integrating several confidence measures to verify utterances based on a support vector machine (SVM). Segmental filler-based posterior probability parameters and a linear predictive coding (LPC) recognition difference measure are derived from the utterance to be verified, and an SVM classifier is trained to integrate these confidence measures and make the final decision. Experimental results show that the confidence measures and the SVM classifier are effective for utterance verification.
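A minimal sketch of the fusion step is given below, assuming scikit-learn's SVC and two fabricated confidence features per utterance; the real features would come from the filler-based posterior and LPC difference measures described above.

```python
import numpy as np
from sklearn.svm import SVC

# Each row: [filler-based posterior score, LPC recognition-difference score]
# Labels: 1 = accept the utterance, 0 = reject (all values made up for illustration)
X_train = np.array([[0.92, 0.10], [0.85, 0.20], [0.30, 0.80],
                    [0.40, 0.75], [0.88, 0.15], [0.25, 0.90]])
y_train = np.array([1, 1, 0, 0, 1, 0])

# The SVM learns a single accept/reject decision from the combined measures
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)

X_test = np.array([[0.90, 0.12], [0.35, 0.85]])
print(clf.predict(X_test))   # likely [1 0] for these well-separated toy scores
```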
LUO Ying-Wei , LIU Xin-Peng , PENG Hao-Bo , WANG Xiao-Lin , XU Zhuo-Qun
2006, 17(12):2554-2564.
Abstract:In city-wide applications such as e-government and e-business, event handling is a common activity. When handling an event in a city, the task is to integrate, coordinate and schedule heterogeneous, distributed resources from multiple domains, and to share knowledge among different organizations. This paper proposes a rule-based model for integrating and scheduling distributed resources in event handling. The model has a hierarchical structure with four levels: a resource layer, a knowledge layer, a business layer and a presentation layer. The knowledge layer is explored in detail, covering the definition of formal business rules, the referencing of resources within a rule, and the design and implementation of the rule system. Finally, an event-handling prototype for unified alarm intake in iEMS (integrated emergency management system) is implemented to validate the model.
SUN Yan-Tao , WU Zhi-Mei , SHI Zhi-Qiang
2006, 17(12):2565-2576.
Abstract:In this paper, a connection reasoning technique (CRT) based on predicate logic is proposed to infer the connections between network nodes. The technique interprets address forwarding tables (AFTs) as a set of predicate formulas and translates topology discovery into a problem of logical reasoning, so that topology discovery can be studied with mathematical tools. An algorithm for topology discovery based on CRT is then proposed. Compared with current discovery algorithms, this method excels in: 1) exploiting the redundancy in AFTs more effectively, so that the whole topology can be built from only a small part of the AFTs; 2) naturally solving topology discovery for multi-subnet switched domains without any extensions. Furthermore, a low-cost method is proposed for discovering the dynamic topology. The algorithm has been successfully applied in the network management system for CBISN (Community Broadband Integrated Services Network).
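The paper's CRT infers connections from only part of the AFTs; for background, the sketch below shows the simpler classical complete-AFT connection test (two ports face each other when their forwarding tables are disjoint and jointly cover the domain), on a made-up toy domain.

```python
def directly_connected(aft_a, aft_b, all_nodes):
    """Classical completeness-based test on two switch ports' address
    forwarding tables: the ports face each other iff their AFTs are
    disjoint and together cover every node in the domain."""
    return not (aft_a & aft_b) and (aft_a | aft_b) == all_nodes

# Toy switched domain with hosts h1..h4 and switches s1, s2 (made up)
all_nodes = {"h1", "h2", "h3", "h4", "s1", "s2"}
aft_s1_p3 = {"h3", "h4", "s2"}        # addresses switch s1 learned on port 3
aft_s2_p1 = {"h1", "h2", "s1"}        # addresses switch s2 learned on port 1
print(directly_connected(aft_s1_p3, aft_s2_p1, all_nodes))   # True
```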
LI Ping , LIN Ya-Ping , ZENG Wei-Ni
2006, 17(12):2577-2588.
Abstract:The security issues of sensor networks differ greatly from those of conventional networks because of their specific requirements, the constrained resources of the nodes, and the variety of network characteristics. A security architecture for sensor networks is proposed that attempts to give a general picture of this area. Three topics are discussed: 1) security primitives such as SKE, MAC and PKC; 2) various keying mechanisms for key management; 3) joint consideration of network-related issues including routing, energy and fault tolerance. Other open security issues and further challenges are also addressed.
XU Xiao-Fei , ZHAN De-Chen
2006, 17(12):2589-2600.
Abstract:A parallel-machine batch process scheduling problem (PBPSP) that integrates batching decisions is investigated. The problem is converted into a fixed-charge transportation problem (FCTP), and a genetic local search algorithm (GLSA) with an intensification strategy for local search and an escape strategy from local optima is developed. In the GLSA, a spanning tree is encoded by the sorted edges obtained from a root-first search of the tree. An efficient single-point crossover operator that appends edges to the sub-tree is proposed, and a local search based on the network simplex method serves as the mutation operator. To strengthen the search for the global optimum, an intensification strategy that applies repeated random-node local search to the current best solution and an escape strategy based on random pivot mutation and random-node local search are presented. Computational results demonstrate that the GLSA outperforms both the permutation-encoding and the matrix-encoding genetic algorithms in solution quality, finds the optimal solutions of all benchmark problems, and is robust. Compared with a tabu search procedure, the algorithm obtains the optimal solutions of the test problem instances more frequently.
CHEN Ji-Ming , SONG Ye-Qiong , SUN You-Xian
2006, 17(12):2601-2608.
Abstract:This paper presents the definition of a weakly hard real-time system and summarizes the existing constraint specifications for such systems and their relationships. A new constraint specification (m, p) is put forward, and theorems, with proofs, are presented for comparing the stringency of the constraint specifications. Moreover, a theorem proposed by Bernat for comparing stringency is corrected.
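The abstract does not spell out the semantics of the proposed (m, p) specification; as background for the notion of stringency, the sketch below checks the familiar weakly hard constraint "at least m deadlines met in any window of p consecutive invocations" on a made-up execution trace.

```python
def satisfies_m_p(history, m, p):
    """Check that in every window of p consecutive task invocations at least m
    deadlines are met. history is a sequence of 1 (deadline met) / 0 (missed);
    only full windows of length p are checked."""
    return all(sum(history[i:i + p]) >= m
               for i in range(len(history) - p + 1))

run = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]   # made-up execution trace
print(satisfies_m_p(run, m=3, p=4))    # True: every 4-window has at least 3 met
print(satisfies_m_p(run, m=4, p=4))    # False: windows containing a miss fail
```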