To address the purchasing decision-making problems of supplier and order selection in the E-business Logistics Management System(ELMS), this paper derives a supplier selection scheduling algorithm from the ELMS purchase process and a multi-attribute decision-making method, and realizes an order selection strategy with a multi-server, multi-queue model based on Stochastic Petri Net(SPN). Performance analysis results show that this method can effectively enhance the throughput and reduce the response time of ELMS.
To address the lack of a mathematical model, the unclear level structure and the narrow application range of security authentication protocols, this paper presents a protocol composition deduction system. It divides the design of security protocols into three levels: the first level realizes basic key exchange and identity verification; the second level improves efficiency and implements mechanisms for defending against denial-of-service attacks; the third level analyzes the security of the protocols and verifies their security properties with an automatic test tool. Simulation results show that this system supports the formal design of security authentication protocols, and that it can be conveniently extended and transplanted as needed.
To meet the demand for clustering large-scale databases, this paper proposes a density-based parallel clustering algorithm for computer clusters. The algorithm partitions the data according to the distribution characteristics of the database, performs density-based clustering of the data blocks in parallel on every node, and merges the clustering results on the main node. Experimental results show that the computing speed of the algorithm increases nearly linearly with the number of nodes, and that it has good scalability.
To address guard sub-carrier and non-2^n-interval pilot insertion in OFDMA systems based on the IEEE 802.16d/e standard, this paper presents a DFT-based time-domain LS channel estimation algorithm. A low-rank approximation estimator based on the DFT is derived through Singular Value Decomposition(SVD) of the matrix T, which is easier to obtain than the autocorrelation matrix R. Simulation results show that both the channel estimation performance and the computational complexity of this algorithm lie between those of the LMMSE-SVD algorithm and the IFFT/FFT algorithm, while its pilot requirements are less critical and its scope of application is wider.
The safety platform of onboard ATP is a fault-tolerant computer with three computing channels, and keeping the three computers synchronized is one of its key technologies. For this synchronization problem, this paper proposes a mode that combines the public time synchronization model with a bounded clock-drift-rate model to compensate for clock offset and drift, thereby synchronizing the local times of the three systems. On this basis, the software adopts scheduling control to synchronize the tasks of the three systems. Several tests prove that the synchronization mechanism satisfies the requirements of the three-channel fault-tolerant computer.
A power system transient stability assessment method based on an information fusion model is proposed. When a fault occurs, the method synthesizes information obtained from the power network and the generators to diagnose transient stability, and D-S evidence theory is applied to further reduce the uncertainty of the assessment. A 10-generator, 39-bus power system is simulated, and the simulation results indicate that the proposed method is more precise than previous methods.
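The fusion step the abstract refers to rests on Dempster's rule of combination. As a minimal sketch (the stability classes and mass values below are hypothetical, not taken from the paper), two bodies of evidence over {stable, unstable} can be combined like this:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two mass functions (dicts of frozenset -> mass)
    with Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb  # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence")
    # Normalize by the non-conflicting mass
    return {s: m / (1.0 - conflict) for s, m in combined.items()}

S, U = frozenset({"stable"}), frozenset({"unstable"})
SU = S | U  # ignorance: mass not committed to either class
m_net = {S: 0.6, U: 0.1, SU: 0.3}   # evidence from network measurements
m_gen = {S: 0.7, U: 0.2, SU: 0.1}   # evidence from generator measurements
fused = dempster_combine(m_net, m_gen)
```

Combining two sources that both lean toward "stable" concentrates the fused mass on that class, which is how the rule reduces assessment uncertainty.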
To overcome the shortcomings of traditional image encryption algorithms in efficiency and security, a new image encryption algorithm named Square is proposed, composed of a square permutation and revised mixing operations. The new algorithm satisfies all requirements for parallel image encryption; it is faster and more secure than the MASK algorithm and can run on practical parallel computing platforms.
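The abstract does not specify the square permutation itself, so the following is only a hypothetical illustration of the general idea: an invertible permutation of an N x N pixel block (here, transpose plus per-row rotation) that could serve as the permutation stage of such a cipher.

```python
def square_permute(block):
    """Hypothetical square permutation: transpose the N x N block,
    then rotate row i left by i positions. Invertible, so it can
    act as the permutation stage of an image cipher sketch."""
    n = len(block)
    transposed = [[block[c][r] for c in range(n)] for r in range(n)]
    return [row[i:] + row[:i] for i, row in enumerate(transposed)]

def square_unpermute(block):
    """Inverse: rotate row i right by i positions, then transpose."""
    n = len(block)
    unrotated = [row[-i:] + row[:-i] if i else row
                 for i, row in enumerate(block)]
    return [[unrotated[c][r] for c in range(n)] for r in range(n)]

pixels = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
scrambled = square_permute(pixels)
```

Because each block is permuted independently, many blocks can be processed in parallel, which is the property parallel image encryption relies on.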
To select suitable compensation services, this paper presents a Web transaction Quality of Service(QoS) framework based on a compensation factor. It treats the compensation cost and the reliability of transaction coordination as QoS extensions, and the transaction coordinator performs transaction selection and recovery on the basis of a built-in QoS rule base. The framework can enhance the overall QoS of transactions and satisfy the QoS requirements of transaction requests. Experimental results show that this framework can reduce the compensation cost of Web transactions and raise their success rate.
To overcome the limited applicability to multiple kinds of music signals and the high computational complexity of existing automatic note onset detection methods, a note onset detection algorithm based on differential all-phase Mel Frequency Cepstrum Coefficients(MFCC) is presented. All-phase preprocessing alleviates the spectrum ambiguity caused by spectrum leakage, and the differential Mel-frequency cepstrum accounts for the nonlinear response of the human ear to different frequencies as well as the dynamic properties of music signals. Experimental results demonstrate that this algorithm fits multiple kinds of music and achieves good general detection performance with lower computational complexity than the High Frequency Content(HFC) and Independent Component Analysis(ICA) methods.
A method using wavelet packets is put forward to extract the features of Electroencephalogram(EEG) signals more efficiently. With the help of wavelets, the original EEG signals are decomposed and reconstructed in the relevant frequency ranges for feature extraction, and the features are classified with BP neural network technology. Experimental results show that wavelets can extract the feature waves efficiently, achieving an identification rate above 80 percent for three participants. The person identification method can be used both by persons with disabilities and by the general public, and has good adaptability.
A two-tier service placement strategy based on a P2P overlay network framework is proposed. An optimization model is built for the intra-domain Service Placement Problem(SPP), and an efficient algorithm is designed to solve it. Several cost calculation methods are adopted to fit different structures of the composed service. Simulation results show that the intra-domain service placement strategy decreases the cost of intra-domain composed services, and that the two-tier strategy decreases the total cost of composed services.
A transaction consists of a series of operations and has the ACID properties. In large-scale distributed applications, however, the traditional transaction model is not applicable. A transaction model based on the local closed world assumption, together with data replication, is proposed, and its features, concurrency control and correctness are investigated and demonstrated. The transaction model is illustrated with a teaching management system.
Because the performance of an association mining algorithm depends on the properties of the input data, the same algorithm can perform differently on different data sets. This paper therefore categorizes input data into three groups and applies different methods to the first two. Examples show that this processing improves the quantity of the output patterns. To improve the quality of data mining, the paper briefly introduces the interestingness of patterns and the interestingness preprocessing of association patterns, and on the basis of this analysis proposes a new method for mining interesting association patterns.
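The paper's own interestingness measure is not given in the abstract; a standard measure that such preprocessing commonly builds on is lift, sketched below on a hypothetical toy transaction set:

```python
def lift(support_xy, support_x, support_y):
    """Lift of rule X -> Y: how much more often X and Y co-occur
    than expected under independence. Lift > 1 suggests a
    positively correlated, potentially interesting pattern."""
    return support_xy / (support_x * support_y)

# Hypothetical example data, for illustration only
transactions = [{"bread", "milk"}, {"bread", "butter"},
                {"bread", "milk", "butter"}, {"milk"}]
n = len(transactions)
sup = lambda items: sum(items <= t for t in transactions) / n
score = lift(sup({"bread", "milk"}), sup({"bread"}), sup({"milk"}))
```

Here the score is below 1, so the rule "bread -> milk" would be filtered out as uninteresting despite its high raw support.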
A virtual machine-based transparent computing system called MMNC-VX has been implemented so that heterogeneous operating systems can run unmodified, on demand, in a transparent computing environment, but its performance falls far short of that of a PC. To solve this problem, this paper presents LBTC, a lightweight virtual machine-based transparent computing system that virtualizes only the network devices and lets the user operating system access all other devices directly, reducing the extra overhead caused by virtualization. LBTC also adds a virtual storage device model to the service operating system that redirects storage I/O requests to the server, thereby implementing transparent computing. Test results show that the prototype's performance is on the same level as a PC with the same hardware and is improved over MMNC-VX.
In probabilistic databases, an aggregation query processes each possible world, but the number of possible worlds may grow exponentially with the number of tuples, so the aggregation query cannot be calculated in linear time when the number of tuples is relatively large. Three aggregation components are defined for each aggregation function. By encoding the original probabilistic relation and using conversion, stored procedures and approximate calculation methods respectively, the aggregation query can be implemented in linear time. Theoretical proof and experimental results show that the methodology is correct and efficient.
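To see why linear time is achievable at all, note that for independent tuples the expected values of COUNT and SUM follow directly from linearity of expectation, with no enumeration of possible worlds. A minimal sketch (the relation below is a made-up example, and the paper's three aggregation components are more general than this):

```python
def expected_count_sum(tuples):
    """Linear-time expected COUNT and SUM over independent
    probabilistic tuples (value, probability): by linearity of
    expectation, E[COUNT] = sum(p_i) and E[SUM] = sum(p_i * v_i),
    avoiding the exponential set of possible worlds."""
    e_count = sum(p for _, p in tuples)
    e_sum = sum(v * p for v, p in tuples)
    return e_count, e_sum

relation = [(10, 0.5), (20, 1.0), (30, 0.2)]  # (value, existence prob.)
e_count, e_sum = expected_count_sum(relation)
```

With n tuples there are up to 2^n possible worlds, yet both expectations are computed in a single pass.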
To meet the storage and query requirements of the data model of a Networked Automatic Test System(NATS), this paper compares the merits and drawbacks of different data models such as the extended relational model, the semantic data model and the object-oriented data model. It proposes a basic database framework in which static data adopt the relational data model while unstructured multimedia data adopt a method combining the object-relational model with the file system, and it completes the overall design of NATS. Experimental results show the validity of the framework.
An algorithm for generating test cases for combinatorial designs is proposed in this paper. The concept of a pair-index table is defined, on the basis of which an adaptive genetic algorithm is introduced to generate test cases. GATG, a tool developed from the proposed algorithm, is presented. Compared with similar tools, GATG shows nearly the same performance in experiments, and the algorithm is useful and open to further improvement.
To solve the weak-semantics problem in integrating heterogeneous databases with semantic technology, which is caused by the scale of the records, an algorithm is proposed that balances efficiency and reasoning ability. Ontologies and queries are expressed as graphs, sub-graphs are matched semantically, and the result is translated into a database query language. This algorithm supports richer semantics than algorithms that rewrite semantic queries into SQL.
In view of the high transfer rate of passengers in urban public transportation systems, an efficient and reasonable public transportation transfer system is studied and designed. The system uses ArcGIS Engine combined with C#.NET for integrated secondary development; through an improved Dijkstra algorithm, it constructs a line-station transfer matrix containing route and transfer station information, and extracts from the matrix the most feasible route plan with the least number of transfers.
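The core of a least-transfers query can be sketched independently of GIS details: treat each bus line as a node, connect lines that share a station, and search breadth-first so that the first line reaching the goal station gives the minimum transfer count. The line data below is hypothetical, and the paper's matrix-based formulation is equivalent in spirit rather than reproduced here:

```python
from collections import deque

def min_transfers(lines, start, goal):
    """Minimum number of transfers between two stations, where
    `lines` maps a line name to its set of stations. BFS over
    lines: boarding the first line costs 0 transfers, each
    subsequent line change costs 1."""
    start_lines = {l for l, st in lines.items() if start in st}
    queue = deque((l, 0) for l in start_lines)
    seen = set(start_lines)
    while queue:
        line, transfers = queue.popleft()
        if goal in lines[line]:
            return transfers
        for other, stations in lines.items():
            # A transfer is possible where two lines share a station
            if other not in seen and stations & lines[line]:
                seen.add(other)
                queue.append((other, transfers + 1))
    return None  # goal unreachable from start

bus = {"L1": {"A", "B", "C"}, "L2": {"C", "D"}, "L3": {"D", "E"}}
```

Riding L1 from A to C needs no transfer, while reaching E requires changing at C and again at D.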
This paper presents a back-end optimization design of a compiler for a Transport Triggered Architecture(TTA) processor using the linear scan algorithm, which is employed to accomplish global register allocation. The algorithm gives the TTA compiler several advantages: the quality of the generated code is high, the time and space complexity is low, and the implementation is easy. Experimental results show that the advantages of the algorithm are especially obvious when source programs contain a large number of variables competing for the same set of registers.
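The essence of linear scan allocation can be shown in a few lines: walk the variables' live intervals in order of start point, expire intervals that have ended, and spill when no register is free. This is a minimal sketch of the textbook algorithm with a simplistic spill policy (spill the newcomer), not the paper's TTA-specific implementation; the intervals are hypothetical:

```python
def linear_scan(intervals, num_regs):
    """Minimal sketch of linear scan register allocation.
    `intervals` maps a variable to its live interval (start, end);
    variables that do not fit in the register set are spilled."""
    allocation, spills = {}, []
    active = []  # (end, var) pairs, kept sorted by end point
    free = list(range(num_regs))
    for var, (start, end) in sorted(intervals.items(),
                                    key=lambda kv: kv[1][0]):
        # Expire intervals that ended before this one starts
        while active and active[0][0] < start:
            free.append(allocation[active.pop(0)[1]])
        if free:
            allocation[var] = free.pop()
            active.append((end, var))
            active.sort()
        else:
            spills.append(var)  # simplest policy: spill the newcomer
    return allocation, spills

intervals = {"a": (0, 4), "b": (1, 3), "c": (2, 6), "d": (5, 7)}
allocation, spills = linear_scan(intervals, num_regs=2)
```

With two registers, "c" overlaps both live values and is spilled, while "d" reuses a register freed when "a" and "b" expire; the single sorted pass is what keeps the time and space complexity low.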
This paper reviews the dominant Webpage ranking algorithms, improves the HITS algorithm among them, and proposes BHITS, a new algorithm based on Webpage sub-blocks. BHITS assigns weights to the different thematic blocks of a page: the blocks are calibrated by topic, and the weights of the subject sections are set according to the subjects of the information to be collected, which improves the ability to distinguish hyperlinks while keeping high efficiency. Comparative experiments with related algorithms show that the precision of BHITS is significantly higher than that of the other algorithms.
To address the complex data structures and difficult test construction of space-flight real-time data-driven software, an automated test data generation algorithm for function testing of such software is proposed, based on a Simulated Annealing(SA) multi-parent Genetic Algorithm(GA). The design of the fitness function and the mutation strategy are discussed. Analysis based on a practical application proves that the algorithm can find nearly 30% more software failures than common methods.
This paper takes the Web as the object to be assigned to registers and constructs the interference graph of Webs through data flow analysis of Web liveness. Compared with the interference graph of variables, the variable-based nodes are split into new Web-based nodes over which the variables' interference is distributed. The Web-based interference graph has more nodes, but their degrees are smaller, so the graph can be colored with fewer colors. This reduces the number of required registers, produces more efficient executable code, and makes register allocation more flexible.
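The coloring step itself is standard: interfering nodes (simultaneously live Webs) must get different colors, and each color stands for one physical register. A minimal greedy sketch over a hypothetical interference graph (the paper's Web construction and any coalescing are out of scope here):

```python
def color_interference_graph(graph):
    """Greedy coloring of an interference graph given as an
    adjacency dict. Each color corresponds to one physical
    register; adjacent (interfering) nodes get distinct colors."""
    colors = {}
    # Visit high-degree nodes first, a common simple heuristic
    for node in sorted(graph, key=lambda n: len(graph[n]), reverse=True):
        used = {colors[n] for n in graph[node] if n in colors}
        color = 0
        while color in used:
            color += 1
        colors[node] = color
    return colors

# Hypothetical Webs: w1 interferes with w2 and w3; w4 with nothing
graph = {"w1": {"w2", "w3"}, "w2": {"w1"}, "w3": {"w1"}, "w4": set()}
colors = color_interference_graph(graph)
```

Splitting a variable into several Webs lowers node degrees in exactly this graph, which is why fewer colors, and hence fewer registers, can suffice.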
To overcome the limitation that the Open Grid Services Architecture-Data Access and Integration(OGSA-DAI) middleware faces only JDBC, a unified heterogeneous data access interface middleware based on the .NET framework is proposed. It uses cross-platform XML Web Service technology and the enhanced database access capability of ADO.NET, which not only provides a common interface and easy scalability but also shields the underlying details of the data access interface, database connectivity, data format conversion, data transmission, database integration, etc. Test results show that the middleware is correct, flexible and reliable.
To address the complexity and low efficiency of the decision trees constructed by ID3, this paper proposes a decision tree classification algorithm based on rough sets, which takes the weighted classification rough degree as the heuristic function for choosing the attribute at a node. This heuristic function comprehensively measures the contribution of an attribute to classification and is simple to calculate. To eliminate the effect of noisy data on attribute selection and leaf node generation, a variable precision rough set model is used to optimize the algorithm. Experimental results show that the trees generated by the new algorithm are smaller and more accurate than those of the ID3 algorithm.
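The weighted classification rough degree itself is not defined in the abstract, but heuristics of this family build on the standard rough set approximation quality: the fraction of objects whose equivalence class under the chosen attributes is decision-consistent. A minimal sketch over a hypothetical decision table:

```python
from collections import defaultdict

def approximation_quality(rows, attrs, decision):
    """Rough set approximation quality (gamma): fraction of objects
    whose equivalence class under `attrs` is consistent, i.e. all
    its members share the same decision value. Heuristics such as
    a weighted classification rough degree build on this quantity."""
    classes = defaultdict(list)
    for row in rows:
        key = tuple(row[a] for a in attrs)
        classes[key].append(row[decision])
    consistent = sum(len(ds) for ds in classes.values()
                     if len(set(ds)) == 1)
    return consistent / len(rows)

rows = [
    {"outlook": "sunny", "play": "no"},
    {"outlook": "sunny", "play": "no"},
    {"outlook": "rain",  "play": "yes"},
    {"outlook": "rain",  "play": "no"},
]
gamma = approximation_quality(rows, ["outlook"], "play")
```

Only a single pass over the table is needed, which illustrates why such heuristics are simple to calculate at each tree node; the variable precision variant would relax the strict consistency test to a majority threshold.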
For persistent storage in J2ME programs, in accordance with the Mobile Information Device Profile(MIDP), this paper proposes an object-oriented storage and management scheme with a caching mechanism. It encapsulates data into objects, separates data access from data storage by a data management layer between the program and the persistent store, and uses the caching mechanism to raise data access efficiency. The feasibility of the scheme is verified by experiments, whose results indicate that the access efficiency of frequently accessed data is improved by the caching mechanism.
This paper introduces and analyzes the OPC DA 2.05A specification, implements a Component Object Model(COM) object based on the Active Template Library(ATL), and realizes OPC client applications in the VC++ 6.0 IDE. It gives the general steps and key technologies of OPC client implementation. Tests with Kepware's OPC server KEPServerEx V4.0 verify that this OPC client achieves stable and fast data exchange with a standard OPC server.
This paper introduces a model based on Principal Component Analysis(PCA), Independent Component Analysis(ICA) and the multiple data stream model, which supplies a new method for analyzing the relations among multiple data streams and discovering patterns. Because PCA/ICA can separate independent components from complicated information, a PCA/ICA-based solution can be implemented for multiple data stream relation analysis, pattern discovery and the detection of hidden variables. The robustness and real-time performance of the model are also examined experimentally.
Workflow adaptability is restricted by active routing, the quantity and performance of resources, role authority, and time. To resolve these constraint problems, this paper presents a workflow soft constraint network model. On top of a policy-based workflow management system, it uses the soft constraint network to quantify and optimize policies, which then guide policy decision-making. Analysis shows that this modeling and decision-making method has obvious advantages in improving model description ability, decreasing model complexity and improving system adaptability.
This paper presents a novel semi-fragile watermark algorithm for image authentication and recovery. Watermark generation and embedding are performed on the original image, and authentication requires no information about the original image or the watermark, which increases the security and confidentiality of the watermark. Experimental results show that the algorithm can distinguish intentional content tampering from incidental image processing well, and can recover the modified content if the image is tampered with.
This paper studies a hybrid Particle Swarm Optimization(PSO) algorithm for solving the traveling salesman problem. To improve the local search capability, local search operators such as inversion and swapping are added. The strong global search capability of the Genetic Algorithm(GA) is exploited to further optimize the results obtained by PSO, which further improves the performance of the hybrid algorithm. In addition, an optimization step is added to eliminate crossing paths in the global optimum solution. Simulation results show that this hybrid algorithm converges to a satisfactory result within a few generations for small and medium-scale traveling salesman problems.
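The crossing-path elimination step mentioned above is in essence a 2-opt pass: reversing the tour segment between two edges removes a crossing whenever doing so shortens the tour. A minimal sketch on a hypothetical four-city instance (the paper's PSO/GA machinery is not reproduced here):

```python
def tour_length(tour, dist):
    """Length of a closed tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def two_opt(tour, dist):
    """Eliminate crossing paths with 2-opt: keep reversing the
    segment between two edges while that shortens the tour."""
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour

pts = [(0, 0), (0, 1), (1, 1), (1, 0)]  # corners of a unit square
dist = [[((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 for x2, y2 in pts]
        for x1, y1 in pts]
```

Starting from the crossing tour 0-2-1-3 (both diagonals), one reversal uncrosses it into the square's perimeter, shortening the tour from 2 + 2*sqrt(2) to 4.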
Expert systems for disease diagnosis built on the classical model suffer from poor efficiency, low accuracy and a lack of contrast, because they use the field knowledge only once in a reasoning process. Taking goats as an example, this paper designs the architecture of a diagnosis system and introduces a multi-mode composite reasoning scheme, constructing both a probability-based Bayesian reasoning component that supports self-learning and a semantic measuring method based on pattern recognition, each with a different theoretical background. Experimental results show that the composite reasoning scheme improves the utilization rate of knowledge, reaches a diagnosis accuracy of 85%, increases the contrast, and achieves an acceptable overall diagnostic effect.