Quantum Particle Swarm Optimization Algorithm Based on Dynamic Adaptive Search Strategy

The particle swarm system simulates the evolution of a social mechanism: each individual particle, representing a potential solution, flies through the multidimensional space in order to find a better or optimal solution. However, because of its search path and limited speed, it is hard for the swarm to avoid local optima, and premature convergence occurs easily. Based on the uncertainty principle of quantum mechanics, the global search ability of the quantum particle swarm optimization (QPSO) algorithm is better than that of the particle swarm optimization (PSO) algorithm. On the basis of the fundamental quantum PSO algorithm, this article introduces a grouping optimization strategy and adopts dynamic adjustment, quantum mutation and a probabilistic acceptance criterion to improve the global search capability of the algorithm and avoid premature convergence. Experimental simulation on a set of test functions shows that the proposed algorithm has better global convergence and search ability.


Introduction
The PSO algorithm is a relatively new evolutionary computation technique that belongs to the category of swarm intelligence algorithms. In PSO, the movement of the particles fully embodies the characteristics of a swarm algorithm: during the movement, each particle follows both the best position it has found itself and the best position found by the entire population, eventually making the whole swarm gather at the position of the optimal solution. The particle swarm algorithm is conceptually simple and its parameters are easy to adjust, so it has been widely used. However, because of its search path and limited speed, it is hard for PSO to avoid local optima, and premature convergence occurs easily [1]. The emergence of the quantum particle swarm optimization algorithm solves the problem of the limited search scope. Based on the uncertainty principle of quantum mechanics, the global search ability of the quantum particle swarm optimization algorithm is better than that of the particle swarm algorithm [2].
In traditional PSO algorithms, the particle searches by flying. The flying process depends on the velocity; with a limited velocity, the particle can only search within a limited scope, which confines it to a fixed area that does not cover the whole feasible solution space. Thus, the particle swarm cannot find the global optimal solution with probability 1. QPSO is based on a delta (δ) potential well model, under which the particle state is similar to quantum behavior [3]. In the quantum particle swarm optimization algorithm, the particle state is determined by a wave function. The quantum space is the whole feasible solution space, which satisfies the wave principle of quantum mechanics. The particle has the characteristic of uncertainty in the search space and can search the whole feasible solution space; therefore, the quantum particle swarm optimization algorithm has advantages such as strong global search ability [4], [5].
Based on the QPSO algorithm, this article introduces a new search strategy. During the search process, each particle no longer updates its position only by learning from its own current local optimal value and the global optimal value, but also by learning from the current local optimal values of the other particles. This paper first introduces the basic particle swarm optimization algorithm, then expounds the quantum theory, the principle of the quantum particle swarm optimization algorithm and the steps of its implementation, and finally, through experimental simulation, shows that the proposed algorithm has better global convergence and search ability.

Basic Particle Swarm Optimization Algorithm Model
The particle swarm optimization algorithm is the result of research on birds' foraging behavior, and it is an optimization tool based on iteration. We can imagine the following situation: there is a piece of food at one point in a region, and a flock of birds searches randomly for it. No bird knows the exact position of the food, but each knows how far away it is [6]. The best search strategy is to follow the bird that is currently nearest to the food. First of all, imagine each bird as a particle whose position stands for one solution of the problem. The merit of a particle is determined by the fitness value of the optimization function, and the particle's movement is decided by its flight direction and distance. The particles follow the current optimal particle to search the solution space [7].
In the PSO algorithm, the swarm is first randomly initialized, and then the target space is searched under the iterative condition of the algorithm until the optimal solution is found. During each iteration, the algorithm updates itself by tracking two "extreme values". The first is the optimal solution found by the particle itself, called the individual optimal extreme value pbest_i; the other is the current optimal solution found by the entire population, called the global optimal extreme value gbest. Its mathematical model is as follows. Suppose the target search space is D-dimensional and the swarm size is M. The position of particle i is X_i = (x_i1, x_i2, ..., x_iD), whose fitness value, computed from the preset fitness function (related to the problem to be solved), measures the merit of the particle's position; V_i = (v_i1, v_i2, ..., v_iD) is the flight velocity of particle i, i.e. the distance the particle moves; pbest_i = (p_i1, p_i2, ..., p_iD) is the optimal position particle i has found so far; gbest = (p_g1, p_g2, ..., p_gD) is the optimal position the entire swarm has found so far, where g is the subscript of the particle located at the global best position, g ∈ {1, 2, ..., M}. For a minimization problem, a better fitness value means a smaller objective function value. During each iteration, the particle updates its velocity and position according to the following formulas:

v_id^(t+1) = v_id^(t) + c1 r1 (p_id − x_id^(t)) + c2 r2 (p_gd − x_id^(t))    (1)

x_id^(t+1) = x_id^(t) + v_id^(t+1)    (2)

where r1 and r2 are random numbers in [0, 1], which mainly maintain the diversity of the population.
c1 and c2 are the learning factors (acceleration factors) of the algorithm; their function is to let the particle learn and summarize by itself and approach the best point of both itself and the population [8], [9]. In order to improve the convergence of the particle swarm optimization algorithm, Shi and Eberhart proposed the inertia weight factor in 1998, and the velocity formula is changed into:

v_id^(t+1) = ω v_id^(t) + c1 r1 (p_id − x_id^(t)) + c2 r2 (p_gd − x_id^(t))    (3)

where ω is called the inertia factor, which mainly weighs the search ability of the algorithm. PSO adopts the inertia weight to balance global search and local search: a large inertia weight tends toward global search, while a small inertia weight tends toward local search. By dynamically changing the inertia weight, the search ability can be dynamically adjusted.
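The two updates above can be sketched in a few lines of NumPy. This is a minimal illustration; the function name and the default parameter values below are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, rng=None):
    """One inertia-weight PSO update for the whole swarm.

    x, v, pbest have shape (M, D); gbest has shape (D,).
    w is the inertia weight; c1, c2 are the learning (acceleration) factors.
    """
    rng = rng or np.random.default_rng()
    r1 = rng.random(x.shape)  # r1, r2 ~ U[0, 1] maintain population diversity
    r2 = rng.random(x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v
```

With w = 0 the velocity loses its memory term and the search degenerates into the local behavior analyzed in the QPSO section below.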

Quantum Particle Swarm Optimization Algorithm

Related quantum theory

Quantum's uncertainty principle
The quantum concept is usually used in physics. A quantum is an inseparable fundamental individual and a mechanical unit of energy in a micro system. The basic idea is that every tangible property "can be quantized": "quantization" means that the physical quantity takes only certain specific values rather than any value.
Quanta in the microscopic world have many features of micro objects that cannot be explained in the macroscopic world. These features are mainly manifested in the properties of quantum states, such as the superposition of quantum states, the entanglement of quantum states, the impossibility of cloning a quantum state, wave-particle duality and the uncertainty of quantum states. These strange phenomena come from the interaction among micro objects in the microscopic world, i.e. the so-called quantum coherence characteristic.
The uncertainty principle is a core theory of quantum mechanics. It states that it is impossible to know the momentum and position of a particle at the same time: the more accurately one of them is known, the more inaccurate the other becomes. This is an inherent feature of the quantum world and a reflection of the wave-particle duality of matter. The quantum world is governed by probability; there is no accurate prediction, only the probability that an event will happen. Because of these characteristics of the quantum, the wave function is adopted to describe quantum behavior in quantum mechanics [11], [12].

Schrödinger equation and wave function
The Schrödinger equation mainly consists of the time-dependent Schrödinger equation and the time-independent Schrödinger equation. The time-dependent Schrödinger equation depends on time and is used to calculate how the wave function of a quantum system evolves over time. The time-independent Schrödinger equation does not depend on time; it is used to calculate a stationary-state quantum system and corresponds to the eigen wave function of a certain eigen energy [13].
The wave function is used in quantum mechanics to describe the de Broglie wave of a particle.
The wave function ψ(x, t) satisfies the following time-dependent Schrödinger equation:

iħ ∂ψ(x, t)/∂t = Ĥ ψ(x, t)    (6)

where ħ is the reduced Planck constant, Ĥ is the Hamiltonian operator and m is the mass; V(x) is acquired by substituting the system's potential energy into equation (6), giving:

Ĥ = −(ħ²/2m) ∂²/∂x² + V(x)    (7)

In one-dimensional space, the time-dependent Schrödinger equation for a single particle moving in the potential V(x) is:

iħ ∂ψ/∂t = −(ħ²/2m) ∂²ψ/∂x² + V(x) ψ    (8)

The time-independent Schrödinger equation is:

Ĥ ψ(x) = E ψ(x)    (9)

where E is the energy and ψ(x) is the eigenfunction corresponding to E.

QPSO search strategy
The particle swarm system simulates the evolution of a social mechanism: each individual particle, representing a potential solution, flies through the multidimensional space in order to find a better or optimal solution. Particles fly according to their current position and velocity, i.e. they search around themselves along a fixed track. For the PSO algorithm, analysis of the particle flight track shows that when the inertia weight ω = 0, the PSO search process is essentially a local search, and the velocity update becomes:

v_id^(t+1) = c1 r1 (p_id − x_id^(t)) + c2 r2 (p_gd − x_id^(t))    (10)

Let φ1 = c1 r1 and φ2 = c2 r2, where c1 and c2 are the learning coefficients of the PSO algorithm and r1 and r2 are random numbers uniformly distributed on (0, 1). Substituting into the position update, the above formula can be abbreviated into:

x_id^(t+1) = (1 − φ1 − φ2) x_id^(t) + φ1 pbest_id + φ2 gbest_d    (11)

When pbest_i and gbest are stationary, formula (11) is a simple linear difference equation and its solution is:

x_id^(t) = p_id + (x_id^(0) − p_id)(1 − φ1 − φ2)^t    (12)

where p_id = (φ1 pbest_id + φ2 gbest_d)/(φ1 + φ2). Formula (12) shows that when |1 − φ1 − φ2| < 1, x_id^(t) → p_id as t → ∞, i.e. the algorithm converges.
p_i is referred to as the local attractor of particle i:

p_id = (φ1 pbest_id + φ2 gbest_d)/(φ1 + φ2)    (13)

If the convergence of the QPSO algorithm is to be guaranteed, each particle must converge to its own local attractor p_i:

lim_{t→∞} x_i(t) = p_i    (14)

When the algorithm converges, pbest_i, gbest and p_i overlap at one point. Therefore, it is assumed that during the iteration of the algorithm there exists an attracting potential field in some form at the local attractor p_i; all particles in the population are attracted by p_i, approach it gradually as the iteration proceeds and eventually overlap with it. This is also the reason why the particle swarm is able to maintain its aggregation [14].
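As a small sketch (the function name and default coefficients are illustrative), the local attractor of formula (13) is a random convex combination of pbest_i and gbest, so it always lies inside the hyper-rectangle spanned by those two points:

```python
import numpy as np

def local_attractor(pbest_i, gbest, c1=2.0, c2=2.0, rng=None):
    """Local attractor p_i = (phi1*pbest_i + phi2*gbest) / (phi1 + phi2)."""
    rng = rng or np.random.default_rng()
    phi1 = c1 * rng.random(len(pbest_i))  # phi1 = c1 * r1, r1 ~ U(0, 1)
    phi2 = c2 * rng.random(len(pbest_i))  # phi2 = c2 * r2, r2 ~ U(0, 1)
    return (phi1 * pbest_i + phi2 * gbest) / (phi1 + phi2)
```

Because each coordinate is a convex combination, every component of the attractor lies between the corresponding components of pbest_i and gbest.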

Algorithm description
In classical mechanics, the flying track of a particle is fixed, but in quantum mechanics, the Heisenberg uncertainty principle tells us that the position and velocity of a particle cannot be determined at the same time, so the track makes no sense. Therefore, if the particles in the PSO algorithm have the quantum behavior of quantum mechanics, the PSO algorithm will work in a different way. The flow chart of the algorithm in this article is shown in Figure 2 [15], [16].
In quantum mechanics, the particle state is described by the wave function ψ(X, t), which is a complex function of coordinate and time, where X = (x, y, z) is the position vector of the particle in three-dimensional space. The physical meaning of the wave function is that the square of its modulus is the probability density of the particle appearing at a certain point X in space at time t:

Q(X, t) = |ψ(X, t)|²    (15)

where Q is the probability density function. The probability density function satisfies the normalization condition:

∫ |ψ(X, t)|² dX = 1    (16)

Suppose the particle swarm system is a quantum system, each particle has quantum behavior, and the wave function is used to describe the particle state. According to the analysis of the convergence behavior of particles in the PSO algorithm, there must be an attracting potential in some form centered at p_i. So a δ potential well can be set up at p_i, with potential energy function:

V(x) = −γ δ(x − p_i)    (17)

The particle's time-independent Schrödinger equation in the δ potential well is:

−(ħ²/2m) d²ψ/dx² − γ δ(x − p_i) ψ = E ψ    (18)

where E is the particle energy. Solving this equation for the corresponding wave function gives:

ψ(x) = (1/√L) e^(−|x − p_i|/L)    (19)

Its probability density function Q is:

Q(x) = |ψ(x)|² = (1/L) e^(−2|x − p_i|/L)    (20)

The probability distribution function F is:

F(x) = e^(−2|x − p_i|/L)    (21)

 
Q gives the probability of the particle appearing near p_i. Monte Carlo stochastic simulation can be adopted to determine the particle position:

x = p_i ± (L/2) ln(1/u)    (22)

where u is a random number uniformly distributed on (0, 1) and L is the characteristic length of the δ potential well. In order to make the particle position change over time while still converging, the characteristic length in formula (22) must also change over time, namely L = L(t). In this way formula (22) can be rewritten as:

x(t + 1) = p_i ± (L(t)/2) ln(1/u)    (23)

From formula (22), we can see that L determines the search scope of the particle: the larger L is, the larger the search scope. However, if L is too large, the whole swarm will diverge and the convergence speed and ability will drop; if L is too small, the swarm will converge prematurely and fall into a local best. The following two methods can be adopted to choose the value of L.

The first method is:

L(t) = 2α |x_ij(t) − p_ij|    (24)

Then, for generation t + 1, the position evolution equation of particle i in dimension j becomes:

x_ij(t + 1) = p_ij ± α |x_ij(t) − p_ij| ln(1/u)    (25)

The second method is to introduce the mean best position mbest, defined as the average of all particles' individual best positions:

mbest = (1/M) Σ_{i=1..M} pbest_i    (26)

Then the evaluation of L may be changed into:

L(t) = 2α |mbest_j − x_ij(t)|    (27)

and the corresponding particle evolution formula becomes:

x_ij(t + 1) = p_ij ± α |mbest_j − x_ij(t)| ln(1/u)    (28)

where α is called the contraction-expansion coefficient, the only parameter of the QPSO algorithm besides the swarm size and the number of iterations. α is used to control the convergence speed of the particles: when α < 1.75, the algorithm can guarantee convergence, and generally α is decreased linearly from 1 to 0.5.
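One mean-best position update of this kind can be sketched as below; the function name, array shapes and default values are illustrative assumptions, not the paper's exact code.

```python
import numpy as np

def qpso_step(x, pbest, gbest, alpha, rng=None):
    """One QPSO position update using the mean-best formulation.

    x, pbest have shape (M, D); gbest has shape (D,); alpha is the
    contraction-expansion coefficient (typically decreased from 1.0 to 0.5).
    """
    rng = rng or np.random.default_rng()
    M, D = x.shape
    phi = rng.random((M, D))
    p = phi * pbest + (1.0 - phi) * gbest   # local attractors
    mbest = pbest.mean(axis=0)              # mean best position
    u = 1.0 - rng.random((M, D))            # u in (0, 1], avoids log(1/0)
    sign = np.where(rng.random((M, D)) < 0.5, -1.0, 1.0)
    return p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
```

When pbest, gbest and x all coincide, mbest equals x, the characteristic length vanishes, and every particle sits exactly on its attractor, matching the convergence picture described above.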

Standard test functions
The following four standard functions are used to validate the effectiveness of the improved algorithm; all four functions have a minimum value of 0.
(1) Sphere function:

f1(x) = Σ_{i=1..n} x_i²

The optimal value of the function is 0 and the corresponding global best solution is at the origin, i.e. the value is minimal when x_i = 0. The function surface is smooth, continuous, symmetrical and unimodal; there is no interaction between the variables, and the gradient information always points to the global best point.

(2) Rosenbrock function:

f2(x) = Σ_{i=1..n−1} [100 (x_{i+1} − x_i²)² + (1 − x_i)²]

The optimal value of the function is 0 and it is a unimodal function; each component of the corresponding global optimal solution is 1, i.e. the value is minimal when x_i = 1. The function surface is smooth but contains a narrow ridge that is very sharp at the top, and the area near the best point is banana-shaped. There is a strong correlation between the variables, the gradient information often misleads the search direction, and the algorithm finds it difficult to reach the global best solution.

(3) Rastrigin function:

f3(x) = Σ_{i=1..n} [x_i² − 10 cos(2π x_i) + 10]

The optimal value of the function is 0 and the corresponding global best solution is at the origin, i.e. the value is minimal when x_i = 0. This is a nonlinear multimodal function with many local best points; there may be about 10^n local best points within the range (−5.12, 5.12), so it is difficult to find its global optimal solution. For an optimization algorithm, this function is extremely difficult to optimize: the algorithm is vulnerable to falling into some local best point on the path toward the global best point.

(4) The optimal value of the fourth function is 0 and the corresponding global best solution is at the origin, i.e. the value is minimal when x_i = 0.
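A minimal sketch of these benchmarks in NumPy follows. The fourth function is not named in the recovered text; its description (minimum 0 at the origin, strongly interacting product terms, nonlinear multimodal) matches the standard Griewank function, which is assumed here.

```python
import numpy as np

def sphere(x):       # f1: unimodal, minimum 0 at the origin
    return float(np.sum(x ** 2))

def rosenbrock(x):   # f2: narrow banana-shaped ridge, minimum 0 at x_i = 1
    return float(np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2
                        + (1.0 - x[:-1]) ** 2))

def rastrigin(x):    # f3: multimodal, about 10^n local minima on (-5.12, 5.12)^n
    return float(np.sum(x ** 2 - 10.0 * np.cos(2.0 * np.pi * x) + 10.0))

def griewank(x):     # f4 (assumed): interacting product term, multimodal
    i = np.arange(1, x.size + 1)
    return float(np.sum(x ** 2) / 4000.0
                 - np.prod(np.cos(x / np.sqrt(i))) + 1.0)
```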

Test and analysis
Based on the tests, this paper contrasts the proposed QPSO algorithm with the particle swarm optimization algorithm with linearly declining weight (LinWPSO) and the standard particle swarm optimization algorithm (Basic PSO). The parameter setting of the QPSO algorithm includes the learning coefficients; the parameter setting of the LinWPSO algorithm is the same as that of QPSO, and t is the current evolution generation. In order to eliminate the randomness of the algorithms, each optimization test function is calculated independently 50 times and the average is taken as the final result. Figures 3-6 show the mean best fitness evolution curves of the 4 benchmark functions for the three algorithms.
From the figures, we can see that the improved QPSO algorithm, to a certain extent, overcomes the premature convergence defect of the particle swarm optimization algorithm and improves its optimization searching ability; in addition, the QPSO algorithm converges better to the global optimal point, with a fast convergence speed. At the same time, as the number of iterations increases during the calculation, the global searching ability is gradually strengthened and the convergence precision of the algorithm improves.
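For concreteness, the experimental protocol (one QPSO run, repeated independently and averaged) can be sketched as follows; the parameter values, search bounds and function names are illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def qpso(f, dim=10, swarm=20, iters=500, lo=-5.12, hi=5.12, seed=0):
    """One QPSO run; returns the best fitness found (illustrative sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (swarm, dim))
    pbest = x.copy()
    pval = np.array([f(xi) for xi in x])
    g = int(np.argmin(pval))
    for t in range(iters):
        alpha = 1.0 - 0.5 * t / iters              # alpha decreases 1.0 -> 0.5
        mbest = pbest.mean(axis=0)                 # mean best position
        phi = rng.random((swarm, dim))
        p = phi * pbest + (1.0 - phi) * pbest[g]   # local attractors
        u = 1.0 - rng.random((swarm, dim))         # u in (0, 1]
        sign = np.where(rng.random((swarm, dim)) < 0.5, -1.0, 1.0)
        x = p + sign * alpha * np.abs(mbest - x) * np.log(1.0 / u)
        val = np.array([f(xi) for xi in x])
        better = val < pval
        pbest[better] = x[better]
        pval[better] = val[better]
        g = int(np.argmin(pval))
    return float(pval[g])

def mean_best(f, runs=50, **kw):
    """Average over independent runs, as in the experiments."""
    return float(np.mean([qpso(f, seed=s, **kw) for s in range(runs)]))
```

Averaging over independent seeded runs, as done here, is what removes the stochastic variation from any single trial.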

Conclusion
This article first introduces the basic principles and model analysis of the PSO algorithm, then expounds the quantum theory and the ideological source of the QPSO algorithm and derives the basic evolution formulas of QPSO. It compares the QPSO, LinWPSO and Basic PSO algorithms through experimental simulation and analyzes the features of the QPSO algorithm, showing that the quantum particle swarm optimization algorithm has good robustness and search efficiency.

TELKOMNIKA ISSN: 1693-6930. Quantum Particle Swarm Optimization Algorithm Based on Dynamic Adaptive Search Strategy (Jing Huo)

The global search ability of the algorithm is stronger when the inertia factor ω is large, so new areas can always be searched, while a small ω (below about 0.8) weakens the global search in favor of local search. As the number of iterations increases, ω should decrease continuously, and generally ω is taken as a linearly decreasing function. Figure 1 shows the adjustment concept of one search point and the individual search idea in the solution space [10].

Figure 1. Search sketch chart of a particle in the solution space


From formula (13), we can see that the local attractor p_i is located in the hyper-rectangle that has the individual best position pbest_i and the group best position gbest as its vertices, and the position of the local attractor p_i changes along with pbest_i and gbest. When the algorithm converges, the particle swarm also converges to the local attractor p_i; at that time, the individual best position pbest_i, the group best position gbest and the local attractor p_i overlap at one point.

Figure 3. Sphere function
Figure 4. Rosenbrock function

The product terms between the variables of the fourth function strongly influence each other, and it is a strongly nonlinear multimodal function. F1 and F2 are unimodal functions, while F3 and F4 are multimodal functions. F1 is simple and unimodal and can be used to test the algorithm's convergence precision; although F2 is also unimodal, its global best point lies in a narrow gap and is extremely difficult to find; F3 and F4 are typical multimodal functions with a large number of local extreme values, which can effectively test the global convergence of the algorithm.