2014 Multi-objective Optimization Based on Improved Differential Evolution

178038139@qq.com

Abstract: On the basis of the fundamental differential evolution (DE) algorithm, this paper puts forward several improved DE algorithms to strike a balance between global and local search and to obtain optimal solutions through rapid convergence. Meanwhile, a random mutation mechanism is adopted to process individuals that show stagnation behaviour. A series of frequently used benchmark test functions are then used to test the performance of the fundamental and improved DE algorithms. After a comparative analysis of the algorithms, the paper achieves the desired effects by applying them to the calculation of single and multiple objective functions.


Introduction
In scientific research and engineering design, many specific problems can be formulated as parameter optimization problems. In practice, however, these optimization problems usually have multiple design objectives, which conflict with and restrict each other [1]. Improving the performance of one objective usually degrades the performance of at least one of the others, which makes it difficult for many objectives to reach their optima simultaneously. Therefore, multi-objective optimization algorithms have become a research hotspot in current science and engineering design [2]. Evolutionary algorithm is the general term for heuristic search and optimization algorithms inspired by and developed from natural biological systems, and solving multi-objective optimization problems with evolutionary algorithms has been widely studied [3].
As an important branch of evolutionary algorithms, differential evolution has been widely used in solving optimization problems because of its simple theory, simple operation, and strong robustness [4]. The basic principle of differential evolution is to perturb certain individuals in the group so as to explore the search space; however, it is too random in choosing the individuals that generate the differences, which easily causes premature convergence or a long optimization time and prevents the algorithm from obtaining the global optimal solution [5]. Besides, when settling multi-objective optimization problems, differential evolution is affected by its own limitations: the selection of the mutation strategy and the setting of parameter values seriously restrict the performance of the algorithm [6].
In order to solve the above-mentioned problems, this paper investigates the selection of the mutation strategy and its parameter values when using differential evolution in multi-objective optimization. First, the paper conducts a numerical experiment on the step length F and the crossover operator CR of the fundamental DE algorithm, and then makes a comparative analysis to obtain the range of the optimal values of the two. To overcome the shortcomings of the DE algorithm in handling global optimization, we make some improvements to keep the variety of the group and accelerate population convergence. Secondly, the paper conducts a numerical experiment on the performance of the improved algorithms and makes a comparative analysis. Finally, the paper uses the improved DE algorithms to solve the optimization of multiple objective functions.

DE Algorithm

Basic Ideas of DE Algorithm
DE algorithm is an evolutionary algorithm based on real-number encoding that optimizes the minimum value of functions. The concept was put forward on the basis of population differences when scholars tried to solve the Chebyshev polynomials. Its overall structure is analogous to that of genetic algorithms: both have operations such as mutation, crossover, and selection [7], but there are still some differences. The basic ideas of the DE algorithm are as follows: mutation between parent individuals gives rise to mutant individuals; crossover between parent individuals and mutant individuals is applied with a certain probability to generate test individuals; greedy selection between parent individuals and test individuals is carried out in accordance with the degree of fitness; and the better ones are kept to realize the evolution of the population [8].

Mutation Operation
For each individual x_i^t in generation t, a mutant individual v_i^t is generated with the following expression:

v_i^t = x_{r1}^t + F (x_{r2}^t - x_{r3}^t)

In the expression, x_{r1}^t, x_{r2}^t, and x_{r3}^t are three mutually distinct individuals randomly selected from the population, each different from x_i^t.
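The mutation step can be sketched in a few lines of Python (a minimal illustration, not the paper's code; the population is assumed to be a list of real-valued vectors, and the function name `mutate` is ours):

```python
import random

def mutate(pop, i, F):
    """DE/rand/1 mutation: v_i = x_r1 + F * (x_r2 - x_r3),
    with r1, r2, r3 mutually distinct and different from i."""
    r1, r2, r3 = random.sample([k for k in range(len(pop)) if k != i], 3)
    return [pop[r1][j] + F * (pop[r2][j] - pop[r3][j])
            for j in range(len(pop[i]))]
```

Setting F = 0 reduces the mutant to the base vector x_{r1}, which makes the role of the scale factor easy to see.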

Crossover Operation
We obtain the test individual u_i^t in line with the following principle:

u_{i,j}^t = v_{i,j}^t, if rand_j <= CR or j = j_rand; otherwise u_{i,j}^t = x_{i,j}^t

In the expression, rand_j is a random number in [0,1]; CR, the crossover operator, is a constant in [0,1], and the bigger CR is, the more likely crossover occurs; j_rand is an integer randomly chosen from [1, D], which guarantees that the test individual u_i^t obtains at least one element from the mutant individual v_i^t. The mutation and crossover operations together are called reproduction [9].
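Binomial crossover can be sketched in Python as follows (an illustrative snippet; the function name and argument order are our own):

```python
import random

def crossover(x, v, CR):
    """Binomial crossover: take component j from the mutant v when a uniform
    random number is at most CR, or when j equals the forced index j_rand,
    which guarantees at least one component comes from v."""
    D = len(x)
    j_rand = random.randrange(D)
    return [v[j] if (random.random() <= CR or j == j_rand) else x[j]
            for j in range(D)]
```

With CR = 1 every component is taken from the mutant; with CR near 0 only the forced component at j_rand is, so CR directly controls how much of the mutant enters the test individual.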

Selection
DE algorithm adopts the "greedy" selection strategy, which means selecting the one that has the better fitness value from the parent individual x_i^t and the test individual u_i^t as the individual of the next generation. The selection is described as:

x_i^{t+1} = u_i^t, if f(u_i^t) <= f(x_i^t); otherwise x_i^{t+1} = x_i^t

where f, the objective function to be optimized, is regarded as the fitness function. Unless otherwise stated, the fitness function in this paper is an objective function to be minimized [10].
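The greedy selection step amounts to a one-line comparison in Python (function name ours):

```python
def select(x, u, fitness):
    """Greedy selection: keep the test individual u only if its fitness
    (objective value to be minimized) is no worse than the parent's."""
    return u if fitness(u) <= fitness(x) else x
```

For example, under the sphere objective sum(c^2), select([2.0], [1.0], sphere) keeps the test individual, while select([1.0], [2.0], sphere) keeps the parent.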

Calculation Process of DE Algorithm
From the principles of the fundamental DE algorithm described above, the calculation process of the DE algorithm is as follows: (1) Parameter initialization: population size NP; scale factor F; crossover operator CR; spatial dimension D; evolution generation t = 0. (2) Random initialization of the initial population within the search space. (3) Mutation: generate a mutant individual v_i^t for each parent individual x_i^t. (4) Crossover: combine x_i^t and v_i^t to produce the test individual u_i^t. (5) Evaluation: compute the fitness of the test individuals. (6) Selection: carry out greedy selection between x_i^t and u_i^t to form the next generation. (7) Test termination: if the next generation of population generated from the above process reaches the maximum evolution generation or meets the criteria of errors, output x_best^{t+1} as the optimal solution; otherwise, set t = t + 1 and return to step (3).
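The calculation process above can be tied together in a short, self-contained Python sketch of DE/rand/1/bin (a minimal illustration under our own naming; bounds handling and the stopping criterion are simplified to a fixed generation count):

```python
import random

def de_rand_1_bin(fitness, bounds, NP=15, F=0.9, CR=0.9, max_gen=200, seed=None):
    """Minimal DE/rand/1/bin: initialize, then loop mutation, crossover,
    and greedy selection until the maximum evolution generation."""
    rng = random.Random(seed)
    D = len(bounds)
    # (1)-(2) random initial population inside the search bounds
    pop = [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(NP)]
    fit = [fitness(x) for x in pop]
    for _ in range(max_gen):
        for i in range(NP):
            # (3) mutation: v = x_r1 + F * (x_r2 - x_r3), r1, r2, r3 distinct and != i
            r1, r2, r3 = rng.sample([k for k in range(NP) if k != i], 3)
            v = [pop[r1][j] + F * (pop[r2][j] - pop[r3][j]) for j in range(D)]
            # (4) binomial crossover with a forced component at j_rand
            j_rand = rng.randrange(D)
            u = [v[j] if (rng.random() <= CR or j == j_rand) else pop[i][j]
                 for j in range(D)]
            # (5)-(6) greedy selection against the parent
            fu = fitness(u)
            if fu <= fit[i]:
                pop[i], fit[i] = u, fu
    # (7) return the best individual found and its fitness
    best = min(range(NP), key=fit.__getitem__)
    return pop[best], fit[best]
```

For instance, minimizing the two-dimensional sphere function sum(x_j^2) over [-5, 5]^2 with these defaults quickly drives the best fitness toward zero.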

Parameter Selection of DE Algorithm

Selection of Population Size NP
In view of computational complexity, the larger the population size, the greater the likelihood of finding the global optimal solution, but also the greater the amount of calculation and time required. Nonetheless, the quality of the optimal solution does not simply improve as the population size expands; sometimes it is the other way round, and the accuracy of solutions even declines after the population size NP increases beyond a certain number. This is because a larger population size keeps the variety of the population but reduces the rate of convergence, so the accuracy decreases if the population size grows while the maximum evolution generation remains unchanged. Variety and the rate of convergence must therefore be kept in balance: the population size should be large enough to maintain variety and prevent premature convergence, but not so large that convergence within the given generations suffers [11].
According to our previous research results, with a given maximum evolution generation, the appropriate population size for simple low-dimensional problems lies between 15 and 35; in the same circumstance, a population size between 15 and 50 helps keep a good balance between variety and the rate of convergence [12].

Selection of Scale Factor F
Let us test the performance of the scale factor F. Set the population size at 15, and keep the crossover operator and the maximum evolution generation unchanged.
Based on the test on scale factor F of the banana function, we know that, with the same initial population, the results of the 30 runs vary greatly from each other when F < 0.7: we get better local optimization and a faster rate of convergence at the expense of a lower success rate of optimization and longer running time. When F >= 0.7, there are no significant differences between the results of the 30 runs, and we get better global optimization, shorter running time, and a faster rate of convergence.
To sum up, F, to a certain degree, can regulate the local and global search of an algorithm. A bigger F helps keep the variety of the population and increase the global search ability, while a smaller F helps increase the local search ability as well as the rate of convergence. Hence, the value of F should be neither too big nor smaller than a specific value [13]. This explains why the algorithm performs well when F is in [0.7, 1].

Selection of Crossover Operator CR
To test the effect of the crossover operator on algorithm performance, we set the scale factor F = 0.9 and the population size at 20, let the crossover operator CR range from 0 to 1 with an interval of 0.1, and keep the maximum evolution generation unchanged.
The test on crossover operator CR of the banana function shows that, with the same initial population, the results of the 30 runs vary greatly from each other when CR < 0.3: we get better local optimization at the expense of a slower rate of convergence, a lower success rate of optimization, and longer running time. When CR >= 0.3, there are no significant differences between the results of the 30 runs, and we get better global optimization; but when 0.3 <= CR < 0.6, the rate of convergence is slower and the running time longer, while when CR >= 0.6, the rate of convergence is faster and the running time shorter [14].

Simulation Testing of Five Improved DE Algorithms

Five Improved DE Algorithms
The fundamental DE algorithm can be described as DE/rand/1/bin, where "bin" denotes binomial crossover. The notation DE/x/y/z is used to differentiate the other DE deformations: x defines whether the base vector is chosen at random ("rand") or as the best individual ("best"), y denotes the number of difference vectors used, and z stands for the method of crossover operation. Below are the DE deformations obtained when we only consider the selection mode of the base point and the number of difference vectors:
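The common deformations differ only in how the mutant vector is formed, which can be sketched as follows (a hedged illustration; the function and strategy labels are our own naming, `best` denotes the current best individual, and the population must hold at least six individuals so that five distinct partners can be drawn):

```python
import random

def mutant(pop, best, i, F, strategy="rand/1"):
    """Mutant-vector formulas for common DE/x/y variants (sketch)."""
    r = random.sample([k for k in range(len(pop)) if k != i], 5)
    D = len(pop[i])
    if strategy == "rand/1":             # v = x_r1 + F (x_r2 - x_r3)
        return [pop[r[0]][j] + F * (pop[r[1]][j] - pop[r[2]][j])
                for j in range(D)]
    if strategy == "best/1":             # v = x_best + F (x_r1 - x_r2)
        return [best[j] + F * (pop[r[0]][j] - pop[r[1]][j])
                for j in range(D)]
    if strategy == "rand/2":             # v = x_r1 + F (x_r2 - x_r3) + F (x_r4 - x_r5)
        return [pop[r[0]][j] + F * (pop[r[1]][j] - pop[r[2]][j])
                + F * (pop[r[3]][j] - pop[r[4]][j]) for j in range(D)]
    if strategy == "best/2":             # v = x_best + F (x_r1 - x_r2) + F (x_r3 - x_r4)
        return [best[j] + F * (pop[r[0]][j] - pop[r[1]][j])
                + F * (pop[r[2]][j] - pop[r[3]][j]) for j in range(D)]
    if strategy == "current-to-best/1":  # v = x_i + F (x_best - x_i) + F (x_r1 - x_r2)
        return [pop[i][j] + F * (best[j] - pop[i][j])
                + F * (pop[r[0]][j] - pop[r[1]][j]) for j in range(D)]
    raise ValueError(strategy)
```

The "best"-based variants pull the search toward the current best individual and tend to converge faster, while the "rand"-based variants keep more variety, which mirrors the global-versus-local trade-off discussed above.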
Figure 1 shows the graphs of the four test functions.

Test Results
Test the five algorithms using the test functions introduced in 3.2, with NP = 15, F = 0.9, CR = 0.9, and a maximum evolution generation of 200. Averaging the results of the 30 runs gives the evolution curves shown in Figure 2.
From the evolution curves and running results, we know that all five improved DE algorithms can find their corresponding optimal solutions in the 30 runs, and the algorithms remain quite stable. However, they have different rates of convergence and running times.

Conclusion
This paper describes the design ideas of the DE algorithm and further improves the parameters of the algorithm. After that, it compares the results of the numerical experiments and analyzes the performance of the improved DE algorithms, thus providing the basis for the application of the algorithms. In the end, the paper adopts the penalty function method and the weighted strategy to deal with the constraint conditions of multi-objective optimization problems and uses the improved DE algorithms to solve constrained optimization problems, thereby helping to expand the application areas of the DE algorithm.