
State transition algorithm


In global optimization, a state transition algorithm (STA) is an iterative method that generates a sequence of successively improved approximations to a solution of an optimization problem. It was first proposed by Zhou et al.[1][2][3][4]

State transitions[edit]

STA is a stochastic global optimization method that aims to find a solution in a reasonable amount of time. In the context of the STA, a solution to an optimization problem is regarded as a state, and updating a solution can be regarded as a state transition.

Using the state–space representation,[5] STA describes solution updates in a unified framework: the execution operators that update solutions are expressed as state transition matrices, which makes STA easy to understand and flexible to implement:

$$x_{k+1} = A_k x_k + B_k u_k$$
$$y_{k+1} = f(x_{k+1})$$

where:

$x_k \in \mathbb{R}^n$ stands for a current state, corresponding to a solution to an optimization problem;
$u_k$ is a function of $x_k$ and historical states;
$y_{k+1}$ is the fitness value at $x_{k+1}$;
$A_k$ and $B_k$ are state transition matrices, which can be considered as execution operators;
$f(\cdot)$ is the objective function or evaluation function.
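
As a rough illustration of this state-space view (not of the specific operators defined below), a single solution update might be sketched in Python/NumPy as follows; the objective function and the placeholder random matrix A_k are assumptions made only for this example:

    import numpy as np

    def f(x):
        # Example objective (an assumption): the sphere function.
        return float(np.sum(x ** 2))

    n = 5                                         # problem dimension
    x_k = np.random.uniform(-10.0, 10.0, n)       # current state = a candidate solution

    A_k = np.eye(n) + 0.1 * np.random.uniform(-1.0, 1.0, (n, n))  # a random state transition matrix
    x_next = A_k @ x_k                            # x_{k+1} = A_k x_k (the B_k u_k term is omitted here)
    y_next = f(x_next)                            # fitness value at the new state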

As a stochastic global optimization method, STA has the following properties:

  • globality – the ability to search the whole space;
  • optimality – can guarantee an optimal solution;
  • convergence – the sequence generated converges;
  • rapidity – reduces computational complexity;
  • controllability – the search space can be flexibly controlled.

Continuous state transition algorithm (CSTA)[edit]

In continuous STA, the state $x_k$ is a continuous variable, and four special state transformation operators act on it to generate candidate solutions.

State transformation operators[edit]

(1) Rotation transformation operator (RT) is defined as

$$x_{k+1} = x_k + \alpha \, \frac{1}{n \, \lVert x_k \rVert_2} \, R_r \, x_k$$

where $\alpha$ is a positive constant, called the rotation factor; $R_r \in \mathbb{R}^{n \times n}$ is a random matrix whose entries are uniformly distributed random variables on the interval $[-1, 1]$; and $\lVert \cdot \rVert_2$ is the 2-norm of a vector.

This operator can search within a hypersphere centred at $x_k$ with a maximum radius of $\alpha$.
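
A minimal NumPy sketch of this operator (the function name, the default value of alpha, and the assumption that x is a nonzero vector are illustrative):

    import numpy as np

    def rotation(x, alpha=1.0):
        # Rotation transformation: x_{k+1} = x_k + alpha * (1 / (n * ||x_k||_2)) * R_r * x_k
        # alpha: rotation factor; R_r: random matrix with entries uniform on [-1, 1].
        n = len(x)
        R_r = np.random.uniform(-1.0, 1.0, (n, n))
        return x + alpha * (R_r @ x) / (n * np.linalg.norm(x))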

(2) Translation transformation operator (TT) is defined as

$$x_{k+1} = x_k + \beta \, R_t \, \frac{x_k - x_{k-1}}{\lVert x_k - x_{k-1} \rVert_2}$$

where $\beta$ is a positive constant, called the translation factor, and $R_t \in \mathbb{R}$ is a uniformly distributed random variable defined on the interval $[0, 1]$.

This operator can search along the line from $x_{k-1}$ to $x_k$, starting at $x_k$, with a maximum length of $\beta$.
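
A corresponding sketch (same illustrative conventions as above; the two states are assumed to be distinct so the direction has nonzero norm):

    import numpy as np

    def translation(x, x_old, beta=1.0):
        # Translation transformation: move from x along the direction (x - x_old),
        # scaled by a random step of at most beta (the translation factor).
        direction = x - x_old
        R_t = np.random.uniform(0.0, 1.0)         # scalar uniform random variable on [0, 1]
        return x + beta * R_t * direction / np.linalg.norm(direction)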

(3) Expansion transformation operator (ET) is defined as

$$x_{k+1} = x_k + \gamma \, R_e \, x_k$$

where $\gamma$ is a positive constant, called the expansion factor, and $R_e \in \mathbb{R}^{n \times n}$ is a random diagonal matrix whose entries obey the Gaussian distribution.

This operator can expand the entries in $x_k$ to the range $(-\infty, +\infty)$, searching the whole space.
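
A corresponding sketch (function name and default gamma are illustrative):

    import numpy as np

    def expansion(x, gamma=1.0):
        # Expansion transformation: x_{k+1} = x_k + gamma * R_e * x_k,
        # where R_e is a random diagonal matrix with Gaussian entries.
        R_e = np.diag(np.random.randn(len(x)))
        return x + gamma * (R_e @ x)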

(4) Axesion transformation operator (AT) is defined as

$$x_{k+1} = x_k + \delta \, R_a \, x_k$$

where $\delta$ is a positive constant, called the axesion factor, and $R_a \in \mathbb{R}^{n \times n}$ is a random diagonal matrix whose entries obey the Gaussian distribution, with only one random diagonal position having a nonzero value.

This operator searches along the coordinate axes.
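
A corresponding sketch (function name and default delta are illustrative):

    import numpy as np

    def axesion(x, delta=1.0):
        # Axesion transformation: like expansion, but only one randomly chosen
        # diagonal entry of the matrix is nonzero, so only one coordinate changes.
        n = len(x)
        R_a = np.zeros((n, n))
        i = np.random.randint(n)                  # pick one axis at random
        R_a[i, i] = np.random.randn()             # single Gaussian entry
        return x + delta * (R_a @ x)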

Regular neighbourhood and sampling[edit]

For a given solution $x_k$, a candidate solution is generated by applying one of the aforementioned state transformation operators. Since the state transition matrix in each state transformation is random, the generated candidate solution is not unique. It is not difficult to see that, when a particular state transformation operator is applied, a "regular neighbourhood" of $x_k$ is automatically formed.

Since, for any solution, the entries of a state transition matrix obey certain stochastic distributions, the new candidate solution is a random vector, and a realization of that random vector can be regarded as a "sample". Because any two state transition matrices drawn within the same state transformation operator are independent, a given operator can be applied SE times (SE is called the search enforcement, or degree of search enforcement) to the same solution, yielding SE samples.
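
A small sketch of this sampling step (the helper name sample_candidates, the value SE = 30, and the reuse of a rotation-style operator are illustrative assumptions):

    import numpy as np

    def sample_candidates(operator, x, SE=30, **kwargs):
        # Apply one state transformation operator SE times to the same incumbent x,
        # producing SE independent candidate solutions ("samples").
        return [operator(x, **kwargs) for _ in range(SE)]

    # Example with a rotation-style operator as sketched earlier (hypothetical names):
    def rotation(x, alpha=1.0):
        n = len(x)
        R_r = np.random.uniform(-1.0, 1.0, (n, n))
        return x + alpha * (R_r @ x) / (n * np.linalg.norm(x))

    samples = sample_candidates(rotation, np.array([1.0, 2.0, 3.0]), SE=30, alpha=1.0)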

An update strategy[edit]

As mentioned above, based on the incumbent best solution $x_k$, a total of SE candidate solutions are sampled. A new best solution $x^*_{k+1}$ is then selected from the candidate set by the evaluation function $f(\cdot)$.

Then, an update strategy using the "greedy criterion" is used to update the incumbent best solution:

$$x_{k+1} = \begin{cases} x^*_{k+1}, & \text{if } f(x^*_{k+1}) < f(x_k) \\ x_k, & \text{otherwise} \end{cases}$$
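
A minimal sketch of this update strategy (the helper name greedy_update and the toy objective in the usage example are assumptions):

    def greedy_update(f, samples, x_best):
        # Select the best of the SE samples and keep it only if it improves
        # on the incumbent best solution (the "greedy criterion").
        x_new = min(samples, key=f)
        return x_new if f(x_new) < f(x_best) else x_best

    # Example usage (a hypothetical 1-D objective):
    f = lambda x: (x - 2.0) ** 2
    print(greedy_update(f, [0.5, 1.8, 3.1], 0.0))   # -> 1.8, since f(1.8) < f(0.0)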

Algorithm procedure of the basic continuous STA[edit]

Using the state transformation operators, sampling technique, and update strategy, the basic continuous STA can be described as follows:

Step 1: Assume a random incumbent solution best, and set the rotation factor $\alpha = \alpha_{\max}$.

Step 2: Generate SE samples based on the incumbent best using the expansion transformation (ET), and update the incumbent by the greedy criterion over the SE samples and best. Let newbest denote the best solution among the SE samples. If $f(\textit{newbest}) < f(\textit{best})$, then perform the translation transformation (TT) to update the incumbent best.

Step 3: Generate SE samples based on the incumbent best using the rotation transformation (RT), and update the incumbent by the greedy criterion over the SE samples and best. If $f(\textit{newbest}) < f(\textit{best})$, then perform the translation transformation (TT) to update the incumbent best.

Step 4: Generate SE samples based on the incumbent best using the axesion transformation (AT), and update the incumbent by the greedy criterion over the SE samples and best. If $f(\textit{newbest}) < f(\textit{best})$, then perform the translation transformation (TT) to update the incumbent best.

Step 5: If $\alpha < \alpha_{\min}$, set $\alpha = \alpha_{\max}$; otherwise set $\alpha = \alpha / fc$, where $fc > 1$ is a lessening coefficient. Return to Step 2 until the maximum number of iterations is reached.
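
For concreteness, here is a compact, hedged sketch of the whole loop in Python/NumPy. It follows the five steps above; the function name basic_csta, the parameter defaults (SE, alpha_max, alpha_min, fc, beta, gamma, delta), and the sphere-function usage example are illustrative assumptions, not part of the original description:

    import numpy as np

    def basic_csta(f, x0, max_iter=100, SE=30,
                   alpha_max=1.0, alpha_min=1e-4, fc=2.0,
                   beta=1.0, gamma=1.0, delta=1.0):
        # Sketch of the basic continuous state transition algorithm.
        n = len(x0)

        def rotation(x, alpha):
            R_r = np.random.uniform(-1.0, 1.0, (n, n))
            return x + alpha * (R_r @ x) / (n * np.linalg.norm(x))

        def translation(x, x_old):
            d = x - x_old
            return x + beta * np.random.uniform() * d / np.linalg.norm(d)

        def expansion(x, gamma):
            return x + gamma * (np.diag(np.random.randn(n)) @ x)

        def axesion(x, delta):
            R_a = np.zeros((n, n))
            i = np.random.randint(n)
            R_a[i, i] = np.random.randn()
            return x + delta * (R_a @ x)

        def search(op, best, **kw):
            # Sample SE candidates with one operator, update by the greedy criterion,
            # and follow with a translation step only if the incumbent was improved.
            old = best.copy()
            samples = [op(best, **kw) for _ in range(SE)]
            new_best = min(samples, key=f)
            if f(new_best) < f(best):
                best = new_best
                tt_samples = [translation(best, old) for _ in range(SE)]
                tt_best = min(tt_samples, key=f)
                if f(tt_best) < f(best):
                    best = tt_best
            return best

        best = np.asarray(x0, dtype=float)
        alpha = alpha_max
        for _ in range(max_iter):
            best = search(expansion, best, gamma=gamma)   # Step 2: ET
            best = search(rotation, best, alpha=alpha)    # Step 3: RT
            best = search(axesion, best, delta=delta)     # Step 4: AT
            # Step 5: decay the rotation factor, resetting it once it falls below alpha_min.
            alpha = alpha_max if alpha < alpha_min else alpha / fc
        return best, f(best)

    # Example usage (sphere function, an assumption for illustration):
    # x_star, f_star = basic_csta(lambda x: float(np.sum(x**2)), np.random.uniform(-10, 10, 5))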

Philosophy behind the continuous STA[edit]

  • Expansion transformation contributes to globality by searching the whole space;
  • Rotation transformation benefits optimality, since when the rotation factor $\alpha$ is sufficiently small, the incumbent best solution becomes a local optimal solution;
  • The update strategy, based on the greedy criterion, contributes to convergence: the sequence $\{f(x_k)\}$ is non-increasing, since $f(x_{k+1}) \le f(x_k)$, and therefore converges by the monotone convergence theorem, provided the objective is bounded below;
  • The sampling technique, which obviates the need for complete enumeration, and the alternate use of state transformation operators help to reduce computational complexity;
  • Parameters such as the rotation factor $\alpha$, the translation factor $\beta$, the expansion factor $\gamma$, the axesion factor $\delta$, and the search enforcement SE make for easy control of the search space.

Applications of STA[edit]

STA has found a variety of applications, such as image segmentation,[6][7] task assignment,[8] energy consumption in the alumina evaporation process,[9] resolution of overlapping linear sweep voltammetric peaks,[10] PID controller design,[11][12] Volterra series identification,[13] and system modeling;[14] and it has been shown to be comparable with most existing global optimization methods.

References[edit]

  1. Zhou, X.J.; Yang, C.H.; Gui, W.H. (2012). "State transition algorithm". Journal of Industrial and Management Optimization. 8 (4): 1039–1056.
  2. Zhou, X.J.; Yang, C.H.; Gui, W.H. (2014). "Nonlinear system identification and control using state transition algorithm". Applied Mathematics and Computation. 226: 169–179.
  3. Zhou, X.J.; Gao, D.Y.; Simpson, A.R. (2016). "Optimal design of water distribution networks by discrete state transition algorithm". Engineering Optimization. 48 (4): 603–628.
  4. Zhou, X.J.; Gao, D.Y.; Yang, C.H.; Gui, W.H. (2016). "Discrete state transition algorithm for unconstrained integer optimization problems". Neurocomputing. 173: 864–874.
  5. Friedland, Bernard (2005). Control System Design: An Introduction to State-Space Methods. Dover. ISBN 0-486-44278-0.
  6. Han, J.; Zhou, X.J.; Yang, C.H.; Gui, W.H. (2015). "A multi-threshold image segmentation approach using state transition algorithm". Proceedings of the 34th Chinese Control Conference: 2662–2666.
  7. Han, J.; Yang, C.; Zhou, X.; Gui, W. (2017). "A new multi-threshold image segmentation approach using state transition algorithm". Applied Mathematical Modelling. 44: 588–601.
  8. Dong, T.X.; Zhou, X.J.; Yang, C.H.; Gui, W.H. (2015). "A discrete state transition algorithm for the task assignment problem". Proceedings of the 34th Chinese Control Conference: 2692–2697.
  9. Wang, Y.L.; He, H.M.; Zhou, X.J.; Yang, C.H.; Xie, Y.F. "Optimization of both operating costs and energy efficiency in the alumina evaporation process by a multi-objective state transition algorithm". Canadian Journal of Chemical Engineering. 94: 53–65.
  10. Wang, G.W.; Yang, C.H.; Zhu, H.Q.; Li, Y.G.; Peng, X.W.; Gui, W.H. "State-transition-algorithm-based resolution for overlapping linear sweep voltammetric peaks with high signal ratio". Chemometrics and Intelligent Laboratory Systems. 151: 61–70.
  11. Saravanakumar, G. (2015). "Tuning of Multivariable Decentralized PID Controller Using State Transition Algorithm". Studies in Informatics and Control. 24 (4): 367–378.
  12. Saravanakumar, G. (2016). "Lagrangian-based state transition algorithm for tuning multivariable decentralised controller". International Journal of Advanced Intelligence Paradigms. 8 (3): 303–317.
  13. Wang, C. (2016). "Volterra series identification based on state transition algorithm with orthogonal transformation". TELKOMNIKA (Telecommunication Computing Electronics and Control). 14 (1): 171–180.
  14. Xie, Y.; Wei, S.; Wang, X.; Xie, S.; Yang, C. (2016). "A new prediction model based on the leaching rate kinetics in the alumina digestion process". Hydrometallurgy. 164: 7–14.
