# State transition algorithm

In global optimization, a state transition algorithm (STA) is an iterative method that generates a sequence of continually improved approximations to provide a solution to an optimization problem. It was first proposed by Zhou et al.[1][2][3][4]

## State transitions

STA is a stochastic global optimization method that aims to find a solution in a reasonable amount of time. In the context of the STA, a solution to an optimization problem is regarded as a state, and updating a solution can be regarded as a state transition.

Using the state–space representation,[5] STA uniformly describes solution updates, the execution operators to update solutions being expressed as state transition matrices, which make STA easy to understand and flexible to implement:

${\displaystyle \mathbf {x} _{k+1}=A_{k}\mathbf {x} _{k}+B_{k}\mathbf {u} _{k}}$
${\displaystyle \mathbf {y} _{k+1}=f(\mathbf {x} _{k+1})}$

where:

${\displaystyle \mathbf {x} _{k}}$ stands for a current state, corresponding to a solution to an optimization problem;
${\displaystyle \mathbf {u} _{k}}$ is a function of ${\displaystyle \mathbf {x} _{k}}$ and historical states;
${\displaystyle \mathbf {y} _{k}}$ is the fitness value at ${\displaystyle \mathbf {x} _{k}}$;
${\displaystyle \mathbf {A} _{k},\mathbf {B} _{k}}$ are state transformation matrices, which can be considered as execution operators;
${\displaystyle f(\cdot )}$ is the objective function or evaluation function.

As a stochastic global optimization method, STA has the following properties:

• globality – the ability to search the whole space;
• optimality – the ability to find an optimal solution;
• convergence – the sequence generated converges;
• rapidity – reduces computational complexity;
• controllability – the search space can be flexibly controlled.

## Continuous state transition algorithm (CSTA)

In continuous STA, ${\displaystyle \mathbf {x} _{k}\in \mathbb {R} ^{n}}$ is a continuous variable that four special state transformation operators act on to generate candidate solutions.

### State transformation operators

(1) Rotation transformation operator (RT) is defined as

${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\alpha {\frac {1}{n\|\mathbf {x} _{k}\|_{2}}}R_{r}\mathbf {x} _{k}}$

where ${\displaystyle \alpha }$ is a positive constant, called the rotation factor; ${\displaystyle R_{r}\in \mathbb {R} ^{n\times n}}$ is a random matrix whose entries are uniformly distributed on the interval [-1,1]; and ${\displaystyle \|\cdot \|_{2}}$ is the 2-norm of a vector.

This operator can search a hypersphere with a maximum radius of ${\displaystyle \alpha }$ (${\displaystyle \|\mathbf {x} _{k+1}-\mathbf {x} _{k}\|_{2}\leq \alpha }$).
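As an illustrative sketch (the function name and signature are ours, not from the original papers), the rotation operator can be written with NumPy as follows:

```python
import numpy as np

def rotation(x, alpha, rng=None):
    """Rotation transformation: x' = x + alpha/(n*||x||_2) * R_r @ x,
    with R_r an n-by-n matrix of entries uniform on [-1, 1]."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(x)
    R_r = rng.uniform(-1.0, 1.0, size=(n, n))
    return x + alpha * (R_r @ x) / (n * np.linalg.norm(x))
```

Since ${\displaystyle \|R_{r}\mathbf {x} \|_{2}\leq \|R_{r}\|_{F}\|\mathbf {x} \|_{2}\leq n\|\mathbf {x} \|_{2}}$, the step produced by this sketch respects the bound ${\displaystyle \|\mathbf {x} _{k+1}-\mathbf {x} _{k}\|_{2}\leq \alpha }$.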

(2) Translation transformation operator (TT) is defined as

${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\beta R_{t}{\frac {\mathbf {x} _{k}-\mathbf {x} _{k-1}}{\|\mathbf {x} _{k}-\mathbf {x} _{k-1}\|_{2}}}}$

where ${\displaystyle \beta }$ is a positive constant, called the translation factor, and ${\displaystyle R_{t}\in \mathbb {R} }$ is a uniformly distributed random variable defined on the interval [0,1].

This operator searches along the line through ${\displaystyle \mathbf {x} _{k-1}}$ and ${\displaystyle \mathbf {x} _{k}}$, starting from ${\displaystyle \mathbf {x} _{k}}$, with a maximum step length of ${\displaystyle \beta }$.
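A minimal sketch of the translation operator (our naming; assumes the two states are distinct so the direction is well defined):

```python
import numpy as np

def translation(x, x_old, beta, rng=None):
    """Translation transformation: step from x along the unit direction
    (x - x_old)/||x - x_old||_2, with a random length uniform on [0, beta]."""
    if rng is None:
        rng = np.random.default_rng()
    d = x - x_old
    return x + beta * rng.uniform(0.0, 1.0) * d / np.linalg.norm(d)
```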

(3) Expansion transformation operator (ET) is defined as

${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\gamma R_{e}\mathbf {x} _{k}}$

where ${\displaystyle \gamma }$ is a positive constant, called the expansion factor, and ${\displaystyle R_{e}\in \mathbb {R} ^{n\times n}}$ is a random diagonal matrix whose entries obey the Gaussian distribution.

This operator can expand the entries of ${\displaystyle \mathbf {x} _{k}}$ over ${\displaystyle (-\infty ,+\infty )}$, enabling a search of the entire space.
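Because ${\displaystyle R_{e}}$ is diagonal, the matrix-vector product reduces to elementwise scaling, so the operator can be sketched as (names are ours):

```python
import numpy as np

def expansion(x, gamma, rng=None):
    """Expansion transformation: x' = x + gamma * R_e @ x, with R_e a
    diagonal matrix of standard Gaussian entries; since R_e is diagonal,
    this is an elementwise random rescaling of x."""
    if rng is None:
        rng = np.random.default_rng()
    return x + gamma * rng.standard_normal(len(x)) * x
```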

(4) Axesion transformation operator (AT) is defined as

${\displaystyle \mathbf {x} _{k+1}=\mathbf {x} _{k}+\delta R_{a}\mathbf {x} _{k}}$

where ${\displaystyle \delta }$ is a positive constant, called the axesion factor, and ${\displaystyle R_{a}\in \mathbb {R} ^{n\times n}}$ is a random diagonal matrix whose entries obey the Gaussian distribution and with only one random position having a nonzero value.

This operator searches along the axes.
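Since ${\displaystyle R_{a}}$ has a single nonzero diagonal entry, the operator perturbs exactly one randomly chosen coordinate; a sketch (our naming):

```python
import numpy as np

def axesion(x, delta, rng=None):
    """Axesion transformation: perturb one randomly chosen coordinate of x
    (R_a diagonal with a single nonzero Gaussian entry)."""
    if rng is None:
        rng = np.random.default_rng()
    y = x.copy()
    i = rng.integers(len(x))
    y[i] += delta * rng.standard_normal() * x[i]
    return y
```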

### Regular neighbourhood and sampling

For a given variable ${\displaystyle \mathbf {x} _{k}}$, a candidate solution ${\displaystyle \mathbf {x} _{k+1}}$ is generated using one of the aforementioned state transformation operators. Since the state transition matrix for each state transformation is random, the generated candidate solution is not unique. It is not difficult to imagine that, for a particular candidate solution, a "regular neighbourhood" will be automatically formed when using certain state transformation operators.

Since, for any solution, the entries of a state transition matrix obey certain stochastic distributions, the new candidate is a random vector, and a realization of that random vector can be regarded as a "sample". Because any two random state transition matrices in each state transformation operator are independent, applying a given state transformation operator several times to the same variable (this number is called the degree of search enforcement, ${\displaystyle SE}$ for short) yields ${\displaystyle SE}$ independent samples.

### An update strategy

As mentioned above, a total of ${\displaystyle SE}$ candidate solutions are sampled based on the incumbent best solution. A new best solution, denoted ${\displaystyle newBest}$, is then selected from the candidate set by the evaluation function.

Then, an update strategy using the "greedy criterion" is used to update the incumbent best solution:

${\displaystyle {\text{Best}}={\text{newBest}}}$, if ${\displaystyle f({\text{newBest}})<f({\text{Best}})}$;
${\displaystyle {\text{Best}}={\text{Best}}}$, otherwise.
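The greedy update can be sketched in a few lines (function name is ours; `f` is the objective to minimize):

```python
def update_best(best, samples, f):
    """Greedy criterion: keep the incumbent unless some sample improves on it."""
    new_best = min(samples, key=f)
    return new_best if f(new_best) < f(best) else best
```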

### Algorithm procedure of the basic continuous STA

Using the state transformation operators, sampling technique, and update strategy, the basic continuous STA can be described as follows:

Step 1: Initialize a random incumbent solution ${\displaystyle Best}$, and set ${\displaystyle \alpha =\alpha _{\max }=1,\alpha _{\min }=10^{-4},}$ ${\displaystyle \beta =1,\gamma =1,\delta =1,fc=2,k=0}$.

Step 2: Generate ${\displaystyle SE}$ samples based on the incumbent ${\displaystyle Best}$ using expansion transformation (ET), and denote the best of these samples by ${\displaystyle newBest}$. If ${\displaystyle f(newBest)<f(Best)}$, update the incumbent ${\displaystyle Best}$ to ${\displaystyle newBest}$ (the greedy criterion) and then perform translation transformation (TT) to further update the incumbent ${\displaystyle Best}$.

Step 3: Generate ${\displaystyle SE}$ samples based on the incumbent ${\displaystyle Best}$ using rotation transformation (RT), and denote the best of these samples by ${\displaystyle newBest}$. If ${\displaystyle f(newBest)<f(Best)}$, update the incumbent ${\displaystyle Best}$ to ${\displaystyle newBest}$ (the greedy criterion) and then perform translation transformation (TT) to further update the incumbent ${\displaystyle Best}$.

Step 4: Generate ${\displaystyle SE}$ samples based on the incumbent ${\displaystyle Best}$ using axesion transformation (AT), and denote the best of these samples by ${\displaystyle newBest}$. If ${\displaystyle f(newBest)<f(Best)}$, update the incumbent ${\displaystyle Best}$ to ${\displaystyle newBest}$ (the greedy criterion) and then perform translation transformation (TT) to further update the incumbent ${\displaystyle Best}$.

Step 5: Set ${\displaystyle k=k+1}$. If ${\displaystyle \alpha <\alpha _{\min }}$, set ${\displaystyle \alpha =\alpha _{\max }}$; otherwise set ${\displaystyle \alpha =\alpha /fc}$. Return to Step 2 until the maximum number of iterations is reached.
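The whole procedure can be sketched end to end with NumPy; this is a minimal reading of Steps 1–5, not a reference implementation, and all names, defaults, and the seeding are our own choices:

```python
import numpy as np

def csta(f, x0, SE=20, iters=100, alpha_max=1.0, alpha_min=1e-4,
         beta=1.0, gamma=1.0, delta=1.0, fc=2.0, seed=0):
    """Basic continuous STA: ET, RT, and AT sampling with a greedy update,
    TT after any improvement, and a periodically restarted rotation factor."""
    rng = np.random.default_rng(seed)
    n = len(x0)
    best = np.asarray(x0, dtype=float)
    alpha = alpha_max

    def rotate(x):
        R = rng.uniform(-1.0, 1.0, (n, n))
        return x + alpha * (R @ x) / (n * np.linalg.norm(x) + 1e-12)

    def expand(x):
        # diagonal Gaussian matrix times x == elementwise rescaling
        return x + gamma * rng.standard_normal(n) * x

    def axesion(x):
        y = x.copy()
        i = rng.integers(n)
        y[i] += delta * rng.standard_normal() * x[i]
        return y

    def translate(x, x_old):
        d = x - x_old
        nd = np.linalg.norm(d)
        return x if nd == 0.0 else x + beta * rng.uniform() * d / nd

    for _ in range(iters):
        for op in (expand, rotate, axesion):      # Steps 2-4
            samples = [op(best) for _ in range(SE)]
            new_best = min(samples, key=f)
            if f(new_best) < f(best):             # greedy criterion
                old, best = best, new_best
                # improvement found: translate along the improving direction
                t_samples = [translate(best, old) for _ in range(SE)]
                t_best = min(t_samples, key=f)
                if f(t_best) < f(best):
                    best = t_best
        # Step 5: shrink the rotation factor; restart it when too small
        alpha = alpha_max if alpha < alpha_min else alpha / fc
    return best
```

On a smooth test function such as the sphere function, the greedy update makes the objective sequence non-increasing, so the returned solution is never worse than the starting point.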

### Philosophy behind the continuous STA

• Expansion transformation contributes to globality by searching the entire space;
• Rotation transformation benefits optimality, since when ${\displaystyle \alpha }$ is sufficiently small, the incumbent best solution becomes a local optimal solution;
• The update strategy, based on the greedy criterion, contributes to convergence: the sequence ${\displaystyle \{f({\text{Best}}_{k})\}_{k=1}^{\infty }}$ is monotonically non-increasing, since ${\displaystyle f({\text{Best}}_{k+1})\leq f({\text{Best}}_{k})}$, and, being bounded below, converges by the monotone convergence theorem;
• The sampling technique, which obviates the need for complete enumeration, and the alternate use of state transformation operators help to reduce computational complexity;
• Parameters such as ${\displaystyle \alpha ,\beta ,\gamma ,\delta }$ make for easy control of the search space.

## Applications of STA

STA has found a variety of applications, such as image segmentation,[6][7] task assignment,[8] energy consumption in the alumina evaporation process,[9] resolution of overlapping linear sweep voltammetric peaks,[10] PID controller design,[11][12] Volterra series identification,[13] and system modeling;[14] and it has been shown that STA is comparable with most existing global optimization methods.

## References

1. X.J., Zhou; C.H., Yang; W.H., Gui (2012). "State transition algorithm". Journal of Industrial and Management Optimization. 8 (4): 1039–1056.
2. X.J., Zhou; C.H., Yang; W.H., Gui (2014). "Nonlinear system identification and control using state transition algorithm". Applied Mathematics and Computation. 226: 169–179.
3. X. J., Zhou; D.Y., Gao; A.R., Simpson (2016). "Optimal design of water distribution networks by discrete state transition algorithm". Engineering Optimization. 48 (4): 603–628.
4. X. J., Zhou; D.Y., Gao; C.H., Yang; W.H., Gui (2016). "Discrete state transition algorithm for unconstrained integer optimization problems". Neurocomputing. 173: 864–874.
5. Friedland, Bernard (2005). Control System Design: An Introduction to State-Space Methods. Dover. ISBN 0-486-44278-0.
6. J., Han; X.J., Zhou; C.H., Yang; W.H., Gui (2015). "A multi-threshold image segmentation approach using state transition algorithm". Proceedings of the 34th Chinese Control Conference: 2662–2666.
7. J., Han; C., Yang; X., Zhou; W., Gui (2017). "A new multi-threshold image segmentation approach using state transition algorithm". Applied Mathematical Modelling. 44: 588–601.
8. T.X., Dong; X.J., Zhou; C.H., Yang; W.H., Gui (2015). "A discrete state transition algorithm for the task assignment problem". Proceedings of the 34th Chinese Control Conference: 2692–2697.
9. Y.L., Wang; H.M., He; X.J., Zhou; C.H., Yang; Y.F., Xie. "Optimization of both operating costs and energy efficiency in the alumina evaporation process by a multi-objective state transition algorithm". Canadian Journal of Chemical Engineering. 94: 53–65.
10. G.W., Wang; C.H., Yang; H.Q., Zhu; Y.G., Li; X.W., Peng; W.H., Gui. "State-transition-algorithm-based resolution for overlapping linear sweep voltammetric peaks with high signal ratio". Chemometrics and Intelligent Laboratory Systems. 151: 61–70.
11. G., Saravanakumar (2015). "Tuning of Multivariable Decentralized PID Controller Using State Transition Algorithm". Studies in Informatics and Control. 24 (4): 367–378.
12. G., Saravanakumar (2016). "Lagrangian-based state transition algorithm for tuning multivariable decentralised controller". International Journal of Advanced Intelligence Paradigms. 8 (3): 303–317.
13. C., Wang (2016). "Volterra Series Identification Based on State Transition Algorithm with Orthogonal Transformation". TELKOMNIKA (Telecommunication Computing Electronics and Control). 14 (1): 171–180.
14. Y., Xie; S., Wei; X., Wang; S., Xie; C., Yang (2016). "A new prediction model based on the leaching rate kinetics in the alumina digestion process". Hydrometallurgy. 164: 7–14.