
# Leimkuhler-Matthews method

The Leimkuhler-Matthews method (or LM method in its original paper [1]) is an algorithm for finding discretized solutions to the Brownian dynamics

${\displaystyle \mathrm {d} X_{t}=-\nabla V(X_{t})\,\mathrm {d} t+\sigma \,\mathrm {d} W_{t},}$

where ${\displaystyle \sigma >0}$ is a constant and ${\displaystyle V(X)}$ can be thought of as a potential energy function, as commonly found in classical molecular dynamics, with ${\displaystyle X_{t}\in \mathbb {R} ^{N}}$ the ${\displaystyle N}$-dimensional state vector and ${\displaystyle W_{t}}$ an ${\displaystyle N}$-dimensional Wiener process. Given a time step ${\displaystyle \Delta t>0}$, the Leimkuhler-Matthews update scheme is compactly written as

${\displaystyle Y_{t+\Delta t}=Y_{t}-\nabla V(Y_{t})\Delta t+\sigma {\frac {\sqrt {\Delta t}}{2}}\,(R_{t}+R_{t+\Delta t}),}$

with initial condition ${\displaystyle Y_{0}:=X_{0}}$, and where ${\displaystyle R_{k}}$ is a vector of independent standard normal random numbers redrawn at each step, so that ${\displaystyle {\text{E}}[R_{t}\cdot R_{s}]=N\delta _{ts}}$ (where ${\displaystyle {\text{E}}[\bullet ]}$ denotes expectation). Despite having the same cost as the Euler-Maruyama scheme (one evaluation of ${\displaystyle \nabla V(X_{t})}$ per update), under some assumptions on ${\displaystyle \Delta t,\,V(X)}$ and ${\displaystyle f(X)}$, solutions have been shown [2] to satisfy a superconvergence property

${\displaystyle {\text{E}}[|f(Y_{t})-f(X_{t})|]\leq C_{1}e^{-\lambda t}\Delta t+C_{2}\Delta t^{2}}$

for constants ${\displaystyle C_{k}\geq 0,\,\lambda >0}$ not depending on ${\displaystyle t}$. This means that as ${\displaystyle t}$ grows the scheme becomes effectively second order, with ${\displaystyle \Delta t^{2}}$ error in computed expectations. For small time step ${\displaystyle \Delta t}$ this can give a significant improvement over the Euler-Maruyama scheme at no extra cost.
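The update scheme can be sketched in a few lines of NumPy. This is a minimal illustrative implementation (the function name and interface are not from the original paper); note how each step's second noise vector is reused as the next step's first:

```python
import numpy as np

def leimkuhler_matthews(grad_V, x0, sigma, dt, n_steps, rng=None):
    """Integrate dX = -grad_V(X) dt + sigma dW with the LM scheme.

    A fresh standard normal vector R is drawn each step; the update uses
    the average of the current and next draws, (R_t + R_{t+dt}) / 2.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.array(x0, dtype=float)
    r = rng.standard_normal(x.shape)           # R_t
    for _ in range(n_steps):
        r_next = rng.standard_normal(x.shape)  # R_{t+dt}
        x = x - grad_V(x) * dt + sigma * np.sqrt(dt) / 2.0 * (r + r_next)
        r = r_next                             # becomes R_t of the next step
    return x
```

For the quadratic potential ${\displaystyle V(x)=x^{2}/2}$ with ${\displaystyle \sigma ={\sqrt {2}}}$ the dynamics is an Ornstein-Uhlenbeck process with stationary variance 1, which gives a quick sanity check on the implementation.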

## Discussion

### Comparison to other schemes

The obvious method for comparison is the Euler-Maruyama scheme as it has the same cost, requiring one evaluation of ${\displaystyle \nabla V(X)}$ per step. Its update is of the form

${\displaystyle {\hat {Y}}_{t+\Delta t}={\hat {Y}}_{t}-\nabla V({\hat {Y}}_{t})\Delta t+\sigma {\sqrt {\Delta t}}\,R_{t},}$

with error (given some assumptions [3]) bounded as ${\displaystyle {\text{E}}[|f({\hat {Y}}_{t})-f(X_{t})|]\leq C\Delta t}$ for a constant ${\displaystyle C>0}$ independent of ${\displaystyle t}$. Compared to the LM update above, the only difference is the noise term averaged over two successive draws, which makes the LM scheme simple to implement.

For sufficiently small time step ${\displaystyle \Delta t}$ and sufficiently large time ${\displaystyle t}$ it is clear that the LM scheme gives a smaller error than Euler-Maruyama. While many algorithms give reduced error relative to the Euler scheme (see e.g. the Milstein, Runge-Kutta or Heun methods), these almost always come at an efficiency cost, trading more computation for a reduction in error. The Leimkuhler-Matthews scheme, by contrast, can give significantly reduced error with minimal change to the standard Euler scheme. The trade-off is the (relatively) limited scope of the stochastic differential equation it solves: ${\displaystyle \sigma }$ must be a scalar constant and the drift must be a gradient, ${\displaystyle \nabla V(X)}$. The LM scheme is also not Markovian, since an update requires more than just the state at time ${\displaystyle t}$; however, the scheme can be recast as a Markov process by extending the state space.
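The difference in long-time error is easy to see numerically. The sketch below (all parameter choices illustrative) evolves two ensembles under the Ornstein-Uhlenbeck case ${\displaystyle V(x)=x^{2}/2}$, ${\displaystyle \sigma ={\sqrt {2}}}$, whose exact stationary variance is 1, and compares the stationary-variance error of each scheme at a deliberately coarse step:

```python
import numpy as np

rng = np.random.default_rng(1)
sigma, dt, n_steps, n = np.sqrt(2.0), 0.25, 400, 5000
grad_V = lambda y: y               # V(y) = y^2 / 2; exact stationary variance is 1

x_em = np.zeros(n)                 # Euler-Maruyama ensemble
x_lm = np.zeros(n)                 # Leimkuhler-Matthews ensemble
r = rng.standard_normal(n)         # R_t for the LM scheme
for _ in range(n_steps):
    x_em = x_em - grad_V(x_em) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
    r_next = rng.standard_normal(n)
    x_lm = x_lm - grad_V(x_lm) * dt + sigma * np.sqrt(dt) / 2.0 * (r + r_next)
    r = r_next

err_em = abs(x_em.var() - 1.0)     # for this linear problem the EM bias is dt/(2 - dt)
err_lm = abs(x_lm.var() - 1.0)     # LM leaves only statistical sampling noise here
```

At ${\displaystyle \Delta t=0.25}$ the Euler-Maruyama variance is biased by roughly 14%, while the LM variance error is at the level of the ensemble's sampling noise.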

### Markovian Form

We can rewrite the algorithm in a Markovian form by extending the state space with a momentum vector ${\displaystyle P_{t}\in \mathbb {R} ^{N}}$ so that the overall state is ${\displaystyle (X_{t},P_{t})}$ at time ${\displaystyle t}$. Initializing the momentum to be a vector of ${\displaystyle N}$ standard normal random numbers, we have

${\displaystyle Y'_{t+\Delta t}=Y_{t}-\nabla V(Y_{t})\Delta t+\sigma {\frac {\sqrt {\Delta t}}{2}}\,P_{t},}$
${\displaystyle P_{t+\Delta t}\sim {\text{Normal}}(0,I),}$
${\displaystyle Y_{t+\Delta t}=Y'_{t+\Delta t}+\sigma {\frac {\sqrt {\Delta t}}{2}}\,P_{t+\Delta t},}$

where the middle step completely redraws the momentum so that each component is an independent normal random number. This scheme is Markovian, and has the same properties as the original LM scheme.
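One extended-state update can be sketched as follows (names illustrative): a half-size noise kick with the old momentum, a complete momentum redraw, then a half-size kick with the new momentum.

```python
import numpy as np

def lm_markov_step(x, p, grad_V, sigma, dt, rng):
    """One update of the Markovian form of the LM scheme."""
    x_half = x - grad_V(x) * dt + sigma * np.sqrt(dt) / 2.0 * p  # old momentum
    p_new = rng.standard_normal(x.shape)                         # full momentum redraw
    x_new = x_half + sigma * np.sqrt(dt) / 2.0 * p_new           # new momentum
    return x_new, p_new
```

Unrolling two consecutive steps recovers the original update: the momentum redrawn at the end of one step is exactly the noise vector reused at the start of the next, so both forms produce identical trajectories when driven by the same stream of normal draws.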

### Applications

The algorithm has applications in any area where the weak (i.e. average) properties of solutions to Brownian dynamics are required. This includes molecular simulation problems (such as classical molecular dynamics), but also statistical sampling problems, thanks to the behaviour of solutions at large times. In the limit ${\displaystyle t\to \infty }$, and taking ${\displaystyle \sigma ={\sqrt {2}}}$, solutions become distributed according to the probability distribution ${\displaystyle \pi (X)\propto \exp(-V(X))}$. Thus samples from a required distribution ${\displaystyle \pi (X)}$ can be generated by setting ${\displaystyle V(X)=-\log(\pi (X))}$ and running the LM algorithm until ${\displaystyle t}$ is large. Such strategies can be efficient in, for instance, Bayesian inference problems.
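As an illustration, the sketch below (parameter choices are illustrative, not from the original paper) samples the non-Gaussian density ${\displaystyle \pi (x)\propto \exp(-x^{4}/4)}$ by running the LM scheme with ${\displaystyle V(x)=x^{4}/4}$ and ${\displaystyle \sigma ={\sqrt {2}}}$ on an ensemble of walkers:

```python
import numpy as np

rng = np.random.default_rng(3)
sigma, dt = np.sqrt(2.0), 0.05
grad_V = lambda y: y**3             # V(y) = y^4 / 4, so pi(y) is prop. to exp(-y^4 / 4)

x = np.zeros(5000)                  # ensemble of walkers, all started at the origin
r = rng.standard_normal(x.shape)
for _ in range(1000):               # run long enough to reach near-stationarity
    r_next = rng.standard_normal(x.shape)
    x = x - grad_V(x) * dt + sigma * np.sqrt(dt) / 2.0 * (r + r_next)
    r = r_next

# Under pi, E[x^2] = 2 * Gamma(3/4) / Gamma(1/4), approximately 0.676
second_moment = (x ** 2).mean()
```

The ensemble's second moment should land close to the exact value ${\displaystyle 2\,\Gamma (3/4)/\Gamma (1/4)\approx 0.676}$, up to statistical noise and the scheme's ${\displaystyle O(\Delta t^{2})}$ bias.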

## References

3. Kloeden, P. E.; Platen, E. (1992). *Numerical Solution of Stochastic Differential Equations*. Springer, Berlin. ISBN 3-540-54062-8.