In the previous post in this series paralleling our local discussion seminar on this review, we reminded ourselves of some basic ideas of Markov Chain Monte Carlo simulations. In this post, we are going to look at the Hybrid Monte Carlo algorithm.
To simulate lattice theories with dynamical fermions, one wants an exact algorithm that performs global updates, because local updates are not cheap if the action is not local (as is the case with the fermionic determinant), and which can take large steps through configuration space to avoid critical slowing down. An algorithm satisfying these demands is Hybrid Monte Carlo (HMC). HMC is based on the idea of simulating a dynamical system with Hamiltonian H = 1/2 p² + S(q), where one introduces fictitious conjugate momenta p for the original configuration variables q, and treats the action as the potential of the fictitious dynamical system. If one now generates a Markov chain with fixed-point distribution e^{-H(p,q)}, then the distribution of q ignoring p (the "marginal distribution") is the desired e^{-S(q)}.
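To see why, note that the Boltzmann weight of the extended system factorizes into a Gaussian in the momenta times the desired weight in the configuration variables, e^{-H(p,q)} = e^{-1/2 p²} e^{-S(q)}, so that integrating out the momenta leaves ∫dp e^{-H(p,q)} ∝ e^{-S(q)} with a q-independent proportionality constant.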
To build such a Markov chain, one alternates two steps: Molecular Dynamics Monte Carlo (MDMC) and momentum refreshment.
MDMC is based on the fact that besides conserving the Hamiltonian, the time evolution of a Hamiltonian system preserves the phase space measure (by Liouville's theorem). So if at the end of a Hamiltonian trajectory of length τ we reverse the momentum, we get a mapping from (p,q) to (-p',q') and vice versa, thus obeying detailed balance: e^{-H(p,q)} P((-p',q'),(p,q)) = e^{-H(p',q')} P((p,q),(-p',q')), ensuring the correct fixed-point distribution. Of course, we can't actually integrate Hamilton's equations exactly in general; instead, we are content with numerical integration using an integrator that preserves the phase space measure exactly (more about which presently), but only approximately conserves the Hamiltonian. We make the algorithm exact nevertheless by adding a Metropolis step that accepts the new configuration with probability min(1, e^{-δH}), where δH is the change in the Hamiltonian under the numerical integration.
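As a concrete toy illustration, here is a minimal Python sketch of one MDMC update for a single degree of freedom; it uses the leapfrog integrator (the simplest of the symmetric symplectic integrators discussed further below), and the names leapfrog, mdmc_step and grad_S are of course made up for the purpose of this example:

import numpy as np

def leapfrog(p, q, grad_S, tau, n_steps):
    # Integrate Hamilton's equations for H = 1/2 p^2 + S(q) with the leapfrog
    # integrator, which is exactly area-preserving and reversible.
    dt = tau / n_steps
    p = p - 0.5 * dt * grad_S(q)             # initial half step in p
    for _ in range(n_steps - 1):
        q = q + dt * p                        # full step in q
        p = p - dt * grad_S(q)                # full step in p
    q = q + dt * p                            # last full step in q
    p = p - 0.5 * dt * grad_S(q)              # final half step in p
    return p, q

def mdmc_step(p, q, S, grad_S, tau, n_steps, rng):
    # One MDMC update: integrate numerically, flip the momentum, and
    # accept or reject with the Metropolis probability min(1, e^{-dH}).
    H_old = 0.5 * p**2 + S(q)
    p_new, q_new = leapfrog(p, q, grad_S, tau, n_steps)
    H_new = 0.5 * p_new**2 + S(q_new)
    if rng.random() < min(1.0, np.exp(-(H_new - H_old))):
        return -p_new, q_new                  # accept (with momentum flip)
    return p, q                               # reject: keep the old configuration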
The Markov step of MDMC is of course totally degenerate: the transition probability is essentially a δ-distribution, since one can only get to one other configuration from any one configuration, and this relation is reciprocal. So while it does indeed satisfy detailed balance, this Markov step is hopelessly non-ergodic.
To make it ergodic without ruining detailed balance, we alternate between MDMC and momentum refreshment, where we redraw the fictitious momenta at random from a Gaussian distribution without regard to their present value or that of the configuration variables q: P((p',q),(p,q)) ∝ e^{-1/2 p'²}. Obviously, this step preserves the desired fixed-point distribution (which is after all simply Gaussian in the momenta). It is also obviously non-ergodic by itself, since it never changes the configuration variables q. However, it does allow large changes in the Hamiltonian and breaks the degeneracy of the MDMC step.
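Putting the two steps together, a full HMC update could then look like the following sketch (reusing the hypothetical mdmc_step from above; only q is kept, since the momenta are fictitious anyway):

def hmc_update(q, S, grad_S, tau, n_steps, rng):
    # One HMC update: refresh the momentum from a Gaussian, then do one MDMC step.
    p = rng.normal()                          # momentum refreshment, p ~ e^{-1/2 p^2}
    p, q = mdmc_step(p, q, S, grad_S, tau, n_steps, rng)
    return q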
While it is generally not possible to prove with any degree of rigour that the combination of MDMC and momentum refreshment is ergodic, intuitively and empirically this is indeed the case. What remains to be done to make this a practical algorithm is to find numerical integrators that exactly preserve the phase space measure.
This requirement is fulfilled by symplectic integrators. The basic idea is to consider the time evolution operator exp(τ d/dt) = exp(τ(-∂_q H ∂_p + ∂_p H ∂_q)) = exp(τh) as the exponential of a differential operator on phase space. We can then decompose the latter as h = -∂_q H ∂_p + ∂_p H ∂_q = P + Q, where P = -∂_q H ∂_p and Q = ∂_p H ∂_q. Since ∂_q H = S'(q) and ∂_p H = p, we can immediately evaluate the action of e^{τP} and e^{τQ} on the state (p,q) by applying Taylor's theorem: e^{τQ}(p,q) = (p, q+τp), and e^{τP}(p,q) = (p-τS'(q), q).
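To make this concrete, here is a minimal sketch of the two shear maps for a single degree of freedom, with an example action S(q) = 1/2 q² (so that S'(q) = q) put in by hand; the function names are again made up:

def exp_tau_Q(p, q, tau):
    # e^{tau Q}: shear along the q-direction, p is left untouched.
    return p, q + tau * p

def exp_tau_P(p, q, tau, grad_S=lambda q: q):
    # e^{tau P}: shear along the p-direction, q is left untouched
    # (here S(q) = 1/2 q^2, so grad_S(q) = q).
    return p - tau * grad_S(q), q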
Since each of these maps is simply a shear along one direction in phase space, they are clearly area preserving (the Jacobian of, say, (p,q) → (p, q+τp) is triangular with ones on the diagonal, so its determinant is 1); so are all their powers and mutual products. In order to combine them into a suitable integrator, we need the Baker-Campbell-Hausdorff (BCH) formula.
The BCH formula says that for two elements A,B of an associative algebra, the identity
log(e^A e^B) = A + (∫_0^1 ((x log x)/(x-1))|_{x = e^{ad A} e^{t ad B}} dt)(B)
holds, where (ad A )(B) = [A,B], and the exponential and logarithm are defined via their power series (around the identity in the case of the logarithm). Expanding the first few terms, one finds
log(e^A e^B) = A + B + 1/2 [A,B] + 1/12 [A-B,[A,B]] - 1/24 [B,[A,[A,B]]] + ...
Applying this to a symmetric product, one finds
log(e^{A/2} e^B e^{A/2}) = A + B - 1/24 [A+2B,[A,B]] + ...
where in both cases the dots denote fifth-order terms.
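Both truncated expansions are easy to check numerically for small matrices; the following quick sanity check uses SciPy's matrix exponential and logarithm (the choice of 3x3 matrices and their overall scale is of course arbitrary):

import numpy as np
from scipy.linalg import expm, logm

def comm(X, Y):
    # matrix commutator [X, Y]
    return X @ Y - Y @ X

rng = np.random.default_rng(0)
A = 0.05 * rng.standard_normal((3, 3))      # small matrices, so the series converges fast
B = 0.05 * rng.standard_normal((3, 3))

# asymmetric product
lhs = logm(expm(A) @ expm(B))
rhs = (A + B + 0.5 * comm(A, B)
       + comm(A - B, comm(A, B)) / 12.0
       - comm(B, comm(A, comm(A, B))) / 24.0)
print(np.max(np.abs(lhs - rhs)))             # difference is of fifth order

# symmetric product
lhs_sym = logm(expm(A / 2) @ expm(B) @ expm(A / 2))
rhs_sym = A + B - comm(A + 2 * B, comm(A, B)) / 24.0
print(np.max(np.abs(lhs_sym - rhs_sym)))     # difference is of fifth order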
We can then use this to build symmetric products (we want symmetric products to ensure reversibility) of the exponentials of P and Q that are equal to e^{τh} up to some controlled error. The simplest example is
(e^{δτ/2 P} e^{δτ Q} e^{δτ/2 P})^{τ/δτ} = e^{τ(P+Q)} + O((δτ)²)
and more complex examples can be found that either push the error to a higher order in δτ (although doing so requires one to use negative time steps -δτ as well as positive ones) or minimize the error by splitting the force term P into pieces P_i that each get their own time step δτ_i to account for their different sizes.
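To illustrate the last idea, here is a minimal sketch of a nested (Sexton-Weingarten-style) leapfrog in which a hypothetical cheap force grad_S1 is integrated with a finer time step than a hypothetical expensive force grad_S2; it is only meant to show the structure, not a tuned scheme:

def nested_leapfrog(p, q, grad_S1, grad_S2, tau, n_outer, n_inner):
    # Leapfrog for S = S1 + S2 in which the S1 force is applied with a time step
    # n_inner times finer than the S2 force.
    dt = tau / n_outer
    for _ in range(n_outer):
        p = p - 0.5 * dt * grad_S2(q)         # outer half step (expensive force)
        for _ in range(n_inner):              # inner leapfrog for the cheap force
            p = p - 0.5 * (dt / n_inner) * grad_S1(q)
            q = q + (dt / n_inner) * p
            p = p - 0.5 * (dt / n_inner) * grad_S1(q)
        p = p - 0.5 * dt * grad_S2(q)         # outer half step (expensive force)
    return p, q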