In physics and astronomy, an N-body simulation is a simulation of a dynamical system of particles, usually under the influence of physical forces, such as gravity (see n-body problem). N-body simulations are widely used tools in astrophysics, from investigating the dynamics of few-body systems such as the Earth-Moon-Sun system to understanding the evolution of the large-scale structure of the universe.^{[1]} In physical cosmology, N-body simulations are used to study processes of non-linear structure formation such as galaxy filaments and galaxy halos from the influence of dark matter. Direct N-body simulations are used to study the dynamical evolution of star clusters.
Nature of the particles
The 'particles' treated by the simulation may or may not correspond to physical objects which are particulate in nature. For example, an N-body simulation of a star cluster might have a particle per star, so each particle has some physical significance. On the other hand, a simulation of a gas cloud cannot afford to have a particle for each atom or molecule of gas, as this would require on the order of 10^{23} particles for each mole of material (see Avogadro constant), so a single 'particle' would represent some much larger quantity of gas (often implemented using smoothed-particle hydrodynamics). This quantity need not have any physical significance, but must be chosen as a compromise between accuracy and manageable computer requirements.
Direct gravitational N-body simulations
[Figure: N-body simulation of 400 objects with parameters close to those of Solar System planets.]
In direct gravitational N-body simulations, the equations of motion of a system of N particles under the influence of their mutual gravitational forces are integrated numerically without any simplifying approximations. These calculations are used in situations where interactions between individual objects, such as stars or planets, are important to the evolution of the system. The first direct N-body simulations were carried out by Sebastian von Hoerner at the Astronomisches Rechen-Institut in Heidelberg, Germany. Sverre Aarseth at the University of Cambridge (UK) has dedicated his entire scientific life to the development of a series of highly efficient N-body codes for astrophysical applications which use adaptive (hierarchical) time steps, an Ahmad-Cohen neighbour scheme and regularization of close encounters. Regularization is a mathematical trick to remove the singularity in the Newtonian law of gravitation for two particles which approach each other arbitrarily closely. Sverre Aarseth's codes are used to study the dynamics of star clusters, planetary systems and galactic nuclei.
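A minimal sketch of such a direct integration, in Python with NumPy, might look as follows. The function names, the leapfrog (kick-drift-kick) integrator, and the small softening parameter eps are illustrative choices, not features of any particular production code:

```python
import numpy as np

def accelerations(pos, mass, G=1.0, eps=1e-3):
    """Pairwise gravitational accelerations; eps is a small softening length."""
    n = len(mass)
    acc = np.zeros_like(pos)
    for i in range(n):
        d = pos - pos[i]                      # vectors from body i to all bodies
        r2 = np.sum(d * d, axis=1) + eps**2   # softened squared distances
        r2[i] = 1.0                           # placeholder to avoid 0/0 below
        inv_r3 = r2 ** -1.5
        inv_r3[i] = 0.0                       # a body exerts no force on itself
        acc[i] = G * np.sum((mass * inv_r3)[:, None] * d, axis=0)
    return acc

def leapfrog(pos, vel, mass, dt, steps, G=1.0):
    """Kick-drift-kick leapfrog: symplectic and second-order in dt."""
    acc = accelerations(pos, mass, G)
    for _ in range(steps):
        vel += 0.5 * dt * acc     # half kick
        pos += dt * vel           # drift
        acc = accelerations(pos, mass, G)
        vel += 0.5 * dt * acc     # half kick
    return pos, vel
```

The O(N^2) loop over all pairs is exactly the cost that the approximation schemes described below are designed to avoid; real direct codes also replace the fixed dt with per-particle adaptive time steps.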
General relativity simulations
Many simulations are large enough that the effects of general relativity in establishing a Friedmann-Lemaître-Robertson-Walker cosmology are significant. This is incorporated in the simulation as an evolving measure of distance (or scale factor) in a comoving coordinate system, which causes the particles to slow in comoving coordinates (as well as due to the redshifting of their physical energy). However, the contributions of general relativity and the finite speed of gravity can otherwise be ignored, as typical dynamical timescales are long compared to the light crossing time for the simulation, and the spacetime curvature induced by the particles and the particle velocities are small. The boundary conditions of these cosmological simulations are usually periodic (or toroidal), so that one edge of the simulation volume matches up with the opposite edge.
Calculation optimizations
N-body simulations are simple in principle, because they merely involve integrating the 6N ordinary differential equations defining the particle motions in Newtonian gravity. In practice, the number N of particles involved is usually very large (typical simulations include many millions, the Millennium simulation included ten billion) and the number of particle-particle interactions needing to be computed increases on the order of N^{2}, and so direct integration of the differential equations can be prohibitively computationally expensive. Therefore, a number of refinements are commonly used.
One of the simplest refinements is that each particle carries with it its own timestep variable, so that particles with widely different dynamical times don't all have to be evolved forward at the rate of the particle with the shortest time.
There are two basic approximation schemes to decrease the computational time for such simulations. These can reduce the computational complexity to O(N log N) or better.
Tree methods
In tree methods, such as a Barnes–Hut simulation, an octree is usually used to divide the volume into cubic cells, so that only particles from nearby cells need to be treated individually, while particles in distant cells can be treated as a single large particle centered at the cell's center of mass (or as a low-order multipole expansion). This can dramatically reduce the number of particle-pair interactions that must be computed. To prevent the simulation from becoming swamped by computing particle-particle interactions, the cells must be refined into smaller cells in the denser parts of the simulation which contain many particles per cell. For simulations where particles are not evenly distributed, the well-separated pair decomposition methods of Callahan and Kosaraju yield optimal O(n log n) time per iteration with fixed dimension.
Particle mesh method
Another possibility is the particle mesh method, in which space is discretised on a mesh and, for the purposes of computing the gravitational potential, particles are assumed to be divided between the nearby vertices of the mesh. Finding the potential Φ is easy, because the Poisson equation

\nabla^2\Phi = 4\pi G\rho,
where G is Newton's constant and ρ is the density (number of particles at the mesh points), is trivial to solve by using the fast Fourier transform to go to the frequency domain, where the Poisson equation has the simple form

\hat{\Phi} = -4\pi G\frac{\hat{\rho}}{k^2},
where \vec{k} is the comoving wavenumber and the hats denote Fourier transforms. The gravitational field \vec{g} = -\nabla\Phi can now be found by multiplying \hat{\Phi} by -i\vec{k} and computing the inverse Fourier transform (or computing the inverse transform and then using some other method). Since this method is limited by the mesh size, in practice a smaller mesh or some other technique (such as combining with a tree or simple particle-particle algorithm) is used to compute the small-scale forces. Sometimes an adaptive mesh is used, in which the mesh cells are much smaller in the denser regions of the simulation.
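The FFT step above is short enough to show concretely. The following NumPy sketch (function name and unit conventions are illustrative) solves the Poisson equation on a periodic 3-D mesh; note that the k = 0 mode, which would require dividing by zero, is simply set to zero, fixing the arbitrary additive constant of the potential:

```python
import numpy as np

def poisson_fft(rho, box=1.0, G=1.0):
    """Solve nabla^2 Phi = 4 pi G rho on a periodic cubic mesh via FFT."""
    n = rho.shape[0]
    rho_hat = np.fft.fftn(rho)
    k = 2 * np.pi * np.fft.fftfreq(n, d=box / n)    # angular wavenumbers per axis
    kx, ky, kz = np.meshgrid(k, k, k, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    k2[0, 0, 0] = 1.0                               # placeholder to avoid 0/0
    phi_hat = -4 * np.pi * G * rho_hat / k2         # Phi_hat = -4 pi G rho_hat / k^2
    phi_hat[0, 0, 0] = 0.0                          # mean of Phi is arbitrary
    return np.real(np.fft.ifftn(phi_hat))
```

For a single Fourier mode of the density the result is exact, which makes the solver easy to check against the analytic solution Φ = -4πGρ/k².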
Special-case optimizations
Several different gravitational perturbation algorithms are used to obtain fairly accurate estimates of the paths of objects in the Solar System.
Satellites are often placed in a frozen orbit. The path of a satellite closely orbiting the Earth can be accurately modeled starting from the two-body elliptical orbit around the center of the Earth, and adding small corrections due to the oblateness of the Earth, the gravitational attraction of the Sun and Moon, atmospheric drag, etc. It is possible to find a frozen orbit without calculating the actual path of the satellite.
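As an example of such a correction, the dominant oblateness term is the J2 perturbation, which can be added directly to the two-body acceleration. The sketch below uses standard published values for the Earth's gravitational parameter, equatorial radius, and J2 coefficient; the function name is illustrative:

```python
import numpy as np

MU = 3.986004418e14   # Earth's gravitational parameter, m^3/s^2
RE = 6378137.0        # Earth's equatorial radius, m
J2 = 1.08262668e-3    # Earth's dominant oblateness coefficient

def accel_with_j2(r):
    """Two-body acceleration plus the J2 oblateness correction (r in metres)."""
    x, y, z = r
    rn = np.linalg.norm(r)
    a_kepler = -MU * r / rn**3
    f = 1.5 * J2 * MU * RE**2 / rn**5
    a_j2 = f * np.array([x * (5 * z**2 / rn**2 - 1),
                         y * (5 * z**2 / rn**2 - 1),
                         z * (5 * z**2 / rn**2 - 3)])
    return a_kepler + a_j2
```

In the equatorial plane the correction is a small extra inward pull of relative size (3/2) J2 (RE/r)², about one part in a thousand for a low Earth orbit, which is why it can be treated as a perturbation of the two-body solution.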
The path of a small planet, comet, or long-range spacecraft can often be accurately modeled starting from the two-body elliptical orbit around the Sun, and adding small corrections from the gravitational attraction of the larger planets in their known orbits.
Some characteristics of the long-term paths of a system of particles can be calculated directly. The actual path of any particular particle does not need to be calculated as an intermediate step. Such characteristics include Lyapunov stability, Lyapunov time, various measurements from ergodic theory, etc.
Two-particle systems
Although there are millions or billions of particles in typical simulations, each simulated particle typically corresponds to a real object of very large mass, typically 10^{9} solar masses. This can introduce problems with short-range interactions between the particles, such as the formation of two-particle binary systems. As the particles are meant to represent large numbers of dark-matter particles or groups of stars, these binaries are unphysical. To prevent this, a softened Newtonian force law is used, which does not diverge as the inverse-square radius at short distances. Most simulations implement this quite naturally by running the simulations on cells of finite size. It is important to implement the discretization procedure in such a way that particles always exert a vanishing force on themselves.
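A common choice of softened force law is Plummer softening, which replaces r² by r² + ε² in the denominator. A minimal sketch (the function name and the softening length eps are illustrative):

```python
import numpy as np

def softened_force(m1, m2, r_vec, G=1.0, eps=0.05):
    """Plummer-softened gravitational force on m1 from m2.

    Matches the inverse-square law for |r| >> eps, but stays finite
    (and in fact goes to zero) as the separation |r| -> 0, so close
    encounters cannot produce unphysical tight binaries.
    """
    r2 = r_vec @ r_vec
    return G * m1 * m2 * r_vec / (r2 + eps**2) ** 1.5
```

Because the force on a particle from itself corresponds to r_vec = 0, this form also satisfies the self-force requirement stated above automatically.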
Incorporating baryons, leptons and photons into simulations
Many simulations include only cold dark matter, and thus only the gravitational force. Incorporating baryons, leptons and photons into the simulations dramatically increases their complexity, and radical simplifications of the underlying physics must often be made. However, this is an extremely important area, and many modern simulations now try to understand processes that occur during galaxy formation which could account for galaxy bias.
Computational complexity
Reif et al.^{[2]} prove that the n-body reachability problem, defined as follows, is in PSPACE: given n bodies satisfying a fixed electrostatic potential law, determine whether a body reaches a destination ball within a given time bound, where poly(n) bits of accuracy are required and the target time is poly(n).
On the other hand, if the question is whether the body eventually reaches the destination ball, the problem is PSPACE-hard. These bounds are based on similar complexity bounds obtained for ray tracing.
References

^ Trenti, Michele; Hut, Piet. "N-body simulations (gravitational)". Scholarpedia. Retrieved 25 March 2014.

^ "The Complexity of N-body Simulation".
Further reading
Bertschinger, Edmund (1998). "Simulations of structure formation in the universe". Annual Review of Astronomy and Astrophysics 36 (1): 599–654.

Binney, James; Tremaine, Scott (1987). Galactic Dynamics. Princeton University Press.

Callahan, Paul B.; Kosaraju, Sambasiva Rao (1992). "A decomposition of multidimensional point sets with applications to k-nearest-neighbors and n-body potential fields (preliminary version)". STOC '92: Proc. ACM Symp. Theory of Computing. ACM.