Visualizing the Large Scale Structure of the Cosmos

(This is the companion post to my personal project at cosmos.robrhinehart.com)

One of my favorite William Blake poems begins as follows:

To see the world in a grain of sand,
And heaven in a wild flower,
Hold infinity in the palm of your hand,
And the universe in a browser.

Or something like that.

What’s out there? Our perspective from Earth is rather limited. Our most sensitive optics and antennas have given us beautiful images and a decent amount of data, but we don’t have much resolution outside our immediate cosmological neighborhood. Furthermore, we have no choice but to stare further into the past the farther out we look. What does the universe look like right now a million light years away? It’s hard to know. Also, we can’t look through the dense Milky Way, so about half of our potential observations are cut off until we leave the galaxy (which is unlikely to happen anytime soon). It’s immensely hard to test a theory involving black holes, quasars, dark matter, dark energy, or supernovae when they are so difficult to observe. Simply put, we are inconsequentially small and short-lived compared to the universe we are trying to understand.

However, if we know some of the fundamental laws of physics governing the universe, know roughly how it began, and the Cosmological Principle holds (the laws of physics are the same here as they are throughout the universe), perhaps we can simulate the whole thing and test our theories there. In fact, today almost as many astrophysics papers are published using data from cosmological simulations as from actual observations.

The simulacrum is never that which conceals the truth — it is the truth which conceals that there is none. The simulacrum is true.
–Ecclesiastes

If you wish to simulate a universe from scratch, you must first solve the n-body problem. Cosmological simulation essentially becomes a huge math problem. All we need to do is track the state of every boson and fermion in the universe, each one’s four quantum numbers, and its interaction with every other particle through each fundamental force, not to mention dark matter and dark energy, whatever that is.
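
In the gravity-only limit the simulations below actually use (and ignoring the extra expansion terms cosmological codes add in comoving coordinates), “solving the n-body problem” concretely means integrating the coupled equations of motion for every particle:

\ddot{\vec{x}}_i = -G \sum_{j \neq i} m_j \, \frac{\vec{x}_i - \vec{x}_j}{|\vec{x}_i - \vec{x}_j|^3}

Evaluated directly, that sum costs O(N^2) per timestep, which is why the tree and mesh methods described later in this post exist.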

Perhaps we start top-down by only simulating huge clumps of dark matter above a certain mass threshold and focus on their gravitational interactions, since the other forces are relatively short range. This should give us a general picture of the cosmos, and as our supercomputers get better and our solutions more efficient we can improve resolution. The simulation data I used, MDR1, has the following parameters:

Box size: 1 Gpc/h (side length of the cosmological cube)
Number of particles: 2048^3 (total number of dark matter particles)
Mass resolution: 8.721 * 10^9 Msun/h (mass of one dark matter particle)
Force resolution: 7 kpc/h (physical force resolution)
Initial redshift: 65 (redshift at which the simulation started)

Cosmology
h: 0.70 (Hubble parameter)
ΩΛ: 0.73 (density parameter for dark energy)
Ωm: 0.27 (density parameter for matter, dark matter + baryons)
Ωb: 0.0469 (density parameter for baryonic matter)
n: 0.95 (spectral index of the power spectrum)
σ8: 0.82 (amplitude of mass density fluctuations in an 8 Mpc/h sphere at redshift z=0)

Other constants
G: 6.67428 * 10^-8 cm^3 g^-1 s^-2 (gravitational constant; Rev. Mod. Phys. 80 (2008) 633-730)
1 Mpc: 3.085677 * 10^24 cm (1 Mpc in cm)
Msun: (1.98892 ± 0.00025) * 10^33 g (solar mass; Physics Reports 281 (1997) 309)
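
As a quick sanity check on these numbers (my own back-of-the-envelope arithmetic, not part of the MDR1 documentation): the particle count times the particle mass should reproduce the mean matter density implied by Ωm, using the standard critical density ρc ≈ 2.775 * 10^11 h^2 Msun/Mpc^3:

2048^3 \times 8.721\times10^{9}\,M_\odot/h \;\approx\; 7.5\times10^{19}\,M_\odot/h \;\approx\; \Omega_m\,\rho_c\,V \;=\; 0.27 \times 2.775\times10^{11}\,h^2\,M_\odot\,\mathrm{Mpc}^{-3}\times\left(1000\,\mathrm{Mpc}/h\right)^3

Both sides come out to about 7.5 * 10^19 Msun/h, so the box really does hold a representative chunk of universe.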

It was run on the Pleiades supercomputer cluster at NASA Ames, currently the 11th most powerful supercomputer in the world. More info here: http://www.nas.nasa.gov/hecc/resources/pleiades.html. The researchers told me it can take 3-6 months for a single run.

Looks a little biological, doesn’t it?

I contacted the researchers and was able to download some of the massive data set generated by the simulation. You can explore some of the raw data yourself at cosmosim.org. The whole MDR1 database is a little unwieldy. It consists of 38 snapshots (out of 82 simulated) of the state of the universe at different points in “time”, each several gigabytes of floats containing the position, rotation, velocity, and more of each particle, or “halo”. The most recent snapshot is the “current” state of the universe, containing 13,668,648 objects.

Each object is a Halo of Cold Dark Matter. What’s that? Good question. According to the most widely accepted cosmological model today, Lambda-CDM, the universe consists primarily of three components: a cosmological constant associated with dark energy, cold dark matter, and ordinary matter. Halos of cold dark matter are thought to contain galaxies as well as smaller halos. These halos are huge. They have yet to be observed directly, but indirect observations using globular clusters and X-ray data indicate the Milky Way’s dark matter halo extends at least 300,000 light years, more than 3x the size of our galaxy’s disk of ordinary matter.

What is dark matter made of? Good question. My favorite candidate is the axion, but axions have not been detected. There may also be “warm” and even “hot” dark matter. More on that later.

OK, now for the data. Let’s first get the simulation tools running, primarily GADGET (GAlaxies with Dark matter and Gas intEracT). I have tested this on WSL and Linux; it should work on Mac too. Hang in there and you will be smashing galaxies in no time. First, the easy prerequisites.

sudo apt update -y
sudo apt upgrade -y
sudo apt install -y mpich gfortran build-essential git zlib1g-dev

Now let’s compile the dependencies: FFTW (Fastest Fourier Transform in the West), GSL (GNU Scientific Library), and HDF5 (Hierarchical Data Format). Do not use the versions from the package repos; Gadget2 requires older versions: hdf5-1.6.9, fftw-2.1.5, and gsl-1.9.

# HDF5
wget https://support.hdfgroup.org/ftp/HDF5/releases/hdf5-1.6/hdf5-1.6.9/src/hdf5-1.6.9.tar.gz
tar xzf hdf5-1.6.9.tar.gz
cd hdf5-1.6.9
./configure --prefix=/usr/local
make -j4
sudo make install
cd ..

# GSL
wget ftp://ftp.gnu.org/gnu/gsl/gsl-1.9.tar.gz
tar xzf gsl-1.9.tar.gz
cd gsl-1.9
./configure
make -j4
sudo make install
cd ..

# FFTW
# installs for both single and double precision
wget www.fftw.org/fftw-2.1.5.tar.gz
tar xzf fftw-2.1.5.tar.gz
cd fftw-2.1.5
./configure --enable-mpi --enable-type-prefix
make -j4
sudo make install
make clean
./configure --enable-mpi --enable-type-prefix --enable-float
make -j4
sudo make install
cd ..

Next build the simulator (Gadget2) itself and the software to generate initial conditions (N-genIC).

# N-GenIC
wget http://www.h-its.org/wp-content/uploads/2014/11/n-genic.tar.gz
tar xzf n-genic.tar.gz
cd N-GenIC
make -j4
cd ..

sudo ldconfig

# Gadget
wget http://www.mpa-garching.mpg.de/gadget/gadget-2.0.7.tar.gz
tar xzf gadget-2.0.7.tar.gz
cd Gadget-2.0.7/Gadget2
vim Makefile
# comment out OPT += -DPERIODIC
# add the correct include and lib directories to the Makefile around line 100
# probably -I/usr/local/include and -L/usr/local/lib
# for GSL_INCL, etc...
make -j4
cd ..
# this leaves you in Gadget-2.0.7/, which is where the run commands below assume you are
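
For reference, those include and library paths live in per-SYSTYPE blocks of the Gadget-2 Makefile. With everything installed under /usr/local as above, the relevant variables end up looking roughly like this (the variable names come from the stock Makefile; the values are my assumptions for this particular setup):

GSL_INCL  = -I/usr/local/include
GSL_LIBS  = -L/usr/local/lib
FFTW_INCL = -I/usr/local/include
FFTW_LIBS = -L/usr/local/lib
HDF5INCL  = -I/usr/local/include
HDF5LIB   = -L/usr/local/lib -lhdf5 -lz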

Finally, let’s run a simulation. The easiest is two galaxies colliding using simple Newtonian physics. The supplied initial conditions file contains 60,000 particles: 40,000 for the halo and 20,000 for the disk. (N-GenIC, built above, is what generates initial conditions for the cosmological runs.) On a laptop you should see continuous output and the run may take more than an hour, or a few minutes on an EC2 compute instance.

mkdir galaxy
# edit galaxy.param as desired
mpirun -np 8 Gadget2/Gadget2 Gadget2/parameterfiles/galaxy.param
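
The two entries in galaxy.param you will almost certainly need to check are the initial conditions file and the output directory. The parameter names below are from the stock Gadget-2 parameter file; the exact paths are my assumptions for the directory layout used here, so check the ICs/ folder for the file actually shipped with your copy (Gadget parameter files use % for comments):

InitCondFile    ICs/galaxy_littleendian.dat    % pre-made galaxy-collision ICs (filename is an assumption)
OutputDir       galaxy/                        % the directory created above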

Next try the cluster example in the Gadget folder. Doing a whole universe is left as an exercise for the reader; it will need different initial conditions as well as different simulation parameters. I recommend an EC2 instance like c5d.4xlarge or, if you’re feeling ambitious, CUDA on an east-coast GPU instance: https://github.com/pseudotensor/cuda-gadget. If you do make it that far, email me. Finally, let’s install a visualizer.

# Gadget Viewer
sudo apt install -y libhdf5-serial-dev libgtk2.0-dev
wget https://github.com/jchelly/gadgetviewer/releases/download/1.0.10/gadgetviewer-1.0.10.tar.gz
tar xzf gadgetviewer-1.0.10.tar.gz
cd gadgetviewer-1.0.10
export FCFLAGS=-fopenmp
./configure
make
sudo make install

Now load in the snapshots generated above. If you are on WSL, make sure you install VcXsrv and set:

export DISPLAY=localhost:0
export LIBGL_ALWAYS_INDIRECT=1

Finally, use ffmpeg to stitch the frames into a video.

ffmpeg -framerate 20 -i frame_%05d.image.png output.mp4
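
If the resulting file refuses to play in some players (browsers and QuickTime are picky about pixel formats), forcing H.264 with a widely supported pixel format usually fixes it. The frame name pattern here just mirrors the one above, so adjust it to whatever gadgetviewer actually wrote:

ffmpeg -framerate 20 -i frame_%05d.image.png -c:v libx264 -pix_fmt yuv420p output.mp4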

For further study, see
https://github.com/chaoticinflaton/D-GADGET-2/ and
https://github.com/chaoticinflaton/DN-N-GenIC, which generate initial conditions for and run simulations with dark energy and massive-neutrino cosmologies.

To read more about the Adaptive Refinement Tree (ART) method of efficiently ‘solving’ this n-body problem: adaptive_refinement_tree (pdf)

The Adaptive Refinement Tree makes these simulations, which boil down to integrating a few differential equations numerically, computationally tractable. This is difficult because every particle affects every other particle. ART solves the local neighborhood relatively precisely and handles distant regions via the Fourier transform, that is, by modeling the density field in terms of its spatial frequencies.

We have a number of particles, or “bodies”, each with a given mass, and we need to efficiently compute each one’s gravitational interaction with all the others. The gravitational potential is modeled with Poisson’s equation, written here for a radially symmetric system; it can be derived from Newton’s law of gravitation:

\frac{1}{r^2}\frac{\partial}{\partial r}(r^2\frac{\partial \phi}{\partial r}) = 4\pi G \rho(r)

The ART method arose from the earlier Particle Mesh (PM) method. PM and its refinements are related to Ewald summation, which is itself an application of Poisson summation. The basic idea is to split the work into two classes, short range and long range: the short-range part is solved efficiently in real space, and the long-range part efficiently in Fourier space using the FFT with periodic boundary conditions. Adaptive Mesh Refinement is a way of solving PDEs like this one by defining a root cell that contains the whole solution space and splitting, or refining, cells only where the solution is complex. For example, in a sparse region without many halos the mesh cells can stay large, while the more complicated regions get refined, so we never do more work than we have to. See the paper and source code for more.
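
To make the “long range in Fourier space” step concrete: with periodic boundary conditions, Poisson’s equation turns into simple algebra after a Fourier transform, so the mesh potential costs one forward FFT, a per-mode multiplication, and one inverse FFT (with the k = 0 mean-density mode set to zero):

\nabla^2 \phi = 4\pi G \rho \quad\Longrightarrow\quad \hat{\phi}(\vec{k}) = -\,\frac{4\pi G\,\hat{\rho}(\vec{k})}{k^2}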

Anyway, this method of study has its critics. This paper astutely points out that cosmological simulations predict far more satellites around galaxies like the Milky Way than we actually observe: missing-galactic-satellites (pdf). What else could be erroneous?

However, great progress has been made lately. One of my favorites is CLUES (Constrained Local UniversE Simulations), which uses observational data, including radial velocities of galaxies, the positions of nearby X-ray-selected galaxy clusters, and cosmic flows, to inform simulations of the local universe.

https://www.clues-project.org/cms/

Note that the main author of this 2001 paper, which has been cited over 5,000 times, is Volker Springel, who wrote the original Gadget code as his PhD project. In 2018 Springel became a director at the Max Planck Institute for Astrophysics.

MDR1 paper: Halo_concentrations_standard_lambda_CDM_cosmology (pdf)
To read about the Cold Dark Matter (CDM) Halos, the structures you see in the simulation, read this: structure-of-CDM-halos (pdf)
The “Friends-of-Friends” (FOF) method of finding the halos: FOF_method (pdf)
Read about using MPI to parallelize the original ART code: parallel_art (pdf)

I’m interested in using GPU parallelization to further improve the performance of the code. Or perhaps bespoke hardware, like the GRAPE board that was used in the early days of the project. Email me if you’d like to help.

For cosmos.robrhinehart.com I used https://www.cosmosim.org/query to download snapshots of the 100,000 most massive dark matter halos from MDR1.BDMV, 12 in total. You can navigate through them with the left and right arrow keys. I then used three.js to plot them as WebGL particles with a custom shader so I can change size and color dynamically. I would like 1,000,000 or really 10,000,000 particles, but that leads to challenges in data handling and rendering, especially when loading additional properties such as radius, color coding by mass, and rendering the velocity vectors included in the data. I also plan to stream the data as binary instead of large, inefficient JSON queries, make better use of the GPU, and improve the lighting. You are welcome to help improve it. All the code is client-side JavaScript and many features are commented out.