CPSC 521 assignment 4

Visualizations of the nbody program

In assignments 1 and 2, if VISUALIZE_OUTPUT is enabled, the program periodically reports the positions of the bodies. This output can be fed over ssh into my multithreaded visualization program, which displays the state of the system in real time. Here are some screenshots and videos:

Screenshot of a hundred interacting bodies (the visualization was running at close to 60 FPS).

Screenshot of 1024 interacting bodies, running on 32 processes on four machines. The visualization was running at about 30 FPS.


Play video (2.3 MB)

A video of assignment 2 running with a set of bodies that looks like a solar system. There is a massive "sun" object in the centre with ten lighter bodies at random positions around it. The initial velocities of these bodies were calculated with some basic physics so that each would have a stable orbit. (Assignment 2 was modified to accept initial velocities if they were given in the input.)


Play video (7.0 MB)

My screen-capture program slowed down the visualization significantly in the previous video: it actually runs at about 30 FPS, but the video shows more like 10 FPS. Here is another solar-system visualization, captured by saving the frames at each step; I timed it carefully so that the video runs as fast as the real program did.

Notice the one body in a highly elliptical orbit; it has been perturbed by the other orbiting bodies.

The visualization program

The visualization program itself is quite simple. One thread waits in the background for data arriving over ssh, while a foreground thread repaints the OpenGL display. The viewport is centred and zoomed automatically based on a weighted average of all the points.
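Here is a minimal sketch of how that auto-centring could be implemented. This is not the actual vis source; the Body struct, the mass weighting, the max-distance zoom rule, and the 10% margin are all assumptions for illustration.

#include <math.h>
#include <stdio.h>

typedef struct { double x, y, mass; } Body;

/* Compute a viewport centre (cx, cy) and half-extent that fits the bodies.
 * The centre is a mass-weighted average, so heavy bodies dominate it. */
static void fit_viewport(const Body *b, int n,
                         double *cx, double *cy, double *half_extent)
{
    double wx = 0.0, wy = 0.0, total = 0.0;
    for (int i = 0; i < n; i++) {
        wx += b[i].x * b[i].mass;
        wy += b[i].y * b[i].mass;
        total += b[i].mass;
    }
    *cx = wx / total;
    *cy = wy / total;

    /* Zoom so the body farthest from the centre stays in view,
     * with a 10% margin. */
    double r = 0.0;
    for (int i = 0; i < n; i++) {
        double dx = b[i].x - *cx, dy = b[i].y - *cy;
        double d = sqrt(dx * dx + dy * dy);
        if (d > r) r = d;
    }
    *half_extent = 1.1 * r;
}

int main(void)
{
    Body bodies[] = { { 0, 0, 1e6 }, { 100, 0, 1 }, { -80, 60, 1 } };
    double cx, cy, h;
    fit_viewport(bodies, 3, &cx, &cy, &h);
    printf("centre (%g, %g), half-extent %g\n", cx, cy, h);
    return 0;
}

Each repaint, the resulting centre and half-extent would map straightforwardly onto an orthographic projection.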

Modifications to the nbody program

A few modifications to the nbody program were necessary to produce the visualizations shown here. The program of course needed to periodically print the location of each particle, as done when VISUALIZE_OUTPUT is enabled. But for the solar-system simulation, the program also had to be modified to accept initial velocities. (It reads the data one line at a time from the input file; if a line has three numbers, they are taken as x, y, mass; if it has five, then x, y, mass, x-velocity, y-velocity.)
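As a sketch, the parsing could look like this in C. This is illustrative, not the actual nbody source; the Particle struct and read_particle() are made-up names.

#include <stdio.h>

typedef struct { double x, y, mass, vx, vy; } Particle;

/* Read one particle per line: either "x y mass" or "x y mass vx vy".
 * Returns 1 on success, 0 on end of input or a malformed line. */
static int read_particle(FILE *f, Particle *p)
{
    char line[256];
    if (!fgets(line, sizeof line, f))
        return 0;

    p->vx = p->vy = 0.0;   /* three-number lines start at rest */
    int n = sscanf(line, "%lf %lf %lf %lf %lf",
                   &p->x, &p->y, &p->mass, &p->vx, &p->vy);
    return n == 3 || n == 5;
}

int main(void)
{
    Particle p;
    while (read_particle(stdin, &p))
        printf("%g %g %g %g %g\n", p.x, p.y, p.mass, p.vx, p.vy);
    return 0;
}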

Finally, this simple Perl script was used to compute reasonable initial velocities for objects, to keep them in stable orbits. It takes an input file of (x, y, mass) triplets, plus some constants to add to the position and velocity, and outputs the 5-tuples including velocity. It assumes that there is a supermassive object at (0, 0) and that objects should rotate counter-clockwise.

#!/usr/bin/perl
# crit_velocity = sqrt( 2*G*m1 / dist )
# where G = 6.6738480e-11

use strict;
use warnings;

my $G = 6.6738480e-11;

# Constant offsets to add to each position and velocity, from the command line.
my ($x_add, $y_add, $vx_add, $vy_add) = @ARGV;

while (my $line = <STDIN>) {
    my ($x, $y, $mass) = split ' ', $line;

    # Distance from the supermassive object at the origin.
    my $dist = ($x*$x + $y*$y) ** 0.5;
    my $speed = (2 * $G * $mass / $dist) ** 0.5;

    # Tangential (counter-clockwise) velocity, scaled by a fudge factor of 10.
    my $vx = -$y * $speed / $dist * 10;
    my $vy =  $x * $speed / $dist * 10;

    $x += $x_add;
    $y += $y_add;
    $vx += $vx_add;
    $vy += $vy_add;

    print "$x $y $mass $vx $vy\n";
}
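For example, assuming the script is saved as make-orbits.pl (the filenames here are mine, not from the assignment), generating a system centred at the origin with no extra bulk velocity looks like:

$ ./make-orbits.pl 0 0 0 0 < triplets.txt > solar-system.txt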

This method was used to create several example input files for visualization.

Getting the visualization to run with qsub

The visualization program can be run on the same machine as the n-body program, but over an ssh -X connection it is very slow. So it's usually better to run the n-body program remotely and pipe the results into the visualization program locally:

$ ssh cyclops '. ~/.bash_profile ; mpiexec -n 4 ~/a2/nbody 10000 100 ~/a2/in-gen1' | ./vis

Note: sourcing the bash profile is just to get mpiexec to work correctly.

This works fine for one machine, but for multiple machines, qsub and PBS must be used to allocate the nodes. Unfortunately the easy way of allocating them, with ique1 or ique2 etc., does not allow the input to be redirected from a script (which would be easiest, since the output is going to ./vis). I tried several ways of getting around this:

  1. The obvious solution is just to have PBS direct the output (stdout or stderr) to an NFS-mounted file, so that the file can be read as it is created. However, PBS accumulates the output remotely and only copies the data over when it feels like it, and no amount of flushing will change its mind, so this doesn't work for real-time visualization.
  2. I tried overriding isatty() and tcgetattr() with LD_PRELOAD to trick PBS into thinking it had a terminal (yes, I checked the source first to make sure this was reasonable; a sketch of such a shim follows this list). No luck: in addition to a bunch of junk output, the PBS jobs hang at exit time; I guess they don't know how to close the tty. Not a big surprise.
  3. I also tried creating a named pipe and directing PBS to output to that file so I could cat it; of course, the same delay issues arose. However, if I freopen() the program's stderr to point to the named pipe, I can write to it at my leisure and read it in real time. But PBS still hangs on exit. It may be because I don't close the file descriptor, but I think it has something to do with pipes in general.
  4. So instead: I freopen() the nbody program's stderr to an ordinary file, run the PBS job in non-interactive batch mode, and then use ssh cyclops 'tail -F stderr' to watch for changes to the file as they happen (see the second sketch below). This is definitely not optimal, but it allows the visualizations to proceed in near real time. And the PBS jobs exit correctly.
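For reference, the LD_PRELOAD shim from attempt 2 could have looked something like this; it is reconstructed from the description above, not the original code:

#include <string.h>
#include <termios.h>

/* Claim that every file descriptor is a terminal. */
int isatty(int fd)
{
    (void)fd;
    return 1;
}

/* Hand back zeroed (dummy) terminal attributes instead of failing. */
int tcgetattr(int fd, struct termios *t)
{
    (void)fd;
    memset(t, 0, sizeof *t);
    return 0;
}

This would be compiled with something like gcc -shared -fPIC -o fake_tty.so fake_tty.c and injected with LD_PRELOAD=./fake_tty.so; as described above, PBS still misbehaved at exit.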
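And a sketch of the approach that worked (attempt 4); the output filename is illustrative:

#include <stdio.h>

int main(void)
{
    /* Point stderr at an ordinary NFS-visible file and line-buffer it,
     * so every newline-terminated position report lands on disk right
     * away for tail -F to pick up. */
    if (!freopen("nbody-vis.out", "w", stderr))
        return 1;
    setvbuf(stderr, NULL, _IOLBF, BUFSIZ);

    /* ... the nbody loop would print particle positions here ... */
    fprintf(stderr, "0.5 0.25 1.0\n");
    return 0;
}

On the viewing side, something like ssh cyclops 'tail -F nbody-vis.out' | ./vis then feeds the visualization in near real time.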

Yeah, it might have been easier to just use MPE for visualizations instead.

Downloads

For downloads, see Assignment 2.
