Gmandel runs on Linux (or an equivalent system) and may be run on a single computer, a multiprocessor computer (most new multi-core PCs), or a cluster of computers.
In each case, Gmandel can take advantage of multi-core technology through two parallel computing techniques: fine-grained shared-memory parallelism and coarse-grained distributed-memory parallelism. The first is accomplished with POSIX threads and the second by means of MPI message passing (currently tested with mpich2, from ANL).
In either case, Gmandel may be configured to use the available compute resources to their full capacity.
If run on a cluster, Gmandel requires that the user have permission to execute processes remotely, in real time, on the compute nodes. Unfortunately, this is not always the case.
If your cluster administrator institutes a policy that the cluster may only run batch jobs via Torque or PBS, you're out of luck. But be thankful that Mordac doesn't insist that you input your application's source code on punch cards.
Gmandel requires GTK+2 and everything that implies (glibc, Pango, ATK, Cairo, etc.). GTK+ is a small, efficient, flexible library for building graphical user interfaces (GUIs), originally written for the X Window System; you can find out more at the GTK site and the GNU operating system site. If these libraries are not installed on the target computer, just download them and configure each with the --prefix=$HOME option to create your very own GTK+ installation.
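As a rough sketch, a private GTK+ installation under $HOME can be built and made visible to later builds and to the dynamic linker like this (package names and versions are illustrative; the environment variables are the standard pkg-config, linker, and PATH ones):

```shell
# Illustrative: build each GTK+ dependency into $HOME, in dependency order
# (glib, pango, atk, cairo, then gtk+). Commented out since the tarballs
# and version numbers here are placeholders:
#   tar xzf gtk+-2.x.y.tar.gz && cd gtk+-2.x.y
#   ./configure --prefix=$HOME && make && make install

# Make the private installation visible to later builds and at run time:
export PKG_CONFIG_PATH="$HOME/lib/pkgconfig:$PKG_CONFIG_PATH"
export LD_LIBRARY_PATH="$HOME/lib:$LD_LIBRARY_PATH"
export PATH="$HOME/bin:$PATH"
```

With those variables exported, a subsequent ./configure will find your private GTK+ before any system-wide copy.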
Also note that if you are using MPI support, only the master (login) node requires GTK+; the programs spawned on the compute nodes require nothing special. And if you're not tunneling X over an ssh connection, don't forget to set the DISPLAY environment variable accordingly. Try opening a remote xterm window to verify that everything is functioning properly.
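For example, on the login node (the hostname "workstation" is a placeholder for the machine actually running your X server):

```shell
# Point X clients at your display; "workstation" is a placeholder hostname.
export DISPLAY=workstation:0.0

# Sanity check (commented out here; requires a reachable X server):
# xterm &
```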
The very latest development version is always available via CVS; more information on how to use CVS is available at the project site.
To build and install, run:

# ./configure --prefix=$HOME

followed by:

# make && make install

This should put the Gmandel executable at $HOME/bin/gmandel and the remote MPI routines at $HOME/bin/gmandel_mpi_slave.
If you do not specify an install directory via the --prefix= option to the configure script, the prefix defaults to /usr/local. This is probably not what you want, so please set the --prefix= option.
If mpich2 is not installed on the build computer, you can just run Gmandel from the build directory, without the make install part.
If the mpich2 version is built, however, the executable files in the build directory must be copied to $prefix/bin; this is why setting the --prefix option to configure is important. Let make install do the copying to the correct path for you.
If you don't know how to set the --prefix option, please execute:

# ./configure --help

for further instructions.
Once installed, run:

# gmandel

in an xterm, as long as $HOME/bin is in your PATH environment variable (this is usually the case). This applies to both the threaded version and the MPI-threaded version. If Gmandel was compiled with MPI support (mpich2) and the mpd daemon is not running, Gmandel will fork a subprocess to try to start the mpd daemon, and will terminate it when exiting. If mpd is already running, Gmandel will use the running daemon and leave it alone when exiting. The check takes some time, since a subprocess must be spawned and communication established, so if you want to view images faster, make sure mpd is already running and configured for all the hosts you're going to use before Gmandel is invoked.
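A sketch of that startup decision, assuming mpich2's mpd tools are on the PATH (mpdtrace lists the hosts in the ring and exits non-zero when no ring is running):

```shell
# mpd_ring_up succeeds only if an mpd ring is already running.
mpd_ring_up() {
    mpdtrace >/dev/null 2>&1
}

if mpd_ring_up; then
    echo "mpd ring already up; gmandel will reuse it"
else
    echo "no mpd ring; gmandel will fork its own mpd (slower startup)"
fi
```

Starting the ring yourself beforehand (with mpdboot) skips the slow path entirely.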
--- 1.3.1 notes ---

The PVM version has been dropped. Julia set generation is not active (only the Mandelbrot set). This version has been tested with:
- POSIX threads and POSIX shm
- mpich2

mpich2 details:

1. Make sure ssh works without a password for all cluster nodes. Individual ssh keys for all nodes should be in the known_hosts file.

2. The hostname of each node should be listed in /etc/hosts on each node.

3. Problem with mpich2: if mpd works but mpdboot does not (failing with a "no_port" message), this may be because "mpd" is in your path but "mpd.py" is not. This occurs when distros (like Gentoo) create a separate directory for all the mpich2 Python files. The problem is easily fixed by creating, in the directory where the mpich2 Python files are located, a symlink named mpd.py pointing to the mpd in your path (/usr/bin or /usr/local/bin). Example:

# ln -s /usr/bin/mpd /wherever/mpich2/files/are/located/mpd.py

4. Another problem with mpich2: Gmandel may fail with a "no msg recvd from mpd (when expecting ack of request)" message. This occurs when the default value for the mpd message timeout is too low for the hardware you are using. Either use a faster network for communication (InfiniBand or Myrinet, for example) or change the value of recvTimeout from 20 to something larger. This is done in the file "mpiexec.py", located wherever the mpich2 files are (see point 3 above).

5. If mpd is not running when gmandel is invoked, gmandel will try to start mpd and will terminate it when exiting. If mpd is already running, gmandel will use the running daemon and leave it alone when exiting. The check for a running mpd takes a bit of time, so if you want to view images faster, make sure mpd is running and configured for all the hosts you need before gmandel is invoked.

For installation, do a ./configure followed by a make && make install. The latest CVS version of gmandel is always available at the project's CVS site hosted at sourceforge.net.
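A quick way to verify the passwordless-ssh requirement in point 1 above (the node names in NODES are hypothetical; substitute your cluster's hostnames). BatchMode=yes makes ssh fail instead of prompting for a password, so any node that would still ask for one is reported:

```shell
# Hypothetical node list; replace with your cluster's hostnames.
NODES="node1 node2 node3"

# Succeeds only if passwordless ssh to the given node works.
check_node() {
    ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" true 2>/dev/null
}

for n in $NODES; do
    if check_node "$n"; then
        echo "ok:   $n"
    else
        echo "FAIL: $n (passwordless ssh not working)"
    fi
done
```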
To run the threaded version:

$ "prefix"/bin/gmandel

To run the MPI version (mpich2):

$ mpdboot    [optional]
$ "prefix"/bin/gmandel

--------------------------------------------------------------------
Points to remember:

-- You should have the DISPLAY environment variable correctly defined for remote display of the gmandel program. SSH tunnels work fine.
-- You can also do a ./autogen.sh to regenerate the configure script.

Enjoy.
Edscott Wilson García can be reached for comments, patches, and suggestions at edscott(at)users.sourceforge.net
page last updated: Dec. 03, 2009