Wednesday, February 15, 2017
CIS(theta), 2016-2017 Feb Meeting: openMPI & FORTRAN!
We decided to install a native openMPI stack over our Ubuntu OS on all the hard drives in our cluster and skip using the pelicanHPC DVD. The idea is to log in from home and ssh into any node of the cluster as needed! We already covered steps 1-4. Now we're on step 5, installing openMPI and testing with flops.f!
InstantCluster Step 1:
Infrastructure - Power, Wiring and AC
InstantCluster Step 2:
Hardware - PCs
InstantCluster Step 3:
Firmware - Ubuntu
InstantCluster Step 4:
Software Stack I - openSSH:
01) Install openssh-server from the Ubuntu Software Center or via apt
02) Create the same new user on every box of the cluster
03) Log in as the new user; we used
userid: jaeger, passwd: galaga
04) If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files.
05) From your home directory, make .ssh secure by entering:
chmod 700 .ssh
06) Next, make .ssh your working directory by entering:
cd .ssh
07) To list/view the contents of the directory, enter:
ls -a [we used ls -l]
08) To generate your public and private keys, enter:
ssh-keygen -t rsa
The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
09) To compare the previous output of ls and see what new files have been created, enter:
ls -a [we used ls -l]
You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
10) To make your public key the only thing needed for you to ssh to a different machine, enter:
cat id_rsa.pub >> authorized_keys
NOTE: The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes, which we could then download to each node's .ssh dir.
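The merge described in the note can be sketched as a short loop on the teacher station (temp file names as in the note; this assumes the collected files sit in the current directory):

```shell
# Merge every collected public key (temp01..temp24 from the note)
# into one master authorized_keys file for all nodes.
touch authorized_keys
for f in temp*; do
    [ -f "$f" ] || continue   # skip if the glob matched nothing
    cat "$f" >> authorized_keys
done
chmod 600 authorized_keys
```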
[optional] To make it so that only you can read or write the file containing your private key, enter:
chmod 600 id_rsa
[optional] To make it so that only you can read or write the file containing your authorized keys, enter:
chmod 600 authorized_keys
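Taken together, steps 04-10 can be sketched as one script per node (a sketch assuming the standard OpenSSH file names; -N "" supplies the empty passphrase chosen above):

```shell
#!/bin/sh
# Sketch of steps 04-10: generate a keypair and authorize it locally.
mkdir -p "$HOME/.ssh" && chmod 700 "$HOME/.ssh"
cd "$HOME/.ssh"
# -N "" sets an empty passphrase so ssh never prompts for a password
ssh-keygen -q -t rsa -N "" -f id_rsa
cat id_rsa.pub >> authorized_keys
chmod 600 id_rsa authorized_keys
```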
InstantCluster Step 5:
Software Stack II - openMPI
We finally have openSSH installed with public key authentication on 10.5.129.17-10.5.129.20. We tested that today.
Today we also installed openmpi-bin, libopenmpi-dev and gfortran on the same machines:
sudo apt-get install openmpi-bin
sudo apt-get install libopenmpi-dev
sudo apt-get install gfortran
Then we compiled flops.f:
mpif77 -o flops flops.f
Then we ran flops on our quadcores:
mpirun -np 4 flops
We got about 8 GFLOPS! So, we have multicore working on individual PCs; now it's time to scale our job over the cluster!
mpirun -np 16 --hostfile machines flops
We got up to nearly 32 GFLOPS! We made sure all four PCs are identical COTS boxes, have identical firmware, have public-key-authenticated ssh for the user jaeger, and have these 3 files:
The /home/jaeger/machines file is a txt file that looks like this:
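Assuming the four node IPs used above (10.5.129.17-20), with four slots for each quadcore, a typical OpenMPI hostfile would be:

```
# OpenMPI hostfile: one node per line; slots = cores per node (values assumed)
10.5.129.17 slots=4
10.5.129.18 slots=4
10.5.129.19 slots=4
10.5.129.20 slots=4
```

Four nodes times four slots accounts for the -np 16 in the mpirun command above.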
InstantCluster Step 6:
Coding I - Quadrature
InstantCluster Step 7:
Coding II - Mandelbrot
InstantCluster Step 8:
Coding III - Mandel Zoom
InstantCluster Step 9:
Coding IV - POVRay
InstantCluster Step 10:
Coding V - Blender
InstantCluster Step 11:
Coding VI - 3D Animation
2016-2017 MANDATORY MEETINGS
09/14/2016 (organizational meeting)
10/26/2016 (installing Ubuntu 16.10 64bit)
11/09/2016 (installing Ubuntu 16.10 64bit)
12/14/2016 (Pelican HPC DVD)
01/11/2017 (openSSH Public Keys)
02/08/2017 (openMPI Software Stack)
03/22/2017 (Fractal Plots + Zoom Movie)
(03/29/2017 is a make up day)
04/26/2017 (POVRAY 3D Stills + Animation)
05/10/2017 (Blender 3D Animation)
(05/24/2017 is a make up day)
So, what's all this good for aside from making a Fractal Zoom or Shrek Movie?
Mersenne Prime Search
Computer Aided Design (CAD)
Computer Algebra Systems (CAS)
These are but a few examples of using Computer Science to solve problems in Mathematics and the Sciences (STEAM). In fact, many of these applications fall under the heading of Cluster Programming or Supercomputing. These problems typically take too long to process on a single PC, so we need a lot more horsepower. Next time, maybe we'll just use Titan!
Membership (alphabetic by first name):
DanielD(12), JevanyI(12), JuliaL(12), MichaelC(12), MichaelS(12), YaminiN(12)
BenR(11), BrandonL(12), DavidZ(12), GabeT(12), HarrisonD(11), HunterS(12), JacksonC(11), SafirT(12), TimL(12)
BryceB(12), CheyenneC(12), CliffordD(12), DanielP(12), DavidZ(12), GabeT(11), KeyhanV(11), NoelS(12), SafirT(11)
BryanS(12), CheyenneC(11), DanielG(12), HarineeN(12), RichardH(12), RyanW(12), TatianaR(12), TylerK(12)
Graham Smith(12), George Abreu(12), Kenny Krug(12), LucasEager-Leavitt(12)
David Gonzalez(12), Herbert Kwok(12), Jay Wong(12), Josh Granoff(12), Ryan Hothan(12)
Arthur Dysart(12), Devin Bramble(12), Jeremy Agostino(12), Steve Beller(12)
Marc Aldorasi(12), Mitchel Wong(12)
Chris Rai(12), Frank Kotarski(12), Nathaniel Roman(12)
A. Jorge Garcia, Gabriel Garcia, James McLurkin, Joe Bernstein, ... too many to mention here!