- If you have no .ssh directory in your home directory, ssh to some other machine in the lab, then press Ctrl-d to close the connection; this creates .ssh and some related files.
- From your home directory, make .ssh secure by entering:
chmod 700 .ssh
- Next, make .ssh your working directory by entering:
cd .ssh
- To list/view the contents of the directory, enter:
ls -a [we used ls -l]
- To generate your public and private keys, enter:
ssh-keygen -t rsa
The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for a passphrase, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
- To compare the previous output of ls and see what new files have been created, enter:
ls -a [we used ls -l]
You should see id_rsa, containing your private key, and id_rsa.pub, containing your public key.
- To make your public key the only thing needed for you to ssh to a different machine, enter:
cat id_rsa.pub >> authorized_keys
[The Linux boxes on our LAN, soon to be a cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each node's id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes, which we could then download to each node's .ssh dir.]
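The aggregation step in the note above can be sketched as a short loop. The temp01-temp24 file names are the ones used in the note; the FAKEKEY contents here are placeholders standing in for each node's real id_rsa.pub, so the sketch is self-contained and safe to run anywhere:

```shell
# Sketch of building one master authorized_keys from 24 per-node key files.
workdir=$(mktemp -d)
cd "$workdir"
for n in $(seq -w 1 24); do
    echo "ssh-rsa FAKEKEY$n user@node$n" > "temp$n"   # stand-in for node n's id_rsa.pub
done
# The actual step from the note: concatenate all 24 keys into one master file
cat temp* >> authorized_keys
chmod 600 authorized_keys
wc -l < authorized_keys   # 24: one key per node
```

In the real setup, replace the placeholder loop with the temp files uploaded from each node, then push the finished authorized_keys back to every node's .ssh directory.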
- [optional] To make it so that only you can read or write the file containing your private key, enter:
chmod 600 id_rsa
- [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
chmod 600 authorized_keys
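The whole passwordless-ssh setup above can be collected into one script. This is a sketch: HOME is pointed at a scratch directory so it cannot clobber a real key (drop that line to run it for real), and it assumes OpenSSH's ssh-keygen is installed:

```shell
# End-to-end sketch of the key setup steps above.
HOME=$(mktemp -d)                      # scratch HOME; remove this line for real use
mkdir -p "$HOME/.ssh"
chmod 700 "$HOME/.ssh"
cd "$HOME/.ssh"
# -N "" gives an empty passphrase and -f names the key file, matching the
# press-Enter-at-every-prompt walkthrough above
ssh-keygen -q -t rsa -N "" -f id_rsa
cat id_rsa.pub >> authorized_keys
chmod 600 id_rsa authorized_keys
```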
InstantCluster Step 5: Software Stack II
We then installed openMPI (we had far fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting, when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev
However, the way our firewall is set up at school, I can never update my apt-get sources files properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!
Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (any node can be a master):
mpif77 -o flops flops.f
and tested openMPI; we got just under 800 MFLOPS using 2 cores (one PC):
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs, or nodes, with 2 cores each, for example). Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh.
mpirun -np 4 --hostfile machines flops
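The machines hostfile described above can be sketched like this. The 10.5.129.x IPs are from the lab's range mentioned earlier; /tmp/jobs stands in for /home/jobs here so the sketch is safe to run anywhere:

```shell
# Sketch of a 2-node hostfile for mpirun (one IP per line, as described above).
mkdir -p /tmp/jobs
cd /tmp/jobs
cat > machines <<'EOF'
10.5.129.1
10.5.129.2
EOF
# With 2 dual-core nodes listed, 4 processes can be launched from the master:
#   mpif77 -o flops flops.f
#   mpirun -np 4 --hostfile machines flops
```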
Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dual-core processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Maverick 10.10 32bit ... and ... these new PCs were supposed to be quad-cores! We are still awaiting shipment.
What we are researching III (look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
openMOSIX and kandel were fun in the 2000s.
What we are researching II (look what other people are doing with MPI):
MPI intro, nice!
MPI on Ubuntu
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
===================================================
What we are researching I (look what this school did in the 80s and 90s):
Thomas Jefferson High courses
Thomas Jefferson High paper
Thomas Jefferson High ftp
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
===================================================
Today's Topic: CIS(theta) 2011-2012 - Install openMPI and Stress the Cluster! - Meeting V
Today's Attendance: CIS(theta) 2011-2012: GeorgeA, GrahamS, KennyK, LucasE
Today's Reading: Chapter 3, Building Parallel Programs (BPP) using clusters and parallelJava
===================================================
Well, that's all folks, enjoy!