- If you have no .ssh directory in your home directory, ssh to some other machine in the lab and then press Ctrl-d to close the connection; this creates .ssh and some related files.
- From your home directory, make .ssh secure by entering:
chmod 700 .ssh
- Next, make .ssh your working directory by entering:
cd .ssh
- To list/view the contents of the directory, enter:
ls -a [we used ls -l]
- To generate your public and private keys, enter:
ssh-keygen -t rsa
The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the passphrase you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
- To compare the previous output of ls and see what new files have been created, enter:
ls -a [we used ls -l]
You should see id_rsa, containing your private key, and id_rsa.pub, containing your public key.
- To make your public key the only thing needed for you to ssh to a different machine, enter:
cat id_rsa.pub >> authorized_keys
[The Linux boxes on our LAN, soon to be a cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes, which we could then download to each node's .ssh dir.]
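The aggregation step in the bracketed note above can be sketched as a short shell loop. This is only a sketch: the ~/keys working directory is an assumption, and the tempNN filenames come from the note.

```shell
# Build one master authorized_keys file from the per-node key copies
# (temp01-temp24 in the note above). The ~/keys directory is hypothetical.
cd ~/keys
rm -f authorized_keys
for f in temp*; do
    cat "$f" >> authorized_keys
done
# authorized_keys now holds one public key per node, ready to download
# to each node's .ssh directory.
```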
- [optional] To make it so that only you can read or write the file containing your private key, enter:
chmod 600 id_rsa
- [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
chmod 600 authorized_keys
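For reference, the whole key-setup walkthrough above can be collapsed into a few non-interactive commands. This is a sketch of the same steps: -N "" supplies the empty passphrase so ssh-keygen never prompts, matching the press-Enter-at-both-prompts advice.

```shell
# One-shot version of the steps above, run from your home directory.
chmod 700 ~/.ssh
cd ~/.ssh
ssh-keygen -t rsa -N "" -f id_rsa     # empty passphrase, default filename
cat id_rsa.pub >> authorized_keys     # let your own key log you in
chmod 600 id_rsa authorized_keys      # keep key and keyring private
```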
InstantCluster Step 5: Software Stack II (Meeting V)
We then installed openMPI (we had a lot fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev. However, the way our firewall is set up at school, I can never update my apt-get sources files properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!
Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (any node can be a master):
mpif77 -o flops flops.f
and tested openMPI, getting just under 800 MFLOPS using 2 cores (one PC):
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs, or nodes, with 2 cores each, for example). Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh.
mpirun -np 4 --hostfile machines flops
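The "machines" hostfile itself is just the node IPs, one per line. For our 10.5.129.x LAN it could be generated like this (a sketch; the 1-24 range comes from the earlier bracketed note, so adjust it to the nodes you actually have):

```shell
# Generate a hostfile listing every node's IP, one per line.
for i in $(seq 1 24); do
    echo "10.5.129.$i"
done > machines
# Then point mpirun at it, e.g.:
# mpirun -np 4 --hostfile machines flops
```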
Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dual-core processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Maverick 10.04 32bit ... and ... these new PCs were supposed to be quad-cores! We are still awaiting shipment.
InstantCluster Step 6: Scaling the cluster
UPDATE: 2011.1126 (Meeting VI)
Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! This is how we streamlined the process:
(1) adduser jobs and login as jobs
(2) go to http://packages.ubuntu.com and install openssh-server from the natty repository
(3) create /home/jobs/.ssh dir and cd there
(4) run ssh-keygen -t rsa
(5) add new public keys to /home/jobs/.ssh/authorized_keys on all nodes
(6) add new IPs to /home/jobs/machines on all nodes
(7) go to http://packages.ubuntu.com and install openmpi-bin, gfortran and libopenmpi-dev from the natty repository
(8) download flops.f to /home/jobs from our ftp site, then compile and run:
mpif77 -o flops flops.f and
mpirun -np 2 flops or
mpirun -np 4 --hostfile machines flops
NB: since we are using the same hardware, firmware and compiler everywhere, we don't need to recompile flops.f on every box, just copy flops from another node!
(9) The secret is setting up each node identically.
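Since step (9) is all about keeping every node identical, a small loop over the machines file can push the same files everywhere. This is a sketch: the paths and the jobs user come from the steps above, and it assumes passwordless ssh between nodes is already working.

```shell
# Copy the shared job files to every node listed in the machines file.
while read -r ip; do
    scp /home/jobs/flops /home/jobs/machines "jobs@$ip:/home/jobs/"
done < /home/jobs/machines
```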
UPDATE: 2011.1214 (Meeting VII)
We had 5 members slaving away today. Nodes 19-25 were running at about 5 GFLOPS last meeting. Today we added nodes 10, 11, 12, 17 and 18. However, we ran into some errors when testing more than the 14 cores we had last time. We should now have 24 cores and nearly 10 GFLOPS but that will have to wait until next time when we debug everything again....
UPDATE: 2012.0111 (Meeting VIII)
We found that some nodes did not have gfortran installed and many of the authorized_keys files were inconsistent. So, we made sure every node had a user called jobs. Then we made sure every node had openssh-server, openmpi-bin, openmpi-doc (optional), libopenmpi-dev and gfortran installed. We generated all the public ssh keys and copied them over to one master authorized_keys file on shadowfax using ssh. Then we copied the master file back to each node over ssh to /home/jobs/.ssh, and we tested everything with flops. Then I wrote this on FB:
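The re-sync we did on shadowfax can be sketched as two loops over the machines file. This is a sketch under assumptions: the IPs and paths follow the earlier steps, and it presumes each node already has its own id_rsa.pub and accepts ssh logins as jobs.

```shell
# On shadowfax: gather every node's public key into one master file...
> authorized_keys
while read -r ip; do
    ssh "jobs@$ip" cat /home/jobs/.ssh/id_rsa.pub >> authorized_keys
done < machines
# ...then push the identical master file back out to every node.
while read -r ip; do
    scp authorized_keys "jobs@$ip:/home/jobs/.ssh/"
done < machines
```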
What we are researching V (we used to use the BCCD liveCD, look at their openMPI code):
Conway's 2D Game of Life
N-Body Orbits
What we are researching IV (maybe we can use Python on our MPI cluster?):
Parallel Python
IPython
What we are researching III (look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
openMOSIX and kandel were fun in the 2000s.
What we are researching II (look what other people are doing with MPI):
MPI intro, nice!
MPI on Ubuntu
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
===================================================
What we are researching I (look what this school did in the 80s and 90s):
Thomas Jefferson High courses
Thomas Jefferson High paper
Thomas Jefferson High ftp
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
===================================================
Today's Topic: CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VIII
Today's Attendance: CIS(theta) 2011-2012: GeorgeA, KennyK, LucasE
Today's Reading: Chapter 5: Building Parallel Programs (BPP) using clusters and parallelJava
===================================================
Well, that's all folks, enjoy!