Saturday, November 13, 2010

CIS(theta) Meeting V (2010-2011) - pelicanHPC CD on Overdrive!

Aim: 
pelicanHPC CD on Overdrive!

Attending: 
DavidG, HerbertK, JayW, JoshG, RyanH

Reading: 
Building Parallel Programs, Chapter 5

Research 5: 
pelicanHPC website
http://idea.uab.es/mcreel/ParallelKnoppix
pelicanHPC tutorial
http://pareto.uab.es/mcreel/PelicanHPC/Tutorial/PelicanTutorial.html
pelicanHPC forum
http://pelicanhpc.788819.n4.nabble.com
pelicanHPC article
http://www.linux-magazine.com/w3/issue/103/030-035_pelicanHPC.pdf
pelicanHPC-based (BIRG HPC)
http://birg1.fbb.utm.my/birghpc
pelicanHPC-based (Kestrel HPC)
http://kestrelhpc.sourceforge.net
pelicanHPC-like
http://www.ehu.es/AC/ABC.htm
pelicanHPC-like (Cluster By Night)
http://www.dirigibleflightcraft.com/index.html

This week we went into pelicanHPC overdrive!  pelicanHPC was developed by Michael Creel at the Universitat Autònoma de Barcelona for his research in econometrics using Octave and MPI.  We downloaded the ISO and burned one CD.  We then booted one of our 64-bit dual-core AMD Athlons from that CD to try out pelicanHPC, and netbooted the remaining 24 PCs (a total of 50 2GHz cores) from that head node via PXE.  The video above follows the tutorial on the pelicanHPC website.  Take a look at the BIRG website, a pelicanHPC-based version for bioinformatics and protein folding; it has a tutorial too.  Kestrel looks interesting as well, but it has to be installed on a hard drive, and I don't think I want to give up the Ubuntu desktop we use in class during the day; our clustering experiments will be conducted after hours.  Also take a look at Cluster By Night, which uses ssh instead of PXE and Ruby instead of C++ for MPI jobs!
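
For anyone following along, a good first sanity check on a freshly netbooted cluster is a trivial MPI job that makes every core report in. Here's a minimal sketch in C++ (our own example, not pelicanHPC's demo; it assumes the usual MPI compiler wrapper mpic++ and launcher mpirun are on the image, and the file name hello_mpi.cpp is just for illustration):

// hello_mpi.cpp -- minimal MPI "roll call" sketch
#include <mpi.h>
#include <cstdio>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);               // start the MPI runtime

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank); // this process's ID
    MPI_Comm_size(MPI_COMM_WORLD, &size); // total number of processes

    char host[MPI_MAX_PROCESSOR_NAME];
    int len = 0;
    MPI_Get_processor_name(host, &len);   // which node we landed on

    std::printf("Hello from rank %d of %d on %s\n", rank, size, host);

    MPI_Finalize();                       // shut the runtime down
    return 0;
}

Compile with "mpic++ hello_mpi.cpp -o hello_mpi" and launch with "mpirun -np 50 ./hello_mpi" (or hand mpirun a hostfile listing the netbooted nodes); you should get one line back per core.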

We ran into a bit of a SNAFU, however: the client PCs got stuck partway through the PXE boot process (so at least PXE boot starts).  So, I dragged out a crossover Ethernet cable (http://www.littlewhitedog.com/content-8.html) that I had spliced together ahead of time for just such an eventuality.

This way we could fool 2 PCs (4 cores on the Athlons or 8 cores on the Xeons) into "thinking" they were still on an isolated LAN, since PXE boot only works over Ethernet on a network with no other DHCP servers.

We attached the crossover cable to the eth1 Ethernet cards we usually use for the private Linux LAN (eth0 is on a public Windows LAN), but we ran into the same problem!

We also discussed using pelicanHPC or Parallel Java "out of the box."  In other words, these clustering solutions work fine on a single multicore PC; the problem is networking more than one node.  We should have no problem making pelicanHPC or PJ use 1 or 2 cores on one of our AMD Athlons, or 1, 2 or 4 cores on one of our Intel Xeons!
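
One quick way to see that single-node behavior is to check how many hardware threads a given box actually exposes before handing it to pelicanHPC or PJ. A minimal sketch in C++ (nothing cluster-specific here; std::thread::hardware_concurrency is from C++0x/C++11, so it assumes a compiler flag like -std=c++0x):

// cores.cpp -- report how many hardware threads this node exposes
// (may print 0 if the library cannot determine the count)
#include <iostream>
#include <thread>

int main() {
    // Expect 2 on our dual-core Athlons and 4 on the quad-core Xeons.
    unsigned n = std::thread::hardware_concurrency();
    std::cout << n << " hardware threads available\n";
    return 0;
}

With that number in hand, "mpirun -np 2 ./hello_mpi" on one Athlon (or -np 4 on one Xeon) exercises the whole MPI stack with no networking involved at all.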

Well, "That's All Folks" for this meeting. We'll tackle the new server again next meeting in 2 weeks. Check our new Facebook Group, CIS(theta), for our next event!  BTW, this 1-hour meeting every 2 weeks is mandatory.

Happy Clustering,
