Computing Independent Study 2012-2013
MAY UPDATE II
Eureka!!! We finally got the new Shadowfax Cluster running up to 24 nodes, 96 cores and 48 GFLOPS (about 2 GFLOPS per quad-core node)! The fastest cluster in the world, Titan, has about 300,000 CPU cores plus nearly 19,000 GPUs running at roughly 20 PetaFLOPS. Maybe we could run 100 CPU cores and 100 GPU cores next year at 100 GFLOPS!
Next year's CIS(theta) 2013-2014 has 8 members! We are going to look into making a little MandelZoom, POV-Ray or Blender movie!
BTW, we'll never catch up to Titan, especially since they are going to upgrade it to 20 ExaFLOPs. They ran into a SNAFU, however, as they will need a nuclear reactor right next door just to boot up this monster!
We didn't get to meet in April due to Easter break. We just met today, the day after returning to Room 429. Room 429 had to be rebuilt from the roof down over the last 6 months after Hurricane Sandy destroyed the cluster. We have all new lighting, wiring and PCs. We tested these Linux boxes today for the first time using flops.f and mpif77. We now have Intel quad-core i5s. We clocked each core at over 530 MFLOPS and got all 4 cores running at over 2.1 GFLOPS per node. However, there's a DHCP server conflict on the LAN, so we couldn't make the jump from multi-core to grid today. We are hoping to have 100 cores running simultaneously next time. Let's see if we can break 50 GFLOPS for a Shadowfax/CIS(theta) all-time record!
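For the record, the benchmark run goes something like this; the -O3 flag and the -n 4 (one quad-core node) are just examples, not necessarily our exact invocation:
mpif77 -O3 flops.f -o flops
mpiexec -n 4 ./flops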
MARCH UPDATE
We only had one meeting this month, and all we did was set up for Game Day! We had an OpenArena LAN Party all day yesterday. Thanx to KyleS (period 1), fun was had by one and all! All the brownies and cupcakes were great too! Thanx to JessicaP (period 6) and LeslieM (period 8), no one left hungry!
Yesterday (3/22/13) was the Friday before break. It had the feeling of a celebration like the day before XMas break. I suppose that's because we haven't had a break since then.
Not only did Hurricane Sandy close the school for 10 days and condemn the Math building, but it cancelled Midterm Week and February Break too! Thanx a lot, Sandy....
FEBRUARY UPDATE
We finally got around to writing mandelSeq.py and mandelMPI.py using mpi4py, matplotlib and the spectral colormap, but only on 3 cores. Our work is based on Lisandro Dalcin's excellent mpi4py materials. We ran mandelSeq.py as an executable file from the terminal just like any standard Python program. To accomplish this we added:
#!/usr/bin/python
as the first line of the file and ran:
chmod 755 mandelSeq.py
to make it executable. mandelMPI.py is executed the same way we ran helloMPI.py:
mpiexec -n 3 python mandelMPI.py
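Our actual mandelMPI.py isn't reproduced here, but here's a minimal sketch of the approach, adapted from Dalcin's mpi4py demos. The image size, the window on the complex plane and the interleaved row split are assumptions for illustration, not necessarily what we used:
#!/usr/bin/python
# mandelMPI.py (sketch): each rank computes an interleaved set of rows,
# then rank 0 gathers them and renders the image with matplotlib.
from mpi4py import MPI
import numpy as np
import matplotlib
matplotlib.use('Agg')                      # render straight to a file, no display needed
import matplotlib.pyplot as plt

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

W, H, maxit = 600, 400, 100                # assumed resolution and iteration cap
x1, x2, y1, y2 = -2.0, 1.0, -1.0, 1.0      # assumed window on the complex plane

def mandel(c, maxit):
    # count how many iterations z -> z*z + c takes to escape |z| > 2
    z = 0j
    for i in range(maxit):
        z = z*z + c
        if abs(z) > 2.0:
            return i
    return maxit

rows = range(rank, H, size)                # row j is computed by rank j % size
local = np.zeros((len(rows), W), dtype='i')
for r, j in enumerate(rows):
    y = y1 + j*(y2 - y1)/H
    for i in range(W):
        x = x1 + i*(x2 - x1)/W
        local[r, i] = mandel(complex(x, y), maxit)

pieces = comm.gather(local, root=0)        # rank 0 collects everyone's rows
if rank == 0:
    image = np.zeros((H, W), dtype='i')
    for src in range(size):
        image[src::size, :] = pieces[src]  # interleave the rows back together
    plt.imshow(image, cmap='spectral')     # 'spectral' is 'nipy_spectral' in newer matplotlib
    plt.savefig('mandelMPI.png')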
JANUARY UPDATE #3
OOPS, we got our signals crossed and had to postpone this meeting! Next time I think we'll try to render some Mandelbrot Fractals. If we ever get back to our old room, we could make a FractalZoom movie using 100 cores instead of just 3....
JANUARY UPDATE #2
Finally, we got helloMPI.py working today on a single node with 3 cores! We wrote the following code:
from mpi4py import MPI

rank = MPI.COMM_WORLD.Get_rank()
size = MPI.COMM_WORLD.Get_size()
name = MPI.Get_processor_name()

print("Hello, World! "
      "I am process %d of %d on %s" %
      (rank, size, name))
We saved this text in the file helloMPI.py in the pelicanHPC home dir. Then, from a shell in the home dir, we executed this code by:
mpiexec -n 3 python helloMPI.py
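With 3 processes on one node you get something like the following, though the line order varies from run to run and the processor name is just whatever hostname pelicanHPC assigned (pelican1 here is a stand-in):
Hello, World! I am process 0 of 3 on pelican1
Hello, World! I am process 1 of 3 on pelican1
Hello, World! I am process 2 of 3 on pelican1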
We downloaded the new ISO with mpi4py from http://www.pelicanhpc.org and followed these tutorials: http://pareto.uab.es/mcreel/PelicanHPC/Tutorial/PelicanTutorial.html, http://pareto.uab.es/mcreel/PelicanHPC/pelicanhpc.pdf, http://pelicanhpc.788819.n4.nabble.com/How-can-i-find-out-which-are-processes-are-being-sent-to-which-node-td3034955.html and http://www.linuxpromagazine.com/Issues/2009/103/CLUTTER-TO-CLUSTER
JANUARY UPDATE #1
We tried to isolate our LAN from other DHCP servers but could not. So we still only have individual 3-core SMP boxes running at about 1.2 GFLOPs. We also tried our hand at mpi4py, but we have a way to go in that department too!
UPDATE: mpi4py was left out of the current pelicanHPC by mistake! No wonder mpi4py didn't work! Look here:
http://pelicanhpc.788819.n4.nabble.com/pelicanhpc-2-9-amd64-and-mpi4py-td4650446.html
We finally had a chance to meet this month in our new PC Lab! We got pelicanHPC to run 3 cores at about 1.2 GFLOPS. We have AMD Phenom IIs. These CPUs are supposed to be quad-cores. It seems one core is dead. Even so, that's not a bad start. However, we could only run pelican in SMP mode. We could not PXE boot any other nodes. We are also looking into Flame Fractals.
NOVEMBER UPDATE
Sorry to say that we had no meetings this month. I was hoping for 3 meetings, but Hurricane Sandy changed everything! We are in a new room where we may be able to try out live Linux CD-based MPI clusters next month. Stay tuned!
OCTOBER UPDATE #2
RELATED POST: http://shadowfaxrant.blogspot.com/2012/11/pc-lab-period-of-reconstruction-at.html
Here are the steps we followed for a minimal install of the Student Stations (64bit Athlons):
Step1:
Reboot each Linux box with the current CD. Answer some basic questions about time zone, userid, passwd, no login on bootup, etc.
Step2:
Reboot each Linux box without the CD. Make sure to configure the gigE cards and the proxy server with the settings below (a sketch of the equivalent config file entry follows this list):
IP: 10.5.129.x
NetMask: 255.255.0.0
GateWay: 10.5.0.254
DNS: 10.5.0.254
Proxy: 10.0.0.125
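However you enter these (GUI or config file), a minimal sketch of the equivalent static entry in /etc/network/interfaces looks like this; the interface name eth0 is an assumption, the x in the address still has to be filled in per station, and the proxy is configured separately:
auto eth0
iface eth0 inet static
    address 10.5.129.x
    netmask 255.255.0.0
    gateway 10.5.0.254
    dns-nameservers 10.5.0.254
# dns-nameservers needs resolvconf (12.04); on older releases DNS goes in /etc/resolv.conf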
Step3:
Configure System Settings as desired (unit circle trig calculator background, no screensaver, etc).
Step4:
We had to switch Software Sources in the Ubuntu Software Center (edit/source) to Main before this would work:
sudo apt-get update
sudo apt-get upgrade
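One more proxy note: apt doesn't necessarily pick up the desktop proxy setting, so if update/upgrade can't reach the mirrors, a one-line apt config file does it. The filename and the port 8080 below are assumptions; substitute whatever our proxy actually listens on:
Acquire::http::Proxy "http://10.0.0.125:8080/";
(saved as, say, /etc/apt/apt.conf.d/95proxy)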
Step5:
Now, we could use the Ubuntu Software Center to install WINE.
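For anyone scripting this instead of clicking through the Software Center, the command-line equivalent should just be the stock wine package:
sudo apt-get install wine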
Step6:
I copied my VTI83 and VTI89 directories from my memory stick to the Desktop. Then, after editing preferences to have VTI open with WINE, I configured each calculator.
Step7:
I will edit my /etc/crontabs tomorrow....
Step8:
We haven't decided what else we may have to install (local SAGE server, JRE, openSSH, openMPI, etc.). We'll have to think about that! Here's some info on install fests from prior years:
http://shadowfaxrant.blogspot.com/2011/05/2-so-many-hard-drives-so-little-time.html
http://shadowfaxrant.blogspot.com/2011/01/then-god-mage-midterm-week-and-saw-that.html
http://shadowfaxrant.blogspot.com/2010/06/so-many-linux-distros-so-little-time.html
OCTOBER UPDATE
SEPTEMBER UPDATE
We decided to try out the new Ubuntu Linux 64bit Desktop 12.04, nicknamed Precise Pangolin. So we surfed on over to http://www.ubuntu.com and downloaded the latest ISO. We burned the CD, rebooted a guinea pig box and reinstalled it. This should be a simple procedure as we no longer use dualboot or dualnic boxes. However, we ran into a SNAFU right away! Intranet gigE works fine, but we couldn't get on the Internet. Oops, we forgot the network proxy. If at first you don't succeed, try, try again!
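The fix is just to point the box at our proxy (10.0.0.125, as in the install notes above). A minimal sketch, assuming port 8080: add these lines to /etc/environment, or set the same thing under System Settings > Network > Network Proxy:
http_proxy="http://10.0.0.125:8080/"
https_proxy="http://10.0.0.125:8080/"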
Ubuntu Release History
4.10 Warty Warthog (mammal)
5.04 Hoary Hedgehog (mammal)
5.10 Breezy Badger (mammal)
6.06 Dapper Drake (bird)
6.10 Edgy Eft (amphibian)
7.04 Feisty Fawn (mammal)
7.10 Gutsy Gibbon (mammal)
8.04 Hardy Heron (bird)
8.10 Intrepid Ibex (mammal)
9.04 Jaunty Jackalope (mythical beast)
9.10 Karmic Koala (mammal)
10.04 Lucid Lynx (mammal)
10.10 Maverick Meerkat (mammal)
11.04 Natty Narwhal (mammal)
11.10 Oneiric Ocelot (mammal)
12.04 Precise Pangolin (mammal)
12.10 Quantal Quetzal (bird) release: 10/18
Guardian, our ssh server, is running 10.04 32bit. Guardian has a dualcore 32bit Intel Xeon processor with 2GB RAM and a 512GB RAID array.
Caprica, our ftp server, is running 10.04 32bit. Caprica has a dualcore 32bit Intel Xeon processor with 2GB RAM and a 512GB RAID array.
Shadowfax, our teacher station, is running 11.10 32bit. Shadowfax has a dualcore 64bit AMD Athlon processor with 2GB RAM and a 256GB hdd. We use a 32bit OS here as SmartNotebook doesn't run on 64bit....
Alpha-Omega, our student stations, are running 11.04 64bit. These Linux boxes, like Shadowfax, have dualcore 64bit AMD Athlon processors with 2GB RAM and a 256GB hdd.
We are only upgrading Alpha-Omega to 12.04 (or 12.10 if it's available when we upgrade in a couple of weeks). We are also waiting for a hardware upgrade for Alpha-Omega to AMD quad-core Phenoms!
(1) Wreath of the Unknown Server:
We visited our first ssh server, Colossus, which is still in the switch room though dormant. I set it up for the first time in 1995 running Slackware Linux. Colossus ran for 12 years straight, 24x7, never having to shut down, reboot, or even have anything re-installed! Colossus would not die. We finally just replaced Colossus with a dual-core Intel Xeon box complete with a 1TB RAID array. Old Linux boxes never die, they just fade away...
(2) Display Case Unveiled:
We took down a ton of fractal prints and ray tracings from Room 429 to the 2 cases on the 1st floor near the art wing. We decorated both cases as best we could and left before anyone saw us. Must have been gremlins.
(3) Recruiting 2012:
We decided that we did not have a good pool of candidates to recruit more CIS(theta) members for this year's Geek Squad, so we tabled that topic.
(4) Planning 2012:
Next meeting would have been 9/28, but that's Yom Kippur. So, we have to wait another 2 weeks after that for 10/10, at which point Ubuntu 12.10 64bit Desktop Edition should be available for a mini install fest. After that, we may use bootable cluster Linux CD distros to learn MPI.
==================================
What we are researching I (Sept)
(look what this school did in the 80s):
Thomas Jefferson High courses
http://academics.tjhsst.edu/compsci/parallel/
Thomas Jefferson High paper
http://www.tjhsst.edu/~rlatimer/techlab07/BWardPaperQ3-07.pdf
Thomas Jefferson High ftp
http://www.tjhsst.edu/~rlatimer/techlab07/
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
What we are researching II (Oct)
(clustering environments):
Parallel Virtual Machine
http://www.csm.ornl.gov/pvm/
openMOSIX
http://openmosix.sourceforge.net/instant_openmosix_clusters.html
Message Passing Interface
http://www.open-mpi.org/
What we are researching III (Dec)
(instant MPI clusters via liveCDs):
Cluster By Night
BCCD
pelicanHPC
Flame Fractals
http://en.wikipedia.org/wiki/Fractal_flame
What we are researching IV (Jan)
==================================
Today's Topic:
CIS(theta) 2012-2013 - flops.f and Intel i5s
Today's Attendance:
CIS(theta) 2012-2013: Kyle Seipp
Today's Reading:
Chapter 8: Building Parallel Programs (BPP) using clusters and Parallel Java
==================================
Membership (alphabetic by first name):
CIS(theta) 2012-2013:
Kyle Seipp
CIS(theta) 2011-2012:
Graham Smith, George Abreu, Kenny Krug, Lucas Eager-Leavitt
CIS(theta) 2010-2011:
David Gonzalez, Herbert Kwok, Jay Wong, Josh Granoff, Ryan Hothan
CIS(theta) 2009-2010:
Arthur Dysart*, Devin Bramble, Jeremy Agostino, Steve Beller
CIS(theta) 2008-2009:
Marc Aldorasi, Mitchel Wong*
CIS(theta) 2007-2008:
Chris Rai, Frank Kotarski, Nathaniel Roman
CIS(theta) 1988-2007:
A. Jorge Garcia, Gabriel Garcia, James McLurkin, Joe Bernstein, ... too many to mention here!
*nonFB
==================================
Applied Math, Physics and CS
http://shadowfaxrant.blogspot.com
http://www.youtube.com/calcpage2009
2013 NYS Secondary Math PAEMST Nominee
Happy Clustering,