## Monday, March 26, 2012

### Quarter III, Week 7: ScreenCasts, SmartNotes and Code, oh my!

This week saw the end of several big units, so I ended up blogging about those classes in other blog posts! That makes this week's post a little short, but the posts it points to are not so short! BTW, this week's Ignite Tuesday was about a kid and his 3D Printer (see above)!

AP Calculus BC finished Chapter 10, talking about Improper Integrals, p-Integrals and Convergence Tests for Series with all positive constant terms, including the Comparison Test and the Integral Test: http://shadowfaxrant.blogspot.com/2012/03/teaching-ap-calculus-unit-10-series.html

preCalculus for Seniors almost finished Chapter 11 on Matrix Algebra: http://shadowfaxrant.blogspot.com/2012/03/teaching-precalculus-chapter-11.html

AP Computer Science finished a review project called Chess960.java about using 1D and 2D static arrays vs. ArrayLists: http://shadowfaxrant.blogspot.com/2012/03/teaching-apcs-chapter-8-arrays-and.html

Calculus Research Lab continued Riemann Sums! We finished the Derivatives text and started the AntiDerivatives text. We will also use the Differential Equations text. These texts are all free PDFs from http://www.sagemath.org

CRL 1.3 Riemann Sums Intro
https://sage.math.clemson.edu:34567/home/pub/297
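
To give a flavor of what CRL 1.3 covers, here's a minimal left-endpoint Riemann sum in plain Python (my own toy example with f(x) = x^2 on [0, 2], not the Sage worksheet itself):

```
def left_riemann_sum(f, a, b, n):
    """Approximate the integral of f on [a, b] with n left-endpoint rectangles."""
    dx = (b - a) / float(n)
    return sum(f(a + i * dx) for i in range(n)) * dx

# f(x) = x^2 on [0, 2]; prints about 2.663 (the exact integral is 8/3, about 2.667).
print(left_riemann_sum(lambda x: x**2, 0, 2, 1000))
```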

HTH and I hope you enjoyed this week's notes!

Learning with Technology,

### Teaching preCalculus: Chapter 11 Matrices!

We are almost finished with a unit in my preCalculus for Seniors class. We did Chapter 11 on Matrices in the Sullivan & Sullivan text pictured above. After reviewing the Elimination Method and Points of Intersection for Linear-Linear Systems, we had a lot of fun with Augmented Matrices, Cramer's Rule and Matrix Algebra. We even did some nonlinear Systems, and next week we will do Linear Programming to finish up the chapter. Enjoy!
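
For example (my own quick 2x2 refresher, not one of Sullivan's exercises), Cramer's Rule on a small linear system looks like this:

```
\text{Solve } \begin{cases} 2x + y = 5 \\ x - y = 1 \end{cases}
\qquad
x = \frac{\begin{vmatrix} 5 & 1 \\ 1 & -1 \end{vmatrix}}
         {\begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix}}
  = \frac{-6}{-3} = 2,
\qquad
y = \frac{\begin{vmatrix} 2 & 5 \\ 1 & 1 \end{vmatrix}}
         {\begin{vmatrix} 2 & 1 \\ 1 & -1 \end{vmatrix}}
  = \frac{-3}{-3} = 1.
```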

Well, that's all folks!

## Sunday, March 25, 2012

### Teaching AP Calculus: Unit 10 Series Convergence Tests!

We just finished a 2 week unit on convergence tests for series with all positive constant terms in the fabulous Finney, Demana, Waits, Kennedy text. We are warming up to Power Series convergence and Lagrange Error terms next week. That's another 2 week unit (before April break) followed by 2 weeks on Vector/Polar/Parametrics (after April break) and AP Review already! How time flies when you are having fun: Morituri te salutamus!
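
For instance (my own illustration, not an exercise from the text), the kind of argument we practiced pairs the p-series fact with the Comparison Test:

```
\sum_{n=1}^{\infty} \frac{1}{n^p} \text{ converges iff } p > 1,
\qquad\text{so}\qquad
0 \le \frac{1}{n^2 + 1} \le \frac{1}{n^2}
\;\Rightarrow\;
\sum_{n=1}^{\infty} \frac{1}{n^2 + 1} \text{ converges by the Comparison Test.}
```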

Well, that's all folks!

### Teaching APCS: Chapter 8 Arrays and ArrayLists!

After studying Chapter 8 about static arrays and the ArrayList class in Cay Horstmann's great text "Big Java," I thought it was time for a little Case Study of my own. Unfortunately, with the advent of Case Studies on the AP Exam, I find I have little time for my own labs. However, we went ahead and wrote a program called Chess960 to create random chess boards according to Bobby Fischer's rules. Wow, this project surely took on a life of its own!

We wrote V1, which generated one such board. Then we wrote V2, which refactored or simplified V1's code. Then V3 generated all 960 combinations, printing them on screen and storing them to a file, 960.txt. We just finished V4, which reads one line from the 960 lines in 960.txt to set up a board with Half-ASCII graphics. V4 is actually playable, as we added a move() method too!

With the AP Exam looming, we have to stop this project now and finish the real Case Study: GridWorld. So, the move() method is very simple. You can play "Jr. Chess" without castling, en passant or piece promotion. So, there's much more we could do with this project, such as checking for valid moves, recording unfinished games, resuming unfinished games and checking for checkmates or stalemates. However, I met my goal of reinforcing concepts related to creating and using 1D and 2D static arrays and 1D ArrayLists. I hope you find this helpful whether you are an APCS student or teacher. Enjoy!
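
If you just want the gist of what V1 does, here is a minimal sketch of the Fischer-random back-rank shuffle in Python (the class versions below, V1-V4, were written in Java and do much more, so treat this as an outline only):

```
import random

def chess960_back_rank():
    """Generate one random Chess960 back rank following Fischer's rules:
    bishops on opposite-colored squares, king somewhere between the two rooks."""
    while True:
        rank = list("RNBQKBNR")
        random.shuffle(rank)
        bishops = [i for i, p in enumerate(rank) if p == "B"]
        rook_left, rook_right = [i for i, p in enumerate(rank) if p == "R"]
        king = rank.index("K")
        # Keep only arrangements that satisfy both constraints.
        if bishops[0] % 2 != bishops[1] % 2 and rook_left < king < rook_right:
            return "".join(rank)

print(chess960_back_rank())   # e.g. "RNBKQBNR"
```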

Chess960V1:

Chess960V2:
Chess960V3:
Chess960V4:

Well, that's all folks!

## Monday, March 19, 2012

### CIS(theta) 2011-2012 - Compiling MPI Code! - Meeting XI

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. Our new Shadowfax Cluster is coming along quite well. We have a nice base OS in the 64bit Ubuntu 11.04 Natty Narwhal Desktop on top of our AMD dualcore Athlons and gigE LAN. The Unity Desktop isn't that different from the Gnome Desktop we've been using these past few years on Fedora and Ubuntu. Natty is proving very user friendly and easy to maintain! We installed openMPI and we used flops.f to scale our cluster up to 50 cores! Remember, we needed openSSH public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node as needed. We created a user common to all cluster nodes called "jobs" in memory of Steve Jobs so the cluster user can simply log into one node and be logged into all nodes at once (you can actually ssh into each node as "jobs" without specifying a userid or passwd)!
InstantCluster Step 1: Infrastructure
Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial but will become crucial when running the entire cluster for extended periods of time! Also, you need to have enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was taken care of for us already!

InstantCluster Step 2: Hardware
You need up-to-date Ethernet switches plus Ethernet cards and cores as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already setup for us. Also, we have 64bit dual-core AMD Athlons and our HP boxes have 2GB of RAM. We are still waiting for our quad-core AMD Phenom upgrade!

InstantCluster Step 3: Firmware (Meeting III)
We wasted way too much time two years ago (2009-2010 CIS(theta)) trying out all kinds of Linux distros looking for a good 64bit base for our cluster. Last year (2010-2011 CIS(theta)) we spent way too much time testing out different liveCD distros. Last year, we also downgraded from 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but was proving to be a pain to maintain. Just to name one problem, jre and flash were hard to install and update on FireFox. Two years ago, we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu! This year, 64bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster!

InstantCluster Step 4: Software Stack I (Meeting IV)
On top of Ubuntu we need to add openSSH, public-key authentication (step 4) and openMPI (step 5). Then we have to scale the cluster (step 6). In steps 7-10, we can discuss several applications to scatter/gather over the cluster, whether it be graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com, then:
1. If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files.
2. From your home directory, make .ssh secure by entering:
`chmod 700 .ssh`
3. Next, make .ssh your working directory by entering:
`cd .ssh`
4. To list/view the contents of the directory, enter:
`ls -a [we used ls -l]`
5. To generate your public and private keys, enter:
`ssh-keygen -t rsa`
The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the password you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
6. To compare the previous output of ls and see what new files have been created, enter:
`ls -a [we used ls -l]`
You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
7. To make your public key the only thing needed for you to ssh to a different machine, enter:
`cat id_rsa.pub >> authorized_keys`
[The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes that we could just download to each node's .ssh dir. See the sketch after this list.]
8. [optional] To make it so that only you can read or write the file containing your private key, enter:
`chmod 600 id_rsa`
[optional] To make it so that only you can read or write the file containing your authorized keys, enter:
`chmod 600 authorized_keys`
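
Since we repeat that cat-and-copy dance for two dozen temp files, here's a rough Python sketch of the merge step from the note in step 7 (the temp01-temp24 filenames are just our naming scheme; we actually did this by hand):

```
# Merge the collected public keys (temp01 ... temp24, one per node) into a
# single master authorized_keys file, as described in step 7 above.
with open("authorized_keys", "w") as master:
    for n in range(1, 25):
        with open("temp%02d" % n) as pub:
            master.write(pub.read())
```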
InstantCluster Step 5: Software Stack II (Meeting V)
We then installed openMPI (we had a lot fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev However, the way our firewall is setup at school, I can never update my apt-get sources files properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it! Then we used the following FORTRAN code to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!

We compiled flops.f on the Master Node (any node can be a master):
`mpif77 -o flops flops.f`
and tested openMPI and got just under 800 MFLOPS using 2 cores (one PC):
`mpirun -np 2 flops`
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs or nodes with 2 cores each, for example):
`mpirun -np 4 --hostfile machines flops`
Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh

Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dualcore processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Ubuntu 10.04 32bit ... and ... these new PCs were supposed to be quadcores! We are still awaiting shipment.
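
Incidentally, once openMPI is working, the same plumbing can be tested from Python with mpi4py (one of the research links below); this is just a minimal sketch assuming mpi4py is installed on every node, not something we've deployed on the cluster yet:

```
# hello_mpi.py -- run with: mpirun -np 4 --hostfile machines python hello_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()               # this process's id, 0 .. size-1
size = comm.Get_size()               # total number of processes across the cluster
name = MPI.Get_processor_name()      # which node this process landed on

print("Hello from process %d of %d on %s" % (rank, size, name))
```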
InstantCluster Step 6: Scaling the cluster

UPDATE: 2011.1126 (Meeting VI)
Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! This is how we streamlined the process:
(1) adduser jobs and login as jobs
(2) goto http://packages.ubuntu.com and install openssh-server from the natty repository
(3) create /home/jobs/.ssh dir and cd there
(4) run ssh-keygen -t rsa
(5) add each new public key to /home/jobs/.ssh/authorized_keys on all nodes
(6) add each new IP to /home/jobs/machines on all nodes
(7) goto http://packages.ubuntu.com and install openmpi-bin, gfortran and libopenmpi-dev from the natty repository
(8) download flops.f to /home/jobs from our ftp site, then compile and run:
`mpif77 -o flops flops.f` and
`mpirun -np 2 flops` or
`mpirun -np 4 --hostfile machines flops`
NB: since we are using the same hardware, firmware and compiler everywhere, we don't need to recompile flops.f on every box, just copy flops from another node!
(9) The secret is setting up each node identically:
/home/jobs/flops
/home/jobs/machines
/home/jobs/.ssh/authorized_keys
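
To sanity-check that last step, something like this rough Python sketch can ssh around as jobs and confirm every node has the three files (the IP range and paths match our setup above, and it leans on the password-less ssh from Step 4; we haven't automated this in class yet):

```
import subprocess

# Our classroom nodes are 10.5.129.1 through 10.5.129.24; adjust for your cluster.
NODES = ["10.5.129.%d" % n for n in range(1, 25)]
FILES = ["/home/jobs/flops", "/home/jobs/machines", "/home/jobs/.ssh/authorized_keys"]

for ip in NODES:
    for path in FILES:
        # "test -e" exits non-zero if the file is missing on that node.
        status = subprocess.call(["ssh", "jobs@" + ip, "test -e " + path])
        print(ip, path, "OK" if status == 0 else "MISSING")
```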
UPDATE: 2011.1214 (Meeting VII)
We had 5 members slaving away today. Nodes 19-25 were running at about 5 GFLOPS last meeting. Today we added nodes 10, 11, 12, 17 and 18. However, we ran into some errors when testing more than the 14 cores we had last time. We should now have 24 cores and nearly 10 GFLOPS, but that will have to wait until next time when we debug everything again....

UPDATE: 2012.0111 (Meeting VIII)
We found that some nodes did not have gfortran installed and many of the authorized_keys files were inconsistent. So, we made sure every node had a user called jobs. Then we made sure every node had openssh-server, openmpi-bin, openmpi-doc (optional), libopenmpi-dev and gfortran installed. We generated all the public ssh keys and copied them over to one master authorized_keys file on shadowfax using ssh. Then we copied the master file back to each node over ssh to /home/jobs/.ssh and we tested everything with flops, and then I wrote this on FB:

Eureka, success! By Jove I think we've done it! I give you joy, gentlemen! After you guys left, I was still trying to find the bottleneck in the cluster network. So, I just ran "mpirun -np 2 flops" on each box without the --hostfile machines. Guess what I found: all the cores ran at about 388 MFLOPS except the 2 cores in PC12. These cores were only running at half that speed, about 194 MFLOPS each. So, all I had to do was delete the PC12 line from the machines file on the Master Node and run "mpirun -np 48 --hostfile machines flops" and every core ran over 300 MFLOPS! What's weird is that when I ran "mpirun -np 2 flops" on any PC other than PC12, the cores would always yield over 380 MFLOPS no matter how many times I tried it. However, when running "mpirun -np 48 --hostfile machines flops" the yield varied greatly. Each core ran somewhere between 310 and 410 MFLOPS. I ran this several times as well. I'd say I got a mean of about 15.5 GFLOPS with a standard deviation of 2.25 GFLOPS. In other words, a typical run would fall in the range of 13.25 to 17.75 GFLOPS. What's even weirder is that the pelicanHPC liveCD ran all 50 cores last week at nearly 20 GFLOPS with each core running more than 350 MFLOPS. What's up with that? So, we are back on track. We got the cluster fully operational before midterms! Our next meeting (2/8/12) will be about what applications we can run over openMPI. You each should google for beginners' projects using mpicc, mpiCC, mpif77 or mpif90 and maybe even mpi4py. Simple number crunching or graphics would do! Maybe I'll also show you how the pelicanHPC liveCD can PXE boot the whole room into an openMPI cluster in 5-10 minutes!

UPDATE: 2012.0209 (Meeting IX)
(1) While testing the cluster, we got up to 32 cores and 12 GFLOPS. Anything more than 16 nodes had very poor efficiency (the more cores we added, the less GFLOPS we got)! We will need to isolate the bottlenecks. We isolated node #12, but there must be more. Is it the Ethernet cards or cables? Do we have defective cores? Did we install some nodes inconsistently?
(2) There are recruiting screencasts for CSH, CSAP and CRL, but none for CIS! So, we made one:
http://shadowfaxrant.blogspot.com/2012/01/cis-computing-independent-study.html
(3) Then we booted the whole room in about 5 minutes using the pelicanHPC liveCD. We got up to 48 cores and 18 GFLOPS. This tells us that it's probably not the cores or the Ethernet causing problems. We may have to reinstall a couple of nodes.
InstantCluster Step 7: Compiling MPI Code! (2012.0229, Meeting X)
We downloaded the Life folder from BCCD and ran "make -f Makefile" to generate Life.serial, Life.MPI, Life.MP and Life.hybrid with just one snafu: only PC25 had all the X11 libraries needed to compile properly! There must be some way to get those libraries installed on the other nodes. So, I wrote the following email to the folks at BCCD:

Hello Skylar, et al:

I only get to meet with my Computing Independent Study class once or twice a month after school. So, we hadn't had a chance to try some of your code until now.

If you recall, I asked if the MPI code on BCCD could be used on any openMPI cluster and you encouraged me to try some of the code here:
http://bccd-ng.cluster.earlham.edu/svn/bccd-ng/tags/bccd-3.1/trees/home/bccd

Yesterday, we were playing around with the Life folder. When I tried "make -f Makefile" on my Linux box, everything compiled fine. However, when my students did so they got:

```
gcc -o Life.serial Life.c pkit.o -O3 -ggdb -pg -lm -lX11 -L/usr/X11R6/lib
In file included from Life.h:39:0,
                 from Life.c:41:
XLife.h:40:62: fatal error: X11/Xlib.h: No such file or directory
compilation terminated.
make: *** [Life.serial] Error 1
```

Both my PC and theirs are set up identically for clustering in that we are running 64bit Ubuntu 11.04 + openMPI + openSSH with public-key authentication. Apparently their machines don't have the X11 libraries. I'm thinking that I installed something more on my box to enable various other programs (SmartNotebook, VLC, etc.) to work that may have automagically installed those libraries on my box? Do you know how to get those libraries?

TIA,
A. Jorge Garcia
Applied Math and CompSci
http://www.youtube.com/calcpage2009

And the reply was:

I think you'll want to install libx11-dev on the systems generating the error. You can either use "apt-get install" on the command line, or the package manager GUI.
-- Skylar Thompson (skylar.thompson@gmail.com)
-- http://www.cs.earlham.edu/~skylar/
So, that's what we are going to do next time! THANX, SKYLAR!!

UPDATE: 2012.0314 (Meeting XI - PI Day!)
We wrote to Skylar once again (no reply as of yet):
Thanx for your help regarding libx11-dev. We installed it on every Linux box we have in the classroom in case we need it again and we ran "make -f Makefile" in the Life directory we downloaded from http://bccd-ng.cluster.earlham.edu/svn/bccd-ng/tags/bccd-3.1/trees/home/bccd and we got Life.serial working.

Now, we tried to run Life.mpi and ran into another SNAFU. Just to recap, we're trying to run some of your BCCD openMPI code on our cluster. Our cluster has a common user on every Linux box (64bit dual-core Athlons running Ubuntu 11.04) called jobs that has openSSH public-key authentication. In other words, a user need only log in once on any of our 25 boxes and can ssh to any other box without providing a userid or passwd. Of course, we also have openMPI installed with gfortran. We ran "scp Life.mpi jobs@10.5.129.x:Life.mpi" to populate all the /home/jobs directories. Each node also has the same /home/jobs/machines file listing the static IPs of all the nodes in the cluster as well as the same /home/jobs/.ssh/authorized_keys file.

So, we figured we could run Life.mpi on one node (2 cores, 5 processes per core):
mpirun -np 10 Life.mpi
or on five nodes (10 cores, 1 process per core):
mpirun -np 10 --hostfile machines Life.mpi
using any box as the master node, running from /home/jobs.

Single node worked great. Many nodes gave: "Error: Could not open display Life.h:93..." Do you have any idea how to remedy this issue?

Also, -r, -c and -g seem to work fine as command-line input for the Life executables, but -t doesn't affect the output. We didn't try file input yet.

TIA,
A. Jorge Garcia
Applied Math and CompSci

And David Joiner, a developer for the http://BCCD.net team, replied:

OK, in that case the tricky thing is that each node has its own account and its own .bashrc file, so you will need to make changes to all of them.

Assuming your head node has an IP www.xxx.yyy.zzz, this would mean each .bashrc file would need a line like

(in each .bashrc on every client node)
export DISPLAY=www.xxx.yyy.zzz:0.0

and on your head node you would want to allow other machines to display to your xhost using the command "xhost +".

(at command line on head node before running mpirun command)
xhost +

Let me know if that fixes your display problems.

Dave
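
Here's a rough Python sketch of how we might script Dave's suggestion across all the client nodes next meeting (the head-node IP and client range are placeholders for our classroom setup; we haven't run this yet):

```
import subprocess

HEAD = "10.5.129.1"                                  # hypothetical head node IP
CLIENTS = ["10.5.129.%d" % n for n in range(2, 25)]  # the remaining client nodes

# Append Dave's DISPLAY export to the jobs user's ~/.bashrc on each client node.
for ip in CLIENTS:
    cmd = "echo 'export DISPLAY=%s:0.0' >> ~/.bashrc" % HEAD
    subprocess.call(["ssh", "jobs@" + ip, cmd])

# On the head node itself, allow the remote X clients to draw here before mpirun.
subprocess.call(["xhost", "+"])
```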
===================================================
What we are researching VI+VII (Feb+Mar)
(let's learn from others' MPI code!):
pelicanHPC liveCD
BCCD liveCD
NVidia's CUDA and GPUs?
===================================================
What we are researching V (Jan)
(we used to use the BCCD liveCD, look at their openMPI code):
HelloWorld
http://bccd.net/wiki/index.php/Hello_World_for_MPI
Conway's 2D Game of Life
http://bccd.net/wiki/index.php/Game_of_Life
N-Body Orbits
http://bccd.net/wiki/index.php/GalaxSee
Code
http://bccd-ng.cluster.earlham.edu/svn/bccd-ng/tags/bccd-3.1/trees/home/bccd/
===================================================
What we are researching IV (Dec)
(maybe we can use Python on our MPI cluster?):
MPI4py
http://blog.mikael.johanssons.org/archive/2008/05/parallell-and-cluster-mpi4py/
Parallel Python
http://www.parallelpython.com/
IPython
http://ipython.scipy.org/moin/
===================================================
What we are researching III (Nov)
(look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
http://www.csm.ornl.gov/pvm
http://pvmpov.sourceforge.net
openMOSIX and kandel were fun in the 2000s.
http://openmosix.sourceforge.net/instant_openmosix_clusters.html
https://zworykin.elec.uow.edu.au/~daniel/Projects/Software/kandel
Let's try out openMPI and Blender now!
http://www.open-mpi.org
http://www.blender.org
===================================================
What we are researching II (Oct)
(look what other people are doing with MPI):
MPI intro, nice!
http://www.ualberta.ca/CNS/RESEARCH/Courses/2001/PPandV/Intro_to_MPI.pdf
MPI on Ubuntu
http://ubuntuforums.org/showthread.php?t=1327253
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
===================================================
What we are researching I (Sept)
(look what this school did in the 80s and 90s):
Thomas Jefferson High courses
http://academics.tjhsst.edu/compsci/parallel/
Thomas Jefferson High paper
http://www.tjhsst.edu/~rlatimer/techlab07/BWardPaperQ3-07.pdf
Thomas Jefferson High ftp
http://www.tjhsst.edu/~rlatimer/techlab07/
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
===================================================
Today's Topic:
CIS(theta) 2011-2012 - Compiling MPI Code! - Meeting XI
Today's Attendance:
CIS(theta) 2011-2012: GeorgeA, GrahamS, KennyK, LucasE
Today's Reading:
Chapter 7: Building Parallel Programs (BPP) using clusters and parallelJava
===================================================
Well, that's all folks, enjoy!
Happy Clustering,
calcpage@aol.com
http://shop.ebay.com/items/_ti_calculator_active
http://mathforum.org/kb/profile.jspa?userID=32552