Wednesday, May 30, 2012

CIS(theta) 2011-2012 - Compiling MPI Code! - Meeting XIII (last one)

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. Our new Shadowfax Cluster is coming along quite well. We have a nice base OS in the 64bit Ubuntu 11.04 Natty Narwhal Desktop on top of our AMD dual-core Athlons and gigE LAN. The Unity Desktop isn't that different from the Gnome Desktop we've been using these past few years on Fedora and Ubuntu. Natty is proving very user friendly and easy to maintain! We installed openMPI and used flops.f to scale our cluster up to 50 cores! Remember, we needed openSSH public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node as needed. We created a user common to all cluster nodes called "jobs," in memory of Steve Jobs, so the cluster user can simply log into one node and effectively be logged into all nodes at once (you can actually ssh from node to node as "jobs" without specifying a userid or passwd)!
InstantCluster Step 1: Infrastructure
Make sure your cores have enough ventilation. The room has to have powerful air conditioning too. These two factors may seem trivial but will become crucial when running the entire cluster for extended periods of time! Also, you need to have enough electrical power, preferably with the cabling out of the way, to run all cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was taken care of for us already!

InstantCluster Step 2: Hardware
You need up-to-date Ethernet switches, Ethernet cards and cores, as well as plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already set up for us. Also, we have 64bit dual-core AMD Athlons and our HP boxes have 2GB of RAM. We are still waiting for our quad-core AMD Phenom upgrade!

InstantCluster Step 3: Firmware (Meeting III)
We wasted way too much time two years ago (2009-2010 CIS(theta)) trying out all kinds of Linux distros looking for a good 64bit base for our cluster. Last year (2010-2011 CIS(theta)) we spent way too much time testing out different liveCD distros. Last year, we also downgraded from 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but was proving to be a pain to maintain. Just to name one problem, the JRE and Flash were hard to install and update for Firefox. Two years ago, we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu! This year, 64bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster! See this post: http://shadowfaxrant.blogspot.com/2011/05/2-so-many-hard-drives-so-little-time.html

InstantCluster Step 4: Software Stack I (Meeting IV)
On top of Ubuntu we need to add openSSH, public-key authentication (step 4) and openMPI (step 5). Then we have to scale the cluster (step 6). In steps 7-10, we can discuss several applications to scatter/gather over the cluster, whether graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below.
  1. Install openssh-server from http://packages.ubuntu.com
  2. Create the same new user on every box in the cluster
  3. Log in as the new user (we used 'shadowfax')
  4. If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files. 
  5. From your home directory, make .ssh secure by entering:
    chmod 700 .ssh
  6. Next, make .ssh your working directory by entering:
    cd .ssh
  7. To list/view the contents of the directory, enter:
    ls -a [we used ls -l]
  8. To generate your public and private keys, enter:
    ssh-keygen -t rsa
    The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for a passphrase, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
  9. To compare the previous output of ls and see what new files have been created, enter:
    ls -a [we used ls -l]
    You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
  10. To make your public key the only thing needed for you to ssh to a different machine, enter:
    cat id_rsa.pub >> authorized_keys
    [The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes that we could just download to each node's .ssh dir. A consolidated sketch of this whole key exchange appears right after this list.]
  11. [optional] To make it so that only you can read or write the file containing your private key, enter:
    chmod 600 id_rsa
  12. [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
    chmod 600 authorized_keys
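For reference, here is a minimal sketch of how steps 8 through 10 can be automated from the teacher station. It is only a sketch: it assumes the common cluster user (jobs in our case) already exists on every box, that the nodes answer on the 10.5.129.1-10.5.129.24 addresses mentioned above, and that you don't mind typing the user's password (and answering host-key prompts) until the keys are actually in place.

#!/bin/bash
# Hedged sketch: gather every node's public key into one master
# authorized_keys file, then push that file back out to every node.
NODEUSER=jobs
NODES=$(seq 1 24)

# 1. Generate a key pair on each node (no passphrase) if it isn't there yet.
#    You will be prompted for the password here, since keys aren't set up yet.
for n in $NODES; do
  ssh $NODEUSER@10.5.129.$n 'mkdir -p ~/.ssh && chmod 700 ~/.ssh;
    test -f ~/.ssh/id_rsa || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa'
done

# 2. Collect all the public keys into one master authorized_keys file.
rm -f authorized_keys
for n in $NODES; do
  ssh $NODEUSER@10.5.129.$n 'cat ~/.ssh/id_rsa.pub' >> authorized_keys
done

# 3. Push the master file to every node and lock down its permissions.
for n in $NODES; do
  scp authorized_keys $NODEUSER@10.5.129.$n:.ssh/authorized_keys
  ssh $NODEUSER@10.5.129.$n 'chmod 600 ~/.ssh/authorized_keys'
done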
InstantCluster Step 5: Software Stack II (Meeting V)
We then installed openMPI (we had a lot fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev. However, the way our firewall is set up at school, I can never update my apt-get sources properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!
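In practice that meant downloading the .deb files by hand and installing them with dpkg. A minimal sketch, assuming the packages (and any dependencies dpkg then complains about) have already been saved from http://packages.ubuntu.com into the current directory:

# Install the hand-downloaded packages; dpkg will name any missing
# dependencies, which also have to be fetched from packages.ubuntu.com.
sudo dpkg -i openmpi-bin_*.deb libopenmpi-dev_*.deb gfortran_*.deb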
Then we used flops.f, a FORTRAN benchmark, to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (any node can be a master):
mpif77 -o flops flops.f
and tested openmpi and got just under 800 MFLOPS using 2 cores (one PC):
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (2 PCs or nodes with 2 cores each for example):
mpirun -np 4 --hostfile machines flops
Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh 
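Keeping those three files in sync is easy to script. Here is a hedged sketch that copies them from the current node to every IP listed in the machines file; it just assumes the "jobs" user and the layout described above:

# Copy the benchmark, the hostfile and the key file to every node so that
# any box can act as the Master. With public keys in place, no passwords.
for ip in $(cat /home/jobs/machines); do
  scp /home/jobs/flops /home/jobs/machines jobs@$ip:/home/jobs/
  scp /home/jobs/.ssh/authorized_keys jobs@$ip:/home/jobs/.ssh/
done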
Note: last year we got about 900 MFLOPS per node. This year we still have 64bit AMD Athlon dual-core processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running 32bit Ubuntu 10.04 ... and ... these new PCs were supposed to be quad-cores! We are still awaiting shipment.
InstantCluster Step 6: Scaling the cluster
UPDATE: 2011.1126 (Meeting VI)
Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! This is how we streamlined the process: 
(1) adduser jobs and login as jobs 
(2) goto http://packages.ubuntu.com and install openssh-server from the natty repository 
(3) create /home/jobs/.ssh dir and cd there 
(4) run ssh-keygen -t rsa 
(5) add the new public keys to /home/jobs/.ssh/authorized_keys on all nodes 
(6) add the new IPs to /home/jobs/machines on all nodes
(7) goto http://packages.ubuntu.com and install openmpi-bin, gfortran and libopenmpi-dev from the natty repository
(8) download flops.f to /home/jobs from our ftp site, then compile and run: 
mpif77 -o flops flops.f and 
mpirun -np 2 flops or 
mpirun -np 4 --hostfile machines flops 
NB: since we are using the same hardware, firmware and compiler everywhere, we don't need to recompile flops.f on every box, just copy flops from another node!
(9) The secret is setting up each node identically (see the verification sketch below):
/home/jobs/flops
/home/jobs/machines
/home/jobs/.ssh/authorized_keys
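Here is the verification sketch mentioned in (9): a quick way to confirm from any one box that every node is reachable without a password and has the three files in place. It assumes the machines file is current; adjust the paths as needed.

# BatchMode makes ssh fail instead of prompting if public-key auth is
# broken on a node, so problems show up immediately.
for ip in $(cat /home/jobs/machines); do
  echo "== $ip =="
  ssh -o BatchMode=yes jobs@$ip \
    'ls -l /home/jobs/flops /home/jobs/machines /home/jobs/.ssh/authorized_keys' \
    || echo "PROBLEM: check $ip"
done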
UPDATE: 2011.1214 (Meeting VII)
We had 5 members slaving away today. Nodes 19-25 were running at about 5 GFLOPS last meeting. Today we added nodes 10, 11, 12, 17 and 18. However, we ran into some errors when testing more than the 14 cores we had last time. We should now have 24 cores and nearly 10 GFLOPS but that will have to wait until next time when we debug everything again....
UPDATE: 2012.0111 (Meeting VIII)
We found that some nodes did not have gfortran installed and many of the authorized_keys files were inconsistent. So, we made sure every node had a user called jobs. Then we made sure every node had openssh-server, openmpi-bin, openmpi-doc (optional), libopenmpi-dev and gfortran installed. We generated all the public ssh keys and copied them over to one master authorized_keys file on shadowfax using ssh. Then we copied the master file back to each node over ssh to /home/jobs/.ssh and we tested everything with flops, and then I wrote this on FB:
Eureka, success! By Jove I think we've done it! I give you joy, gentlemen!

After you guys left, I was still trying to find the bottleneck in the cluster network. So, I just ran "mpirun -np 2 flops" on each box without the --hostfile machines option. Guess what I found: all the cores ran at about 388 MFLOPS except the 2 cores in PC12. These cores were only running at half that speed, about 194 MFLOPS each. So, all I had to do was delete the PC12 line from the machines file on the Master Node and run "mpirun -np 48 --hostfile machines flops" and every core ran over 300 MFLOPS!

What's weird is that when I ran "mpirun -np 2 flops" on any PC other than PC12, the cores would always yield over 380 MFLOPS no matter how many times I tried it. However, when running "mpirun -np 48 --hostfile machines flops" the yield varied greatly. Each core ran somewhere between 310 and 410 MFLOPS. I ran this several times as well. I'd say I got a mean of about 15.5 GFLOPS with a standard deviation of 2.25 GFLOPS. In other words, a typical run would fall in the range of 13.25 to 17.75 GFLOPS.
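That box-by-box hunt is easy to script, too. Here is a hedged sketch that runs the 2-core test on each node in turn so a slow box like PC12 stands out (it assumes flops prints its MFLOPS figures to standard output, as ours does when it finishes):

# Run the 2-core benchmark locally on every node, one node at a time,
# and compare the MFLOPS each box reports.
for ip in $(cat /home/jobs/machines); do
  echo "== $ip =="
  ssh jobs@$ip 'cd /home/jobs && mpirun -np 2 flops'
done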

What's even weirder is that the pelicanHPC liveCD ran all 50 cores last week at nearly 20 GFLOPS with each core running more than 350 MFLOPS. What's up with that?

So, we are back on track. We got the cluster fully operational before midterms! Our next meeting (2/8/12) will be about what applications we can run over openMPI. You each should google for beginners' projects using mpicc, mpiCC, mpif77 or mpif90 and maybe even mpi4py. Simple number crunching or graphics would do! Maybe I'll also show you how pelicanHPC liveCD can PXE boot the whole room into an openMPI cluster in 5-10 minutes!
UPDATE: 2012.0209 (Meeting IX)
(1) While testing the cluster, we got up to 32 cores and 12 GFLOPS. Anything more than 16 nodes had very poor efficiency (the more cores we added, the fewer GFLOPS we got)! We will need to isolate the bottlenecks. We isolated node #12, but there must be more. Is it the Ethernet cards or cables? Do we have defective cores? Did we install some nodes inconsistently?
(2) There are recruiting screencasts for CSH, CSAP and CRL, but none for CIS? So, we made one:
http://shadowfaxrant.blogspot.com/2012/01/cis-computing-independent-study.html
(3) Then we booted the whole room in about 5 minutes using the pelicanHPC liveCD. We got up to 48 cores and 18 GFLOPS. This tells us that it's probably not the cores or the Ethernet causing the problems. We may have to reinstall a couple of nodes.
InstantCluster Step 7: Compiling MPI Code! (2012.0229, Meeting X)
We downloaded the Life folder from BCCD and ran "make -f Makefile" to generate Life.serial, Life.MPI, Life.MP, Life.hybrid with just one snafu. Only PC25 had all the X11 libraries to compile properly! There must be some way to get those libraries installed on the other nodes. So, I wrote the following email to the folks at BCCD:
 
Hello Skylar, et al:

I only get to meet with my Computing Independent Study class once or 
twice a month after school. So, we hadn't had a chance to try some of 
your code until now.

If you recall, I asked if the MPI code on BCCD could be used on any 
openMPI cluster and you encouraged me to try some of the code here,
http://bccd-ng.cluster.earlham.edu/svn/bccd-ng/tags/bccd-3.1/trees/home/bccd

Yesterday, we were playing around with the Life folder. When I tried 
"make -f Makefile" on my linux box, everything compiled fine. However, 
when my students did so they got:
gcc -o Life.serial Life.c pkit.o -O3 -ggdb -pg -lm -lX11 
-L/usr/X11R6/lib
In file included from Life.h:39:0,
                 from Life.c:41:
XLife.h:40:62: fatal error: X11/Xlib.h: No such file or directory
compilation terminated.
make: *** [Life.serial] Error 1

Both my PC and theirs are setup identically for clustering in that we 
are running 64bit Ubuntu 11.04 + openMPI + openSSH with public key 
authentication. Apparently their machines don't have the X11 libraries. 
I'm thinking that I installed something more on my box to enable 
various other programs (SmartNotebook, VLC, etc) to work that may have 
automagically installed those libraries on my box? Do you know how to 
get those libraries?

TIA,
A. Jorge Garcia
Applied Math and CompSci
http://shadowfaxrant.blogspot.com
http://www.youtube.com/calcpage2009
And the reply was:
I think you'll want to install libx11-dev on the systems generating the
error. You can either use "apt-get install" on the command line, or the
package manager GUI.
-- Skylar Thompson (skylar.thompson@gmail.com)
-- http://www.cs.earlham.edu/~skylar/ 
So, that's what we are going to do next time! THANX, SKYLAR!!
 
UPDATE: 2012.0314 (Meeting XI - PI Day!) 
We wrote to Skylar once again:
Thanx for your help regarding libx11-dev. We installed it on every Linux box we have in the classroom in case we need it again. Then we ran "make -f Makefile" in the Life directory we downloaded from http://bccd-ng.cluster.earlham.edu/svn/bccd-ng/tags/bccd-3.1/trees/home/bccd and we got Life.serial working.

Now, we tried to run Life.mpi and ran into another SNAFU. Just to recap, we're trying to run some of your BCCD openMPI code on our cluster. Our cluster has a common user on every Linux box (64bit dual-core Athlons running Ubuntu 11.04) called jobs that has openSSH public-key authentication. In other words, a user need only log in once on any of our 25 boxes and can ssh to any other box without providing a userid or passwd. Of course we also have openMPI installed with gfortran. We ran "scp Life.mpi jobs@10.5.129.x:Life.mpi" to populate all the /home/jobs directories. Each node also has the same /home/jobs/machines file listing the static IPs of all the nodes in the cluster as well as the same /home/jobs/.ssh/authorized_keys file.

So, we figured we could run Life.mpi on one node (2 cores, 5 processes per core):
mpirun -np 10 Life.mpi
or on five nodes (10 cores, 1 process per core):
mpirun -np 10 --hostfile machines Life.mpi
using any box as the master node, running from /home/jobs.

Single node worked great. Many nodes gave: "Error: Could not open display Life.h:93..." Do you have any idea how to remedy this issue?

Also, -r, -c and -g seem to work fine as commandline input for the Life executables, but -t doesn't affect the output. We didn't try file input yet.

TIA,
A. Jorge Garcia
Applied Math and CompSci
http://shadowfaxrant.blogspot.com
http://www.youtube.com/calcpage2009
And David Joiner, a developer on the http://BCCD.net team, replied:

OK, in that case the tricky thing is that each node has its own account and its own .bashrc file, so you will need to make changes to all of them.

Assuming your head node has an IP www.xxx.yyy.zzz, this would mean each .bashrc file would need a line like

(in each .bashrc on every client node)
export DISPLAY=www.xxx.yyy.zzz:0.0

and on your head node you would want to allow other machines to display to your xhost using the command "xhost +".

(at command line on head node before running mpirun command)
xhost +

Let me know if that fixes your display problems.

Dave
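Since that line has to land in the .bashrc on every client node, here is a hedged sketch of pushing it out over ssh. It assumes our machines file and a head node at 10.5.129.11 (see the update below), and it skips the head node itself:

# Append the DISPLAY export to .bashrc on every worker node, pointing
# X clients at the head node's display. Leave the head node alone.
HEAD=10.5.129.11
for ip in $(cat /home/jobs/machines); do
  [ "$ip" = "$HEAD" ] && continue
  ssh jobs@$ip "echo 'export DISPLAY=$HEAD:0.0' >> ~/.bashrc"
done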
 
UPDATE: 2012.0425 (Meeting XII) 
OK, we added the export command: export DISPLAY=10.5.129.11:0.0 to every node's bashrc file making PC11 (10.5.129.11) our master node. Also, our machines file looks like this:
10.5.129.5
10.5.129.6
10.5.129.7
...
10.5.129.25

Then we ran from the master node:
xhost +
jobs@lambda:~$ mpirun -np 4 --hostfile ~/machines Life.mpi

and we got this error:
Error: Could not open display at XLife.h:93
Error: Could not open display at XLife.h:93
Error: Could not open display at XLife.h:93
Error: Could not open display at XLife.h:93
--------------------------------------------------------------------------
mpirun has exited due to process rank 2 with PID 6704 on
node 10.5.129.7 exiting without calling "finalize". This may
have caused other processes in the application to be
terminated by signals sent by mpirun (as reported here).
--------------------------------------------------------------------------
jobs@lambda:~$ 

Do you know how to fix this?

TIA,
A. Jorge Garcia
Applied Math and CompSci
http://shadowfaxrant.blogspot.com
http://www.youtube.com/calcpage2009

Now, David Joiner replied that we should add the line "export DISPLAY=10.5.129.11:0.0" to the .bashrc file for each worker node only, since 10.5.129.11 is our master node. Adding that line to the master itself will only mess up Life.mpi on that single node. Then, when running Life.mpi on the cluster from the master node, we run:
xhost +
mpirun -np 4 --hostfile ~/machines Life.mpi

We did all that and still there's no joy....
NB: ssh -Y xxx.xxx.xxx.xxx doesn't work anymore when sshing into a worker node!
PS: next time try NFS?
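One way to narrow this down, independent of openMPI, might be to test whether a worker can open the master's display at all. xdpyinfo comes with the standard X utilities; if this fails the same way, the problem is the X display connection itself (for example, the X server not accepting remote connections) rather than the MPI side:

# On the master node: allow remote X clients, then check whether a worker
# (10.5.129.7 here, just as an example) can actually reach this display.
xhost +
ssh jobs@10.5.129.7 'DISPLAY=10.5.129.11:0.0 xdpyinfo | head -n 3'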

LAST UPDATE: 2012.0530 (Meeting XIII) 
We had our last meeting today! We burned a bunch of BCCD DVDs and ran GalaxSee without a problem on several cores. This year's team actually achieved several milestones. We got pelicanHPC running on all 50 cores from one CD via PXE boot for the first time. We got BCCD with Life and GalaxSee running for the first time. We even installed openMPI over public-key authenticated openSSH natively on our LAN and ran flops.f at 20 GFLOPS. Congrats on a job well done!

BTW, the best results we had this year, hardware-wise, were:
BCCD: 4 nodes, 8 cores, 3.2 GFLOPS, 8 GB RAM, 1 TB HDD
LAN: 16 nodes, 32 cores, 12.8 GFLOPS, 32 GB RAM, 4 TB HDD
HPC: 25 nodes, 50 cores, 20 GFLOPS, 50 GB RAM, 6.25 TB HDD


Many thanx also go to the developers of the following projects who helped us:
1995-2005: openMOSIX
ClusterKNOPPIX (with fractals)
http://www.ibm.com/developerworks/linux/library/l-clustknop/index.html#Resources
QUANTIAN (with povray)
http://dirk.eddelbuettel.com/quantian.html
ParallelKNOPPIX (old version of pelicanHPC)
http://idea.uab.es/mcreel/ParallelKnoppix/
Bootable Cluster CD 2.x
http://bccd.net/

2005-2012: openSSH
Bootable Cluster CD 3.x (with c++)
http://bccd.net/
Cluster by Night (with ruby)
http://www.dirigibleflightcraft.com/CbN/
pelicanHPC (new version of ParallelKNOPPIX with c++, python and fortran)
http://pareto.uab.es/mcreel/PelicanHPC/

==============================================
What we are researching VI-IX (Feb+Mar+Apr+May)
(let's learn from others' MPI code!):
pelicanHPC liveCD
BCCD liveCD
NVidia's CUDA and GPUs?
==============================================
What we are researching V (Jan)
(we used to use the BCCD liveCD, look at their openMPI code):
==============================================
What we are researching IV (Dec)
(maybe we can use Python on our MPI cluster?):
Parallel Python
IPython
http://ipython.scipy.org/moin/
==============================================
What we are researching III (Nov)
(look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
Let's try out openMPI and Blender now!
http://www.blender.org
==============================================
What we are researching II (Oct)
(look what other people are doing with MPI):
MPI intro, nice!
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
==============================================
What we are researching I (Sept)
(look what this school did in the 80s and 90s): 
Thomas Jefferson High courses
Thomas Jefferson High paper
Thomas Jefferson High ftp
Thomas Jefferson High teacher
==============================================
Today's Topic:
CIS(theta) 2011-2012 - Compiling MPI Code! - Meeting XIII
Today's Attendance:
CIS(theta) 2011-2012: GeorgeA, KennyK, LucasE
Today's Reading:
Chapter 9: Building Parallel Programs (BPP) using clusters and parallelJava
==============================================
Membership (alphabetic by first name):
CIS(theta) 2011-2012: Graham Smith, George Abreu, Kenny Krug, Lucas Eager-Leavitt
CIS(theta) 2010-2011: David Gonzalez, Herbert Kwok, Jay Wong, Josh Granoff, Ryan Hothan
CIS(theta) 2009-2010: Arthur Dysart*, Devin Bramble, Jeremy Agostino, Steve Beller
CIS(theta) 2008-2009: Marc Aldorasi, Mitchel Wong*
CIS(theta) 2007-2008: Chris Rai, Frank Kotarski, Nathaniel Roman
CIS(theta) 1988-2007: A. Jorge Garcia, Gabriel Garcia, James McLurkin, Joe Bernstein
*nonFB
==============================================
Well, that's all folks, enjoy!

Sunday, May 27, 2012

Quarter IV, Week 6: ScreenCasts, SmartNotes and Code, oh my!

Calc is still watching the "Back to the Future" Trilogy, CompSci is still having video game tournaments (tremulous, bzflag, starcraft, quake), Calculus Research Lab started DiffEqus and preCalculus just had a test on Differentiation Rules!  

Next week AP Calculus BC is starting a final project with SAGE I like to call LAC 2012: "Life After Calculus." LAC 2011 was the same project last year, except we did it without SAGE and didn't get a chance to record for YouTube. So, I'll be doing a lot of Calculus with SAGE on YouTube for the next couple of weeks!

LACS 2012: "Life After CompSci" also starts next week. We are going to do some graphics programming with Java on YouTube (GameOfLife, Turtles, Fractals). Last year, LACS 2011 was about appInventor for Droid. Next year, LACS 2013 may revisit Droid app development with Eclipse.

Maybe I'll record LApreC 2010: "Life After preCalculus" and LAP 2009: "Life After Physics" as well.

preCalculus for Seniors  
preCalculus finished Unit 3: Differentiating Rules (Product Rule, Quotient Rule, Trig Rules, Chain Rule).  I've been motivating new topics each day with a related Calculus Song from YouTube. There's a lot of great songs on YouTube. Goto YouTube and search for "Calculus Song" or "Calculus Parody" or "Calculus Cover" or "Calculus Carol" or even "Calculus Filk!"
http://shadowfaxrant.blogspot.com/2012/05/teaching-precalculus-continuing-intro_12.html

AP Calculus BC
AP Week traditionally means a movie marathon. Here's what we're doing this year:
pre AP Week: AP Calculus BC finished a unit on Vector vs. Polar notation. We stressed the similarity of Polar and Parametric modes. So, once we figured out how to calculate slopes and arc lengths parametrically, we tried to do the same with polar graphs. We also talked about polar area! Next week is AP Review already! We had MCQ IX and CML 4 too.
http://shadowfaxrant.blogspot.com/2012/05/teaching-ap-calculus-finishing-unit-12.html

AP Computer Science  
This week we are running a video game arcade. BZFlag and Tremulous were easily installed earlier in the year using the Ubuntu Software Center. Now, the school has every student PC, even my Linux boxes, going through some crazy firewall, so we installed Quake using http://packages.ubuntu.com and we are running an old version of StarCraft via WINE! Quake was a bit of a pain to set up. Here's what we did (some boxes also needed libmad0 and libdynamite0 first):
(1) From http://packages.ubuntu.com/ download and install the following in this order:
(1A) quakespasm
(1B) quake
(1C) dynamite
(1D) game-data-packager
(2) goto /usr/share/games/game-data-packager and install quake-shareware_all_29.deb
(3) download pak0.pak and pak1.pak from http://www.mirafiori.com/ftp/pub/gaming
(4) sudo cp *.pak /usr/share/games/quake/id1


pre AP Week: AP Computer Science tied up some loose ends with CH11 "Interfaces and Polymorphism," aka the "implements" keyword in Java. We also did CH18 "Recursion." We did CH13 "Inheritance," aka the keyword "extends," via GridWorld. I assigned CH19 "Searching and Sorting" as a reading assignment for homework before the AP. I did the same with GridWorld Part IV about the Critter class as we had to start AP Review!
Calculus Research Lab
CRL finally started the Calc3 text on DiffEqus! We finished the Derivatives text (calc-1) last semester and the AntiDerivatives text (calc-2) last quarter. We are using this Differential Equations text (calc-3) in our final project! These texts are all free for the download as PDFs from http://www.sagemath.org
page 15 of DiffEqus text:
https://sage.math.clemson.edu:34567/home/pub/319

I hope you enjoyed this week's notes!

 
Learning with Technology, 

Monday, May 21, 2012

Reinstall Fest 2012!

Oh, Oh, I ran out of hdd space and totally FUBARRED my SmartBoard PC! So, I followed these instructions from previous installations and got back up and running in no time! I used the Ubuntu 11.04 32 bit install liveCD. After enabling my static IP and printers, I added WINE. Then I installed SmartNotebook 10 for 32bit Linux. I also enabled libDVDread4 and VLC as we are still in Movie marathon mode! I also added Chromium. 

The only thing that was troublesome was installing the new JRE from Oracle for Firefox! So, I followed these instructions:
http://sites.google.com/site/easylinuxtipsproject/java

Notes:
0) After installing JRE, I did not need flash for YouTube!
1) I use WINE mostly for Virtual-TI (not to mention StarCraft I).
2) This is a 64bit box, but I used 32bit Linux as SmartNotebook has no 64bit version.
3) VLC is mostly for playing mp4s I download from YouTube via KeepVid.
4) Chromium, or Google Chrome, is needed for our online gradebook.
5) I set the home page in Firefox to http://screencast-o-matic.com/
6) I set the home page in Chromium to https://esd.nasboces.org/
7) All my sites from Edmodo, YouTube, BlogSpot, PasteBin and SlideShare to http://sage.math.clemson.edu:34567 work!
8) I setup my Shadowfax background on the desktop!
9) I did not get to SAGE yet. Installing SAGE is what got me into trouble in the first place as it takes a lot of room. I downloaded sage.lzma which was about 400 MB. This file extracts to over 1 GB. When I tried extracting thusly:
tar --lzma -xvf sage.lzma
I had less than 1 GB of hdd space free and tar did not check for space in lzma mode! Suddenly my hdd was full, my GUI crashed and I couldn't even delete the offending files. I still have to install SAGE locally, since I had to remove internet access.... Next time I'll check the free space first (see the sketch below).
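Here is the quick check I mean: a minimal sketch that refuses to extract unless the filesystem has comfortably more free space than the unpacked tree (the 2 GB threshold is just a safe guess for this archive):

# df -Pk reports available 1K blocks in column 4; require roughly 2 GB
# free before unpacking, since the extracted SAGE tree is over 1 GB.
FREE=$(df -Pk . | awk 'NR==2 {print $4}')
if [ "$FREE" -gt 2000000 ]; then
  tar --lzma -xvf sage.lzma
else
  echo "Not enough free space to unpack sage.lzma safely."
fi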

HTH!
Generally Speaking,

Saturday, May 19, 2012

Quarter IV, Week 5: ScreenCasts, SmartNotes and Code, oh my!

This week was AP Week #2! Calc is watching the "Back to the Future" Trilogy, CompSci is having video game tournaments (tremulous, bzflag, starcraft, quake) and Calculus Research Lab is watching "Stand and Deliver!" This is a great movie about a Calculus class failing the AP. We get to laugh at them after our AP! It's very cathartic. The only class doing any meaningful work is preCalculus!  

preCalculus for Seniors  
preCalculus finished Unit 3: Differentiating Rules (Product Rule, Quotient Rule, Trig Rules, Chain Rule).  I've been motivating new topics each day with a related Calculus Song from YouTube. There's a lot of great songs on YouTube. Goto YouTube and search for "Calculus Song" or "Calculus Parody" or "Calculus Cover" or "Calculus Carol" or even "Calculus Filk!"
http://shadowfaxrant.blogspot.com/2012/05/teaching-precalculus-continuing-intro_12.html

AP Calculus BC
AP Week traditionally means a movie marathon. Here's what we're doing this year:
pre AP Weeks 1&2: AP Calculus BC finished a unit on Vector vs. Polar notation. We stressed the similarity of Polar and Parametric modes. So, once we figured out how to calculate slopes and arc lengths parametrically, we tried to do the same with polar graphs. We also talked about polar area! Next week is AP Review already! We had MCQ IX and CML 4 too.
http://shadowfaxrant.blogspot.com/2012/05/teaching-ap-calculus-finishing-unit-12.html

AP Computer Science  
This week we are running a video game arcade. BZFlag and Tremulous were easily installed earlier in the year using the Ubuntu Software Center. Now, the school has every student PC, even my Linux boxes, going through some crazy firewall, so we installed Quake using http://packages.ubuntu.com and we are running an old version of StarCraft via WINE! Quake was a bit of a pain to set up. Here's what we did (some boxes also needed libmad0 and libdynamite0 first):
(1) From http://packages.ubuntu.com/ download and install the following in this order:
(1A) quakespasm
(1B) quake
(1C) dynamite
(1D) game-data-packager
(2) goto /usr/share/games/game-data-packager and install quake-shareware_all_29.deb
(3) download pak0.pak and pak1.pak from http://www.mirafiori.com/ftp/pub/gaming
(4) sudo cp *.pak /usr/share/games/quake/id1


pre AP Weeks 1&2: AP Computer Science tied up some loose ends with CH11 "Interfaces and Polymorphism," aka the "implements" keyword in Java. We also did CH18 "Recursion." We did CH13 "Inheritance," aka the keyword "extends," via GridWorld. I assigned CH19 "Searching and Sorting" as a reading assignment for homework before the AP. I did the same with GridWorld Part IV about the Critter class as we had to start AP Review!
Calculus Research Lab
Gotta have a movie marathon here too:
pre AP Weeks 1&2: CRL was back in business as our internet connectivity issues were finally addressed! So, we finally finished 1.3 Riemann Sums from the Calc2 text, implementing Simpson's Rule as a python function. We finished the Derivatives text (calc-1) last semester and started the AntiDerivatives text (calc-2) last quarter. We will also use the Differential Equations text (calc-3) in our final project! These texts are all free for the download as PDFs from http://www.sagemath.org
CRL 1.3 Riemann Sums Intro
https://sage.math.clemson.edu:34567/home/pub/297


HTH and I hope you enjoyed this week's notes!

 
Learning with Technology,