Tuesday, December 27, 2011

Quarter II, Week 7: ScreenCasts, SmartNotes and Code, oh my!

AP Calculus BC finished Unit 6 and had a test on "metrics," aka measurement theory (applications of the definite integral). Last week we reviewed area and talked about volumes of revolution (disk, washer, shell) in dx and dy. This week we finished volumes with known cross sections as well as arc length and surface area.
BC Calc 608 Surface Area
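
For reference, here are the standard formulas behind this unit, written in LaTeX for a curve y = f(x) with f(x) >= 0 on [a, b] where needed (and 0 <= a for the shell formula):

Disks about the x-axis: V = \pi \int_a^b [f(x)]^2 \, dx
Washers (outer radius R(x), inner radius r(x)): V = \pi \int_a^b \left( [R(x)]^2 - [r(x)]^2 \right) dx
Shells about the y-axis: V = 2\pi \int_a^b x \, f(x) \, dx
Arc length: L = \int_a^b \sqrt{1 + [f'(x)]^2} \, dx
Surface of revolution about the x-axis: S = 2\pi \int_a^b f(x) \sqrt{1 + [f'(x)]^2} \, dx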




preCalculus for Seniors started CH9 on polar coordinates last week. This week we focused on complex number arithmetic in R cis(theta) form.
preCalc 903 CIS(theta)
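
Here's the key idea of this week in a nutshell, in LaTeX: in R cis(theta) form you multiply the moduli and add the arguments, and De Moivre's Theorem is just repeated multiplication:

(r_1 \,\mathrm{cis}\,\theta_1)(r_2 \,\mathrm{cis}\,\theta_2) = r_1 r_2 \,\mathrm{cis}(\theta_1 + \theta_2)
\frac{r_1 \,\mathrm{cis}\,\theta_1}{r_2 \,\mathrm{cis}\,\theta_2} = \frac{r_1}{r_2} \,\mathrm{cis}(\theta_1 - \theta_2)
(r \,\mathrm{cis}\,\theta)^n = r^n \,\mathrm{cis}(n\theta)

For example, (2 \,\mathrm{cis}\,30^\circ)(3 \,\mathrm{cis}\,45^\circ) = 6 \,\mathrm{cis}\,75^\circ.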



AP Computer Science just finished the lab at the end of GridWorld Part II (CircleBug, SpiralBug and ZBug). Sorry, I don't post labs online! After break we are going to do loops and arrays.


Calculus Research Lab finished CH4 with exercises on the derivatives of trig and inverse trig functions.
https://sage.math.clemson.edu:34567/home/pub/264
https://sage.math.clemson.edu:34567/home/pub/262
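
If you want to replay these exercises outside of our Sage worksheets, here's a minimal stand-alone sketch in plain Python with SymPy. This is just an illustration of the same idea, not the actual lab, and it assumes SymPy is installed; Sage's symbolic syntax is very similar:

# Derivatives of trig and inverse trig functions with SymPy.
# (Illustrative only; in class we do this in Sage worksheets.)
import sympy as sp

x = sp.symbols('x')

for f in [sp.sin(x), sp.tan(x), sp.asin(x), sp.atan(x)]:
    print("%s -> %s" % (f, sp.diff(f, x)))
# Prints: cos(x), tan(x)**2 + 1, 1/sqrt(1 - x**2), 1/(x**2 + 1)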


Computer Math finished CH5 on positional number systems! We haven't done Day 6 on fractions or Day 7 on truth tables yet.
https://sage.math.clemson.edu:34567/home/pub/263
https://sage.math.clemson.edu:34567/home/pub/261
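
Here's a tiny stand-alone Python sketch of the kind of base conversion we've been doing in CH5. It's just an illustration; the function names and sample values below are mine, not the actual classwork:

# Convert a numeral string in base 2-16 to decimal and back.
DIGITS = '0123456789ABCDEF'

def to_decimal(numeral, base):
    # Horner's method: each new digit multiplies the running value by the base.
    value = 0
    for digit in numeral.upper():
        value = value * base + DIGITS.index(digit)
    return value

def from_decimal(value, base):
    # Repeated division: remainders give the digits from right to left.
    if value == 0:
        return '0'
    numeral = ''
    while value > 0:
        numeral = DIGITS[value % base] + numeral
        value //= base
    return numeral

print(to_decimal('1101', 2))    # 13
print(to_decimal('FF', 16))     # 255
print(from_decimal(255, 2))     # 11111111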

Computer Math Day 01+06 Number Systems



Learning with Technology, 

Sunday, December 18, 2011

Classical Composers of the 21st Century, wherefore art thou?


Even a casual reader of this blog must certainly know how deeply I'm into video streaming in class. I screencast my lessons every day! I play funny math- and computer-related YouTube videos on Wednesdays and informative TED Talks on Tuesdays. On those YouTube Wednesdays I also show some documentaries I rescued from old, disintegrating VHS tapes. My students even make math and computer filks for me!


However, I recently rediscovered audio streaming! In fact, I use http://www.pandora.com on Fridays when giving a test or quiz. A little Classical Music helps the concentration! I try to find an instrumental playlist to play in the background, and very often it's related to Classical Music. However, you can only listen to so much Bach, Beethoven and Brahms before you want some variety. So, lately, I've been playing film scores from adventure, sci-fi and historical movies. These scores are so colorful and broad in scope that they sound like grand orchestral music anyway!

Discovering all this music used to be prohibitively expensive. However, websites such as http://www.pandora.com and http://www.grooveshark.com have made it really easy to research composers of all genres and time periods for free! I use the Pandora and TinyShark apps on my DroidX a lot for this. For example, sometimes I'll think of a nice theme from my childhood, say Lara's Theme from Dr. Zhivago. I'll look up the film on http://www.imdb.com, the Internet Movie Database, to find the composer. Finally, I fire up my Pandora or TinyShark app, look for a playlist related to that composer, and I'm all set: http://grooveshark.com/#/s/Dr+Zivago+Lara+s+Theme/3yWPHK?src=5

The choice of Pandora vs. GrooveShark is up to you! Pandora, the Music Genome Project, makes playlists for you with a mix of composers similar to the one you select. If you want every track you can get your hands on from only one composer, then GrooveShark is the way to go. When you search on GrooveShark, you get up to 200 CD tracks from the composer you want. You can then pick and choose which tracks you want on your playlist! Here are some of my favorite GrooveShark playlists:
Bach, http://grooveshark.com/#/search?q=bach
Beethoven, http://grooveshark.com/#/search?q=Beethoven
Brahms, http://grooveshark.com/#/search?q=Brahms
Mozart, http://grooveshark.com/#/search?q=mozart
Rodrigo y Gabriela, http://grooveshark.com/#/search?q=Rodrigo+y+Gabriela
Vivaldi, http://grooveshark.com/#/search?q=Vivaldi
Guitar, http://grooveshark.com/#/search?q=Guitar+Instrumental
Pepe Romero, http://grooveshark.com/#/search?q=Pepe+Romero

It is my contention, therefore, that the film score writers of today are, in fact, the Classical Composers of the 21st Century! Even Copland, Gershwin and Bernstein composed for film and Broadway, after all! Some of the modern film score composers are so prolific and so creative that they've become part of our cultural psyche! There are so many great composers of this sort to choose from! Who can forget, for example, John Williams' Star Wars and Harry Potter music, or Jerry Goldsmith's Star Trek and Planet of the Apes themes, or Bear McCreary's Battlestar Galactica TV scores, just to name a few? Take a listen:
John Williams, http://grooveshark.com/#/artist/John+Williams/2148/songs
Jerry Goldsmith, http://grooveshark.com/#/jerry_goldsmith/songs
Bear McCreary, http://grooveshark.com/#/bear_mccreary/songs

In fact, I've also rediscovered radio - well, internet radio. Many of my favorite radio shows are now archived on the web. So, these shows become audio blogs of sorts:
Howard Margolin's Destinies, The Voice of Science Fiction
http://www.captphilonline.com/Destinies.html
Chris DeFilippis's deFlipSide (science fiction and fact)
http://deflipside.com/
Suzanne Bona's Sunday Baroque
http://www.sundaybaroque.org/
Fiona Ritchie's The Thistle and Shamrock (Celtic music)
http://www.thistleradio.com/
John Diliberto's Echoes (new age music)
http://www.echoes.org
WQXR Radio (live stream Classical Radio)
http://www.wqxr.org/?utm_source=ad&utm_medium=splash&utm_campaign=q1218c


Don't forget video blogs! I just found this vlog about the 3D and HD filming of the LOTR prequel, The Hobbit:
http://www.youtube.com/user/rkleppe?feature=watch

Generally Speaking,







Saturday, December 17, 2011

Quarter II, Week 6: ScreenCasts, SmartNotes and Code, oh my!

AP Computer Science started and nearly completed GridWorld Parts 01 and 02 this week!

GridWorld Part01


GridWorld Part02



preCalculus started CH9 on Polar Mode.
pre901 Polar Coordinates


pre902 Polar Graphs




AP Calculus BC continued CH6 about volumes and arc length!
bc606 Known Cross Sections


bc607 Arc Length

Calculus Research Lab is working on trig derivatives!

Computer Math is working on positional number systems and different bases.
https://sage.math.clemson.edu:34567/home/pub/263

HTH and I hope you enjoyed this week's notes!


Learning with Technology, 

CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VII

The following is a summary of what we've accomplished so far with the 2011-2012 CIS(theta) team. Our new Shadowfax Cluster is coming along quite well. We have a nice base OS in the 64bit Ubuntu 11.04 Natty Narwhal Desktop running on top of our dual-core AMD Athlons and gigE LAN. The Unity Desktop isn't that different from the Gnome Desktop we've been using these past few years on Fedora and Ubuntu, and Natty is proving very user friendly and easy to maintain! We installed openMPI and used flops.f to scale our cluster up to 50 cores! Remember, we needed openSSH public keys so openMPI can scatter/gather cluster jobs without the overhead of logging into each node every time. We created a user common to all cluster nodes, called "jobs" in memory of Steve Jobs, so the cluster user can simply log into one node and be logged into all nodes at once (you can actually ssh into each node as "jobs" without specifying a userid or passwd)!
InstantCluster Step 1: Infrastructure
Make sure your boxes have enough ventilation and that the room has powerful air conditioning too. These two factors may seem trivial, but they become crucial when running the entire cluster for extended periods of time! Also, you need enough electrical power, preferably with the cabling out of the way, to run all the cores simultaneously. Don't forget to do the same with all your Ethernet cabling. We have CAT6E cables to support our gigE Ethernet cards and switches. We are lucky that this step was already taken care of for us!

InstantCluster Step 2: Hardware
You need up-to-date Ethernet switches and cards, plenty of cores, and plenty of RAM in each Linux box. As stated above, our gigE LAN and switches were already set up for us. Also, we have 64bit dual-core AMD Athlons, and our HP boxes have 2GB of RAM each. We are still waiting for our quad-core AMD Phenom upgrade!

InstantCluster Step 3: Firmware (Meeting III)
We wasted way too much time two years ago (2009-2010 CIS(theta)) trying out all kinds of Linux distros looking for a good 64bit base for our cluster. Last year (2010-2011 CIS(theta)) we spent way too much time testing out different liveCD distros. Last year, we also downgraded from the 64bit Ubuntu 10.04 Desktop edition to the 32bit version on our Linux partitions. 64bit gives us access to more RAM and a larger maxint, but it was proving to be a pain to maintain. Just to name one problem, the JRE and Flash were hard to install and update in Firefox. Two years ago, we tried Fedora, Rocks, Oscar, CentOS, Scientific Linux and, finally, Ubuntu. We've done this several times over the years using everything from Slackware and KNOPPIX to Fedora and Ubuntu! This year, 64bit Ubuntu has proven very easy to use and maintain, so I think we'll stick with it for the cluster!

InstantCluster Step 4: Software Stack I (Meeting IV)
On top of Ubuntu we need to add openSSH with public-key authentication (step 4) and openMPI (step 5). Then we have to scale the cluster (step 6). In steps 7-10, we can discuss several applications to scatter/gather over the cluster, whether graphical (fractals, povray, blender, openGL, animations) or number crunching (a C++ or Python app for Mersenne Primes or Beal's Conjecture). So, what follows is a summary of what we did to get up to public-key authentication. This summary is based on the http://cs.calvin.edu/curriculum/cs/374/MPI/ link listed below. First, we installed openSSH-server from http://packages.ubuntu.com, then:
  1. If you have no .ssh directory in your home directory, ssh to some other machine in the lab; then Ctrl-d to close the connection, creating .ssh and some related files. 
  2. From your home directory, make .ssh secure by entering:
    chmod 700 .ssh
  3. Next, make .ssh your working directory by entering:
    cd .ssh
  4. To list/view the contents of the directory, enter:
    ls -a [we used ls -l]
  5. To generate your public and private keys, enter:
    ssh-keygen -t rsa
    The first prompt is for the name of the file in which your private key will be stored; press Enter to accept the default name (id_rsa). The next two prompts are for the password you want, and since we are trying to avoid entering passwords, just press Enter at both prompts, returning you to the system prompt.
  6. To compare the previous output of ls and see what new files have been created, enter:
    ls -a [we used ls -l]
    You should see id_rsa containing your private key, and id_rsa.pub containing your public key.
  7. To make your public key the only thing needed for you to ssh to a different machine, enter:
    cat id_rsa.pub >> authorized_keys
    [The Linux boxes on our LAN, soon to be cluster, have IPs ranging from 10.5.129.1 to 10.5.129.24. So, we copied each id_rsa.pub file to temp01-temp24 and uploaded these files via ssh to the teacher station. Then we just ran cat tempnn >> authorized_keys for each temp file to generate one master authorized_keys file for all nodes, which we could then just download to each node's .ssh dir.]
  8. [optional] To make it so that only you can read or write the file containing your private key, enter:
    chmod 600 id_rsa
  9. [optional] To make it so that only you can read or write the file containing your authorized keys, enter:
    chmod 600 authorized_keys

InstantCluster Step 5: Software Stack II (Meeting V)
We then installed openMPI (we had a lot fewer dependencies this year with Natty 11.04 64bit) and tested multi-core with flops. Testing the cluster as a whole will have to wait until the next meeting when we scale the cluster! We followed the openMPI install instructions for Ubuntu from http://www.cs.ucsb.edu/~hnielsen/cs140/openmpi-install.html
These instructions say to use sudo and run apt-get install openmpi-bin openmpi-doc libopenmpi-dev. However, the way our firewall is set up at school, I can never update my apt-get sources properly. So, I used http://packages.ubuntu.com and installed openmpi-bin, gfortran and libopenmpi-dev. That's it!
Then we used flops.f, a FORTRAN benchmark, to test multi-core. FORTRAN, really? I haven't used FORTRAN77 since 1979! ...believe it or don't!
We compiled flops.f on the Master Node (any node can be a master):
mpif77 -o flops flops.f
and tested openmpi and got just under 800 MFLOPS using 2 cores (one PC):
mpirun -np 2 flops
Next, we generated a "machines" file to tell mpirun where all the nodes (Master and Workers) are (for example, 2 PCs or nodes with 2 cores each):
mpirun -np 4 --hostfile machines flops
Every node has the same "machines" text file in /home/jobs listing all the IPs, one per line. Every node has the same "flops" executable file (or whatever your executable will be) in /home/jobs. Every node has the same "authorized_keys" text file with all 25 keys in /home/jobs/.ssh 
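
For example, a minimal "machines" hostfile for two of the boxes on our 10.5.129.x LAN would just be two lines like the following (these particular IPs are only placeholders from that range; openMPI also lets you append slots=2 to a line to say that box has 2 cores):

10.5.129.21
10.5.129.22
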
Note: last year we got about 900 MFLOPS per node. This year we still have 64bit dual-core AMD Athlon processors. However, these are new PCs, so these Athlons have slightly different specs. Also, last year we were running Ubuntu 10.04 32bit ... and ... these new PCs were supposed to be quad-cores! We are still awaiting shipment.
InstantCluster Step 6: Scaling the cluster
UPDATE: 2011.1126 (Meeting VI)
Including myself, we only had 3 members attending this week. So, we added 3 new nodes. We had nodes 21-24 working well last time. Now we have nodes 19-25 for a total of 7 nodes, 14 cores and over 5 GFLOPS! This is how we streamlined the process: 
(1) adduser jobs and login as jobs 
(2) go to http://packages.ubuntu.com and install openssh-server from the natty repository 
(3) create /home/jobs/.ssh dir and cd there 
(4) run ssh-keygen -t rsa 
(5) add each new public key to /home/jobs/.ssh/authorized_keys on all nodes 
(6) add each new IP to /home/jobs/machines on all nodes
(7) go to http://packages.ubuntu.com and install openmpi-bin, gfortran and libopenmpi-dev from the natty repository
(8) download flops.f to /home/jobs from our ftp site, then compile and run: 
mpif77 -o flops flops.f and 
mpirun -np 2 flops or 
mpirun -np 4 --hostfile machines flops 
NB: since we are using the same hardware, firmware and compiler everywhere, we don't need to recompile flops.f on every box, just copy flops from another node!
(9) The secret is setting up each node identically:
/home/jobs/flops
/home/jobs/machines
/home/jobs/.ssh/authorized_keys


UPDATE: 2011.1214 (Meeting VII)
We had 5 members slaving away today. Nodes 19-25 were running at about 5 GFLOPS last meeting. Today we added nodes 10, 11, 12, 17 and 18. However, we ran into some errors when testing more than the 14 cores we had last time. We should now have 24 cores and nearly 10 GFLOPS but that will have to wait until next time when we debug everything again....
===================================================
What we are researching IV 
(maybe we can use Python on our MPI cluster?):
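One option here is mpi4py, which provides Python bindings for MPI on top of the openMPI stack we already have. Here's a minimal hello-world sketch, purely illustrative: the file name hello_mpi.py is made up, we haven't tried this on the cluster yet, and it assumes mpi4py is installed on every node.

# hello_mpi.py -- minimal mpi4py sketch
from mpi4py import MPI

comm = MPI.COMM_WORLD                 # the default communicator: all processes
rank = comm.Get_rank()                # this process's id, 0..size-1
size = comm.Get_size()                # total number of processes launched
name = MPI.Get_processor_name()       # hostname of the node running this process

print("Hello from rank %d of %d on %s" % (rank, size, name))

We would launch it just like flops, e.g. mpirun -np 4 --hostfile machines python hello_mpi.py
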
What we are researching III 
(look at some clustering environments we've used in the past):
We used PVM and PVMPOV in the 1990s.
Let's try out openMPI and Blender now!
http://www.blender.org
===================================================
What we are researching II 
(look what other people are doing with MPI):
MPI intro, nice!
Sample MPI code
http://www.cc.gatech.edu/projects/ihpcl/mpi.html
===================================================
What we are researching I 
(look what this school did in the 80s and 90s): 
Thomas Jefferson High courses
Thomas Jefferson High paper
Thomas Jefferson High ftp
Thomas Jefferson High teacher
http://www.tjhsst.edu/~rlatimer/
===================================================
Today's Topic:
CIS(theta) 2011-2012 - Scaling the Cluster! - Meeting VII
Today's Attendance:
CIS(theta) 2011-2012: GeorgeA, GrahamS, KennyK, LucasE; CIS(theta) 2010-2011: HerbertK
Today's Reading:
Chapter 4: Building Parallel Programs (BPP) using clusters and Parallel Java
===================================================
Well, that's all folks, enjoy!

Saturday, December 10, 2011

Quarter II, Week 5: ScreenCasts, SmartNotes and Code, oh my!

AP Computer Science spent all week working on Lab 6 about boolean operators and predicate methods! Calculus Research Lab just finished Lab 4 on derivatives involving ln(x) and e^x. Computer Math finished Lab 4 on iteration.
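
For reference, the two derivative rules that CRL Lab 4 revolves around, in LaTeX, along with their chain-rule versions:

\frac{d}{dx}\,\ln x = \frac{1}{x}, \qquad \frac{d}{dx}\,e^x = e^x
\frac{d}{dx}\,\ln u = \frac{u'}{u}, \qquad \frac{d}{dx}\,e^u = u'\,e^u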


preCalculus finished CH6 and CH7 reviewing trigonometry and then we had a test.
pre701 Trig Identities


pre708 Trig Equations




AP Calculus BC continued CH6 about volumes of revolution!
bc602 Disks in dx


bc603 Disks in dy


bc604 Washers in dx and dy

HTH and I hope you enjoyed this week's notes!


Learning with Technology,