Wednesday, November 13, 2019


CIS(theta) 2019-2020
Meeting #5 (11/13/19)
Headless Horseman To The Rescue!
During this meeting we set up a headless server so we can control a Raspberry Pi remotely over WiFi (cellphone hotspot or home router). This setup frees up some space on our workbench since we don't need a monitor, keyboard or mouse: we are going wireless! Now I can use the monitor, keyboard and mouse of my Samsung Chromebook Plus (with Chrome OS and Google Play) as a dumb terminal! I was going to add a Bluetooth keyboard and mouse, but this solution is much better! What was old is now new again, I suppose. Anyone remember the DEC VT100 dumb terminals?

See the screen shots below for our headless mode setup. BTW, all screen shots were taken with the scrot app on the Chromebook while the RPI was running in headless mode! We used the following Android apps to connect over WiFi: Network Scanner, Mobile SSH and VNC Viewer. You only need a regular monitor, keyboard and mouse during the initial setup of your RPI, since the SSH and VNC interfaces are disabled by default on each RPI.

Then we installed openMPI on our RPI (from the Raspbian/Debian Linux repositories). We ran our benchmark program flops.f, written in FORTRAN77, to see how fast a single node might be. Our best result was nearly 750 MegaFLOP/s (0.75 GigaFLOP/s) running on all 4 cores of a single node.

BTW, we initially had difficulty installing openMPI (see the step by step screenshots below). We had to run the following in the terminal:

sudo apt-get update 
sudo apt-get upgrade 

again before installing (use sudo):

apt-get install openmpi-bin
apt-get install libopenmpi-dev
apt-get install gfortran
apt-get install python-mpi4py

We installed gfortran so we could compile flops.f and benchmark our quadcores this week.

Finally, we downloaded, compiled and executed flops.f from the Pelican HPC DVD project (see code below):

mpif77 -o flops flops.f
mpirun -np 4 flops
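For reference, flops.f estimates PI with a Riemann sum of 4/(1+x^2) on [0,1] to stress each core. Here's a minimal serial Python sketch of that same calculation (the function name and step count are our own, not taken from flops.f):

```python
# Estimate pi by a midpoint Riemann sum of f(x) = 4/(1+x^2) on [0,1],
# the same integral flops.f hammers on to benchmark each core.
def riemann_pi(n=1_000_000):
    h = 1.0 / n                      # width of each subinterval
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h            # midpoint of subinterval i
        total += 4.0 / (1.0 + x * x)
    return total * h

print(riemann_pi())  # approaches 3.14159... as n grows
```

The inner loop is pure floating-point work, which is exactly why this integral makes a good FLOP/s benchmark.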

We're going to need mpi4py to code fractal graphics next week using openMPI and python. We can scale the cluster using openSSH to add more RPIs (aka nodes) next month.
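Since next week is about fractals, here's a minimal serial sketch of the Mandelbrot membership test we'll eventually parallelize with mpi4py (the function name and iteration cap are placeholders of our own):

```python
# Mandelbrot membership test: iterate z -> z*z + c from z = 0 and
# count how many steps it takes |z| to exceed 2 (the escape radius).
def mandel_iters(c, max_iter=100):
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2.0:
            return i                 # escaped: c is outside the set
    return max_iter                  # never escaped: treat c as inside

print(mandel_iters(0 + 0j))          # 0 is in the set -> returns max_iter
print(mandel_iters(2 + 2j))          # far outside -> escapes immediately
```

The escape count is what gets mapped to a pixel color, and since every pixel is independent, the image splits naturally across MPI ranks.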

HEAD01: Enable SSH and VNC

HEAD02: Choose Your Keyboard

HEAD03: Set Your Locale

HEAD04: Set Your Timezone

HEAD05: Set Your WiFi Country Code

STEP01: apt-get install epic fail

STEP02: re-execute apt-get update

STEP03: re-execute apt-get upgrade

STEP04: install openmpi-bin

STEP05: install libopenmpi-dev

STEP06: install gfortran

STEP07: install python-mpi4py

STEP08: download flops.f

STEP09: compile flops

STEP10: execute flops and get 734 MFLOP/s

CIS(theta) 2019-2020
Meeting #4 (10/30/19)
HDMI & ETH0!
We finally got some HDMI and Ethernet cables. Now we have a USB keyboard and USB mouse installed, as well as a monitor via HDMI. We thought we'd have to set up WiFi, but we just used the Ethernet drop by the SmartBoard PC in the rear of our classroom. We switched on the power supply and finally got to boot up the Raspbian Desktop, a derivative of Debian Linux, much like the Ubuntu Linux we ran for many years before our lab was switched to WimpDoze.

We would have installed openMPI next, but we spent forever just updating our Linux OS:

sudo apt-get update
sudo apt-get upgrade

We have now completed STEP0 (September) and STEP1 (October). Next, STEP2 (November) is all about openMPI! See our project summary below.

0a) Administrativa
0b) Pelican HPC DVD

1a) update/upgrade Linux on one RPI
1b) use a RPI as replacement desktop  
2a) install/test openMPI on one RPI  
2b) run benchmark program flops.f
3a) install/test openMPI/openSSH on 2 RPIs
3b) run benchmark program flops.f
4a) scale the cluster to 4 RPIs
4b) write our own mpi4py programs

5a) programming hires mandelbrot fractals
5b) programming hires julia fractals
6) programming hires ray tracings 
7) programming a fractal zoom movie
8) programming animated movie sequences

Pixel/Raspbian Desktop:

NOOBS/Raspbian Desktop:

CIS(theta) 2019-2020
Meeting #3 (10/16/19)
RPI UnBoxing!
We unboxed all the stuff we got from BOCES and DonorsChoose and figured out how to boot up one RPI per student. We are playing around with these micro-board PCs as replacement PCs at home until our next meeting. We tried to set up one RPI in the back of the room. We attached a USB mouse and keyboard. We added a power supply. But when it came time to attach a monitor, we found that IT had upgraded all our monitors such that none of them had VGA ports. However, they do have HDMI ports, which we didn't have last year. That's good, as the RPI has an HDMI port, so all we need for the next meeting is some HDMI cables!

Raspberry Pi 3B:

USB Power Supplies:


HDMI to VGA Converters:

CIS(theta) 2019-2020
Meeting #2 (09/25/19)
PelicanHPC!
We downloaded the latest pelicanHPC ISO and burned a DVD for each of us. Then we booted our PCs from the DVD drive and ran openMPI from RAM. We used flops.f to test our "clusters." flops.f is a FORTRAN program that uses mpirun to stress a cluster by calculating PI using Riemann Sums for 4/(1+x^2) from a=0 to b=1 (the integral of 1/(1+x^2) alone gives PI/4).

BTW, I call our PCs "clusters" since they have hexacore processors and openMPI can run on multicore just as well as on a grid. We can't set up a grid-based (multinode) Linux cluster as we are not allowed to set up our own DHCP server anymore. We got about 500 MegaFLOP/s per core, so about 3 GigaFLOP/s per hexacore PC. If we could set up our own DHCP server, we'd get 150 cores running in parallel for about 75 GigaFLOP/s!

Compile:
mpif77 -o flops flops.f
Execute multicore:
mpirun -np 4 flops
Execute multinode:
mpirun -np 100 --hostfile machines flops
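The idea behind `mpirun -np N` is that each of the N copies of the program works on its own slice of the problem. Here's a rough serial Python sketch of that partitioning (the function name is our own; real MPI code would get the rank and size from the MPI runtime and combine partials with a reduce):

```python
# Simulate how N MPI ranks split a Riemann sum: rank r handles
# subintervals r, r+N, r+2N, ... and the partial sums get combined
# at the end (which is what an MPI reduce does for flops.f).
def partial_sum(rank, size, n=100_000):
    h = 1.0 / n
    total = 0.0
    for i in range(rank, n, size):   # this rank's strided share of the work
        x = (i + 0.5) * h
        total += 4.0 / (1.0 + x * x)
    return total * h

size = 4                             # pretend we ran: mpirun -np 4 flops
pi_est = sum(partial_sum(r, size) for r in range(size))
print(pi_est)                        # the 4 partial sums combine to ~pi
```

Since each rank's loop is independent, the work (ideally) finishes about N times faster on N cores or nodes.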

Enter pelicanHPC as our first solution! We demoed an old DVD we had to show how to fire up the cluster. Our experiment demonstrated that we could not boot the whole room anymore, as we used to, since PXE Boot or Netboot requires that we set up our own DHCP server. When you boot the DVD on one PC, it sets up a DHCP server so all the other PCs can PXE Boot the same OS over Ethernet, running in RAM. However, our new WimpDoze network uses its own DHCP server. These two servers conflict, so we cannot reliably connect all the worker bees to the queen bee. We can't set up grid computing or a grid cluster, but we can still set up SMP. In other words, boot up a single PC with the pelicanHPC DVD and run multicore applications on all the cores of that one PC.

So, here's your homework. Download the latest pelicanHPC ISO file and burn your own bootable DVD. Don't worry if your first burn doesn't boot. You can use that DVD as a "Linux Coaster" for your favorite beverage the next time you play on SteamOS. If you can make this work at home, try to run John Burke's sample MPI4PY (MPI for Python) code.

See below for our Raspberry PI project. We have been waiting for funding for some extra hardware from DonorsChoose and we just got it! Yeah! In the meantime, we're playing with PelicanHPC and BCCD DVDs to see how openMPI works so we can set it up the same way on our new Linux Cluster.

We've decided to make a Linux Cluster out of Raspberry Pi single board computers! Our school district has been kind enough to purchase 25 RPIs plus some USB and Ethernet cabling, so now we just need some power supplies, routers and SD cards. So here comes DonorsChoose to the rescue! We started a campaign to raise the money to purchase all the remaining equipment from Amazon!

What we want to do is replace our Linux Lab of 25 quadcore PCs, where we used to do this project, with 25 networked RPI 3Bs. The Raspbian OS is a perfect match for our project! Raspbian is Linux based, just like our old lab which ran Ubuntu Linux. Also, python is built in, so we can just add openSSH and openMPI to code with MPI4PY once again! With the NOOBS SD card, we start with Linux and python preinstalled!

Once we get all the hardware networked and the firmware installed, we can install an openMPI software stack. Then we can generate Fractals, MandelZooms, POV-Rays and Blender Animations!

CIS(theta) 2019-2020
Meeting #1 (09/11/19)
Administrativa!
(0) What Is CIS(Theta)? 
CIS stands for our Computing Independent Study course. "Theta" is just for fun (aka preCalculus class). Usually, I refer to each class by the school year, for example CIS(2019). I've been running some sort of independent study class every year since 1995. 

In recent years our independent study class has been about the care and feeding of Linux Clusters: How to Build A Cluster, How To Program A Cluster and What Can We Do With A Cluster? 

BTW, Shadowfax is the name of the cluster we build! FYI, we offer 4 computing courses: 

CSH: Computer Science Honors with an introduction to coding in Python using IDLE, VIDLE and Trinket
CSAP: AP Computer Science A using CS50 and OpenProcessing
CIS: Computing Independent Study using OpenMPI
CSL: Computing Science Lab which is a co-requisite for Calculus students using Computer Algebra Systems such as SAGE.

(1) Wreath of the Unknown Server: We visited our LAST ever Linux ssh/sftp server, Rivendell, which is still in the Book Room, though dormant. Yes, I'm afraid it's true, all my Linux Boxes have been replaced with WimpDoze!

(2a) Planning: So we have to find an alternative to installing MPI on native Linux! We're thinking Raspberry Pis?
(2b) Research: How do we run MPI under WimpDoze without installing anything? How about MPI4PY? What about Raspbian?
(2c) Reading: In the meantime, here's our first reading assignment.

(3) Display Case Unveiled: We took a ton of fractal prints and ray tracings down from Room 429 to the 2 display cases on the 1st floor near the art wing. We decorated both display cases as best we could and left before anyone saw us. Must have been gremlins. BTW, we also have an HDTV with Chromecast to showcase student work here.

(4) NCSHS: We're going to continue our chapter of the National Computer Science Honor Society. We talked about the requirements for membership and how we started a chapter. Each chapter is called "Zeta Omicron something." We're "Zeta Omicron NY Hopper." This is a pretty new honor society. The first few chapters were called Zeta Omicron Alpha and Omicron Zeta Beta. We have the first NYS chapter! BTW, NCSHS is not to be confused with my Calculus class and the CML. I am also the advisor for the Continental Mathematics League Calculus Division, which is like Mathletes with in-house competitions about Calculus. CML is an international competition where we usually place in the top 3 or 4 schools in the TriState. We've been competing for several years!


CIS(theta) Membership Hall Of Fame!
(alphabetic by first name):

CIS(theta) 2019-2020:
AaronH(12), AidanSB(12), JordanH(12), PeytonM(12)

CIS(theta) 2018-2019:
GaiusO(11), GiovanniA(12), JulianP(12), TosinA(12)

CIS(theta) 2017-2018:
BrandonB(12), FabbyF(12), JoehanA(12), RusselK(12)

CIS(theta) 2016-2017: 
DanielD(12), JevanyI(12), JuliaL(12), MichaelS(12), YaminiN(12)

CIS(theta) 2015-2016: 
BenR(11), BrandonL(12), DavidZ(12), GabeT(12), HarrisonD(11), HunterS(12), JacksonC(11), SafirT(12), TimL(12)

CIS(theta) 2014-2015: 
BryceB(12), CheyenneC(12), CliffordD(12), DanielP(12), DavidZ(12), GabeT(11), KeyhanV(11), NoelS(12), SafirT(11)

CIS(theta) 2013-2014: 
BryanS(12), CheyenneC(11), DanielG(12), HarineeN(12), RichardH(12), RyanW(12), TatianaR(12), TylerK(12)

CIS(theta) 2012-2013: 
Kyle Seipp(12)

CIS(theta) 2011-2012: 
Graham Smith(12), George Abreu(12), Kenny Krug(12), Lucas Eager-Leavitt(12)

CIS(theta) 2010-2011: 
David Gonzalez(12), Herbert Kwok(12), Jay Wong(12), Josh Granoff(12), Ryan Hothan(12)

CIS(theta) 2009-2010: 
Arthur Dysart(12), Devin Bramble(12), Jeremy Agostino(12), Steve Beller(12)

CIS(theta) 2008-2009: 
Marc Aldorasi(12), Mitchel Wong(12)

CIS(theta) 2007-2008: 
Chris Rai(12), Frank Kotarski(12), Nathaniel Roman(12)

CIS(theta) 1988-2007: 
A. Jorge Garcia, Gabriel Garcia, James McLurkin, Joe Bernstein, ... too many to mention here!


Well, that's all folks!
Happy Linux Clustering,
A. Jorge Garcia


Applied Math, Physics & CompSci