Monday, April 15, 2019

CIS(theta), 2018-2019 March & April Meetings: Research!


CH01: SEPTEMBER READING
CH02: OCTOBER READING
CH03: NOVEMBER READING
CH04: DECEMBER READING
CH05: JANUARY READING
CH06: FEBRUARY READING
CH07: MARCH READING
CH08: APRIL READING

MARCH & APRIL UPDATE: (meetings 6 & 7)
We are stepping back a bit to do some research. Let's see how others are attacking similar projects:

Here's a HowTo!
Here's another HowTo!
And one more HowTo for you!
Here's a cool desktop solution!

FEBRUARY UPDATE: (meeting 5)
We tried to install openMPI but spent forever just updating our Linux OS first:
sudo apt-get update
sudo apt-get upgrade
We never actually got to the openMPI install itself (on Raspbian/Ubuntu that should just be sudo apt-get install openmpi-bin libopenmpi-dev).
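
Once openMPI goes in (plus the python3-mpi4py package for the Python bindings), a tiny MPI4PY test is enough to confirm the stack is alive. This is a minimal sketch assuming those packages are installed and mpirun is on the path:

# check_mpi.py -- confirm the MPI4PY bindings load and that mpirun can spawn ranks
# run with: mpirun -np 4 python3 check_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
print("MPI standard version:", MPI.Get_version(),
      "- this is rank", comm.Get_rank(), "of", comm.Get_size())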

JANUARY UPDATE: (meeting 4)
We unboxed all the stuff we got from BOCES and DonorsChoose and figured out how to boot up one RPi per student. We are playing around with these single-board PCs as replacement PCs at home until our next meeting.

USB Power Supplies

NOOBS SD Cards

HDMI to VGA Converters

DECEMBER UPDATE: (meeting 3)
We downloaded the latest pelicanHPC ISO and burned a DVD for each of us. Then we booted our PCs from the DVD drive and ran openMPI from RAM. We used flops.f to test our "clusters." flops.f is a FORTRAN program, launched with mpirun, that stresses a cluster by calculating pi from Riemann sums of 1/(1+x^2) from a=0 to b=1 (that integral is arctan(1) = pi/4, so the result just gets multiplied by 4). An MPI4PY version of the same idea appears after the commands below.

BTW, I call our PCs "clusters" since they have quadcore processors and openMPI runs on multicore just as well as on a grid. We can't set up a grid-based (multinode) Linux cluster anymore since we are no longer allowed to run our own DHCP server. We got about 2 GigaFLOP/s per core, so 8 GigaFLOP/s per PC. If we could run our own DHCP server, we'd have 100 cores working in parallel for about 200 GigaFLOP/s!

Compile:
mpif77 -o flops flops.f
Execute multicore:
mpirun -np 4 flops
Execute multinode:
mpirun -np 100 --hostfile machines flops
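
For the curious, here is roughly the same Riemann-sum pi calculation redone with MPI4PY instead of FORTRAN. It's a sketch of the idea, not the actual flops.f source, and the number of rectangles n is our own choice:

# pi_mpi.py -- approximate pi with a Riemann sum of 4/(1+x^2) on [0,1],
# split across MPI ranks; run with: mpirun -np 4 python3 pi_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

n = 10000000                   # total number of rectangles (our choice)
h = 1.0 / n                    # width of each rectangle
local_sum = 0.0

# each rank handles every size-th rectangle: rank, rank+size, rank+2*size, ...
for i in range(rank, n, size):
    x = (i + 0.5) * h          # midpoint of rectangle i
    local_sum += 4.0 / (1.0 + x * x)

# combine the partial sums on rank 0 and print the estimate
pi = comm.reduce(local_sum * h, op=MPI.SUM, root=0)
if rank == 0:
    print("pi is approximately", pi)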

Enter pelicanHPC as our first solution! We demoed an old DVD we had to show how to fire up the cluster. Our experiment demonstrated that we could not boot the whole room anymore, as we used to, since PXE Boot or Netboot requires that we set up our own DHCP server. When you boot the DVD on one PC, it sets up a DHCP server so all the other PCs can PXE Boot the same OS. However, our new WimpDoze network uses its own DHCP server. These two servers conflict, so we cannot reliably connect all the Worker bees to the Queen bee. We can't set up grid computing or a grid cluster, but we can still set up SMP. In other words, boot up a single PC with the pelicanHPC DVD and run multicore applications on all the cores of that one PC.

So, here's your homework. Download the latest pelicanHPC ISO file and burn your own bootable DVD. Don't worry if your first burn doesn't boot. You can use that DVD as a "Linux Coaster" for your favorite beverage the next time you play on SteamOS. If you can make this work at home, try to run Hello_World_MPI.py from John Burke's sample MPI4PY (MPI for Python) code.
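
In case you can't track down Burke's file, here's our own minimal stand-in for Hello_World_MPI.py (a sketch only, not his code): every rank announces itself and rank 0 prints a summary.

# hello_world_mpi.py -- minimal MPI4PY hello world (our stand-in, not John Burke's file)
# run with: mpirun -np 4 python3 hello_world_mpi.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()
name = MPI.Get_processor_name()

print("Hello, World! I am process %d of %d on %s." % (rank, size, name))

comm.Barrier()                 # wait for every rank before the summary line
if rank == 0:
    print("All %d processes checked in." % size)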


NOVEMBER UPDATE: (meeting 2)
See below for our Raspberry Pi project. We have been waiting for funding for some extra hardware from DonorsChoose and we just got it! Yeah! In the meantime we're playing with PelicanHPC and BCCD DVDs to see how openMPI works so we can set it up the same way on our new Linux cluster.

OCTOBER UPDATE: (meeting 1)
We've decided to make a Linux Cluster out of Raspberry Pi single-board computers! Our school district has been kind enough to purchase 25 RPis plus some USB and Ethernet cabling, so now we just need some power supplies, routers and SD cards. So here comes DonorsChoose to the rescue! We started a campaign to raise the money to purchase all the remaining equipment from Amazon!



What we want to do is replace our Linux Lab of 25 quadcore PCs, where we used to do this project, with 25 networked RPi 3s. The Raspbian OS is a perfect match for our project! Raspbian is Linux-based just like our old lab, which ran Ubuntu Linux. Also, Python is built in, so we just need to add openSSH and openMPI to code with MPI4PY once again! With the NOOBS SD card, we start with Linux and Python preinstalled!


Once we get all the hardware networked and the firmware installed, we can install an openMPI software stack. Then we can generate Fractals, MandelZooms, POV-Ray renders and Blender Animations!
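
To give a flavor of how one of those jobs splits across a cluster, here is a rough MPI4PY sketch of a MandelZoom-style render: each rank computes a share of the image rows and rank 0 gathers them. The image size, iteration cap and coordinate window are made-up values for illustration, and it assumes numpy is installed alongside mpi4py.

# mandel_mpi.py -- sketch of a parallel Mandelbrot escape-time render
# run with: mpirun -np 4 python3 mandel_mpi.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

W, H, MAX_ITER = 800, 800, 255          # illustrative image size and iteration cap

def escape_time(c):
    """Number of iterations before z -> z*z + c escapes |z| > 2."""
    z = 0j
    for n in range(MAX_ITER):
        z = z * z + c
        if abs(z) > 2.0:
            return n
    return MAX_ITER

# interleave rows across ranks: rank, rank+size, rank+2*size, ...
my_rows = list(range(rank, H, size))
band = []
for j in my_rows:
    y = -2.0 + 4.0 * j / H              # imaginary part for this row
    band.append([escape_time(complex(-2.5 + 4.0 * i / W, y)) for i in range(W)])

# rank 0 collects (row indices, rows) from everyone and assembles the image
pieces = comm.gather((my_rows, band), root=0)
if rank == 0:
    image = np.zeros((H, W), dtype=np.int32)
    for rows, data in pieces:
        image[rows, :] = data
    print("rendered a", image.shape, "Mandelbrot escape-time array")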


SEPTEMBER UPDATE: (meeting 0)
Please see the September organizational blog post.

NEW SMARTBOARD SETUP
NOTE: MIC FOR SCREENCASTING!
NOTE: TI nSPIRE CX CAS EMULATOR!!
NEW DECOR IN THE REAR OF ROOM 429
NOTE: SLIDERULE!
NOTE: OLD LINUX SERVERS!!
NEW TAPESTRIES IN ROOM 429
NEW VIEW FROM LEFT REAR SIDE
NOTE: OLD UBUNTU DESKTOP!
NEW VIEW AS YOU WALK IN
NOTE: SLIDERULE!

So, what's all this good for aside from making Fractal Zoom or Shrek Movies?
SETI Search
Econometrics
Bioinformatics
Protein Folding
Beal Conjecture
Scientific Computing
Computational Physics
Mersenne Prime Search
Computational Chemistry
Computational Astronomy
Computer Aided Design (CAD)
Computer Algebra Systems (CAS)

These are but a few examples of using Computer Science to solve problems in Mathematics and the Sciences (STEAM). In fact, many of these applications fall under the heading of Cluster Programming, Supercomputing, Scientific Computing or Computing Science. These problems typically take too long to process on a single PC, so we need a lot more horsepower. Next time, maybe we'll just use Titan!

====================
Membership 
(alphabetic by first name):

CIS(theta) 2018-2019:
GaiusO(11), GiovanniA(12), JulianP(12), TosinA(12)

CIS(theta) 2017-2018:
BrandonB(12), FabbyF(12), JoehanA(12), RusselK(12)

CIS(theta) 2016-2017: 
DanielD(12), JevanyI(12), JuliaL(12), MichaelS(12), YaminiN(12)

CIS(theta) 2015-2016: 
BenR(11), BrandonL(12), DavidZ(12), GabeT(12), HarrisonD(11), HunterS(12), JacksonC(11), SafirT(12), TimL(12)

CIS(theta) 2014-2015: 
BryceB(12), CheyenneC(12), CliffordD(12), DanielP(12), DavidZ(12), GabeT(11), KeyhanV(11), NoelS(12), SafirT(11)

CIS(theta) 2013-2014: 
BryanS(12), CheyenneC(11), DanielG(12), HarineeN(12), RichardH(12), RyanW(12), TatianaR(12), TylerK(12)

CIS(theta) 2012-2013: 
Kyle Seipp(12)

CIS(theta) 2011-2012: 
Graham Smith(12), George Abreu(12), Kenny Krug(12), Lucas Eager-Leavitt(12)

CIS(theta) 2010-2011: 
David Gonzalez(12), Herbert Kwok(12), Jay Wong(12), Josh Granoff(12), Ryan Hothan(12)

CIS(theta) 2009-2010: 
Arthur Dysart(12), Devin Bramble(12), Jeremy Agostino(12), Steve Beller(12)

CIS(theta) 2008-2009: 
Marc Aldorasi(12), Mitchel Wong(12)

CIS(theta) 2007-2008: 
Chris Rai(12), Frank Kotarski(12), Nathaniel Roman(12)

CIS(theta) 1988-2007: 
A. Jorge Garcia, Gabriel Garcia, James McLurkin, Joe Bernstein, ... too many to mention here!
====================

Well, that's all folks!
Happy Linux Clustering, 
AJG
A. Jorge Garcia

 

Applied Math, Physics & CompSci
PasteBin SlideShare 
MATH 4H, AP CALC: GC or SAGECELL
CSH: SAGE Server
CSH: Interactive Python
APCSA: c9.io


Beautiful Mind Soundscape:
