The Shark Machines

You will be doing your Intro to Computer Systems (ICS) lab assignments on a cluster of rack-mounted Intel Nehalem-based servers called the shark machines. This cluster was donated by the Intel Labs Higher Education group for the ICS course. The original 15-213 cluster machines were known as the fish machines. Our new cluster systems are much bigger and faster, so it seems fitting to call them the shark machines.

Shark machines available to students

There are 10 machines available to students. They run the same RHEL 6.1 operating system as the Andrew cluster machines. ICS students and teaching staff can log in to them using their Andrew credentials. For example, if your Andrew ID is bovik, then you can log in to a random shark machine:

unix> ssh bovik@shark.ics.cs.cmu.edu

or a specific machine:

unix> ssh bovik@angelshark.ics.cs.cmu.edu

The 10 student machines are:

angelshark.ics.cs.cmu.edu
bambooshark.ics.cs.cmu.edu
baskingshark.ics.cs.cmu.edu
blueshark.ics.cs.cmu.edu
carpetshark.ics.cs.cmu.edu
catshark.ics.cs.cmu.edu
hammerheadshark.ics.cs.cmu.edu
houndshark.ics.cs.cmu.edu
lemonshark.ics.cs.cmu.edu
makoshark.ics.cs.cmu.edu
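
If you log in to a specific machine, it is polite to pick a lightly loaded one. Here is a minimal sketch for checking the load average on each student machine, assuming your Andrew ID is bovik and that you can authenticate to each host (you will be prompted per host unless you have SSH keys or Kerberos tickets set up):

unix> for h in angelshark bambooshark baskingshark blueshark carpetshark \
               catshark hammerheadshark houndshark lemonshark makoshark; do
          echo -n "$h: "
          ssh -o ConnectTimeout=5 bovik@$h.ics.cs.cmu.edu uptime
      done

The 1-, 5-, and 15-minute load averages printed by uptime give a rough sense of how busy each machine is.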

Shark machines available to teaching staff only

The Autolab server is greatwhite.ics.cs.cmu.edu. Teaching staff can log in to it using their Andrew credentials. There are also 10 servers that Autolab uses to autograde student handins. Each autograding job is performed on a RHEL 6.1 virtual machine managed by the Apache Tashi system, which was developed by Intel Labs Pittsburgh and the Carnegie Mellon Parallel Data Lab.

The Autolab server and the 10 autograding servers are:

greatwhite.ics.cs.cmu.edu
megamouth.ics.cs.cmu.edu
milkshark.ics.cs.cmu.edu
nurseshark.ics.cs.cmu.edu
pygmyshark.ics.cs.cmu.edu
reefshark.ics.cs.cmu.edu
rivershark.ics.cs.cmu.edu
roughshark.ics.cs.cmu.edu
sandshark.ics.cs.cmu.edu
sawshark.ics.cs.cmu.edu
tigershark.ics.cs.cmu.edu

Frequently Asked Questions

Q: How do I get an account?
A: Accounts will be created for you automatically. If you can't log in, please send email to your instructor.

Technical specs

  • Head node (greatwhite)
    • Dell R710 with 2 x Intel Xeon E5620 CPUs, 2.67 GHz peak, 32 GB DRAM, 8 Nehalem cores, 8 TB SATA RAID.
    • 64-bit Red Hat Enterprise Linux (Linux kernel 2.6.32)
  • 20 compute nodes (autograding servers and student machines):
    • Dell R410 with 2 x Intel Xeon E5520 CPUs, 2.27 GHz, 24 GB DRAM, 8 Nehalem cores, 160 GB SATA disk
    • Student machines: 64-bit Red Hat Enterprise Linux 6.1 (Linux kernel 2.6.32)
    • Autograding servers: Ubuntu Linux hosts running RHEL 6.1 virtual machines managed by Tashi
  • Nehalem processor cores:
    • 2-way hyperthreading
    • L1 d-cache: 32 KB, 8-way associative (per core)
    • L1 i-cache: 32 KB, 8-way associative (per core)
    • L2 unified cache: 256 KB, 8-way associative (per core)
    • L3 cache: 8 MB, 16-way associative (shared by all cores)
    • 64-byte block size for L1, L2, and L3
    • L1 d-TLB: 64 entries, 4-way associative (per core)
    • L1 i-TLB: 128 entries, 4-way associative (per core)
    • L2 unified TLB: 512 entries, 4-way associative (per core)
    • DDR3 on-chip memory controller
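
Two quick ways to connect these numbers to what you will see in the cache lab. First, the number of sets in each cache follows from size = sets x ways x block size; for example, the 32 KB, 8-way L1 d-cache with 64-byte blocks has 32768 / (8 x 64) = 64 sets. Second, you can verify most of these parameters yourself on any shark machine using standard Linux interfaces (a minimal sketch; these commands exist on RHEL 6.1, though the exact output format may vary):

unix> getconf LEVEL1_DCACHE_SIZE                 # L1 d-cache size in bytes (expect 32768)
unix> getconf LEVEL1_DCACHE_LINESIZE             # block size in bytes (expect 64)
unix> cat /sys/devices/system/cpu/cpu0/cache/index0/ways_of_associativity
unix> grep "model name" /proc/cpuinfo | head -1  # CPU model and clock speed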