- Welcome
- About the class
  . This is a grad class
  . Expected background
  . Material we're going to cover -> seminal, cutting edge
  . Semester-long research project
  . Grading (grad class; your job is to give me an excuse to give you an A...)
  . Warning: This class is going to be fun. For ME. I'm teaching this material in part because I want to learn some of it. I expect this semester to be all of us learning together, reading a lot of cool papers, and accidentally reading some bad papers (for which I apologize, but bad papers happen and it's good to see some of them sometimes). This class WILL NOT BE polished! The benefit you get in exchange is that we're reading hot-off-the-presses research and doing the same ourselves. But it's important to have expectations calibrated appropriately.
  . Your responsibilities:
    . Scribing one lecture
    . Writing some paper summaries (we'll start that next week) and/or pre-class insta-quizzes so I know what everyone's getting from the material
    . Leading one paper discussion
    . Coming to class having read the papers, prepared to discuss them, ask questions about them, praise them, criticize them, etc.
    . A semester-long original research project
      -> It's okay and encouraged to use your own research here
      -> But all members of your group of 2-3 people must be OK with it, and all must contribute to the project.
- Warm-up quiz (self-graded)
  . NOT graded! Just turn it in for credit. I don't care what you get - I just want to know where we're starting.
  . Discuss quiz & answers
- A perspective on power-aware computing
  - Power awareness really emerged with mobile and ubiquitous computing.
- Mark Weiser, a chief scientist at Xerox PARC, was one of the huge movers and shakers - he coined the term "ubiquitous computing"
  * The purpose of the computer is to help you do something else
  * The best computer is a smart, invisible servant
  * The more you can do by intuition the smarter you are; the computer should extend your unconscious
  * Technology should create calm
  (Incidentally, he was also the drummer for the rock band "Severe Tire Damage", but I digress.)
- The first big thing: How to make these mobile devices last longer!
  -> Sleep often, resume quickly
  -> We'll talk next time about reducing frequency to match the task at hand
  -> Offload work to other computers - now we have cell phones & PDAs
  -- PDA stuff was big research with the DEC/Compaq iPAQ (back then it wasn't what it is now) and the Itsy (1999 - it had accelerometers, a 320x200 touchscreen, etc.)
  -- This use continues to this day!
- The next thing: Sensor networks and tiny embedded systems
  - Resurgence of interest in low-power, low-capability embedded computing
  - 8- and 16-bit microprocessors
  - Nanojoules to millijoules of energy draw
  - Goal: Small (1") form factor, enough battery to sit around taking measurements and broadcasting them back to a collector; sit in the field for months or ideally years
  - Difference from mobile: Very predictable workloads; optimizations based on communication patterns.
  - Here, sensing and transmitting wirelessly are the major power draws.
  - Lots of emphasis on scheduling, topology, data fusion, etc.
- Now: Datacenter and global computing energy draw
  - Datacenters are limited by heat density; power is becoming a large part of total cost of ownership
  - Cooling is a PITA
  - We see things like Google moving to The Dalles, Oregon; Microsoft to Ireland; etc.
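The "months or ideally years" battery-life goal for sensor motes is easy to sanity-check with a duty-cycle calculation. A minimal sketch - every figure here (battery capacity, duty cycle, active and sleep draws) is an illustrative assumption, not a measurement from any particular mote:

```python
# Back-of-envelope: how long can a sensor mote run on batteries?
# All numbers below are illustrative assumptions, not measurements.

def lifetime_years(battery_wh: float, avg_draw_w: float) -> float:
    """Battery capacity (watt-hours) / average draw (watts) -> years."""
    hours = battery_wh / avg_draw_w
    return hours / (24 * 365)

# Assume 2x AA cells: ~2500 mAh at 3 V total, i.e. about 7.5 Wh.
battery_wh = 7.5

# Assume the mote draws ~60 mW awake (sense + transmit) and ~30 uW asleep,
# and is awake 1% of the time (a 1% duty cycle).
duty = 0.01
avg_draw_w = duty * 60e-3 + (1 - duty) * 30e-6

print(f"average draw: {avg_draw_w * 1e3:.3f} mW")
print(f"lifetime: {lifetime_years(battery_wh, avg_draw_w):.1f} years")
```

Under these assumed numbers the mote averages well under a milliwatt and lasts over a year - which is why the research emphasis lands on sleeping aggressively and scheduling communication, not on raw compute efficiency.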
- Let's take an example of the elements of a computer, building up from:
  CPU, memory, disk; power supply; rack (add network, KVM switches, monitors, etc.); UPS; datacenter (lighting); generator; cooling (chillers, computer room air conditioners, air handler units, pumps, cooling towers)
  (These are the PUE ingredients.)

  Example: 1U rackmount server
    2x 120W CPUs     -> 240W
    4x 4GB DIMMs     -> ~30W
    Motherboard      -> ~50W
    2x hard drives   -> 15W
    Total            -> roughly 335W
    Power supply ~80% efficient -> ~425W at the wall

  But we don't want to run power supplies at max, so we spec out 800W supplies. Power supplies also have a thing called "inrush current": they draw extra power at startup to fill their capacitors. So we've got a big peak load.

  That's one machine. We can fit 40 of them in a standard 72" rack, plus a per-rack UPS and a KVM switch, which we'll ignore:
    40 x 425W = 17 kW per rack

  Let's look at something like the DCO, which we've got at CMU:
    10 racks for computers, 10 racks for cooling and power per pod
    (Note: HALF the space budget is cooling and power!)
    4 pods = 40 racks of computers
    Our power draw: 40 x 17 kW = 680 kW

  What about the network? We have 1600 computers, so we need something like 40 48-port 1G switches with either 1G or 10G uplinks, plus one 40-port 10G switch.
    -> Another ~100W per switch (4 kW), plus a few kW for the big switch
    Power draw: ~690 kW. DCO capacity: 774 kW.
  Can we handle that? It's very, very close. With peak power draw and inrush, probably not without being clever about staggering machine boot-up.

  Conversion losses from UPSes and generators: About 8% of total power draw (about 13% of the computers' draw)
    -> Big power substation: 0.3% loss
    -> UPSes (either rotary - inertial flywheel - or battery): 6% loss
    -> Converting to 280V for equipment: 0.6% loss
  Cooling in a pretty good datacenter: About 33% of total (about 55% of the computers' draw).

  So what does this mean?
    A kWh of power costs about $0.10 if you're not in California or next to a big power plant.
    A decent server costs $2-5k, depending.
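The server-to-datacenter build-up above is just a few lines of arithmetic. A sketch using the rough figures from the notes (the 8% conversion-loss and 33% cooling fractions are read as fractions of total facility draw, as the parenthetical "% of the computers' draw" figures imply):

```python
# Datacenter power arithmetic, using the rough numbers from the notes.

# One 1U server:
cpus, dimms, motherboard, disks = 2 * 120, 30, 50, 15
server_dc_w = cpus + dimms + motherboard + disks   # 335 W at the components
psu_efficiency = 0.80
server_wall_w = 425                                # 335 / 0.80 ~= 419, rounded up

# 40 servers per rack, 40 racks of computers (4 pods x 10 racks):
rack_w = 40 * server_wall_w                        # 17 kW per rack
compute_w = 40 * rack_w                            # 680 kW

# Network: ~40 edge switches at ~100 W, plus a few kW for the core switch:
network_w = 40 * 100 + 3000
it_w = compute_w + network_w                       # ~690 kW

# Conversion losses ~8% and cooling ~33% of total facility draw,
# so the IT load is the remaining ~59% of the total:
total_w = it_w / (1 - 0.08 - 0.33)

print(f"per rack : {rack_w / 1e3:6.1f} kW")
print(f"IT load  : {it_w / 1e3:6.0f} kW")
print(f"facility : {total_w / 1e3:6.0f} kW  (PUE ~ {total_w / it_w:.2f})")
```

Note that once conversion and cooling overheads are folded in, the ~690 kW of IT load becomes well over a megawatt at the facility level - an effective PUE of about 1.7, typical of a decent (not great) datacenter of that era.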
  If a server draws 425W, then with overhead it's ~800W at the wall, which costs roughly $0.08-0.10 per hour at $0.10/kWh. Hey, that's nothing, right? Except...
    24 * 365 * $0.10 ≈ $876/year ≈ about 1/2 of the cost of the computer over a 3-year lifetime. (!)
  Now, keep in mind that the DCO is a very small datacenter, and it consumes over a megawatt of power.
    DCO size: 2000 square feet
  Microsoft's new datacenter: $500 million, 550,000 square feet
    Containers: 220 of them, each built with 1000-2000 servers
      -> think 300-400k machines
    The nearby city is building THREE substations just for this datacenter: 198 megawatts. That's about 200,000 residential homes' worth of power. (!)

  Over the next few classes, we'll examine the individual components of computers and how we can save power. Then we'll step up to looking at datacenters and cluster workloads. The goal of the first few lectures is a zoom-around overview of a lot of major systems; after that we'll start hitting some sub-topics in more detail.

  Remember through all of this to keep an eye out for project ideas - and if there are topics you'd find of particular interest, please suggest them!