The Big Data performance challenge arises whenever the volume or velocity of data overwhelms current processing systems and techniques, resulting in performance that falls far short of what is desired. Three approaches to improving performance by orders of magnitude are:
Scale down the amount of data processed or the resources needed to perform the processing, via data synopses, approximate query processing, or other techniques;
Scale up the computing resources on a node, via parallel processing and faster memory/storage technologies; and
Scale out the computing to distributed nodes in a cluster/cloud or at the edge where the data resides.
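As a concrete illustration of the "scale down" approach (a generic sketch, not drawn from the talk itself), a data synopsis such as a reservoir sample lets a system answer aggregate queries approximately from a small, fixed-size uniform sample of a stream instead of the full data:

```python
import random

def reservoir_sample(stream, k, rng=None):
    """Maintain a uniform random sample of k items from a stream
    of unknown length (Vitter's Algorithm R)."""
    rng = rng or random.Random()
    reservoir = []
    for i, item in enumerate(stream):
        if i < k:
            # Fill the reservoir with the first k items.
            reservoir.append(item)
        else:
            # Item i survives with probability k / (i + 1),
            # keeping the sample uniform over all items seen so far.
            j = rng.randint(0, i)
            if j < k:
                reservoir[j] = item
    return reservoir
```

Queries such as averages or selectivity estimates can then be run over the k-item synopsis in constant space, trading a small, quantifiable error for orders-of-magnitude less data processed.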
This talk will highlight my two decades of research tackling all three of these approaches, discussing the key challenges, our solutions and their impact, and promising future directions.
Phillip B. Gibbons is a Principal Research Scientist at Intel Labs and Principal Investigator (together with Prof. Greg Ganger) for the Intel Science and Technology Center for Cloud Computing, a $12M research partnership with Carnegie Mellon, Georgia Tech, Princeton, UC Berkeley, and Washington. His publications span a broad range of computer science (e.g., papers in ASPLOS, CCS, CIDR, EuroSys, ICDM, JFP, MICRO, NIPS, PACT, PLDI, PODC, PPoPP, SIGMOD, SPAA, ToN, and VLDBJ in the past 4 years) and have been cited over 12,500 times, including 33 papers cited over 100 times and an h-index of 59 [Google Scholar]. Gibbons is Editor-in-Chief for the just-launched ACM Transactions on Parallel Computing, an Associate Editor for both the Journal of the ACM and the IEEE Transactions on Cloud Computing, and has served on 60+ program committees. Gibbons is both an ACM Fellow and an IEEE Fellow.
Faculty Host: Guy Blelloch