Kirk W. Cameron, Ph.D. is a Professor of Computer Science at Virginia Tech and an IEEE Fellow. Since August 2022, he has served at Virginia Tech’s Innovation Campus, first as its inaugural faculty lead and now as Associate Vice President for Academic Affairs. From 2012 to 2022, he was Director of the stack@cs Center for Computer Systems (ranked #26 by US News in 2022). He is a Distinguished Member of the ACM, an associate editor for the Journal of Parallel and Distributed Computing, and Associate Editor-in-Chief for IEEE Transactions on Parallel and Distributed Systems. From 2018 to 2022, he was the inaugural Associate Department Head for Research and Engagement, and from 2014 to 2017 he was Associate Department Head and Graduate Program Director. From 2017 to 2018, Prof. Cameron held a Distinguished Visiting Fellowship from the U.K. Royal Academy of Engineering at Queen’s University Belfast.
RESEARCH
The central theme of his research is improving power and performance efficiency in high-performance computing (HPC) systems and applications. Accolades for his work include NSF and DOE Career Awards, IBM and AMD Faculty Awards, and being named Innovator of the Week by Bloomberg Businessweek. He pioneered Green Computing (Green500, SPECPower, PowerPack, grano.la), and his power measurement and management software has been downloaded by more than 500,000 people in 160+ countries. He is a recipient of the HPDC 2017 Best Paper Award. He also has a passion for art and education, manifested in the SeeMore kinetic cluster, named the second-best Raspberry Pi project of all time by MagPi magazine. His work regularly appears in outlets such as The New York Times, The Guardian, Time, and Newsweek. His educational LACE kinetic sculpture (pictured above) appeared at the Smithsonian National Museum of American History in Washington, D.C. in April 2022. He also conceived the Computer Systems Genome Project (CSGenome.org), a large-scale database of computer systems specifications and performance lineages. CSGenome was featured in the 35th-anniversary pavilion at the SC23 conference, providing visualizations of the history of computing, notable hidden figures, and a timeline of women’s impact in computing.
IMPACT
I care more about impact than raw numbers. If you want to understand the impact my research has had on computing, keep reading or jump to: Green Computing, Power-Performance Modeling, HPC + Art + CS Education, or I/O Performance.
IMPACT ON GREEN COMPUTING
[Pioneering Green HPC and Datacenter Research]
National Science Foundation, “CAREER: High-performance, distributed power-aware computing,” $402,203, 5 years, 2/1/04-1/31/09. Kirk W. Cameron (PI). Personal share: $402,203. If there is anything I’m widely known for, it is green HPC and data centers. Originally entitled “High-performance, power-aware computing” or HPPAC, this proposal, written in 2003 and based on ideas I first had in 2000-2001, is the first known submitted and funded research project in Green HPC, that is, energy-efficient large-scale systems. The research plan was years ahead of the community, and the resulting work inspired hundreds (perhaps thousands) of researchers worldwide to study energy use in data centers and supercomputers. This work led to my co-founding Green500.org, my being a founding member of SPECPower, my early influence on the ENERGY STAR program for servers, and my co-founding of the energy-efficient software startup MiserWare, Inc. The original proposal name was adopted by colleagues for the IEEE HPPAC workshop at IPDPS, which celebrated its 15th anniversary in 2019.
[First Green HPC Publications]
Cameron, K.W., Ge, R., Feng, X., Varner, D., Jones, C. (2004). POSTER: High-performance, Power-aware Distributed Computing Framework. In 2004 SC Companion: IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analysis (SC-C). Reno, NV: IEEE. This poster is the first published work on Green Computing for HPC. This single poster described 1) the motivation for the dire need for Green HPC, later adapted and used by hundreds of papers to justify this new field of inquiry; 2) a system power measurement methodology adopted by the Green500 and adapted for SPECPower; and 3) power management techniques that were later revisited by dozens of research groups worldwide and are still cited as seminal today.
Feng, X., Ge, R., Cameron, K. W. (2005). Power and energy profiling of scientific applications on distributed systems. Proceedings of the 2005 IEEE International Symposium on Parallel & Distributed Processing (pp. 1-10). IEEE. This can be cited as the first accurate system-wide power and energy measurement framework for HPC systems and applications (PowerPack).
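As a rough illustration of what such a profiling framework computes (a sketch in our own notation, not PowerPack’s actual API or methodology): energy is obtained by integrating synchronized, timestamped power samples over each application phase.

```python
# Minimal sketch of phase-level energy profiling: integrate timestamped
# power samples (watts) over a phase using the trapezoid rule. The
# sample trace and phase boundary below are illustrative assumptions.

def energy_joules(samples: list[tuple[float, float]]) -> float:
    """samples: (timestamp_s, power_w) pairs, sorted by time."""
    total = 0.0
    for (t0, p0), (t1, p1) in zip(samples, samples[1:]):
        total += 0.5 * (p0 + p1) * (t1 - t0)  # trapezoidal integration
    return total

# Example: attribute energy to the compute phase of a short run.
trace = [(0.0, 95.0), (1.0, 140.0), (2.0, 150.0), (3.0, 100.0)]
compute_phase = [s for s in trace if s[0] <= 2.0]  # assumed phase boundary
print(f"compute-phase energy: {energy_joules(compute_phase):.1f} J")
```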
Ge, R., Feng, X., Cameron, K. W. (2005). Improvement of power-performance efficiency for high-end computing. Proceedings of the 2005 IEEE International Symposium on Parallel & Distributed Processing Workshops (pp. 1-8). IEEE. This can be cited as the first runtime power management to leverage communication inefficiencies in HPC systems and applications.
Ge, R., Feng, X., Cameron, K. W. (2005). Performance-constrained distributed DVS scheduling for scientific applications on power-aware clusters. Proceedings of the 2005 IEEE/ACM International Conference on High Performance Computing, Networking, Storage, and Analysis (pp. 34-44). IEEE. This can be cited as the first runtime power management to leverage parallel load imbalance and synchronization delays in HPC systems and applications.
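To illustrate the idea behind these two papers, here is a minimal, self-contained Python sketch of slack reclamation: a rank that would otherwise idle at a barrier can run its compute phase at a lower DVS frequency and finish just in time. The frequency list, the linear 1/f runtime model, and the numbers are illustrative assumptions, not the papers’ actual scheduling algorithms.

```python
# Sketch of slack reclamation: pick the lowest frequency per rank that
# absorbs its synchronization slack without stretching the critical path.

FREQS_GHZ = [1.4, 1.8, 2.2, 2.6]  # assumed available DVS states, ascending

def pick_frequency(compute_s: float, slack_s: float) -> float:
    """Lowest frequency that finishes before the barrier, assuming
    runtime scales inversely with frequency."""
    f_max = FREQS_GHZ[-1]
    deadline = compute_s + slack_s  # time available before the barrier
    for f in FREQS_GHZ:            # try lowest (cheapest) first
        if compute_s * (f_max / f) <= deadline:
            return f
    return f_max

# Example: three ranks with imbalanced work before a synchronization point.
compute_times = [4.0, 3.0, 2.5]    # seconds at f_max
critical_path = max(compute_times)
for rank, t in enumerate(compute_times):
    f = pick_frequency(t, critical_path - t)
    print(f"rank {rank}: {t:.1f}s work, {critical_path - t:.1f}s slack -> {f} GHz")
```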
Ge, R., Feng, X., Song, S., Chang, H. -C., Li, D., Cameron, K. W. (2010). PowerPack: Energy Profiling and Analysis of High-Performance Systems and Applications. IEEE Transactions on Parallel and Distributed Systems, 21(5), 658-671. doi:10.1109/TPDS.2009.76. This can be cited as a comprehensive reference for the PowerPack energy measurement framework.
Tolentino, M. E., Turner, J., Cameron, K. W. (2009). Memory MISER: Improving Main Memory Energy Efficiency in Servers. IEEE Transactions on Computers, 58(3), 336-350. doi:10.1109/TC.2008.177. This can be cited for the earliest system-level power management system to fully turn off DRAM DIMMs to conserve energy while maintaining performance.
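Loosely in the same spirit, a hedged sketch follows: modern Linux exposes memory hotplug through sysfs, so a userspace policy can take memory blocks offline at low utilization. This is an assumption-laden illustration using the standard sysfs memory-hotplug interface, not Memory MISER’s actual kernel-level controller.

```python
# Sketch of offlining memory via Linux's sysfs memory-hotplug interface.
# Requires root and a kernel built with memory hotplug support.
import glob
import os

SYSFS_MEM = "/sys/devices/system/memory/memory*"

def try_offline(block: str) -> bool:
    """Ask the kernel to offline one memory block; may fail if the
    block contains pinned or unmovable pages."""
    try:
        with open(os.path.join(block, "state"), "w") as f:
            f.write("offline")
        return True
    except OSError:
        return False

def offline_blocks(target: int) -> int:
    """Offline up to `target` blocks, highest-numbered first (an
    illustrative policy, not MISER's control algorithm)."""
    done = 0
    for block in sorted(glob.glob(SYSFS_MEM), reverse=True):
        if done >= target:
            break
        if try_offline(block):
            done += 1
    return done

if __name__ == "__main__":
    print(f"offlined {offline_blocks(2)} memory blocks")
```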
Cameron, K. W., *Pyla, H. K., & Varadarajan, S. (2007). Tempest: A portable tool to identify hot spots in parallel code. Proceedings of the 2007 IEEE International Conference on Parallel Processing (ICPP) (pp. 309-316). Xi’an, China: IEEE. This can be cited as our gprof-style tool for locating thermal hot spots via thermal sensors.
Cameron, K. W. (2009). My IT Carbon Footprint. Computer, 42(11), 99-101. From 2009 through 2017, I was an Associate Editor for IEEE Computer magazine and the founding Green IT columnist. This is my favorite article I wrote during that tenure, and it received the most feedback from readers.
Cameron, K.W. and Pruhs, K. (eds.), “NSF Report on the Science of Power Management,” 37 pp., submitted to the National Science Foundation, August 2010 (Technical Report No. VT/CS-09-19). This influential report resulted from a workshop sponsored by NSF and organized by me and a colleague, and it led to a funded program at the NSF.
[Patented Power-Performance Guarantees]
K.W. Cameron and J. Turner, “Systems, devices, and/or methods for managing energy usage”, [United States Patent: 8,918,657] [UK Patent: #GB2476606B]. This patent covers a technique for bounding performance loss while maximizing power efficiency. It was the core technology used by MiserWare, Inc. to design the grano.la software. Grano.la reduced computer energy use by up to 30% and was used by hundreds of thousands of people in more than 160 countries, as well as a number of universities and three-letter agencies. MiserWare was a venture-backed startup spun out of Virginia Tech.
IMPACT ON POWER-PERFORMANCE MODELING
[Power-Performance Modeling]
*Ge, R., & Cameron, K. W. (2007). Power-aware speedup. Proceedings of the 2007 IEEE International Symposium on Parallel & Distributed Processing (pp. 1-12). IEEE. This is the first in a series of papers where I wanted to better understand the fundamental tradeoffs between power and performance at scale. The insight here was that energy efficiency in scalable systems results from exploiting inefficiencies in performance (e.g., communication overhead). Since existing models (e.g., Amdahl’s Law) ignore overhead, we generalized Amdahl’s Law to incorporate overhead and demonstrated the speedups possible in power-scaled systems.
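To make that concrete, here is a notational sketch (our notation, not necessarily the paper’s exact formulation) of how an explicit overhead term changes Amdahl’s Law:

```latex
% Classic Amdahl's Law for parallel fraction p on N processors:
%   S(N) = 1 / ((1 - p) + p/N)
% Sketch of the generalization: add an explicit parallel-overhead term
% T_o(N) (e.g., communication) to the denominator, so speedup degrades
% as overhead grows, and power scaling can target that overhead without
% hurting useful work. T_1 is serial runtime; T_o(N) = 0 recovers Amdahl.
\[
  S(N) \;=\; \frac{T_1}{\,T_1\!\left[(1-p) + \dfrac{p}{N}\right] + T_o(N)\,}
\]
```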
*Song, S., *Su, C. -Y., *Ge, R., Vishnu, A., & Cameron, K. W. (2011). Iso-energy-efficiency: An approach to power-constrained parallel computation. Proceedings of the 2011 IEEE International Symposium on Parallel & Distributed Processing (pp. 128-139). IEEE. This is the second in a series of papers where I wanted to better understand the fundamental tradeoffs between power and performance at scale. The insight here was that, given a power cap at runtime, we can control variables such as workload, node count, and thread count to maintain a fixed energy efficiency as we scale an application and system. Maintaining an iso-efficient configuration enables us to maximize performance at scale under a power cap.
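In rough terms (a sketch in our own notation; the paper’s model is more detailed), the iso-energy-efficiency condition looks like this:

```latex
% Define energy efficiency as the fraction of total energy spent on
% useful computation for workload W on N nodes. The iso-energy-efficiency
% condition holds EE constant at some target c as the system scales:
% given a power cap, solve for the workload W(N) (or thread/node counts)
% that satisfies
\[
  EE(W, N) \;=\; \frac{E_{\mathrm{compute}}(W, N)}{E_{\mathrm{total}}(W, N)} \;=\; c .
\]
```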
*Li, B., León, E. A., & Cameron, K.W. COS: A Parallel Performance Model for Dynamic Variations in Processor Speed, Memory Speed, and Thread Concurrency. Proceedings of the 26th ACM Symposium on High-performance Parallel and Distributed Computing (HPDC 2017) (pp. 155-166). Washington, DC: ACM. doi:10.1145/3078597.3078601. [19% accept rate] Karsten Schwan Memorial Best Paper Award (1 of 100 submissions). This is the third in a series of papers where I wanted to better understand the fundamental tradeoffs between power and performance at scale. This work draws attention to the fact that, while hardware and software designers have spent decades increasing parallelism and computation/memory overlap, no models capture the separation of CPU or memory performance from overlap. This fundamental omission results in inaccurate performance prediction under simultaneous changes in computational throughput, as demonstrated with CPU, memory, and concurrency throttling. The Compute-Overlap-Stall (COS) model was shown to be the most accurate model for performance prediction under such conditions.
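In rough terms (our notation, not the paper’s exact formulation), the model decomposes runtime as:

```latex
% Compute-Overlap-Stall (COS) decomposition of execution time:
%   T = T_compute + T_overlap + T_stall
% where T_compute is pure computation (sensitive to CPU speed), T_stall
% is pure memory stall (sensitive to memory speed), and T_overlap is the
% region where computation and memory access overlap and responds to
% both. Throttling CPU speed, memory speed, or thread concurrency shifts
% these three terms differently, which is what earlier models missed.
\[
  T \;=\; T_{\mathrm{compute}} \;+\; T_{\mathrm{overlap}} \;+\; T_{\mathrm{stall}}
\]
```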
IMPACT ON HPC + ART + CS EDUCATION
*Li, B., *Mooring, J., Blanchard, S., Johri, A., Leko, M., & Cameron, K. W. (2017). SeeMore: A kinetic parallel computer sculpture for educating broad audiences on parallel computation. Journal of Parallel and Distributed Computing, 105(C), 183-199. doi:10.1016/j.jpdc.2017.01.017. This project takes some explaining: it is a kinetic sculpture of 256 Raspberry Pi systems that run parallel codes. Want to know more? Visit the SeeMore project page for details, videos, etc. If you want to know how a computer scientist is inspired to create a sculpture, read this.
IMPACT ON I/O PERFORMANCE IN LINUX
Chang, H. -C., Li, B., Back, G., Butt, A. R., Cameron, K. W. (2015). LUC: Limiting the Unintended Consequences of power scaling on parallel transaction-oriented workloads. Proceedings of the 2015 IEEE International Symposium on Parallel & Distributed Processing (pp. 324-333). Hyderabad, India: IEEE. doi:10.1109/IPDPS.2015.99. This work showed that faster processors sometimes lead to slower application performance. The lead author’s dissertation led to pending changes in the Linux journaling system and opencopy. These findings also led to two follow-on projects: the first on exploiting slowdowns in advanced computing systems and the second on understanding software variability.
SELECT PROJECTS
VarSys (2016-2023) is a collaboration at the intersection of computer science, mathematics, and statistics to isolate, understand, and manage variability in software for high-performance and cloud-based systems. The VarSys project was made possible through a generous grant from the National Science Foundation (CISE CNS #1565314).
SeeMore (2014-) is a 256-node kinetic cluster of Raspberry Pi computers that visualizes parallel computation through coordinated movement. SeeMore is an inspiring collaboration between computer science and art. SeeMore was made possible through generous grants from Virginia Tech’s Institute for Creativity, Arts, and Technology and the National Science Foundation (CISE OAC #1355955).
High-Performance, Power-Aware Computing (2004-2009) is the project that launched the Green HPC movement and resulted in numerous pioneering contributions, including the PowerPack power measurement toolkit (2004-), the Green500 List (2006-), the SystemG supercomputer (2009-2014), and the venture-backed power management startup MiserWare with its grano.la software (2007-2014). This was all made possible by an NSF CAREER Award (CISE CCF #0347683), an NSF infrastructure grant (CISE CNS #0709025), and venture capital from CIT, Valhalla Partners, and In-Q-Tel.
The CSGenome Project (2018-) began in early 2018 as Cameron’s self-funded effort and an outgrowth of the VarSys project. The key insight was that while we had a desire to classify systems by their variability, there was no agreed-upon way to do so. As we dug deeper, we realized that while silos for performance data and specifications existed, the field lacked a way to link the two together. The CSGenome Project seeks to unify disparate hardware/performance data sets and integrate them into a lineage for querying and connecting to other data sets (e.g., individual impact on technology). This effort now involves over 25 undergraduates and several MS students from a wide array of backgrounds and experiences. We also have a growing list of more than 40 alumni.
PHD ALUMNI
Xizhou Feng, PHD 2007, Research Scientist, Meta
Rong Ge, PHD 2008, Professor, Clemson University
Matthew Tolentino, PHD 2009, Associate Professor, University of Washington (Tacoma), startup founder (Namatad)
Dong Li, PHD 2011, Associate Professor, University of California – Merced
Shuaiwen “Leon” Song, PHD 2013, Senior Lecturer (Assoc. Prof.), University of Sydney, Australia; Research Scientist, Microsoft
Hung-Ching Chang, PHD 2015, Intel Research
Chun-Yi Su, PHD 2015, Intel Research
Mariam Umar, PHD 2018, Intel Research
Bo Li, PHD 2018, Splunk
MS ALUMNI
Sam Furman, MS 2021, Bloomberg
Nicolas Hardy, MS 2021, Amazon
Eles Jones, MS 2022, Bridgephase
Chandler Jearls, MS 2020, Apple