This article is published in the April 2017 issue.

Research Highlight: CRA Board Member Sarita Adve


What value should a memory read return? The answer to this simple question is surprisingly complex for modern systems running parallel software. The memory consistency model, which governs this answer, is a fundamental part of the hardware-software interface, but has been one of the most challenging and contentious areas in parallel hardware and software specification. As we approach the end of Moore’s law, the hardware-software interface is evolving with profound implications for how easily we can use our systems and how well they perform. My research is at this interface. Although my “home” community is computer architecture, my work necessarily spans the system stack, and has included hardware design, programming language semantics, parallel algorithms for emerging applications, cross-layer system energy and resiliency management, and approximate computing.
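
To make the question concrete, consider two threads that share two ordinary variables, one carrying a payload and one acting as a ready signal. The minimal C++ sketch below is my own illustration (it does not appear in the original article): under sequential consistency, a consumer that sees flag set to 1 must also see data set to 42, but on many real machines and compilers, absent explicit synchronization, it can still read the stale value 0; indeed, the unsynchronized accesses form a data race, whose behavior C++ leaves undefined.

    #include <thread>

    int data = 0, flag = 0, seen = -1;   // ordinary (non-atomic) shared variables

    void producer() {
        data = 42;       // write the payload
        flag = 1;        // then signal that it is ready
    }

    void consumer() {
        if (flag == 1)   // the signal may be visible here...
            seen = data; // ...while this read still returns the stale value 0
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        // seen may end up -1 (signal not yet visible), 42, or, surprisingly, 0
    }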

I started exploring memory consistency in 1988 as a junior graduate student, not knowing that it would take more than 15 years for the work to have real impact and that I would circle back in an unlikely instance of déjà vu 25 years later. The most intuitive model, sequential consistency, is the simplest to program, but most systems do not provide it for performance reasons. Instead, when I started, the solution was to have divergent models—often ambiguously specified—for different hardware. My early work, with my advisor, Mark Hill, departed from the prevalent hardware-centric approaches to use a combined hardware/software view more appropriate for an interface. We observed that for well-synchronized programs, formalized as data-race-free, both sequential consistency and high performance could be achieved. The consistency model became a contract where the system guaranteed sequential consistency if software was data-race-free. Over several years, I worked closely with hardware and software researchers and practitioners, including Hans Boehm, Bill Pugh, and many others, to forge consensus towards adopting the data-race-free model as the standard. More than 15 years after its inception, data-race-free became the foundation of the consistency models for most of the popular programming languages such as Java, C++, and C.
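
A minimal C++11 sketch of that contract, again my own illustration rather than code from the article: the programmer identifies the synchronization variable by declaring it std::atomic, which removes the data race, and in return the language guarantees sequentially consistent behavior, so a consumer that observes the flag set is also guaranteed to observe the payload.

    #include <atomic>
    #include <thread>

    std::atomic<int> flag{0};   // synchronization variable, identified to the system
    int data = 0, seen = -1;    // ordinary data, now free of data races

    void producer() {
        data = 42;       // ordinary write of the payload
        flag.store(1);   // default ordering is memory_order_seq_cst
    }

    void consumer() {
        if (flag.load() == 1)   // synchronizes with the store above
            seen = data;        // guaranteed to read 42 under the data-race-free contract
    }

    int main() {
        std::thread t1(producer), t2(consumer);
        t1.join();
        t2.join();
        // seen is either -1 (flag not yet observed) or 42; never 0
    }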

Today, as we approach the end of conventional transistor scaling, the next phase of performance increases will likely come from clever architectures. These architectures will be driven by application requirements more than ever, resulting in an explosion of specialized and heterogeneous systems that are orders of magnitude more efficient than current homogeneous, general-purpose systems. We are already seeing the start of this revolution with large-scale adoption of specialized platforms that were considered impractical just a few years ago, including FPGAs in data centers at Amazon and Microsoft, GPUs everywhere, and Google’s Tensor Processing Unit. An increasing number of systems will be built out of many specialized accelerators combined at multiple scales, from within a single chip to across large-scale distributed systems, enabling future applications that we can barely imagine. Today’s mostly opaque hardware-software interfaces, however, are an obstacle to exploiting the inherent efficiencies promised by such systems.

My group’s DeNovo project is exploring the design of such heterogeneous systems, with a focus on efficient data movement and a richer hardware-software interface. For example, we have shown that the recent, complex consistency models proposed for heterogeneous architectures fall into the same trap of hardware-centric design we navigated 25 years ago: they are hard to program and constrained in their performance benefits. Instead, an approach driven by the hardware-software interface, such as data-race-free, again yields better performance, better programmability, and lower design complexity. Another result showed that we don’t have to choose between the efficiency of specialized memories such as scratchpads and the programmability of the global address space provided by a general-purpose cache; our stash architecture achieves both.

A more revolutionary change in the hardware-software interface will be needed if we are to exploit approximate computing to compensate for the slowdown of Moore’s law. As computing cycles are increasingly spent on human-centric tasks, most computations no longer require a single precise answer. But how do we design systems that can systematically exploit application-level flexibility to improve metrics such as efficiency and reliability? How do we test such systems? We are currently working with researchers in software engineering and testing to adapt the software development workflow to approximations in hardware and software.

Regardless of what techniques finally succeed, the relationship between hardware and software is poised for a change. The effective design of future systems depends on closer collaboration between the hardware and software communities. I am honored to chair ACM SIGARCH at this exciting time for computer architecture. The SIGARCH executive committee, with many other volunteers, has begun several initiatives with the goal of reaching out to other communities, both to expose them to our advances and to invite them to work with us to drive the design of future systems. Babak Falsafi, Boris Grot, and Alvin Lebeck (editor) recently launched a blog, Computer Architecture Today, to inform the broader community about current activities and future trends in computer architecture. Luis Ceze, Joel Emer, and Karin Strauss are spearheading broad-interest visioning workshops at the intersection of computer architecture and other areas at our flagship conferences. The next workshop, led by Olivier Temam, will be on “Trends in Machine Learning,” in conjunction with ISCA. You can follow SIGARCH news on Twitter at @acmsigarch, an effort led by Adrian Sampson.

About the author

Sarita Adve is the Richard T. Cheng Professor of Computer Science at the University of Illinois at Urbana-Champaign. Her research interests are in computer architecture and systems. She co-developed the memory models for the C++ and Java programming languages based on her early work on data-race-free models. She is a recipient of the Anita Borg Institute Women of Vision Award in innovation, the ACM SIGARCH Maurice Wilkes Award, and an Alfred P. Sloan Research Fellowship. She is a fellow of the ACM and the IEEE and was named a University Scholar by the University of Illinois. She is currently the chair of ACM SIGARCH and serves on the board of the Computing Research Association. She received her Ph.D. in computer science from the University of Wisconsin-Madison in 1993 and a B.Tech. in electrical engineering from IIT-Bombay in 1987.