Big data needs a hardware revolution

Advances in computing tend to focus on software: compelling apps and programs that can track the health of people and ecosystems, analyze big data and beat human champions at Go. Meanwhile, efforts to bring about sweeping changes to the hardware that underpins all this innovation have gone relatively unnoticed.

Since the start of the year, the Semiconductor Research Corporation (SRC) – a consortium of companies, academic institutions and government agencies that helps shape the future of semiconductors – has announced six new university centres.

After seeing software giant Google expand into hardware research on artificial intelligence (AI), major chipmakers are moving to reclaim the field. As they do so, they are contemplating the beginnings of a significant shift: arguably the first major change in computer architecture since the birth of computing.

This will be important for science: researchers in fields ranging from astronomy and particle physics to neuroscience, genomics and drug discovery want to use AI to analyze and detect trends in vast data sets. But doing so places new demands on conventional computer hardware.

Traditional von Neumann architecture keeps a computer's data-storage units separate from its data-processing units. Moving information back and forth between the two takes time and power, and limits performance.
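The cost of that separation can be sketched with a little arithmetic. The sketch below compares time spent computing with time spent moving data for a memory-heavy workload; the bandwidth and throughput figures are illustrative assumptions, not measurements of any real machine.

```python
# Toy estimate of the von Neumann bottleneck.
# Both constants below are illustrative assumptions, not measured values.

BANDWIDTH = 50e9    # bytes per second between memory and processor (assumed)
PEAK_OPS = 1e12     # arithmetic operations per second the processor could do (assumed)

def time_split(n_ops, n_bytes):
    """Return (seconds spent computing, seconds spent moving data)."""
    return n_ops / PEAK_OPS, n_bytes / BANDWIDTH

# A dot product of two vectors of 1e8 double-precision numbers:
# roughly 2e8 operations, but about 1.6e9 bytes must cross the memory bus.
compute_s, move_s = time_split(2e8, 1.6e9)
print(f"compute: {compute_s:.4f} s, data movement: {move_s:.4f} s")
```

Under these assumed numbers, data movement takes more than a hundred times longer than the arithmetic itself, so the processor mostly sits idle waiting for memory.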

To take advantage of AI technology, hardware engineers are looking to build computers that go beyond the constraints of the von Neumann design. This will be a big step. For decades, advances in computing have been driven by reducing the size of components, guided by Gordon Moore's prediction that the number of transistors on a chip doubles roughly every two years – which has usually meant that processing power did the same.
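Moore's prediction is simple compound-doubling arithmetic, as the sketch below shows. The baseline (the Intel 4004 of 1971, with roughly 2,300 transistors) is a widely cited figure; the function is only an idealized trend line, not a claim about any specific chip.

```python
# Moore's prediction as arithmetic: transistor counts double roughly
# every two years. Baseline is the oft-cited Intel 4004 (1971, ~2,300
# transistors); the curve is an idealized trend, not chip-by-chip data.

def transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Projected transistor count under ideal two-year doubling."""
    return base_count * 2 ** ((year - base_year) / doubling_years)

for year in (1971, 1991, 2011):
    print(year, f"{transistors(year):,.0f}")
# 20 years is 10 doublings, i.e. a factor of 1,024 per two decades.
```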

Modern computers bear little resemblance to early machines that used punch cards to store information and mechanical relays to perform calculations. Transistors in integrated circuits are now so small that more than 100 million of them can fit on the head of a pin. Yet the fundamental design of separate memory and processing remains, and places a limit on what can be achieved.

One solution may be to merge memory and processing units, but performing computational work within a single memory unit is a major technical challenge.
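One concrete proposal for computing inside memory is the resistive crossbar: each memory cell stores a value as an electrical conductance, applying voltages to the rows multiplies them by those conductances (Ohm's law), and the resulting currents sum along each column wire (Kirchhoff's current law), yielding a matrix-vector product where the data lives. The digital sketch below only mimics that analog behaviour; all values are illustrative.

```python
# Sketch of an in-memory matrix-vector multiply, as a resistive crossbar
# would perform it. Each cell stores a conductance G[i][j]; a row voltage
# V[i] drives a current V[i] * G[i][j] through the cell (Ohm's law), and
# currents add up along each column wire (Kirchhoff's current law), so
# column j outputs I[j] = sum_i V[i] * G[i][j]. Values are illustrative.

def crossbar_mvm(G, V):
    """Column currents for conductance matrix G (rows x cols), row voltages V."""
    cols = len(G[0])
    return [sum(V[i] * G[i][j] for i in range(len(G))) for j in range(cols)]

G = [[1.0, 2.0],
     [3.0, 4.0]]
V = [0.5, 0.5]
print(crossbar_mvm(G, V))  # → [2.0, 3.0]
```

The appeal is that the multiply-and-accumulate happens in the memory array itself, so the matrix never has to be shuttled to a separate processor.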

Google’s AlphaGo research shows another possible way forward. The company has produced new hardware called a tensor processing unit (TPU), with an architecture that enables many more operations to be performed simultaneously.

This parallel approach significantly increases the speed and energy efficiency of computationally intensive calculations. And designs that loosen the strict requirement to perform accurate, error-free calculations – a change in strategy known as approximate computing – could amplify those benefits further.
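One common way to loosen accuracy requirements is to compute with fewer bits. The sketch below sums a series twice, once in full double precision and once with every term rounded to 16-bit half precision (via the standard library's `struct` module); the workload and tolerance are illustrative, but the point is that the cheap version lands within a small fraction of a percent of the exact answer, which is often good enough for tasks such as neural-network inference.

```python
# Reduced-precision arithmetic as a minimal example of trading a little
# accuracy for cheaper computation. Workload and tolerance are illustrative.
import struct

def to_half(x):
    """Round a float to IEEE 754 half precision (16 bits) and back."""
    return struct.unpack('e', struct.pack('e', x))[0]

exact = sum(0.001 * i for i in range(1000))              # full 64-bit terms
approx = sum(to_half(0.001 * i) for i in range(1000))    # 16-bit terms

rel_err = abs(exact - approx) / exact
print(f"exact={exact:.6f} approx={approx:.6f} relative error={rel_err:.2e}")
```

Half-precision values need a quarter of the memory traffic of doubles, so on bandwidth-limited hardware the saving compounds with the bottleneck described earlier.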

As a result, the power consumption of AI programs such as AlphaGo has fallen dramatically. But making AI widely accessible will require further gains in the energy efficiency of such hardware.

The human brain is the most energy-efficient processor known, so it is natural for hardware developers to try to imitate it. An approach called neuromorphic computing aims to do so, using technologies that mimic how biological nervous systems communicate and process information. Several neuromorphic systems have already demonstrated the ability to run networks of simulated neurons on tasks such as pattern recognition.
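The basic unit these systems simulate is often the leaky integrate-and-fire neuron: its membrane potential leaks away over time, accumulates incoming current, and emits a spike (then resets) whenever it crosses a threshold. A minimal sketch, with illustrative parameter values:

```python
# Minimal leaky integrate-and-fire neuron, a common building block of
# neuromorphic systems. Leak, threshold and inputs are illustrative.

def lif_spikes(inputs, leak=0.9, threshold=1.0):
    """Return the time steps at which the neuron spikes for a stream of input currents."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(inputs):
        v = leak * v + i_in        # potential decays, then integrates the input
        if v >= threshold:         # crossing the threshold emits a spike...
            spikes.append(t)
            v = 0.0                # ...and resets the membrane potential
    return spikes

print(lif_spikes([0.4] * 10))  # → [2, 5, 8]
```

Because such a neuron only produces output when it spikes, large networks of them can stay mostly silent, which is a key source of the energy efficiency that neuromorphic hardware chases.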

These are small steps, and the SRC has now stepped in to help keep hardware development moving. As part of its Joint University Microelectronics Program, the SRC has quietly turned its attention to the development of hardware architectures.

For example, a new center at Purdue University in West Lafayette, Indiana, will research neuromorphic computing, and one at the University of Virginia in Charlottesville will develop ways to use computer memory for additional processing power.

The technical challenge is huge. So it is heartening to see the traditionally US-focused SRC open its doors. South Korean firm Samsung joined in late 2017, becoming the fifth foreign company to sign up in the past two years. This is a welcome sign of cooperation. But the fact that commercial rivals are willing to work together in this way also indicates how technically difficult the industry expects the development of new hardware systems to be.

As this research develops, Nature looks forward to covering the progress and publishing the results. We welcome papers that advance computing architectures beyond von Neumann, such as neuromorphic chips and components for in-memory processing. Scientists in many fields await the outcome: computers powerful enough to sift through all their new-found data. They will have to wait a while longer. But the wait should be worth it.
