By Matthew Dublin
Science has an article discussing what it will take to make exascale computing a reality. These new systems — which at present remain only theoretically possible — would be capable of performing 10 to the 18th power floating point operations per second, or an exaflop.
Exascale supercomputers would be roughly 100 times more powerful than today's fastest supercomputer, the K Computer at Japan's Riken institute, which clocks in at roughly 11.3 petaflops. All the major supercomputing powers are racing to construct a viable exascale system, including the US, China, Japan, Russia, India, and the EU.
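To put that gap in concrete terms, here is a quick back-of-the-envelope check using the figures quoted above (an exaflop is 10^18 operations per second; the K Computer's roughly 11.3 petaflops). The exact multiple depends on which benchmark figure you use, so the "100 times" above is a round number:

```python
# Back-of-the-envelope comparison using the figures quoted in the article.
EXAFLOP = 1e18            # one exaflop: 10^18 floating point ops per second
K_COMPUTER = 11.3e15      # K Computer, ~11.3 petaflops

ratio = EXAFLOP / K_COMPUTER
print(f"An exascale system would be ~{ratio:.0f}x the K Computer")
# prints: An exascale system would be ~88x the K Computer
```

Using the K Computer's slightly lower sustained Linpack figure instead of its peak would push the multiple closer to the round "100 times" cited above.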
However, the challenges of energy efficiency and sustained performance are formidable, not to mention the need to develop brand-new programming models for these huge systems.
Even though computer hardware has seen a steady increase in performance over the last few decades, when it comes to actually achieving exascale performance, all those technological advances go out the window. Exascale won't simply be a matter of building a really, really large supercomputer center, crammed to the ceiling with the latest server blades, but rather will require an entirely new processor and interconnect architecture.
Intel has released its 50-core Knights Corner and Xeon E5 server chips as stepping stones in its push to reach exascale by the year 2018. Both chip lines are designed for massive processor core counts as well as low energy consumption.
Sometimes the need for completely new hardware to accommodate the perpetual growth in research data gets lost — folks still think the cloud can save them when, for example, genomics datasets reach the exascale mark. Unfortunately, an exascale cloud can't exist until there is exascale hardware to make it float.