BBC coverage of the jump to 135.5 teraflops.
ZDNet has more.
The feat won’t show up on the current Top500.org list until the next revision is released, which I think will be in May (the last list came out in November at the Supercomputing 2004 conference in Pittsburgh, and it seems to be issued at six-month intervals).
Update: John West, Director of the ERDC MSRC — one of four DOD HPC program centers — e-mails with a helpful clarification:
Top500 lists are published twice a year: in June and in November. The November list is announced at the annual Supercomputing series of conferences (www.supercomp.org), which is probably part of the reason for its not-quite-six-months timing.
He also notes that the LINPACK score (upon which the Top500 list is based) isn’t the best way to assess a supercomputer’s relative benefit to a discipline, despite its popularity — something I probably should have noted in my post.
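For readers unfamiliar with what the benchmark actually measures: LINPACK times the solution of a dense system of linear equations and reports the achieved floating-point rate. Here is a toy sketch of that idea in Python — illustrative only, not the real HPL benchmark used for Top500 submissions (the function name and problem size are my own choices):

```python
# Toy LINPACK-style measurement: solve a dense system Ax = b and report
# the achieved floating-point rate, using the conventional ~(2/3)n^3
# flop count for the factorization plus 2n^2 for the triangular solves.
import time
import numpy as np

def linpack_style_gflops(n: int = 2000, seed: int = 0) -> float:
    rng = np.random.default_rng(seed)
    a = rng.standard_normal((n, n))
    b = rng.standard_normal(n)
    start = time.perf_counter()
    np.linalg.solve(a, b)  # LU factorization + solve
    elapsed = time.perf_counter() - start
    flops = (2.0 / 3.0) * n**3 + 2.0 * n**2
    return flops / elapsed / 1e9  # GFLOP/s

if __name__ == "__main__":
    print(f"~{linpack_style_gflops():.1f} GFLOP/s on this machine")
```

The single number it produces is exactly why it is both popular and misleading: dense linear algebra is highly cache- and pipeline-friendly, so the score says little about performance on the irregular, memory-bound workloads many disciplines actually run.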
In my defense, as limited as the LINPACK score is in what it says about a particular machine, it is the one number most people out here (certainly in the policy world) cling to when trying to understand progress in supercomputing. Though it wasn’t the message we sought to convey, the fact that the Japanese Earth Simulator was X teraflops faster than our “best” machine certainly focused the minds of a lot of policymakers in Congress last year, for better or worse. In talking with them about high-end computing, we tried not to emphasize that measure; instead, we stressed the importance of a sustained research effort on a diverse set of approaches, to enable progress on a wide range of different problems.
John also notes that there are some interesting efforts to develop a new metric coming out of DARPA’s HPCS program, but those measures are likely to be a bit more complex — almost certainly spelling doom for their adoption over the “one number fits all” of the LINPACK.