The National Science Foundation (NSF) in the USA has been handing out some grants this week. One of them is for the University of Tennessee, to the tune of $10 million, to deploy a new visualisation system:
The University of Tennessee (UT) will receive $10 million from the National Science Foundation over four years to establish a new, state-of-the-art visualization and data analysis center aimed at interpreting the massive amounts of data produced by today’s most powerful supercomputers.
The TeraGrid eXtreme Digital Resources for Science and Engineering (XD) award will be used to fund UT’s Center for Remote Data Analysis and Visualization (RDAV), a partnership between UT, Lawrence Berkeley National Laboratory, the University of Wisconsin, the National Center for Supercomputing Applications at the University of Illinois, and Oak Ridge National Laboratory (ORNL).
Showing that clusters, in fact, are not the solution to every HPC and visualisation problem out there, the new system will be a Single System Image (SSI) box from SGI. Details at the moment are pretty sparse, but the press release does say:
Much of RDAV will rely on a new machine named Nautilus that employs the SGI shared-memory processing architecture. The machine will feature 1,024 cores, 4,096 gigabytes of memory, and 16 graphics processing units. The new SGI system can independently scale processor count, memory, and I/O to very large levels in a single system running standard Linux.
Despite the large percentage of clusters in the Top500, they’re only really at home for jobs which can be properly parallelised. Shared memory systems are still much faster at certain types of compute jobs – especially visualisation:
Shared-memory processing can be even more useful than the world’s most powerful computers for certain tasks, especially those aimed at visualization and data analysis. The system will be complemented with a 1 petabyte file system and will be fully connected to the TeraGrid, the nation’s largest computational network for open scientific research.
1024 CPUs in an SSI system probably means that this will be an SGI Altix 4700 – unless SGI are indeed pushing a new x86 Altix out the door. 16 GPUs may not sound like much; however, if (as I suspect) they're talking about NVIDIA Tesla S1070s, then you're looking at 4 teraflops of performance from each one – that's 64 teraflops of GPU performance in the system.
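The back-of-the-envelope sum works out like this – bearing in mind the S1070 assumption is mine, and the ~4 teraflops figure is the approximate single-precision peak NVIDIA quotes per unit, not a measured number:

```python
# Rough aggregate GPU throughput for the rumoured Nautilus configuration.
# Assumptions (not confirmed by the press release): 16 NVIDIA Tesla S1070
# units, each with an approximate single-precision peak of 4 teraflops.
NUM_GPUS = 16
TFLOPS_PER_S1070 = 4  # approximate single-precision peak per unit

total_tflops = NUM_GPUS * TFLOPS_PER_S1070
print(f"Aggregate GPU peak: {total_tflops} teraflops")  # 64 teraflops
```

Peak numbers like these are theoretical maximums, of course – sustained visualisation workloads will land well below them.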
Hopefully we’ll be hearing some noise from SGI in the coming days, shedding some light on the exact configuration of the system.
You can read the full press release over at HPCWire.