Too little, too late – Tukwila Itanium is released

Intel have announced that the massively delayed Tukwila Itanium processor is now available. HP is crowing about its performance advantages, but it's not like they have much of a choice. Interestingly, though, there have been no Altix-related announcements from SGI.

The delays in Tukwila have hurt SGI’s NUMA sales, and the new Altix UV gives much better price/performance than Itanium could ever deliver. I predicted (Project Ultraviolet and the future of Itanium Altix) that we’d see a final Itanium Altix using Tukwila later this year, before the Itanium line was killed off.

With no product announcement from SGI to accompany the Intel fanfare, and with SGI's Cyclone cloud service accidentally providing a neat Itanium-to-x86 migration platform, it looks like we could finally be seeing the death of Itanium within SGI's product line.

Silicon Graphics has been down this road before with the R8000 MIPS CPU – it had the potential for massive performance, but only if you optimised the code and really knew what you were doing. That level of investment is always a niche game, and with x86 offering lower prices and better performance for less effort, Itanium was always going to struggle in the long term.

It’s just a shame SGI had to go bankrupt twice and shed a load of talented and skilled engineers to learn the lesson.

Institute of Cancer Research deploys an SGI Altix UV super

Installations of SGI's latest NUMA beast, the x86-based Altix UV, are starting to trickle in. Launched back in November, the Altix UV finally moved SGI's scalable NUMA line to a cheaper (and hopefully more profitable) x86 base, using Nehalem-EX processors instead of Itaniums.

The Institute of Cancer Research have announced that they are the latest to deploy an Altix UV. Although the press release is short on specs of the machine, it looks like large Single System Image (SSI) machines are still in demand, despite the overwhelming dominance of clusters in HPC.

Altix UV will provide the ICR with a massively scalable shared memory system to process its growing data requirements, including hundreds of terabytes of data for biological networks, MRI imaging, mass-spectrometry, phenotyping, genetics and deep-sequencing information across thousands of CPUs.

SGI has needed lower-cost NUMA SSI machines for a while – some would argue ever since the disastrous migration away from MIPS – and the Nehalem-based Altix UV has been a long time coming. Hopefully this will mark an upturn in SGI's fortunes. The full press release can be viewed here.

University of Tasmania’s new SGI Altix ICE cluster for climate modelling

SGI have announced that the Tasmanian Partnership for Advanced Computing (TPAC), the supercomputing facility at the University of Tasmania (UTAS), has just installed a chunky 64-node Altix ICE compute cluster.

‘Katabatic’ has a total of 512 processors and a terabyte of RAM, and will be used for Antarctic and South Pacific climate modelling.

Thirty full-time TPAC users and more than 100 university researchers in the Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC), the Australian Integrated Marine Observing System (IMOS), the School of Chemistry, the School of Maths and Physics, and the Menzies Research Institute share access to Katabatic every day.

The Altix ICE cluster also boasts over 70TB of disk space and over 500TB of mirrored tape storage. There's more in SGI's press release here.

New SGI visualisation system for University of Tennessee

The National Science Foundation (NSF) in the USA has been handing out some grants this week. One of them is for the University of Tennessee, to the tune of $10 million, to deploy a new visualisation system:

The University of Tennessee (UT) will receive $10 million from the National Science Foundation over four years to establish a new, state-of-the-art visualization and data analysis center aimed at interpreting the massive amounts of data produced by today’s most powerful supercomputers.

The TeraGrid eXtreme Digital Resources for Science and Engineering (XD) award will be used to fund UT’s Center for Remote Data Analysis and Visualization (RDAV), a partnership between UT, Lawrence Berkeley National Laboratory, the University of Wisconsin, the National Center for Supercomputing Applications at the University of Illinois, and Oak Ridge National Laboratory (ORNL).

Showing that clusters, in fact, are not the solution to every HPC and visualisation problem out there, the new system will be a Single System Image (SSI) box from SGI. Details at the moment are pretty sparse, but the press release does say:

Much of RDAV will rely on a new machine named Nautilus that employs the SGI shared-memory processing architecture. The machine will feature 1,024 cores, 4,096 gigabytes of memory, and 16 graphics processing units. The new SGI system can independently scale processor count, memory, and I/O to very large levels in a single system running standard Linux.

Despite the large percentage of clusters in the Top500, they're only really at home with jobs that can be cleanly partitioned across distributed memory. Shared memory systems are still much faster at certain types of compute jobs – especially visualisation and data analysis, where every CPU may need access to the same large dataset:

Shared-memory processing can be even more useful than the world’s most powerful computers for certain tasks, especially those aimed at visualization and data analysis. The system will be complemented with a 1 petabyte file system and will be fully connected to the TeraGrid, the nation’s largest computational network for open scientific research.
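
To make the shared-memory argument a bit more concrete, here's a rough sketch – plain C with OpenMP, and nothing to do with any actual RDAV or SGI code – of the pattern an SSI box makes easy: every thread works on the same in-memory dataset directly, with no partitioning of the data across nodes and no message passing. On a cluster, the same analysis would mean splitting the dataset up and shuffling partial results over the interconnect.

/* Rough sketch only (not RDAV/SGI code): on a shared-memory SSI
 * machine every core sees one address space, so a data-analysis
 * pass is just a parallel loop over the whole dataset. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    /* Scaled down for the sketch; on a 4TB machine like Nautilus the
       same pattern works on a dataset that fills shared memory. */
    const size_t n = 1UL << 28;              /* ~268 million samples */
    double *data = malloc(n * sizeof *data);
    if (!data) return 1;

    for (size_t i = 0; i < n; i++)           /* fake measurements */
        data[i] = (double)(i % 1000) / 1000.0;

    double sum = 0.0, peak = 0.0;

    /* One parallel region over the lot: threads on every socket read
       the same array; nothing is scattered, gathered or messaged. */
    #pragma omp parallel for reduction(+:sum) reduction(max:peak)
    for (size_t i = 0; i < n; i++) {
        sum += data[i];
        if (data[i] > peak) peak = data[i];
    }

    printf("threads=%d mean=%f peak=%f\n",
           omp_get_max_threads(), sum / (double)n, peak);
    free(data);
    return 0;
}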

1024 CPUs in an SSI system probably means that this will be an SGI Altix 4700 – unless SGI are indeed pushing a new x86 Altix out the door. 16 GPUs may not sound like much; however, if (as I suspect) they're talking about NVIDIA Tesla S1070s, then you're looking at around 4 teraflops of performance from each one – that's roughly 64 teraflops of GPU performance in the system.

Hopefully we’ll be hearing some noise from SGI in the coming days, shedding some light on the exact configuration of the system.

You can read the full press release over at HPCWire.

Impressive SPEC benchmarks from SGI

SGI have just posted some pretty decent SPEC benchmarks, and it's clear that they're aiming straight for the datacentre, calling out IBM, HP and Sun's big SSI (Single System Image) machines. The benchmarks posted are for the Altix 4700, and the cynic in me says this is a bit of a "rah rah rah" flag-waving exercise before we see a Xeon version rolled out.

From the press release:

* SGI is more than five times faster than its next closest SSI competitor in the SPECfp_rate_base2006 floating point performance benchmark. Altix 4700 proved 5.7 times faster than the Sun SPARC Enterprise 9000, its next closest SSI competitor; IBM and HP products trailed further behind.

* SGI performed over four times better in the SPECint_rate_base2006 integer performance benchmark. Altix 4700 proved 4.3 times faster than the Sun product, its next closest SSI competitor.

* SGI outperforms IBM by almost three times in Java performance with SPECjbb2005 benchmark. Altix 4700 is the leader in Java performance as measured by SPECjbb2005, outperforming IBM by 2.8 times.

* SGI has five times higher aggregate memory bandwidth than its next closest competitors. Altix 4700 has the highest aggregate memory bandwidth in the world, five times higher than its next closest competitors, NEC and IBM.

The arguments over how unrealistic and artificial the SPEC benchmarks are will rage forever, but these are some pretty impressive numbers and some pretty bold claims from SGI. You can read the full press release here.
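
As an aside on the aggregate memory bandwidth claim: figures like that are normally produced with a STREAM-style kernel (I'm assuming that's what sits behind SGI's number – the release doesn't say). The idea is simply to time how quickly all the CPUs together can stream large arrays through memory, roughly as in the sketch below.

/* Minimal STREAM-style "triad" sketch – an illustration of how
 * aggregate memory bandwidth is typically measured, not SGI's
 * actual benchmark code. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

#define N (1UL << 26)    /* 64M doubles per array; real runs use far more */

int main(void)
{
    double *a = malloc(N * sizeof *a);
    double *b = malloc(N * sizeof *b);
    double *c = malloc(N * sizeof *c);
    if (!a || !b || !c) return 1;

    #pragma omp parallel for                  /* touch pages in parallel */
    for (size_t i = 0; i < N; i++) { a[i] = 0.0; b[i] = 1.0; c[i] = 2.0; }

    double t0 = omp_get_wtime();
    #pragma omp parallel for                  /* triad: a = b + k*c */
    for (size_t i = 0; i < N; i++)
        a[i] = b[i] + 3.0 * c[i];
    double t1 = omp_get_wtime();

    /* three arrays of doubles move through memory per iteration */
    double gb = 3.0 * N * sizeof(double) / 1e9;
    printf("triad: %.2f GB/s across %d threads\n",
           gb / (t1 - t0), omp_get_max_threads());

    free(a); free(b); free(c);
    return 0;
}

Run that across every socket of a big NUMA box, with the arrays placed locally by first-touch, and the resulting GB/s figure is essentially what an "aggregate memory bandwidth" claim boils down to.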
