Cyclone – SGI’s Technical Compute Cloud

Silicon Graphics News


Looking for IRIX or Solaris expertise? Visit my UNIX Consultancy website.


It seems SGI have joined everyone and their dog in jumping on the cloud bandwagon, having just announced Cyclone, their cloud computing offering for technical and HPC computing.

The offering is non-virtualised (what’s called “single tenancy”), which addresses the main stumbling block to using cloud compute resources for HPC: the overhead of the virtualisation layer. The other stumbling block – actually getting your data onto the cloud – is addressed by letting you ship drives of data directly to SGI, who will preload it onto your compute instance for you.
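The economics of shipping drives are easy to see with a back-of-the-envelope calculation (a minimal sketch with made-up figures; the dataset size and link speed are my assumptions, not anything from SGI):

```python
# Rough comparison: uploading a dataset over a WAN link vs. couriering drives.
# All figures are hypothetical, purely for illustration.

def upload_hours(dataset_tb, link_mbps):
    """Hours needed to push dataset_tb terabytes over a link_mbps link."""
    bits = dataset_tb * 1e12 * 8        # decimal terabytes -> bits
    return bits / (link_mbps * 1e6) / 3600

# A 10 TB dataset over a dedicated 100 Mbit/s line:
print(round(upload_hours(10, 100)))     # ~222 hours -- over nine days
```

An overnight courier delivers the same 10 TB in under 24 hours, which is why “ship us your drives” is a sensible answer for HPC-sized datasets.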

On the hardware side, Cyclone offers compute customers a nice “try before you buy” possibility, with SGI’s entire product range available, packed with some GPU and accelerator goodness:

The SGI technology at Cyclone’s core is comprised of some of the world’s fastest supercomputing hardware architectures, including SGI® Altix® scale-up, Altix® ICE scale-out and Altix® XE hybrid clusters, all based on Intel® Xeon® or Itanium® processors. The hybrid architecture offers either NVIDIA® Tesla GPUs or AMD FireStream™ GPU compute accelerators for floating point double precision workloads, and Tilera accelerators for integer workloads. High performance SGI InfiniteStorage systems are available for scratch space and long-term archival of customer data.

Offering both Itanium and x86 gives customers a great way to port their apps off Itanium and onto the new x86-based Altix UV platform. But I’m sure SGI would never have done that intentionally. Ahem.

On the software side, SGI will be pre-installing many commonly used technical computing applications:

With Cyclone’s SaaS (Software as a Service) model, SGI delivers access to leading-edge open source applications and best-of-breed commercial software platforms from top Independent Software Vendors (ISVs). Supported applications include: OpenFOAM, NUMECA, Acusolve, LS-Dyna, Gaussian, Gamess, NAMD, Gromacs, LAMMPS, BLAST, FASTA, HMMER, ClustalW and OntoStudio. SGI expects to add additional domains and applications partners over time.

SGI are mixing it up with Penguin and NewServers, coming in at a higher price but arguably offering more value by pre-loading software and enabling users to migrate to in-house SGI hardware further down the line. Costs are also high compared to Amazon, but really, I can’t see anyone running HPC or technical compute apps on Amazon’s offering.

You can read more in SGI’s press release here.


Institute of Cancer Research deploys an SGI Altix UV super



The installations of SGI‘s latest NUMA beast, the x86-based Altix UV, are starting to trickle in. Launched back in November, the Altix UV finally took SGI’s scalable NUMA solution to a cheaper (and hopefully more profitable) x86 base, using Nehalem EX processors instead of Itaniums.

The Institute of Cancer Research have announced that they are the latest to deploy an Altix UV. Although the press release is short on specs of the machine, it looks like large Single System Image (SSI) machines are still in demand, despite the overwhelming dominance of clusters in HPC.

Altix UV will provide the ICR with a massively scalable shared memory system to process its growing data requirements, including hundreds of terabytes of data for biological networks, MRI imaging, mass-spectrometry, phenotyping, genetics and deep-sequencing information across thousands of CPUs.

SGI have needed some lower-cost NUMA SSI machines for a while – since the disastrous migration away from MIPS, some would argue – and the Nehalem-based Altix UV has been a long time coming. Hopefully this will mark an upturn in SGI’s fortunes. The full press release can be viewed here.


University of Tasmania’s new SGI Altix ICE cluster for climate modelling



SGI have announced that the Tasmanian Partnership for Advanced Computing (TPAC), at the University of Tasmania’s (UTAS) supercomputing facility, has just installed a chunky 64-node Altix ICE compute cluster.

‘Katabatic’ has a total of 512 processors and a terabyte of RAM, and will be used for Antarctic and South Pacific climate modelling.
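The headline number works out if you assume the dual-socket, quad-core Xeon blades that Altix ICE systems of the era shipped with (my assumption; the press release doesn’t give the node configuration):

```python
# 'Katabatic' node arithmetic, assuming two quad-core Xeon sockets per blade.
nodes = 64
sockets_per_node = 2      # assumed dual-socket ICE blades
cores_per_socket = 4      # assumed quad-core Xeons
print(nodes * sockets_per_node * cores_per_socket)  # 512, matching the announcement
```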

Thirty full-time TPAC users and more than 100 university researchers in the Antarctic Climate and Ecosystems Cooperative Research Centre (ACE CRC), the Australian Integrated Marine Observing System (IMOS), the School of Chemistry, the School of Maths and Physics, and the Menzies Research Institute share access to Katabatic every day.

The Altix ICE cluster also boasts over 70TB of disk space, and over 500TB of mirrored tape storage. More on SGI‘s press release here.


SGI launches Altix UV



As expected, SGI have used the SC09 show to launch their latest single system image NUMA machine – the Altix UV. The specs are impressive: not only have SGI dropped in Nehalem EX processors (as expected), but core density and NUMAlink bandwidth have also seen big improvements.

As with the previous Origin and Altix machines, the Altix UV is available in two models. The Altix UV 100 is the familiar 3U ‘building block’ system, allowing you to scale up as needed. Altix UV 100 scales to 96 sockets (768 cores) and 6TB of shared memory in two racks, for up to a claimed 7.0 Tflops of performance.
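That 7.0 Tflops figure is consistent with peak double-precision numbers for 8-core Nehalem EX parts. A quick sanity check, assuming 4 DP flops per core per cycle and a 2.26 GHz clock (both my assumptions, not from the announcement):

```python
# Peak DP flops sanity check for a 96-socket Altix UV 100.
sockets = 96
cores_per_socket = 8            # 8-core Nehalem EX (assumed)
flops_per_cycle = 4             # DP flops per core per cycle via SSE (assumed)
clock_ghz = 2.26                # e.g. a Xeon X7560-class clock (assumed)

peak_tflops = sockets * cores_per_socket * flops_per_cycle * clock_ghz / 1000
print(round(peak_tflops, 1))    # 6.9 -- right in line with the claimed 7.0
```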

Altix UV 1000 is the big daddy, scaling up to 256 sockets (giving 2,048 processor cores) and 16TB of shared memory in four racks, for up to a claimed 18.6 Tflops of performance. Interestingly, the 16TB memory limit is imposed by the Nehalem EX architecture.
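That ceiling follows directly from the physical address width: Nehalem EX implements a 44-bit physical address space (my understanding of the architecture, not a figure from SGI):

```python
# 2^44 bytes of physical address space caps an SSI machine at 16 TiB of RAM.
PHYS_ADDR_BITS = 44
max_addressable_bytes = 2 ** PHYS_ADDR_BITS
print(max_addressable_bytes // 2 ** 40)   # 16 (TiB)
```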

The next generation of NUMAlink offers a staggering 15 GB/sec transfer rate. The new hub chip has been designed to offload MPI communication: instead of the CPUs having to handle the packaging and transmission of MPI messages, the UV hub now takes that load. This frees the CPUs for pure number crunching, while still enabling the fast MPI communications needed in such a large NUMA system.
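The win from offloading is that communication overlaps with computation instead of serialising with it. A toy timing model makes the point (illustrative numbers only, not measurements of the UV hub):

```python
# Toy model: per-iteration cost with and without communication offload.
def step_time(compute_s, comm_s, offload):
    # Offloaded: comm proceeds in parallel, so the step costs the longer of the two.
    # Not offloaded: the CPU does both in sequence, so the costs add.
    return max(compute_s, comm_s) if offload else compute_s + comm_s

compute, comm = 1.0, 0.5
print(step_time(compute, comm, offload=False))  # 1.5 -- CPU handles MPI itself
print(step_time(compute, comm, offload=True))   # 1.0 -- comm fully hidden
```

As long as communication time stays below compute time per step, the offloaded case hides it entirely.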

Speaking of large systems, the UV design allows individual four-rack systems to be hooked together in an 8×8 torus. In theory, the UV hub design could provision over 32,000 cores. The UV hub also has some FB-DIMMs to cache directory information, which not only speeds up operations but also helps the solution scale.

The design of the processor board is interesting. SGI have used Intel’s Boxboro chipset to handle I/O, with the UV hub plugged directly into both Nehalem CPUs via the QPI interconnect.

SGI Altix UV system board design

The I/O risers mean that, since it’s a single system image, any processor core can access any I/O device anywhere in the system. With the potential for so much I/O throughput, it would be interesting to see what a large Altix UV system packed with Tesla GPUs could achieve.

The Altix UV is an evolution, rather than a revolution, of the flexible NUMA design that first appeared in the Origin 3000. Despite all the press about clusters, big single system image machines still remain the most efficient choice for many problems. The problems that needed solutions like the original Origin 2000 – pre- and post-processing tasks, very large data problems, I/O and memory intensive apps – have, if anything, gotten more complex and demanding over time, and SGI still have the technology to solve them.

The Nehalem EX won’t be formally launched by Intel until Q2 2010, so SGI aren’t releasing any performance figures. However, SGI have announced four initial customers, who will be taking delivery once the processors start volume shipment.

The customers announced at launch are the University of Tennessee (1024 cores, 4TB memory), the North German Supercomputing Alliance (HLRN) (two systems totalling 4352 cores, 18TB of memory, to plug into their existing ICE installation), CALcul en MIdi-Pyrénées/Computations in Midi-Pyrénées (CALMIP) based at the University of Toulouse in France (128 cores and 1 TB of memory), and the Institute of Low Temperature Science at Hokkaido University in Japan (180 cores, 360 GB of memory).

With the Altix UV no longer requiring customers to recompile for Itanium, SGI now have a real chance to push these machines – not just for HPC, but also in business data centres, where Sun and HP have been very successful selling large machines like the F25k and Superdome.


SGI at SC09



SC09 kicks off next week in the US, and SGI has been dropping hints about a big product announcement coming up.

There’ll be the usual pimping of the latest lines from their product range:

Octane™ III, SGI’s personal supercomputer, ICE Cube™ modular data center, CloudRack™ scalable workgroup clusters, SGI® Altix® ICE departmental server, and SGI® InfiniteStorage hardware and software products.

I’m hoping the new shiny bit of kit is going to be a line of Xeon based Altix gear, but we shall see:

“Supercomputing 2009 is the preeminent conference for leading HPC companies to showcase their most innovative technologies,” said Mark J. Barrenechea, president and CEO of SGI. “SGI is proud to demonstrate influential HPC products that scale from the personal supercomputer to the largest scale-up platform, and to make a major product announcement.”

New toys aside, SGI’s CTO Dr. Eng Lim Goh is going to be giving a number of talks, which are always worth a listen. He’ll be presenting “Scalable Architecture for the Many-Core and Exascale Era” at the Exhibitor Speaker Forum on Tuesday, November 17, at 2:30 p.m. PST in room E143-144. He’ll also be talking about scaling up systems using the Nehalem EX in the Intel Theater, within the Intel booth, on Wednesday, November 18, at 4:00 p.m. PST.

Once again this year work has gotten in the way and I won’t be able to attend, but I highly recommend you keep on top of things with John’s coverage of SC09 over at insideHPC.
