Intel’s Nehalem opens up some options for Silicon Graphics

Silicon Graphics News, supercomputing

Intel have announced that their “Nehalem” processors will come to market in Q1 2009, with 2, 4 or 8 cores. Nothing too spectacular there (see Sun’s OpenSPARC CPU for how to really scale with cores), but moving away from arguments about how multi-core is better, Nehalem uses Intel’s QuickPath Interconnect (QPI), and that’s of great interest to Silicon Graphics customers.

QPI will be used by both Nehalem Xeon processors and the upcoming Tukwila Itaniums. This means that SGI’s biggest class of box – the Altix 4700 – could in theory be perfectly happy with either newer Itaniums, or make the move to cheaper Nehalem Xeons.

The next generation of Altix ICE blades will definitely be sporting the new Xeon processors (along with double data rate (DDR) InfiniBand), but it’s the scalability of the bigger Altix NUMA boxes that is of interest to many customers. Given that the architecture can scale to 128TB of shared memory, with installations running to 4096+ cores, being able to fit eight cores per socket would be a massive shot in the arm for these systems – along with the increased memory density and cooler running that the new processors will bring.

Being able to fit the cheaper Xeons into the high-end offerings also means Silicon Graphics can lower production costs and increase margins, which, given the recent quarterly earnings reports, can only be a good thing.


More global shared memory on SGI Altix 4700 systems

Silicon Graphics News

Silicon Graphics have just announced that more global shared memory is available with fewer CPUs on their Altix 4700 systems. Increased DIMM density means you can now get an Altix 4700 with 2TB of memory on only 8 processors.

If you’ve got applications that require large amounts of memory but not much in the way of compute-intensive processes, this is very good news indeed.

Global shared memory is memory which is accessible from all processors/cores. So in an SGI Altix with 1024 processors and 4TB of RAM, any one of the 1024 CPUs can access any part of that 4TB of memory. This is due to the design of Silicon Graphics’ large scale systems, which are Single System Image (SSI) machines – all resources are shared.

Clusters work differently: each node has ‘local’ CPU and memory, which can’t be accessed directly from another node.

Both SSI machines and clusters can scale, but in different ways and for different workloads. Shared-memory jobs, where you’re doing lots of memory I/O and can peg your dataset in physical RAM, don’t scale well on clusters, whereas rendering (where discrete jobs can be chopped up and executed in batches) is just right for clusters but wasteful on SSI machines.
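As a loose analogy for the shared-memory model (in plain Python, nothing SGI-specific), the point is that any worker can read any part of one dataset directly, with no message passing:

```python
import threading

# One dataset in a single address space -- any worker can touch any part of
# it directly (a loose analogy for an SSI machine; on a cluster, each node
# would only hold its own slice and would need explicit messages).
data = list(range(1_000_000))
partial_sums = [0, 0, 0, 0]

def worker(idx: int, lo: int, hi: int) -> None:
    # Each thread reads a slice of the *same* shared list in place.
    partial_sums[idx] = sum(data[lo:hi])

threads = []
chunk = len(data) // 4
for i in range(4):
    t = threading.Thread(target=worker, args=(i, i * chunk, (i + 1) * chunk))
    t.start()
    threads.append(t)
for t in threads:
    t.join()

total = sum(partial_sums)
print(total)  # identical to summing the whole list in one go
```

On a cluster the partial sums would have to be shipped over the network (e.g. via MPI) before being combined; on an SSI machine the whole array is just plain addressable memory.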

With lots of memory density enhancements coming down the line, I’m wondering: when will Silicon Graphics break through the 4TB system memory barrier?


How to scale a Terabyte in-memory database?

Performance

McObject are one of those database vendors who you don’t normally hear of, but who are really pushing the boundaries of what can be done with your data.

Their product, eXtremeDB-64, is written to take advantage of large-memory systems by pegging the entire dataset in physical RAM. The advantages are pretty obvious – as are the downsides. The McObject guys have really thought about the problems, though, and eXtremeDB-64 is an impressive database solution.

What’s more impressive is McObject’s recent benchmark and scalability testing, where they tested a 1.17-terabyte, 15.54 billion row in-memory database on a 160-core SGI Altix 4700 server. They measured transaction throughput of up to 87.78 million query transactions per second, which is the sort of uber data-warehousing capability I know a number of businesses would love to get their hands on.
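To get a feel for why pinning the working set in RAM allows such high query rates, here’s a minimal sketch (illustrative only – the table, index and numbers below are made up, and this is nothing like McObject’s actual engine) that indexes rows in memory and measures lookup throughput:

```python
import time

# Build a small in-memory "table" with a hash index on the key column.
# (Hypothetical schema, purely for illustration.)
N = 200_000
index = {key: ("row-payload", key * 2) for key in range(N)}

queries = 1_000_000
start = time.perf_counter()
hits = 0
for q in range(queries):
    # Every lookup is a pointer chase in RAM -- no disk I/O on the query path.
    if (q % N) in index:
        hits += 1
elapsed = time.perf_counter() - start

print(f"{hits} hits, {queries / elapsed:,.0f} lookups/sec")
```

Even naive interpreted code manages millions of lookups per second this way; the query path never leaves memory, which is the whole premise of an in-memory database.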

The benchmark white paper is available as a free download – head on over to this page to enter your details.
