More global shared memory on SGI Altix 4700 systems

Silicon Graphics News

Silicon Graphics have just announced that more global shared memory is available with fewer CPUs on their Altix 4700 systems. Increased DIMM density means you can now get an Altix 4700 with 2TB of memory using only eight processors.

If you’ve got applications that need large amounts of memory but aren’t especially compute-intensive, this is very good news indeed.

Global shared memory is memory that is accessible from every processor/core. So in an SGI Altix with 1024 processors and 4TB of RAM, any one of the 1024 CPUs can access any part of that 4TB of memory. This is due to the design of Silicon Graphics’ large-scale systems, which are Single System Image (SSI) machines – all resources are shared.
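To make the idea concrete, here’s a minimal shared-memory sketch in C with OpenMP: every thread, whichever CPU it happens to run on, can read or write any element of a single large allocation. The array size and thread count here are scaled-down, made-up numbers rather than anything Altix-specific.

```c
/* Minimal shared-memory sketch: on a single-system-image machine every
 * thread can touch any part of one big allocation. Sizes are illustrative
 * only; on an Altix the array could be terabytes. */
#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main(void)
{
    size_t n = 1UL << 26;                    /* ~512 MB of doubles */
    double *data = malloc(n * sizeof *data);
    if (!data) { perror("malloc"); return 1; }

    /* Any thread (on any CPU) can write any element. */
    #pragma omp parallel for
    for (size_t i = 0; i < n; i++)
        data[i] = (double)i;

    /* ...and any thread can read any element. */
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (size_t i = 0; i < n; i++)
        sum += data[i];

    printf("threads=%d sum=%g\n", omp_get_max_threads(), sum);
    free(data);
    return 0;
}
```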

Clusters work differently: each node has its own ‘local’ CPU and memory, which can’t be accessed directly from another node.

Both SSI and clusters can scale, but in different ways and with different workloads. Shared-memory jobs, where you’re doing lots of memory I/O and you can peg your dataset in physical RAM, don’t scale well with clusters, whereas rendering (where discrete jobs can be chopped up and executed in batches) is just right for clusters but not SSI machines.
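As a rough illustration of why rendering splits so cleanly across a cluster, here’s a toy sketch that carves a job into per-node frame batches. The frame and node counts are invented; the point is that each batch needs nothing from any other node’s memory.

```c
/* Sketch of an embarrassingly parallel job: the work splits into
 * independent batches, so each cluster node only needs its own slice.
 * Frame and node counts are made up for illustration. */
#include <stdio.h>

int main(void)
{
    int total_frames = 2400;   /* whole rendering job */
    int nodes = 32;            /* hypothetical cluster size */
    int per_node = (total_frames + nodes - 1) / nodes;

    for (int node = 0; node < nodes; node++) {
        int first = node * per_node;
        int last  = first + per_node - 1;
        if (first >= total_frames) break;
        if (last >= total_frames) last = total_frames - 1;
        /* This node renders its range and never touches another node's data. */
        printf("node %2d renders frames %4d-%4d\n", node, first, last);
    }
    return 0;
}
```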

With more memory density enhancements coming down the line, I’m wondering when Silicon Graphics will break through the 4TB system memory barrier.


How to scale a Terabyte in-memory database?

Performance

McObject are one of those database vendors you don’t normally hear about, but who are really pushing the boundaries of what can be done with your data.

Their product, eXtremeDB-64, is written to take advantage of large-memory systems by pegging the entire dataset in physical RAM. The advantages are pretty obvious – as are the downsides. The McObject guys have really thought through the problems, though, and eXtremeDB-64 is an impressive database solution.
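For a sense of what ‘pegging the dataset in physical RAM’ means at the operating-system level, here’s a small POSIX sketch using mlock(). This isn’t eXtremeDB code, just an illustration of the idea, and the 512MB size is a stand-in for the terabytes you’d actually want on an Altix.

```c
/* Sketch of pegging a dataset in physical RAM on a POSIX system:
 * allocate the working set and mlock() it so it can't be paged out.
 * Not eXtremeDB code; purely illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/mman.h>

int main(void)
{
    size_t bytes = 512UL * 1024 * 1024;   /* 512 MB here; far larger in practice */
    void *dataset = malloc(bytes);
    if (!dataset) { perror("malloc"); return 1; }

    memset(dataset, 0, bytes);            /* touch the pages so they exist */

    /* Lock the pages into physical memory; needs a suitable RLIMIT_MEMLOCK
     * or privileges, otherwise this fails with EPERM/ENOMEM. */
    if (mlock(dataset, bytes) != 0) {
        perror("mlock");
        free(dataset);
        return 1;
    }

    printf("dataset of %zu bytes locked in RAM\n", bytes);

    munlock(dataset, bytes);
    free(dataset);
    return 0;
}
```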

What’s more impressive is McObject’s recent benchmark and scalability testing, where they tested a 1.17-terabyte, 15.54-billion-row in-memory database on a 160-core SGI Altix 4700 server. They measured throughput of up to 87.78 million query transactions per second, which is the sort of uber data-warehousing capability I know a number of businesses would love to get their hands on.
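For context on what a ‘query transactions per second’ figure measures, here’s a toy C harness that times a batch of lookups against a small in-memory table and divides by elapsed wall-clock time. The table, the query mix and any numbers it produces are all made up and bear no relation to McObject’s benchmark.

```c
/* Toy throughput harness: run a batch of simple lookups against an
 * in-memory array and report lookups per second. Purely illustrative. */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    size_t rows = 10 * 1000 * 1000;       /* small in-memory "table" */
    long *table = malloc(rows * sizeof *table);
    if (!table) { perror("malloc"); return 1; }
    for (size_t i = 0; i < rows; i++)
        table[i] = (long)i * 3;

    size_t queries = 50 * 1000 * 1000;
    volatile long sink = 0;               /* stop the loop being optimised away */

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t q = 0; q < queries; q++)
        sink += table[(q * 2654435761UL) % rows];   /* pseudo-random lookup */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.0f lookups/sec (sink=%ld)\n", queries / secs, sink);

    free(table);
    return 0;
}
```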

The benchmark white paper is available as a free download – head on over to this page to enter your details.
