I’ve just had an email from Cerelink Digital Media Group with some clarification. Apparently, while they are using NMCAC resources to explore ‘cloud computing’ and commercial provisioning, Dreamworks will not be using Encanto as part of this. Instead, Cerelink DMG will be building out compute clusters for Dreamworks based on HP blades. Interesting – I’ve emailed Cerelink and asked them for some more information, which I’ll post up here when I have it.
Truly. After their brief affair with HP, Dreamworks have gone back to using SGI hardware. Although not in the way you might think…
A while ago Silicon Graphics installed a big Altix ICE system at the New Mexico Computing Applications Center. The machine was nicknamed Encanto, and currently has 14,336 Xeon processor cores and 28TB of memory.
Encanto is housed at Intel’s facility in Rio Rancho. It is one of (if not the) world’s largest non-government machines, and in fact has been funded by a unique mix of public and private money. This gives some flexibility in how it’s used – at the moment, the University of New Mexico, New Mexico State University, and New Mexico Tech all have first say on jobs running on the box. The U.S. Department of Energy’s Los Alamos National Laboratory and Sandia National Laboratory, which are also located in New Mexico, are also partners in the Encanto system, and so get access too.
It seems Dreamworks have been working with an IT company called Cerelink Digital Media Group, who are setting up access to, and reselling compute time on, Encanto.
A bit roundabout, but Dreamworks are indeed back on SGI kit. Perhaps more importantly, this sort of deal is likely just the tip of the iceberg. As we move forward, more and more organisations are going to rent out time on their large machines and clusters, ensuring that the hardware generates revenue and operates efficiently round the clock.
Sun’s Darkstar project is the first direct-from-a-vendor effort to commercialise this sort of activity, and again – we’ll be seeing everyone moving to this model in the future.