
Enabling Ecological Data Centers: Rackable Systems CEO Mark Barrenchea (Part 6)

Posted on Tuesday, Dec 2nd 2008

SM: Are you going to take it to the enterprise data center?

MB: Actually we won’t. We do not focus on running SAP, Oracle, or databases. We are trying to take all the redundancy out of the hardware that we can. We know servers are going to fail; our view is that you should let them fail. When you buy a rack of them and they are clustered, they become disposable blades. Throw them away instead of buying five power supplies and triple fan blowers; over a lifetime you pay more that way.

When you overprovision an environment, you can focus redundancy at the software and operating system level. We tend not to sell into big transactional environments. We sell more into clustered environments such as HPC, search, and social networking, things which run more like directory services than big databases. I am staying away from the big glass house applications such as Oracle and SAP. Exchange has come a long way, and it runs more like a directory service than a big centralized clearinghouse of a database.
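The idea of treating clustered servers as disposable and pushing redundancy into software can be sketched in a few lines. This is a minimal, hypothetical illustration (the node names, failure model, and `serve` helper are assumptions for the example, not Rackable's actual design): a request is simply retried against replicas, and a failed node is skipped rather than repaired.

```python
import random

class Node:
    """A cluster node that may have failed; no hardware redundancy assumed."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def handle(self, request):
        if not self.healthy:
            raise ConnectionError(f"{self.name} is down")
        return f"{self.name} served {request}"

def serve(cluster, request):
    """Try replicas in random order until one succeeds.

    Failed nodes are routed around in software rather than
    being kept alive with redundant power supplies and fans.
    """
    for node in random.sample(cluster, len(cluster)):
        try:
            return node.handle(request)
        except ConnectionError:
            continue  # hardware failed; software redundancy absorbs it
    raise RuntimeError("all replicas down")

# One node has failed, but the overprovisioned cluster still serves.
cluster = [Node("n1"), Node("n2", healthy=False), Node("n3")]
print(serve(cluster, "GET /index"))
```

The design choice mirrors the point above: with enough cheap replicas, a dead node is an inventory event, not an outage.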

SM: You need to get new product strategies and new customer bases; what is next?

MB: Over the last year I have changed our engineering focus to enclosures. We are thinking very differently about how to enclose a server. You buy a server today and it is a big chassis of aluminum; there is $100 of aluminum in a server, and there is no reason to have that much. That chassis in turn is put into another cabinet of aluminum, so you wind up with cabinets and cabinets of aluminum. We are removing the aluminum from what we do, and we are building servers on pizza trays. Imagine a big cafeteria-style rack: you just put in a tray of servers and storage. We are thinking very differently about enclosure styles, and when you do that you get even better density.

We have removed all fans from our servers and gone to centralized fans on the back of the cabinets. We are applying 50-year-old technology to heat sinks in a new way. When you take a piece of copper, magnetize it, and run 12V DC through it, the copper turns cold.

When a processor runs, it throws off heat at 180 degrees, and that heat is lost. We are trying to capture that heat, and capture wasted current, by running it through a different style of heat sink that actually turns cold. When you move air over that heat sink, it is enough to cool an enclosure. We have also moved to big 40-foot cargo containers to build a completely enclosed data center.

SM: What market did you have in mind for a cargo container data center?

MB: New data center construction. Google spends over $100 million a year on electricity. With our ICE Cube we can remove 50% of all power requirements for a data center. Instead of building a very expensive data center near a source of electricity, we want to enable data centers anywhere in the world that require only network connectivity and very little power.

We continue to innovate at the server level, and we innovate at the data center level as well. It is a new market, but we are targeting brand new data center construction as well as folks who are out of space. Over at Yahoo! we have a cargo container in their parking lot, backed up to their data center; they added 15,000 cores in a 40-foot structure.

This segment is part 6 in the series : Enabling Ecological Data Centers: Rackable Systems CEO Mark Barrenchea
