
Thought Leaders in Cloud Computing: Ari Zilka, CTO of Terracotta (Part 7)

Posted on Sunday, Jan 8th 2012

SM: There are many classifications of big data and real time. At this point, that’s one of the reasons why there’s so much activity in the space right now.

AZ: Yes. Big data can be sold to enterprises because no matter who you are, you don’t have to be a Facebook anymore to have an explosion of data on your hands. Big data can be sold to the end user. It’s purely a function of context. There are companies like Backblaze and Blaze Logic and Dropbox and Box.net selling data management in the cloud. There is big data for you and me. There is big data for Main Street enterprise. There’s big data for Wall Street. There is big data in different flavors and slices all over the place, helping us manage the fact that all our data has moved online. At some point, the IRS said, “Please don’t send me any paper tax returns. I don’t want to file anything anymore. I’m happy to buy disks. I’m not very happy to buy file cabinets.” If the IRS is online and they’re handling big data, and you’re not thinking about it for your business and your customers, then you’re behind the curve … absolutely.

SM: Very good. I enjoyed the discussion. Is there anything else that you want to add before we close?

AZ: It was a refreshing change of pace for me. I will point out, without peddling my own products, that there is a problem when you take big data to the cloud, whether in a public cloud or a private cloud context. A lot of people are debating whether cloud is utility computing. Are they one and the same? Or is it the managed hosting of years past, and what's the difference? One of the differences is that with managed hosting, for example, I can call up Sun (well, Oracle now) and say, "I want to buy a refrigerator-sized server. Ship it over to my hosting provider, and he's going to run it for me." That's managed hosting. Cloud has an aspect of homogeneity and black-box opacity to it. You don't know what's back there. You're not supposed to know. You're supposed to make all your workload fit into small chunks on small servers. That's how clouds are being bought: racks and racks of small servers strung together to create one big cloud, be it Amazon's cloud or Rackspace's cloud or your own enterprise's cloud at JPMorgan Chase or Goldman or what have you.

The problem is that data management solutions are traditionally designed to run scaled up on huge, monster machines. So cloud and database don't go together. There's an opportunity out there. Facebook is doing this. Yahoo is doing this. Google is doing this. Amazon is doing this. There's an opportunity to break your data up into cloud-sized bites and look at it in aggregate across thousands of pieces of data. No one slice of your data needs a giant, expensive, custom piece of hardware anymore. We're going to a homogeneous run-time environment where everything is an exact-size brick. In that brick-sized world, a solution like ours, which is memory-based rather than big, disk-based data, is very interesting.
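
To make the "cloud-sized bites" idea concrete, here is a minimal, hypothetical Java sketch. It is not Terracotta's product or API; the shard count and dataset are invented for illustration. It shows the pattern Ari describes: split one large dataset into uniform shards, let each small node compute over only its shard, and combine the partial results into one aggregate.

```java
import java.util.stream.IntStream;

/**
 * Illustrative only: shard a dataset across uniform "bricks" and aggregate
 * the partial results. Here every shard is processed in one process; in a
 * real cloud each shard would live on its own small server.
 */
public class ShardedAggregate {
    public static void main(String[] args) {
        // Pretend this is one large dataset of transaction amounts.
        long[] dataset = IntStream.rangeClosed(1, 1_000_000).asLongStream().toArray();

        int shardCount = 64; // one "cloud-sized bite" per small, uniform server
        long total = 0;
        for (int shard = 0; shard < shardCount; shard++) {
            long partial = 0;
            // Each node sums only the records that hash to its shard.
            for (int i = shard; i < dataset.length; i += shardCount) {
                partial += dataset[i];
            }
            total += partial;
        }
        System.out.println("Aggregate across all shards: " + total);
    }
}
```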

I can string together the memory of many machines across a network to give one uber data store that handles all this big data at very high speeds. That's where I think we're going to see a lot of innovation. We have seen a lot of it in the last decade, but we're just getting started. It's being adopted at this point in mainstream enterprises. Leaving the database at home is what I call it. Leave the old Oracle and the old IBM infrastructure at home. Don't throw it away, but pull all your data out of it so that you can get at it in the cloud, where it's much cheaper to run, much cheaper to own, and it runs much faster.
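
As a rough illustration of "stringing together the memory of many machines," here is a hypothetical Java sketch of a client-side partitioned key-value store. It is not Terracotta's API; the class and names are made up, and each "node" is just an in-process map standing in for a remote server's memory. The point is only the placement rule: hash each key to one node so many small memories behave like a single large store.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

/** Illustrative sketch: hash-partition keys across the memory of many uniform nodes. */
public class PartitionedStore<K, V> {

    // One map per node; in a real deployment each would live on a separate machine.
    private final List<Map<K, V>> nodes = new ArrayList<>();

    public PartitionedStore(int nodeCount) {
        for (int i = 0; i < nodeCount; i++) {
            nodes.add(new ConcurrentHashMap<>());
        }
    }

    // Route a key to a node by hashing; every client computes the same placement.
    private Map<K, V> nodeFor(K key) {
        int index = Math.floorMod(key.hashCode(), nodes.size());
        return nodes.get(index);
    }

    public void put(K key, V value) {
        nodeFor(key).put(key, value);
    }

    public V get(K key) {
        return nodeFor(key).get(key);
    }

    public static void main(String[] args) {
        // Ten uniform "bricks" acting together as one logical store.
        PartitionedStore<String, String> store = new PartitionedStore<>(10);
        store.put("customer:42", "Ada Lovelace");
        System.out.println(store.get("customer:42"));
    }
}
```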

SM: Yes. Absolutely. Great. It was very nice to talk to you, Ari.

AZ: Likewise, I appreciate your time.

This segment is part 7 in the series : Thought Leaders in Cloud Computing: Ari Zilka, CTO of Terracotta
