Sramana Mitra: You talked about in-memory databases and said you were using Oracle’s in-memory database technology. Could you talk a bit about trends in in-memory databases, because it looks like SAP is setting its entire company on HANA?
Franz Aman: It is fascinating because some in-memory technologies have been around for a while, while others have been promoted heavily only recently – SAP’s HANA would be one example. In the case I mentioned, we used Oracle’s TimesTen database, which has been around for a while. I think it was an under-appreciated asset – even at Oracle, people seemed to have forgotten about it. The fundamental difference with in-memory is this: you get amazing analysis capabilities. You don’t have to worry about breaking a problem into smaller pieces and distributing them over a cluster. If you have one big memory space that you can look at, the types of relationships you can see immediately – whether in social media analysis, fraud detection, or other problems – are amazing. It frees you from many burdens and from having to think small. You can see the forest rather than having to worry about the trees. Going in-memory frees you from thinking small: you no longer have to do some of the painful things you had to do in the old world, such as maintaining indexes or aggregates and worrying about how to lay out the data so that you could process it. The moment you have to think about laying out the data to get something done, you limit the answers you can get and the questions you can ask.
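[Editor's note: the point about asking ad-hoc questions without pre-arranging the data can be sketched with a minimal example, using SQLite's `:memory:` mode as a stand-in for an in-memory database; the table and column names are hypothetical.]

```python
import sqlite3

# An in-memory database: the whole dataset lives in RAM, so ad-hoc
# queries are practical without pre-built indexes or aggregates.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (user TEXT, action TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?)",
    [("alice", "buy", 10.0), ("bob", "buy", 250.0),
     ("alice", "refund", 10.0), ("carol", "buy", 5.0)],
)

# Any question can be asked immediately, without having laid the data
# out for it in advance -- e.g. find unusually large purchases:
rows = conn.execute(
    "SELECT user, amount FROM events WHERE action = 'buy' AND amount > 100"
).fetchall()
print(rows)  # [('bob', 250.0)]
```

At the scales discussed in the interview the same idea applies to terabytes of shared memory rather than a toy table: the freedom is architectural, not tied to any one database product.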
Again, the fundamental point about in-memory is that it frees you from compute limitations and from architectural or thought limitations. What a lot of scientists use our systems for is seeing the big picture. They can ask whatever questions pop into their minds, and they are not limited by how they arranged their data. It is fascinating that people in enterprises have only come to realize that during the past few years. I think in both software and hardware we are getting closer to how our brain works, and getting closer to a big brain system is really empowering. In that context I also see visualization as a key element that helps individuals understand what they are working with. If you can see the data, you immediately detect the patterns, rhythms, and repetitions over time. If you had to stare at a spreadsheet or a traditional report, you couldn’t see that. It is fun to see small new companies that have picked up on that and are building phenomenal visualizations and interactive associative technologies – whether that is TIBCO Spotfire; QlikTech’s associative technology, which lets you quickly ask questions and get answers immediately (also based on in-memory technology); or something like Tableau, which also has very nice visualization capabilities. Those are some of the trends I see in in-memory.
SM: In terms of limitations, is there a size limitation when you are dealing with in-memory architectures?
FA: Size can definitely be limiting. There are different schools of thought here. Today’s laptops typically have a few gigabytes of memory. That is wonderful for a lot of the things we do. But if you do big data–style analysis and you want to see the bigger picture, that is not enough. It is helpful to have a big brain–type system. One of the things we developed for those applications is our big brain system, which has more than 64 terabytes of main memory.
This segment is part 3 in the series : Thought Leaders in Big Data: Franz Aman, Chief Marketing Officer, Silicon Graphics