Thought Leaders in Big Data: Interview with Robert Youngjohns, SVP and GM at HP Autonomy (Part 2)

Posted on Saturday, Apr 27th 2013

Sramana Mitra: What is the state of the union on video analysis?

Robert Youngjohns: It is developing very quickly, and we have very powerful tools. It started way back with something as simple as number plate recognition, which is now well established. We have a demo app where we show people how we take pretty much every TV feed we can get from anywhere in the world and then do real-time analysis on what that feed is about. It turns out to be a complex problem, but not as complex as people may think. News media use a lot of subtitling. We can detect those subtitles and use them to categorize the information stream we are getting in. Then, by looking at the content of those streams, we can apply negative or positive sentiment to them.
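
To make that workflow concrete, here is a minimal, hypothetical Python sketch of the pipeline Youngjohns describes: pull caption text off a frame (the OCR step is stubbed out), categorize the stream by keyword, and assign a rough positive/negative sentiment score. The lexicons and category keywords are invented for illustration and are not HP Autonomy's actual implementation.

    # Illustrative only: categorize caption text from a TV feed and
    # score its sentiment. Lexicons here are toy examples.
    POSITIVE = {"rally", "growth", "wins", "record", "recovery"}
    NEGATIVE = {"crash", "losses", "scandal", "crisis", "recall"}
    CATEGORIES = {
        "finance": {"markets", "stocks", "earnings"},
        "politics": {"election", "senate", "parliament"},
    }

    def extract_caption(frame) -> str:
        """Stub for an OCR pass over a frame's subtitle region."""
        raise NotImplementedError("plug an OCR engine in here")

    def classify(caption: str) -> tuple[str, int]:
        words = set(caption.lower().split())
        category = next(
            (name for name, kws in CATEGORIES.items() if words & kws),
            "uncategorized",
        )
        score = len(words & POSITIVE) - len(words & NEGATIVE)
        return category, score

    print(classify("Markets rally as earnings hit a record"))
    # -> ('finance', 2)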

SM: That is a less complicated problem. I think the use case that you described of video surveillance and something anomalous happening in that workflow is much more complicated.

RY: That is correct. We have intelligence agencies using our software because it helps them narrow down mug shots to known suspects. We produce the underlying engines for that. Sometimes I wonder about those uses from my own political perspective, but at the end of the day that is about how people use the software, rather than the underlying technology itself.

One of the interesting things we have done on the video side, which you can have a look at on a personal level, is take some of the video analysis capability we have and move it into the phone. From that we have been able to create an app called Aurasma. What Aurasma does is run image analysis through your phone's lens and then use a trigger image, which you have predefined to the system, to bring up other digital content, which we call Auras. For example, you could have a road sign or a consumer item, point your phone at it, and it will recognize it as a trigger image and bring up digital content related to it.
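
For illustration, here is a minimal sketch of the trigger-image idea: match local features extracted from a camera frame against a predefined trigger image and, on a strong match, surface the linked digital content. It uses OpenCV's ORB features with a ratio test; the file names and threshold are placeholders, and the technique shown is an assumption about this class of system, not Aurasma's actual algorithm.

    # Illustrative only: recognize a predefined trigger image in a
    # camera frame using ORB feature matching (OpenCV).
    import cv2

    def matches_trigger(frame_path: str, trigger_path: str,
                        min_good: int = 25) -> bool:
        orb = cv2.ORB_create()
        frame = cv2.imread(frame_path, cv2.IMREAD_GRAYSCALE)
        trigger = cv2.imread(trigger_path, cv2.IMREAD_GRAYSCALE)
        _, frame_desc = orb.detectAndCompute(frame, None)
        _, trig_desc = orb.detectAndCompute(trigger, None)
        if frame_desc is None or trig_desc is None:
            return False  # no features found in one of the images
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
        pairs = matcher.knnMatch(trig_desc, frame_desc, k=2)
        # Lowe's ratio test keeps only distinctive matches.
        good = [p for p in pairs
                if len(p) == 2 and p[0].distance < 0.75 * p[1].distance]
        return len(good) >= min_good

    if matches_trigger("camera_frame.png", "magazine_ad.png"):
        print("Trigger recognized: surface the linked Aura")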

That is an interesting business area. It is an area we have been experimenting with for a while, and we are ready to move into mainstream deployment with it. Take a magazine, for example, that says, “It is tough being a printed magazine nowadays.” But what if someone held up their phone to an image in that magazine and new digital content sprang out of it? Would that not bring the thing to life? Take catalogs that you get through the post today, which you would probably throw away on the way between the postbox and home. If they had digital content triggered by the images, would it cause you to keep the catalogs longer and therefore make you more likely to buy from them? These are the things we have been working on, trying to find as many use cases as possible.

SM: You were talking about Aurasma. That seems to me like a regular video app, having nothing to do with big data.

RY: It isn’t, but you asked the question about video analysis and I mentioned some of our video analysis tools.

SM: But this is not a video analysis use case. It is just a video use case.

RY: It is not just a video use case. We have a trigger in this video analysis, and that trigger is bringing up digital content. The digital content could be anything. We had an example in the UK, where Times Magazine used our technology and digitally enabled one of their supplements. You could hold your iPhone over a page of the Sunday Times supplement (one page happens to be a photograph of a ballerina dancing), and the phone analyzes the image and recognizes it as the trigger image for a particular video. It then streams that video to you, and the video happens to be of the ballerina dancing. It is far more than just cutesy videos.

This segment is part 2 in the series : Thought Leaders in Big Data: Interview with Robert Youngjohns, SVP and GM at HP Autonomy