Wasim Khaled: If you look back to the era of the Soviet Union, they weaponized propaganda and disinformation. More recently, they have wielded it using these new tools. Again, it's self-subscribed propaganda. It's the perfect system: when you have polarized groups, you can drill in on them.
I want to consider where that might be going. Today, it's pretty bad. This week in particular, we can see how many opposing narratives are coming out on the same topic.
That is the goal. Disinformation aims to do one of two things. The first is to make the audience, whether it's society at large or a company's audience, tune out. They can't tolerate it anymore, so they stop participating. That is a success for the people waging that war.
The second is that people fall into one particular rabbit hole, get drawn into that group, and get radicalized. The problem is that today, these campaigns work like creative agencies: they need a group of people to write the posts and then use bot-like scripts to amplify the messaging.
There might be a couple of people operating a hundred thousand accounts, or it might be a scripted bot spreading that information. This is what Blackbird looks for: detecting and understanding manipulation.
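To make that amplification pattern concrete, here is a minimal sketch of one common coordination signal: many distinct accounts posting near-identical text within a short window. This is not Blackbird's actual system; the thresholds, field names, and normalization are illustrative assumptions.

```python
from collections import defaultdict
from datetime import timedelta

# Hypothetical post record: (account_id, timestamp, text).
# Thresholds are illustrative assumptions, not Blackbird's values.
MIN_ACCOUNTS = 50               # distinct accounts posting the same text
WINDOW = timedelta(minutes=10)  # within this time window

def normalize(text: str) -> str:
    """Crude normalization so trivially edited copypasta still matches."""
    return " ".join(text.lower().split())

def find_copypasta_bursts(posts):
    """Group posts by normalized text and flag texts pushed by many
    distinct accounts inside a short window -- a classic signature of
    scripted or centrally operated account networks."""
    by_text = defaultdict(list)
    for account_id, ts, text in posts:
        by_text[normalize(text)].append((ts, account_id))

    suspicious = []
    for text, events in by_text.items():
        events.sort()  # order by timestamp
        start = 0
        for end in range(len(events)):
            # Shrink the window until it spans at most WINDOW of time.
            while events[end][0] - events[start][0] > WINDOW:
                start += 1
            accounts = {a for _, a in events[start:end + 1]}
            if len(accounts) >= MIN_ACCOUNTS:
                suspicious.append((text, len(accounts)))
                break
    return suspicious
```

In practice, detection systems combine many such signals (posting cadence, account age, shared infrastructure) rather than relying on any single heuristic like this one.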
The reason you have to use AI and sophisticated technology for this is the direction in which things are going: we are leaving behind disinformation-as-a-service, where you work with a creative agency to craft these stories, and heading towards AI-driven, automated propaganda.
This is where we are heading. In a critical event like COVID or the US election, you can imagine someone entering a few keywords into a system that generates thousands of machine-written articles. Both sides of the narrative are automatically spread through every single channel, via influencer accounts or look-alike news agencies, targeting individuals across multiple viewpoints of the same story throughout the network. The posts could include a couple of hundred images, memes, and deepfake videos that are propagandist in nature. You can do that at scale, and you can do that every day.
Unfortunately, that is where we could be heading, and it could render the entire information space non-functional. That is our biggest concern. When we started the company a couple of years back, that was the biggest concern, and we wanted something in place that could battle that kind of world. That is not a world anyone wants to live in.
Sramana Mitra: Let’s get down to what you are doing and what your vision is for how to deal with that world that you are describing. It’s a rather dystopian world. If a competitor creates a deep fake that really damages the brand, what does your technology do?
Wasim Khaled: The example you've used is not as common as other scenarios we've seen. The key thing is that you have to gauge the three areas we focus on.
First, you have to understand what the conversation is. What is going on today for that particular brand? In other words, where are my battles today? What are people chattering about? More typically, it might be a video saying something against their industry.
The second piece is, what’s driving that? Who are the key influencers? Are bots driving it? Are people driving it? That’s key because if you end up reacting to something that the adversary wants you to react to, you have a problem. The third thing is, what is the impact on us as a company?
There are quite a few measures that we look at for impact. As an example, say a large group of accounts is spreading a narrative about a company, and it is spreading with great velocity. That indicates a bot network or some nefarious group trying to push the story through a social platform.
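To illustrate the velocity measure described here, the following sketch flags a narrative whose hourly posting rate spikes far above its trailing baseline. Again, this is a hedged illustration, not Blackbird's method; the window size and z-score threshold are assumptions.

```python
from datetime import timedelta
from statistics import mean, stdev

# Illustrative threshold: how many standard deviations above the
# baseline posting rate counts as an anomalous spike.
Z_THRESHOLD = 3.0

def hourly_counts(timestamps):
    """Bucket post timestamps for one narrative into hourly counts."""
    if not timestamps:
        return []
    timestamps = sorted(timestamps)
    start, end = timestamps[0], timestamps[-1]
    hours = int((end - start) / timedelta(hours=1)) + 1
    counts = [0] * hours
    for ts in timestamps:
        counts[int((ts - start) / timedelta(hours=1))] += 1
    return counts

def velocity_alert(timestamps, baseline_hours=24):
    """Flag the narrative if its current posting rate spikes far above
    the trailing baseline -- a coarse proxy for a coordinated push."""
    counts = hourly_counts(timestamps)
    if len(counts) <= baseline_hours:
        return False  # not enough history to establish a baseline
    baseline = counts[-baseline_hours - 1:-1]
    mu, sigma = mean(baseline), stdev(baseline)
    current = counts[-1]
    return sigma > 0 and (current - mu) / sigma > Z_THRESHOLD
```

A spike alone is not proof of a bot network; organic news events also produce bursts, which is why the "who is driving it" analysis matters alongside velocity.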
Those are the kinds of things we try to identify as early as possible, so the company can handle the narrative in a more informed way. It's about enhancing critical decision-making and augmenting the analysts and the people guarding the brand and its reputation.
We're seeing those campaigns spin up more and more often. These are issues for companies today, and they can cause significant harm. We tell our customers that we need to think about this as a cybersecurity problem. It's a cyber attack on perception, and most people don't even realize they're being hacked. Are there internet trolls or fringe activists driving that story?
This segment is part 5 in the series: Thought Leaders in Artificial Intelligence: Blackbird.ai CEO Wasim Khaled