Sramana Mitra: I think the parallel is that cigarette smoking is injurious to health. The whole tobacco industry has agreed to that, and gradually society has come to understand it. There are fewer and fewer public places where you can smoke, and so on and so forth.
An equivalent understanding and rejection of this kind of addiction needs to happen, but I guess it's too young of an industry. The smartphone came out in 2007. Facebook came out in 2004. It hasn't been long. We have only seen about a decade's worth of real addiction.
There isn't enough research and analysis to reach any conclusive data on how injurious or dangerous any of this is. Right now, we are starting to see ill effects such as the undermining of democracies. In the political space, the harm is being felt broadly and more readily.
We need to get our arms around it in a new and unique way. Algorithms need to be reviewed and designed in ways where the objective is not getting people addicted.
Wasim Khaled: Going back to this whole question of attention versus credibility, putting out responsible AI is what is going to keep technology companies in check. That's a big piece of the regulation – this notion of responsible algorithms. The philosophies of attention and credibility are diametrically opposed. That tension is the basis for what is happening in the space we see today.
If you think about how the media was once the baseline landscape for the formulation of opinions and ideas, you had these moderators, reporters, and editors who cleaned up the junk before it got out to the general public. This could be for the better or for the worse depending on which news organization and which country you are in.
Today, we have a peer-to-peer system on social media where all of those human editors have been taken out. You see Facebook and Twitter flip-flopping between algorithms and humans to try to get it right, but at that scale and speed, it's difficult to do.
Sramana Mitra: This is a big issue. When we had a credibility check and an editorial process, media produced content that had to go through a much higher level of due diligence. That's not to say that there weren't problems at that level as well.
Fox News is an example. They are willing to amplify a lot of garbage that comes from the right-wing point of view. That's there, but it's still edited and it's still managed. Meanwhile, at the scale at which things are happening on the large social media platforms, it's difficult to manage.
Wasim Khaled: These are user-generated comments at scale, and they are completely immune from blowback. Fox News and CNN are not. They are still known entities, and they will be held responsible by the FCC or by their audiences. They also have shareholder value, so they have to watch what they do to some extent.
Sramana Mitra: There is no accountability in this case. There is also another looming threat when you can edit videos and doctor them. I don’t know how it can be managed or mitigated.
You can make a video of Barack Obama saying that Donald Trump is the greatest president America has ever had, for example. This comes back to your question of credibility versus engagement. There could be credible-looking supporting material for things that are dubious.
Wasim Khaled: You are getting into the topics of disinformation and information integrity, and deepfakes, which include manipulated videos. That is one component of it.
If you think back to 2016, disinformation did not pop out of thin air. It has been around as long as the printing press has, and even before that. These are rumors and gossip turned into something else.
Sramana Mitra: The vehicles of propagation and amplification didn’t exist to this degree.
This segment is part 4 in the series : Thought Leaders in Artificial Intelligence: Blackbird.ai CEO Wasim Khaled