Sramana Mitra: Speaking of disinformation and information that is harmful to society, COVID misinformation is equally dangerous.
Wasim Khaled: There are a lot of complexities in how social media platforms decide where the line between freedom of speech and censorship lies. That’s going to be a contentious topic, one that pushes into regulation of the social platforms over the next six months to a year.
What can they get away with, and what are they going to be held responsible for? There is a lot of talk around Section 230, which protects social platforms from being held responsible for the content on their platforms.
Without that law, there would be no social platforms today. You just have to be careful about how you tweak it, but today, they are immune to anything that happens on their platform that might cause some sort of violent event or societal harm.
They are immune as a result of that law, passed quite a while back, which holds that social platforms are just the medium and are not responsible for the message or the content. I don’t think it should simply be repealed, because that would hurt all companies, including small tech.
It’s the big companies that can cause the most harm there, so there has to be some sort of sliding scale on whom you apply it to; otherwise, you will create a monopolistic situation. The companies that enjoy the advantage of that policy stay incumbent, and smaller companies would not even have a chance. They might get sued out of existence over a single post that someone makes.
One of the reasons we don’t see proactive solutions being spun up is that every path you take has complexities and unintended consequences that might follow from the decisions you make. There is massive pushback, in some cases rightly so, on censorship versus freedom of speech.
Platforms are laying down policy in real time as they go, because new things come up every day that shatter the mold they set down as policy a month ago. People are always working to circumvent those rules. You mentioned the big QAnon group. Twitter took a strong stance on banning those kinds of accounts.
The very next thing you saw was QAnon members finding ways to circumvent that through something called Pastel QAnon, which put up pages that spread the same conspiracies but with much lighter rhetoric and nicer designs. They targeted soccer moms and similar groups without mentioning the keywords they are known for.
They bypassed the filters. Something I stress to whomever I’m talking to is that this is an escalating arms race between those who create disinformation and those who try to defend against it. That’s going to continue for the foreseeable future.
Sramana Mitra: There is a fair amount of complexity in what we are talking about here. There is free speech, there are laws protecting free speech, and there are social media platforms being held accountable for what goes on on their platforms.
Social media platforms, however, are not public services. They are private companies, so they have the right to formulate their own policies. As things currently stand, this is not a regulated industry, so platforms such as Facebook and Twitter can take a responsible stance, exactly as they have begun to do.
They are not obligated to protect free speech. They don’t have to allow all kinds of garbage to be spewed around their platforms. They can take the position of taking down anything false and malicious. This is the direction in which they are taking baby steps.
In the use case you described, with lighter versions of QAnon minus those keywords, a well-structured AI algorithm should be able to pick those things up and flag those pages.
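For illustration, here is a minimal sketch of what such keyword-free flagging could look like: instead of matching banned terms, a classifier compares the semantic embedding of a new post against embeddings of known conspiracy narratives. This is an assumption-laden toy, not Blackbird.ai’s or any platform’s actual pipeline; the library, model name, seed texts, and threshold are all illustrative choices.

```python
# Hypothetical sketch of keyword-free narrative detection via sentence
# embeddings. The model, seed texts, and threshold are illustrative
# assumptions, not any platform's real moderation system.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

# Paraphrased placeholder examples of a known conspiracy narrative.
seed_narratives = [
    "A secret elite cabal controls world governments.",
    "Hidden forces are trafficking children behind the scenes.",
]
seed_vecs = model.encode(seed_narratives, normalize_embeddings=True)

def flags_narrative(post: str, threshold: float = 0.6) -> bool:
    """Return True if a post sits semantically close to a seed narrative,
    even when it avoids the usual keywords."""
    vec = model.encode([post], normalize_embeddings=True)[0]
    # With normalized embeddings, the dot product is cosine similarity.
    similarity = float(np.max(seed_vecs @ vec))
    return similarity >= threshold

# A "Pastel QAnon"-style rewording that uses none of the banned keywords
# can still land near the seed narratives in embedding space.
print(flags_narrative("Powerful insiders quietly run everything and hide crimes against kids."))
```

The design point is the one being made here: a semantic model keys on what a page is saying rather than which words it uses, so dropping the telltale keywords does not, by itself, bypass the filter.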
Wasim Khaled: Yes, and that is one of the areas Blackbird has been focusing on for some time. The reason it might be easier for us is not that we have the same level of resources or engineering that the platforms do, but rather that we are not restricted by the same incentives – revenue versus congressional regulation.
Those things don’t apply to us because our business is detecting harmful emerging conversations and the synthetic amplification of narratives. We talk about this all the time: if companies with almost unlimited resources wanted to do it, they could do it, at least on their own platforms if not across multiple platforms.
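As a rough illustration of what detecting “synthetic amplification” can mean in practice, one simple signal is a burst of near-identical posts from many distinct accounts inside a short time window. The sketch below is a toy under stated assumptions (made-up data shapes, crude text normalization, arbitrary thresholds); real coordination detection combines many richer signals.

```python
# Toy sketch of one coordination signal: many distinct accounts posting
# near-identical text within a short time window. Data shapes and
# thresholds are illustrative assumptions.
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def normalize(text: str) -> str:
    # Crude normalization so trivially reworded copies collide.
    return " ".join(text.lower().split())

def amplification_clusters(posts, window=3600.0, min_accounts=20):
    """Group posts by normalized text, then flag groups in which at least
    `min_accounts` distinct accounts posted within `window` seconds."""
    groups = defaultdict(list)
    for p in posts:
        groups[normalize(p.text)].append(p)

    flagged = []
    for text, group in groups.items():
        group.sort(key=lambda p: p.timestamp)
        accounts = {p.account for p in group}
        if len(accounts) >= min_accounts and group[-1].timestamp - group[0].timestamp <= window:
            flagged.append((text, sorted(accounts)))
    return flagged
```

The point of even a toy like this is that amplification leaves a structural footprint (timing, account diversity, text similarity) that is visible regardless of whether the underlying claim trips a content filter.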
Sramana Mitra: They don’t have to do it across multiple platforms. As long as they keep their own houses clean, that should be enough.
Wasim Khaled: I agree with you. You could see the slow play leading up to the election. Three months before, it suddenly became a big, urgent topic in the press, yet the platforms had had four years to prepare for this and other harmful events. They had not reacted to it.
If the social platforms hadn’t ignored that growth, the movement wouldn’t have gotten to be millions of people strong across the world. It’s too little, too late. Unless those things improve, you’ll see other groups like that pop up as things progress.
This segment is part 2 in the series: Thought Leaders in Artificial Intelligence: Blackbird.ai CEO Wasim Khaled