
Thought Leaders in Artificial Intelligence: Blackbird.ai CEO Wasim Khaled (Part 7)

Posted on Sunday, Jan 17th 2021

Sramana Mitra: There must also be a way for those platforms to receive input. If your system has detected some major misinformation that is being amplified on a platform, there must be a way of informing that platform.

Wasim Khaled: Absolutely. The platforms are definitely on our radar as future partners. Just this Monday, Blackbird released a report, which is on our website, called “Man, Machine Intelligent System for Scalable Identification of Hoaxes and Misinformation.” This is a collaboration with a company called NewsGuard.

It’s a way to take AI and small human teams to provide a high-integrity assessment of hoax content at scale in a manner that can keep up with the rapid spread of hoaxes and conspiracies across platforms. We had some great results on evaluating the effectiveness in tracking the spread of disinformation on networks like this, or detecting when a hoax is starting to mutate.

It has a win-win effect. One, the social platforms don’t have to take the brunt of that responsibility themselves. We can take that off them as a trusted third-party partner. The societal impact is also great because fewer people are exposed to those harmful campaigns.

The third thing is, it reduces the amount of stress on the moderators who typically see massive amounts of harmful content. You probably saw that story printed last year on how many Facebook and YouTube moderators end up with PTSD.

That’s the direction we’d like to go in as a company. Today, our focus is on the Fortune 500 brands as well as the international security space.

Sramana Mitra: I’m reading a book. It’s Jean Tirole’s Economics for the Common Good. He won the Nobel Prize in Economics. He’s an MIT-trained economist. He works out of Toulouse.

One thing he stresses greatly is that, in this tenuous relationship between economics and politics, the essential parties are independent institutions like NATO, the United Nations, and the WHO, which are not politically driven but have the independent responsibility of managing global cooperation.

A similar organization could be founded in the foreseeable future that is in charge of misinformation and protecting the world from it. You could be supplying the technology for doing that kind of work. They could surface the issues.

You could be providing them the infrastructure and the technology with which to do their job. They also have human resources on their staff to surface the major issues that need to be monitored and do the negotiation with the platforms and so forth. That could be a scenario that the world evolves into.

Wasim Khaled: It’s funny that you mention that because some of the partners and clients that we’ve been talking to are starting to have early discussions around this. What we’re seeing is smaller versions of that in silos today.

There’s an independent Facebook oversight committee that was started by some of the people we worked with in the past. It’s not affiliated with Facebook. I agree that there should be a global organization similar to the UN that is looking at propaganda and disinformation globally and setting a standard or baseline for what is acceptable.

I don’t think we’re very close to that today. I do think that there are enough people with ideas like this out there. It’s certainly something that we’ll be keeping an eye out for. This is not to belittle policy, but you cannot address this with policy alone. You need technology solutions to actually monitor it.

We have operations in Singapore. We did a conference to inform the Singaporean government, which had recently passed a controversial law mandating the removal of information that could potentially be harmful. Facebook has to comply with that.

What was immediately apparent was that passing that law is one thing. How do you then decide what is harmful? How do you do that at scale? If all you have is a team of people who occasionally remove a thing, it’s a PR exercise more than anything else.

We try to keep our eyes out for proactive organizations and governments who are looking at this problem, but policy has to be paired with technology that can participate in and keep up with the escalating arms race around computational propaganda.

Sramana Mitra: There is one other missing piece in this. The world that we live in is a world run by technocrats. Mark Zuckerberg is a technocrat. We are assuming that this is a person who’s capable of making big philosophical decisions about how society should evolve, and about how humanity should evolve.

Perhaps he is not quite equipped to take on that kind of a role. Society will need to think about the philosophical implications of all these decisions that programmers with no philosophical understanding or study are being asked to make.

Wasim Khaled: In my close communication with technologists, I find that so many of the engineers are disillusioned. If you rewind 10 to 15 years, it was an incredibly rosy picture of Silicon Valley. This was a place of utopian understanding and progress. That has changed so dramatically over the last four years.

A recent poll said that satisfaction among Facebook engineers has dropped from 70% to 50%. A lot of these people didn’t go in there thinking that they would need to think about these things. What ends up happening is that the brightest people in the world end up going out there trying to create that attention engine.

Sramana Mitra: Just as the financial crisis brought an age of reckoning, a realization that that isn’t a meaningful way to live life, Silicon Valley is having its moment of reckoning now. Addicting people to sell ads through algorithms is not a meaningful existence, especially when the side effect is destroying democracy. There is a moment of reckoning going on. It’s really come to a head now.

Wasim Khaled: This ties back to responsible AI and responsible technology in general. You have to have a framework that brings together a lot of these critical practices and creates a more ethical, transparent, and accountable use of these technologies. It goes back to organizational values and tying them to equivalents in societal law.

A big piece of it is, you’re going to see a huge uptick in better verification of your own identity so that you can’t get away with spinning up these types of fake personas. It’s about that accountability. That’s another piece of the puzzle.

Sramana Mitra: This was an excellent conversation. Thank you for your time.

This segment is part 7 in the series : Thought Leaders in Artificial Intelligence: Blackbird.ai CEO Wasim Khaled
