The human rights council of the World Economic Forum has urged US tech firms to step up the fight against extremism online. The report comes amid US government pressure on tech giants to prevent terrorists from using their platforms. We spoke with Athina Karatzogianni, an associate professor at the University of Leicester, about this issue.
In your view, what steps should tech giants take to fight extremists on their internet platforms?
Athina Karatzogianni: The first problem is that tech giants don’t want this … intelligence work that governments are supposed to be doing outsourced to them.
The other problem is that they use algorithms and automatic tools, but when they do, they might actually be removing legitimate material. There might be material that legitimately debates an issue, let’s say the war in Syria, or that documents war crimes, for example, but this can also be automatically deleted.
Twitter has taken hundreds of thousands of tweets down, and Facebook [launched a fact-checking system] to detect fake news and provide information about the publisher. So they seem to be doing more, both because they have to and because it helps their profits, and that’s another problem. So there is pressure on tech companies, but at the same time they need more collaboration with governments, because they can’t custom-make for governments what they think should or should not be allowed to happen. Saying something is fake news when it might not be is a very complex judgment, and it’s difficult for an algorithm. That is why human oversight is mentioned, and why you have these tech giants needing to hire more people, 3,000 more in some cases. Google has hired people because of extremist content on YouTube, so an effort is being made, but it can’t be just tech giants or just the government; it must be some collaboration between the two.
How likely is it that such restrictions on these larger platforms will push terrorists to smaller social networks, where it would be much harder to detect them? The big platforms are trying to resolve the situation, but isn’t it just going to move the problem to smaller ones?
Athina Karatzogianni: I think the serious problem is how people organize [and] run recruitment operations through encrypted services. It’s not only the content, people getting radicalized and so on; we have seen this type of radicalization for about 15 years now.
The problem is when you have people organizing and running recruitment operations through encrypted services. Even if you move the content off Facebook, Google, Twitter and so on and take it down, by doing that you create a freedom-of-expression problem, because you might be removing content that is not actually harmful, where people are documenting war crimes and so on. But the encrypted services are the biggest problem, because that is where real organization can happen that has an impact, if you see what I mean. A further problem is that you can have things popping up on Google, like after the Las Vegas shooting, that are complete misinformation, and the algorithm can’t handle that. So you have extremely complex problems with algorithms and how they work, because they are black boxes and you need some sense of what’s happening in order to adjust them to do a better job. At the same time, content is not the only problem. … So I think this is really a complex problem. …
When tech companies are asked to remove whole groups of people who might be engaged in violent extremism, you have an additional problem, because the companies say: we are not government intelligence, so why is this outsourced to us? So they end up blaming each other, private companies blaming governments and the other way around.
On that issue, what if tech giants infringe on freedom of speech? Which is more important, freedom of speech or people’s security? What is your view on that?
Athina Karatzogianni: Well, it is a tricky one to balance. Regulation takes time … because it is a very new area. On the one hand, algorithms make decisions based on popularity rather than on what is ethically good, and what is ethically good is decided by what is legitimate or not, by what world players accept as good or not. So you have a complex situation that is actually a political problem. It’s not just a legal problem, it is a political problem, but it’s also a technological problem, because these companies have to adjust algorithms to be ethically good. That is very difficult to do, because decisions have already been made in how the algorithm was set up in the first place. You can’t decide between freedom of speech and security by algorithm; it cannot be an algorithm that decides that. This is why there is a question about reviewing and about human oversight, but humans also make mistakes; actually, they make more mistakes than machines do. So you have a really complex situation that I think needs transnational and global cooperation, like most things in today’s world.
Also published on Medium.