Catching bad guys with data

Extremist rhetoric online, data science and counterspeech: an expert’s view.

Jonathon Morgan is the founder and CEO of New Knowledge, a data scientist and a researcher of violent extremism. He has studied how jihadists communicate online and recently wrote an insightful and alarming analysis of the rise of violent rhetoric within the American far right. He also hosts an excellent data science podcast. I had the opportunity to talk to him about extremist groups online for Tages-Anzeiger, and Süddeutsche Zeitung ran the piece as well; an abbreviated and slightly edited version appeared on those sites. Here is the English transcript of the interview.

You wrote that you’re interested in „catching bad guys with data“ – how does that work?

Just like in real life: criminals, extremists, and people who cause harm engage in behaviours with a certain signature. They do this online and in the real world. These patterns can be fairly complex, and I use new data-analysis and machine-learning techniques to recognise them. That opens up a lot of opportunities to identify extremists and develop strategies for counteracting them – in a law enforcement sense, but also in terms of understanding how to stop people from being radicalized, and perhaps in creating a society where extremists have less success preying on people who are vulnerable.

What is your motivation?

I’d like to live in a world where people aren’t compelled to hurt one another. I’d like to live in a society where these hateful antagonistic organisations aren’t a preferable alternative to mainstream society.

In your work you have studied different groups of extremists and their behaviour online. How do you gain information on them?

In two ways. Algorithms can understand the structures of communities online by looking at the relationships between different people. When we see these relationships forming, we understand who’s important in these communities and who’s participating most actively. These structures are often complex and can be difficult to understand without the assistance of modern data science. We’re also starting to teach machines to understand language, so we can identify groups that adopt a radical ideology and use it in their language, and also identify markers that reveal individual identity. Some of these techniques are already used in “traditional” law enforcement in the offline world. The difference is that we can apply them to much larger communities and a far greater amount of information than we could by analysing it piece by piece. We’re teaching machines techniques that humans already understand, but we’re able to do it at a much larger scale.
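To make the community-structure idea concrete, here is a minimal sketch – not Morgan’s actual tooling – of how standard network analysis can rank accounts by influence in an interaction graph. It uses the open-source networkx library; the account names and edges are invented.

```python
# Minimal sketch (not the interviewee's actual pipeline): build a graph of
# who-interacts-with-whom and rank accounts by centrality to surface the
# most influential members of a community. Account names and edges are invented.
import networkx as nx

# Hypothetical interaction data: (source_account, target_account) edges,
# e.g. retweets, replies, or mentions collected from a platform API.
interactions = [
    ("acct_a", "acct_b"), ("acct_c", "acct_b"), ("acct_d", "acct_b"),
    ("acct_b", "acct_e"), ("acct_e", "acct_f"), ("acct_a", "acct_e"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# PageRank highlights accounts that many well-connected accounts point to;
# in-degree is a cruder "how often is this account engaged with" signal.
pagerank = nx.pagerank(G)
in_degree = dict(G.in_degree())

for account in sorted(pagerank, key=pagerank.get, reverse=True):
    print(f"{account}: pagerank={pagerank[account]:.3f}, in_degree={in_degree[account]}")
```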

Let’s look at your research about ISIS propaganda online. What did you learn?

Online, these groups behave in a fundamentally different way from an average person participating in an online community. Most of us have a network of friends. We engage with them casually; sometimes we take the same actions, sometimes we don’t. We have a baseline of normal behaviour online. These extremist groups are very different in the way they associate with each other, the actions they take, and the coordination within the group. It’s much more like a swarm, like a hive.

In the case of ISIS, they tend to be directed by someone who explicitly instructs the group to behave in a certain way and who has some kind of vision of how the propaganda is disseminated online. With other groups it’s more organic, not directed by a leader. Either way, when these online communities focus on a target, they all move in unison. They speak about the same topics and use the same language. They do it in swarms, with high frequency and lots of volume. Most people on Twitter chat with their friends, share a news story, tweet once or twice a day and talk about a variety of topics – from lunch to their favourite sports team. But these people tend to be uniformly focused. And they behave like a single organism.
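As a toy illustration of the “swarm” signature he describes – many accounts posting with uniform focus – one could compare how concentrated each account’s posts are on a single topic. This is an invented, simplified example, not the method used in his research.

```python
# Toy illustration (not a real detection system): accounts whose posts are
# unusually uniform in topic look more like a coordinated swarm than like
# ordinary users. All data and the threshold below are invented.
from collections import Counter, defaultdict

# Hypothetical stream of (account, hashtag) posts.
posts = [
    ("swarm_1", "#topic_x"), ("swarm_2", "#topic_x"), ("swarm_3", "#topic_x"),
    ("swarm_1", "#topic_x"), ("swarm_2", "#topic_x"), ("swarm_3", "#topic_x"),
    ("casual_1", "#lunch"), ("casual_1", "#sports"), ("casual_1", "#news"),
]

by_account = defaultdict(list)
for account, tag in posts:
    by_account[account].append(tag)

for account, tags in by_account.items():
    counts = Counter(tags)
    # Share of the account's posts devoted to its single most-used topic.
    concentration = counts.most_common(1)[0][1] / len(tags)
    label = "swarm-like" if concentration > 0.9 else "ordinary"
    print(account, f"posts={len(tags)}", f"top_topic_share={concentration:.2f}", label)
```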

That sounds as if they have professionalised their online communication.

They are capable of exploiting the way online platforms work, in a way that would be difficult for a legitimate organisation. Organisations that you might think could gain a similar advantage by spreading a message this aggressively – even a commercial brand or a government – can’t, because they operate in the mainstream, within the law and the norms of society. So they don’t have access to the same kinds of tactics. Extremists know how to bend the capacity of the platform to serve their agenda.

You mentioned language processing. How does that work?

In the early attempts, you showed a computer examples and it learned to see patterns based on how often certain words appear. In texts that are supportive of the Islamic State, certain words are much more likely to appear – “caliphate”, for example. The newer way is more interesting: we’re actually able to understand people’s points of view through analogy. Language reveals our biases, and we can use that to identify which groups are extremist. Extremists by definition have extreme viewpoints and use extreme language, and this we can measure. It’s an important development, because the larger goal is to have a dialogue in which this extremism doesn’t occur.
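A minimal sketch of the frequency-based approach described here: show a classifier labelled example texts and let it learn which words signal support. The tiny corpus and labels below are invented for illustration; the newer, analogy-based approach he mentions would use word embeddings rather than raw counts.

```python
# Minimal sketch of the older, word-frequency approach: a classifier learns
# which words are more likely in supportive texts. The corpus and labels are
# invented; real systems train on large labelled datasets.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

texts = [
    "the caliphate will rise join us brothers",
    "support the fighters of the state",
    "had a great lunch with friends today",
    "excited for the football match this weekend",
]
labels = [1, 1, 0, 0]  # 1 = supportive of the extremist group, 0 = ordinary chatter

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)  # word-count features

model = MultinomialNB().fit(X, labels)

new_text = ["join the caliphate brothers"]
print(model.predict(vectorizer.transform(new_text)))  # expected: [1]
```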

How?

Because we know that bias is often associated with users who engage in violent rhetoric. Look at the Islamic State: many of the individuals who left the West to join the fight in the early days had become radicalized online. There is a relationship between these individuals consuming extremist rhetoric online and then adopting these ideologies themselves. Identifying where this language exists in the online world is key to finding individuals who are at risk of becoming radicalized and even committing acts of violence.

You recently studied the increasingly extremist rhetoric of the Alt-right in the US. What were your findings?

There’s been a significant rise in the activity of far-right white nationalist extremists on social media in the US. People who are part of that community are using language that is increasingly extreme and rhetoric that is increasingly violent. That’s a concern because other, similar communities have inspired people to commit actual physical acts of violence in the offline world. And there is a trend occurring in the US: the number of attacks on Muslims, but also on mixed-race couples and members of the LGBT community, is increasing. When there’s a lot of extremist content online, it projects the impression that these beliefs are more commonly held than they really are, which normalizes ideologies that have real-world, offline consequences.

Would you say that online communities can be an incubator for extremist views?

Yes. There is an echo chamber effect that is perhaps amplified by social media. The companies have an interest in their users having an experience online that makes them happy. And we tend to be happy when we associate with people who are like us and who say the things we like to hear – for better or worse. With communities that are particularly insular, the mechanics of some platforms increase that insularity, which heightens the effect. The only content extremists are consuming and the only language they’re hearing share this warped perspective on the world. It’s difficult to maintain a realistic perspective.

Because there’s no one to contradict you.

Exactly. There has been recent work showing that intervening with these individuals may have some effect on the amount of radical content they consume. Google has done some experiments, redirecting people who sought out extremist content on YouTube and showing them advertisements that debunk the myth of ISIS. People tend to interact with those videos longer than normal, so they seem to have some effect in catching the interest of people who are vulnerable to radicalization. This shows that to counteract extremist narratives you have to reach people almost in a microtargeted way and try to inject reality into their online experience to dissuade them from becoming radicalized.

The US State Department has its own way of counteracting jihadists online. With “Think again DOS” they’ve basically been trolling extremists online... You’re laughing. Why?

I’m familiar with the team behind the State Department’s efforts on social media. I should start by saying that they are a remarkable group. They understand a lot of cultural nuance, they operate in many languages, and they have an understanding of Islam and the complexity of these organisations. However, the approach of “trolling” terrorists online has been a mistake; that strategy is flawed. It’s one thing to target individuals, but another to antagonize them. I think it’s important to generate authentic narratives. There are people who have interviewed fighters who came back from Syria – who joined the Islamic State and then left because of all the false promises: it’s not a utopia, it’s a disaster, poorly run, people are suffering. If someone who’s been there, has been radicalized, has seen the error of their ways and wants to communicate that to others says, “The Islamic State lied to me and they lie to you” – that kind of authenticity, that kind of empathetic approach, is much more likely to dissuade people from walking that path. “I understand why you’re angry and I don’t want you to make my mistake” is a much stronger message. It’s been proven time and again that this is more effective than antagonistic posturing, where you basically make fun of people.

Back to your study on how ISIS communicates online. What were your main findings?

That study is a year and a half old now, and many of the findings are out of date – I’ll address why in a second. As for the results: we saw the mechanics and the behaviour of this group as a highly energized, very ideologically driven group of extremists online. We saw their tactics and community behaviour – what the structure of these kinds of communities and their behavioural signatures were. There was a core group of roughly 40’000 accounts on Twitter that we discovered were strongly supportive of the Islamic State, but just 2’000 core hyperactive users who drove a lot of the activity online. We saw how, when accounts were suspended, they created new accounts within hours, quickly re-establishing themselves in the community and taking the same actions they took before. It’s a sophisticated way of manipulating the network. There were also some odd and interesting findings. Many ISIS supporters online left their location feature enabled, even when they were tweeting from the battlefield in Syria – revealing, of course, tactically important information that would be of interest to the military in the offline world.

Shortly after the report, perhaps informed by it, Twitter became much more aggressive in suspending accounts supporting the Islamic State. Now there are, at any given time, only about 500 active accounts. By and large, Twitter polices that fairly aggressively, while other extremist groups operate on the platform with relative impunity.

Is it positive that Twitter suspends Jihadist accounts in this way?

It’s difficult to give an answer one way or the other. It’s clear that the social media platforms have the ability to remove this content effectively, as evidenced by how Twitter has removed it over the past year. That reduces the number of people who can stumble into radicalized content online. However, most people who become radicalized are also physically surrounded by people who share their ideology, mostly friends or family. And people who are interested in that content still find ways to consume it. From an intelligence perspective, or something we could call “digital diplomacy” – where we interact with people to dissuade them from becoming radicalized before we consider law enforcement – that has become much more difficult. They now communicate on end-to-end encrypted apps like Telegram, and it’s difficult to listen in or gain access to these closed groups. Driving these communities into the very darkest, most closed, most isolated corners of the internet reduces the exposure most people have to radical content, but then again most people aren’t susceptible to it. So it’s difficult to say how positive it is. It’s better for Twitter as a company and for their product, but in terms of dealing with the problem it’s hard to say.

What is digital diplomacy for you?

Presenting a narrative to people who might be vulnerable that pokes holes in the story ISIS tries to tell.

Going forward, what would you like to see in terms of how the social media platforms deal with extremist content?

One could argue they don’t police the platforms in response to political pressure but in response to bad press and threats to their customer base and growth. I think Twitter absolutely has a responsibility to be part of the solution to this problem. There’s a lot of debate about whether private companies need to engage with public social problems. Twitter’s stated mission is to be a public space for conversations. If you want that role, you have to accept the responsibility of being a good public citizen. That means working with experts and with the government, even if that makes the tech community uncomfortable.

The companies’ usual defense for doing nothing is that they want to guarantee free speech.

This is a debate we need to have: how far are we willing to let jihadist or far-right groups publish content that we find personally objectionable, and what is the difference between something personally objectionable and something that might inspire violence?

What’s interesting about the debate is that we’re asking private companies to behave in the way we would expect a state to behave. One argument is that they should be able to police their platform as they see fit, and that their responsibility is just to provide a good customer experience. But I think they are striving to be the de facto places where public discourse occurs. So they have to accept the responsibility of balancing free speech against the safety of their users. They can’t just hide their heads in the sand and say: “We’re a private company, we can do what we want.”

Can they really do a good enough job?

It’s not really about whether we can be successful – I’m not even sure we all agree on what successful means. It’s more about whether the companies are trying to improve, and about holding them accountable for their failures.
