A new science: Using physics to understand hate groups on the internet

Scientists have been able to model the dynamics of online hate communities with equations used to describe shock waves.

June 30, 2023 10:30 am | Updated January 02, 2024 10:54 pm IST - Sri City

Water’s flow becomes turbulent as it crashes against rocks, Dana Point Harbour, October 24, 2017. | Photo Credit: Austin Neill/Unsplash

Neil Johnson is a professor of physics at the George Washington University, Washington D.C., who was trained in many-body physics. Many-body physicists focus not on the individual parts of an object or a system but on properties that emerge when these parts interact with each other.

For example, a many-body physicist would be interested in what happens to a group of water molecules when water changes to ice, rather than studying an individual water molecule in great detail. 

In the 1990s, Dr. Johnson’s interests took a peculiar turn. “As theory in physics got ahead of experiments,” he told this writer, “we decided to look at data in other areas: traffic, financial markets, etc.” He was in effect entering the realm of social physics, or the physics of social systems.

Since then, to quote a 2019 editorial in Scientific Reports, methods of physics have been applied to “traffic, crime, epidemic processes, vaccination, cooperation, climate inaction … antibiotic overuse and moral behavior, to name a few.”

Dr. Johnson’s recent study has added another flower to this bouquet: online hate communities. In a recent paper in the journal Physical Review Letters, he and his colleagues modelled the dynamics of how online hate communities form and develop, with mathematical equations used to describe the behaviour of shock waves in fluids.

“So the idea that ‘the online world is turbulent’ – we’ve proved it is much more than an analogy,” he said.

Physics magazine called his team’s work a “new science”.

Physics of social systems

Physicists are no strangers to collective behaviours. Gautam Menon, a professor of physics and biology at Ashoka University, Sonepat, uses mathematical models to tackle a different kind of collective phenomenon: infectious diseases. He told this writer that mathematical models do “surprisingly well” in explaining collective phenomena like “bird flocking, fish schooling and the spread of infectious diseases.”

Methodologically, the way physics approaches the question of collective behaviour, including online hate, is by building “mathematical models that have average behaviour,” Dr. Johnson said.

He used the example of traffic. While different locations in the world have different drivers, vehicles, and rules governing their movement on a highway, a physicist or a mathematician might ask what ‘big things’ happen in traffic everywhere, and then write down equations that describe those things.

According to Dr. Johnson, “there is some kind of predictability about [them] in terms of the science.”
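To make this concrete, consider a textbook example of such an ‘average behaviour’ model – one not drawn from Dr. Johnson’s paper. The Lighthill-Whitham-Richards model of highway traffic ignores individual drivers altogether and tracks only the average density of vehicles ρ(x, t) along a road and their mean speed v(ρ):

\[
\frac{\partial \rho}{\partial t} + \frac{\partial}{\partial x}\bigl(\rho\, v(\rho)\bigr) = 0
\]

This single conservation law is enough to reproduce the ‘big things’ of traffic everywhere, including jams that form spontaneously and travel backwards along the road as shock waves.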

Sergey Gavrilets, who uses mathematical models to study cultural evolution and social norms and beliefs at the University of Tennessee, Knoxville, said social physicists and mathematicians attempt to “generalise and bring together” different theories and models of social processes. 

He pointed to a 2015 paper that documented as many as 82 models of human behaviour. Rather than getting caught up in the specific ways in which each model differs from another, “we can attempt to bring it all together,” he said.

This way, a generalised mathematical model of human behaviour might be able to explain or predict the behaviour of people in several common scenarios. Such a model can also be extended in the future for specific scenarios that its generalised form is currently unable to account for.

Shock waves and online hate groups

Online hate communities – or what Dr. Johnson & co. call “anti-X” communities (where ‘X’ is something to which the communities are opposed) – are distinct from other online communities because, among other things, they grow quickly.

This rapid growth can be attributed to a large number of interested individuals or groups joining these communities, in a process called “fusion”. This is opposed to “fission” – when moderators of a particular online platform discover that the content shared by these communities violates the platform’s guidelines and shut them down.
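To get a feel for this fusion-fission mechanism, here is a minimal simulation sketch of a generic coalescence-fragmentation process of the kind described above. Everything in it – the fission probability, the uniform way clusters are picked, the total shattering on fission – is an illustrative assumption, not a detail from the team’s paper:

```python
import random

def simulate_fusion_fission(n_individuals=10_000, steps=50_000,
                            p_fission=0.05, seed=42):
    """Toy coalescence-fragmentation ('fusion-fission') process.

    Illustrative assumptions throughout: the fission probability,
    uniform cluster selection, and total shattering on fission are
    not taken from the paper.
    """
    rng = random.Random(seed)
    clusters = [1] * n_individuals  # everyone starts out alone

    for _ in range(steps):
        if rng.random() < p_fission:
            # Fission: a community is shut down by moderators and its
            # members scatter back into isolated individuals.
            i = rng.randrange(len(clusters))
            size = clusters.pop(i)
            clusters.extend([1] * size)
        elif len(clusters) >= 2:
            # Fusion: two communities (or lone individuals) merge
            # into one larger community.
            i, j = rng.sample(range(len(clusters)), 2)
            clusters[i] += clusters[j]
            del clusters[j]

    return sorted(clusters, reverse=True)

if __name__ == "__main__":
    sizes = simulate_fusion_fission()
    print("ten largest communities:", sizes[:10])
```

Running this toy model produces communities that grow steadily through mergers and occasionally collapse when fission hits a large cluster – growth punctuated by abrupt resets.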

In their Physical Review Letters paper, Dr. Johnson and his team studied how online anti-X communities form and persist despite platform moderators’ attempts to shut them down – despite, in other words, “moderator pressure”.

Scholars have called this volatile behaviour “online turbulence”. In physics, turbulence is fluid motion characterised by chaotic changes in pressure and flow velocity.

According to their paper, a model that can account for the changing behaviour of online hate communities, or their dynamics, must incorporate five things.

1: These communities have an “internal character” that changes over time. This refers to the particular “flavour” of hate in a particular community, Dr. Johnson said. For example, of three hypothetical antisemitic communities, one could be perpetuating hate against Jewish people in the U.S., another against Jewish women in Europe, and the third against Jewish people who are queer or transgender.

2: These communities work in a “distance-independent” manner: while in a physical space these communities might be on the “fringes” of society, in the virtual space they are part of the mainstream.

3: The total size of these communities constantly increases, corresponding to increasing internet usage around the world.

4: They undergo rapid fission and fusion.

5: They aren’t limited to one social media platform and work across several platforms.

A shocking finding

To develop their model, Dr. Johnson and his team used a large database of online hate communities across different social media platforms (including Facebook, VKontakte, and Twitter) that they have been collating since 2016.

In a 2016 Science paper on online support groups for the Islamic State (ISIS), Dr. Johnson et al. identified 196 “pro-ISIS aggregates” involving more than 100,000 followers. In 2019, the same group published a Nature paper entitled ‘Hidden resilience and adaptive dynamics of the global online hate ecology’. Here, they found that the population of “hate-driven individuals” in the dataset had risen to about a million.

The team has continued to add to their dataset, expanding both the kinds of hate communities and the social media platforms it covers.

In their new paper, the team modelled how people aggregate and disaggregate. “After about 10 pages of mathematics, out came these equations that were exactly like [those] of [a] turbulent fluid,” he said.

They found that a novel form of the equations for turbulent fluids – one that takes shock waves into account – could capture the dynamics of online hate communities.

Shock waves are disturbances in a medium that travel faster than the speed of sound in that medium. They are defined by drastic changes in pressure, temperature, and density of the medium.
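For reference, the canonical example of a fluid equation whose solutions steepen into shocks is Burgers’ equation; the team’s equations are a novel variant in this family, so the form below is purely illustrative:

\[
\frac{\partial u}{\partial t} + u\,\frac{\partial u}{\partial x} = \nu\,\frac{\partial^2 u}{\partial x^2}
\]

Here u(x, t) is the fluid’s velocity field and ν its viscosity. The nonlinear term u ∂u/∂x steepens smooth profiles into abrupt fronts – the mathematical signature of a shock – which offers a picture of the sudden onset and rapid growth of online communities described above.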

According to the paper, the strength of the model lies in its ability to account for how each online hate community has its own “flavour” of an anti-X topic, its own time of onset, and its own growth curve. In other words, the model could account for differences between individuals in online hate communities, the kind of communities individuals form or join, and how these communities speak to each other in diverse and constantly changing ways.

That said, the researchers also acknowledged that they glossed over some details relevant to determining how online hate communities form and persist, and instead chose to focus on the average behaviour of these communities. These details include differences in how each social media platform operates and how particular content is shared. Yet the paper stated that their model can be extended in the future to account for these specific “heterogeneities”.

Identifying and combating hate speech

Joyojeet Pal, an associate professor at the University of Michigan who studies misinformation and Indian politicians’ use of social media, said the study contributes in an important way to our understanding of the relationship between persistence of hate speech networks on social media platforms and the existing moderation of “incendiary content”.

He said the paper also explains why it’s important to track networks of “known hate speech offenders” as opposed to individual instances of hate speech: “most of those who indulge in hate speech tend to do it repeatedly, so tracking their networks is a worthy means of figuring out, and hopefully undermining, incendiary content,” he said.

And by studying networks of known online hate communities, the model also circumvents the “innuendo problem” that makes tracking and censoring online hate speech very difficult for social media platforms.

The innuendo problem, Dr. Pal explained, stems from machine learning algorithms’ poor understanding of sarcasm. Most “clever hate speech”, according to Dr. Pal, is not explicit but “delivered as an innuendo”, making it difficult for algorithms to identify such content, which in turn leads to ineffective content moderation.

The problem can be circumvented by tracking repeat offenders and their networks, Dr. Pal said.

Three ways ahead

Dr. Menon told this writer that the novelty of Dr. Johnson and his team’s paper is that it connected turbulence in fluids to the social behaviour of online hate communities.

“This is an idea I had not heard of before,” he said.

While the model does well within the limits of its reasonable assumptions, he continued, the fission of online communities was “insufficiently explored or described.” According to him, “here’s where a more detailed sociological understanding of how and why this happens would have helped.”

Dr. Johnson’s team is planning to do three things next.

First, they will be expanding their dataset of online hate communities. They are currently looking at such communities on gaming channels, given the growing controversy over the link between violent games and violent behaviour, and mass shootings at gaming events in the U.S.

Second, they will test their model in different scenarios where online hate communities form, persist or disappear. In their paper, the team tested the model with two kinds of hate communities: domestic anti-US communities and foreign anti-US communities.

Finally, the team plans to extend their model to account for more “flavours” of hate. According to Dr. Johnson, while different forms of hate coexist on online platforms (e.g. hate based on race and gender), their intensity and prevalence are in flux. The next logical step for the team is to upgrade their model to account for these changes.

They are also monitoring online hate communities ahead of a slew of elections in 2024. “With about 65 elections in 50 countries next year, and platform moderators backing off from moderating hate speech, things are going to get very interesting,” Dr. Johnson said.

Sayantan Datta (they/them) is a queer-trans freelance science writer, communicator and journalist. They currently work with the feminist multimedia science collective TheLifeofScience.com and tweet at @queersprings.
