‘Othello’ to Pizzagate: How social media misinformation plays out like a Shakespearean tragedy


In March, a New York state judge ruled that a lawsuit against several social media companies could go forward. The suit alleges that the platforms contributed to the radicalization of a gunman who killed 10 people at a grocery store in Buffalo, New York, in 2022.

The lawsuit claims companies like Meta, Reddit and 4chan “profit from the racist, antisemitic and violent material displayed on their platforms in order to maximize user engagement.” However, the companies say they are instead merely message boards containing third-party content and should not be held liable for what others post. 

While the case itself won’t likely see the inside of a courtroom for years, it has sparked a debate on just how culpable – and calculating – social media platforms really are. 

Hamed Qahri-Saremi is an assistant professor of Computer Information Systems in Colorado State University’s College of Business. On the CSU podcast The Audit, he spoke about a new theory that social media misinformation might actually be taking a page from Shakespearean tragedies.

 

Transcript

(Lightly edited for clarity)

So, you and your collaborator, former CSU Associate Professor Nick Roberts, put together what you called the 3T Theory of Social Media-Driven Misinformation. The three T’s being tragedy, truth and technology. Let’s start by having you kind of break that down for us a bit.

So, this is a paper that was published in the fall of 2023 in the Journal of the Association for Information Systems. In this paper, we borrowed concepts and stories from Shakespearean tragedy, such as “Othello,” in order to explain a process through which an ordinary, rational person gets exposed to misinformation on social media and then starts developing false beliefs. Throughout this process, those false beliefs get reinforced until the person finally ends up taking detrimental actions, actions that have harmful consequences for the person and for society.

Probably more importantly, we then talk about the role of social media in accelerating and facilitating this process. We talk about four different properties of social media, which we call feeding, signaling, matching and sensing, and how these four properties essentially facilitate this process. The idea for this paper started when Nick and I — I’ve known Nick for many years, he’s a good friend — were meeting over dinner at a conference in 2019. Nick has a background in English literature, and it was a couple of years after the start of what we are seeing these days as conspiracy theories. It was a couple of years after Pizzagate, which was probably one of the biggest political conspiracy theories.

Pizzagate started in the fall of 2016, after the hacking of the Clinton campaign’s emails, and it has been debunked time and time again. What this conspiracy theory claimed was that the New York Police Department had found links between officials in the Democratic Party and some restaurants across the United States, and that they were engaged in human trafficking, child sex rings and those sorts of things. One of the restaurants that had been mentioned was Comet Ping Pong, a small pizzeria in Washington, D.C. The theory spread widely on social media platforms such as 4chan, 8chan, Twitter and Reddit. In December of 2016, Edgar Welch, a 28-year-old from North Carolina who had seen this conspiracy theory on social media, decided to investigate the restaurant himself. So, he took his own AR-15 rifle, went to the restaurant and ended up firing three shots inside while customers were there. He was arrested shortly after by the police, and he said that it was through social media platforms that he had been exposed to this conspiracy theory.

Nick told me that when he looks at these stories, which are so out of this world in terms of the actions the person takes, just taking your AR-15, going into a small pizza restaurant and ending up shooting, it reminds him of what he sees in Shakespearean tragedy. In many of these tragedies, you have a protagonist who is a hero at the beginning of the story, and then throughout the story, throughout this process, he is turned into the opposite of himself. In many of them, especially “Othello,” which we mention in the paper, it’s detrimental actions that he brings on himself. It’s really the falsehoods, the lies that he ends up believing, that change his life. I’ve been working on social media research for more than a decade; that’s one of my main research areas. So, he suggested that we start thinking about this and see if we can explain how it works.

In essence, social media algorithms are kind of the villain in this story.

The short answer is, partly.

OK. It’s never that clear.

Never that clear. Yes. To explain the reason, let me describe a bit of one of the tragedies that we discuss in the paper, the tragedy of “Othello.” Othello is the general of the Venetian army, a very high commander, and he has a wife whom he loves. There is another character in “Othello,” the antagonist, whose name is Iago. Iago is a soldier in that army whom Othello passed over for promotion, and because of that he is upset with Othello. So, in order to get back at Othello, Iago makes up a lie that Othello’s wife is sleeping with a lieutenant in the army, and he starts fabricating evidence, false evidence, fake evidence.

Throughout the play, Othello at first has a lot of doubt. Then he starts believing the lie, and it gets reinforced by the false evidence Iago presents. By the end of the story, Othello ends up killing his own wife, the love of his life. When he realizes that it was just a lie, he takes his own life. So, the question really is: are social media algorithms the Iago of our story?

Now, when we talk about misinformation, there are people, individuals and organizations, like QAnon, who make, produce and create this misinformation, these disinformation campaigns. The social media algorithms do not actively create misinformation. They don’t produce content. But the main role they play is that they target this misinformation to the right person. They disseminate and propagate the misinformation not randomly, but by trying to target the right audience for it, sending it and showing it to the users who are much more likely to believe it and to engage with it. Through that, social media algorithms significantly increase the reach, the efficiency and the effectiveness of the misinformation. That’s the role of social media. So, that’s why I said “partly”: social media doesn’t create the misinformation, but it does play a major role in its dissemination and propagation.

It’s targeting.

It is. It’s targeted marketing, and it is really one of the most effective tools to do that.

Earlier I mentioned the court case against several social media platforms. Your theory really points to the crux of the lawsuit’s argument: that these companies pushed out violent, racist misinformation because that’s what got clicks. Their algorithms targeted it; while the companies weren’t creating it, they were specifically sending it to certain people. I’d like to ask your opinion, then: how responsible are these companies? How much are they the Iago?

They are responsible. I think in order to get to that, it would be helpful to talk about how the social media algorithms do this. It’s not intentional. It’s a byproduct of what social media platforms are. You have to consider that social media platforms are ad-selling machines. They are designed to sell ads. And when you are in the ad-selling business, more engagement and more time from the user mean more money. So, they are designed to send content to the person who has the highest likelihood of engaging with that content.

In the paper, we talk about four properties that essentially make up, that comprise, social media. The first is the feeding algorithms. All of us, when we open a social media platform, whether it is TikTok, Instagram, Facebook, YouTube, whatever, have a feed. Right? There are billions and billions of pieces of content that could potentially be sent to you by the algorithm. Choosing which content you see and which content you don’t see, and how all of that is curated, is the job of the feeding algorithms in social media.

Now, they don’t do this randomly. These are smart, intelligent algorithms that use all the information they have about you. That’s where the sensing property of social media comes in, because the platforms are sensing, collecting data and information about the users all the time. As you interact with them more, the content you watch, the amount of time you spend on it, the likes you give, the particular sources you show more interest in, any type of engagement you can imagine, they collect more information from you, from your profile, from your friends and from your friends’ profiles. All of this information is used to identify the content with the highest likelihood of engagement for you.

So, if the feeding algorithms show you misinformation, it’s not just random misinformation. It’s not a lie about something you may not be interested in. There is quite a good chance that you will be interested in that misinformation, because it’s a topic that you’re interested in.
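To make the feeding and sensing ideas concrete, here is a minimal sketch of an engagement-driven feed ranker. Everything in it, the class names, the per-topic signals and the weights, is a hypothetical illustration, not any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]  # what the post is about

@dataclass
class UserProfile:
    # "Sensed" data: interaction history aggregated per topic
    watch_time: dict[str, float]  # seconds spent on each topic
    likes: dict[str, int]         # likes given per topic

def engagement_score(user: UserProfile, post: Post) -> float:
    """Hypothetical predictor: weight the sensed signals per topic."""
    return sum(0.7 * user.watch_time.get(t, 0.0) + 1.5 * user.likes.get(t, 0)
               for t in post.topics)

def build_feed(user: UserProfile, candidates: list[Post], k: int = 10) -> list[Post]:
    """Feeding: from a huge candidate pool, surface only the top-k posts
    by predicted engagement -- never a random or complete view."""
    return sorted(candidates, key=lambda p: engagement_score(user, p), reverse=True)[:k]

user = UserProfile(watch_time={"politics": 1200.0}, likes={"politics": 30})
posts = [Post("a1", {"politics"}), Post("a2", {"cooking"})]
print([p.post_id for p in build_feed(user, posts, k=1)])  # ['a1']
```

The point of the sketch is the selection step: nothing in it checks whether a post is true, only whether this particular user is likely to engage with it.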

In addition to this, another property we talk about is the signaling property. Social media platforms send social signals. They use these signals to further reinforce the effect of the content, to increase that engagement likelihood. Social signals are when you look at, let’s say, Facebook, and you see content along with a note that your friend has liked it or commented on it. You will also see this on LinkedIn: you see content about a particular event that one of your contacts has liked or posted about, and so on, so forth. And so, it comes to the top of your feed.

When you see the name of your friend, there is research showing that you trust that content more and are more likely to engage with it, because a person you trust has already approved it by liking it or engaging with it. These are the social signals that the social media platforms knowingly send in order to increase engagement.
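As a toy illustration of how such a signal might enter the ranking sketched above, consider a hypothetical multiplier applied when the viewer’s friends have engaged with a post. The numbers and the function are invented for illustration only:

```python
def social_signal_boost(base_score: float, friend_engagements: int) -> float:
    """Hypothetical signaling rule: each friend who liked or commented on a
    post multiplies its ranking score, capped so it cannot grow forever."""
    return base_score * (1.0 + 0.25 * min(friend_engagements, 8))

# The same post ranks much higher once three friends have engaged with it.
print(social_signal_boost(10.0, 0))  # 10.0
print(social_signal_boost(10.0, 3))  # 17.5
```

Under a rule like this, a false story that a few of your friends have already liked can outrank a sounder story that arrived with no social proof.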

One thing that reinforces these social signals is the fourth property that we talk about in the paper, which is the matching property. These are social networks. They work based on the network behind the content, and they are essentially matching users to each other, encouraging users to connect to other users who are similar to them.

This is really the process through which they create these echo chambers. The byproduct of all this is that when misinformation is posted on social media — and the unfortunate thing is that it’s generally not actively moderated, the way it should be — that misinformation is going to be sent to an echo chamber that is more receptive to it. Then, if you are in that echo chamber, not only may it show up in your feed because you are highly likely to engage with it, you might also see some social signals on it as well, significantly increasing the chance that you engage with that content and even believe it. That’s the development and reinforcement of the falsehood, of the false beliefs, for the user.
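A sketch of the matching idea: recommending connections by profile similarity, which, applied repeatedly, links like-minded users into the clusters that become echo chambers. The similarity measure here (Jaccard overlap of interest sets) is my illustrative choice, not something specified in the paper:

```python
def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap of two users' interest sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def suggest_connections(interests: set[str],
                        others: dict[str, set[str]],
                        k: int = 3) -> list[str]:
    """Matching: recommend the k most similar users to connect with."""
    return sorted(others, key=lambda name: jaccard(interests, others[name]),
                  reverse=True)[:k]

users = {
    "amy":  {"vaccines", "homeschool", "gardening"},
    "ben":  {"vaccines", "homeschool"},
    "carl": {"football", "cooking"},
}
print(suggest_connections({"vaccines", "homeschool"}, users, k=2))  # ['ben', 'amy']
```

Nothing in such a rule ever pushes "carl" into this circle, so content circulating inside it is rarely challenged from outside.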

So, from this perspective, yes, they are responsible. These algorithms are doing it. They were not intentionally created to disseminate misinformation, of course; they are ad-selling machines. But everyone is using them, and misinformation on social media is not a new topic. It’s not a new thing. It’s happening. So, these algorithms are also targeting misinformation to the audience most likely to engage with it.

The other answer to this question of whether they are responsible is to ask how social media platforms have been able to do this. That goes back to a law that we have in the U.S., Section 230 of the Communications Decency Act of 1996. Interestingly, Section 230 essentially says internet service providers cannot be held responsible as the publisher or the speaker of information if they are just showing or disseminating it. So, it essentially protects them from liability.

This is what social media companies really use, although they do moderate content to some extent, to say that they are not liable. It gives them not 100% immunity, but quite a broad immunity from legal liability. Interestingly, Section 230 came about in order to protect the internet and, actually, to encourage moderation. Back in the 1990s, during the early days of the internet and online forums and message boards, there were two companies, CompuServe and Prodigy, with two different policies.

CompuServe decided not to moderate its message boards at all. They were completely open, similar to 8chan and 4chan, which have been some of the main platforms for misinformation in recent years. So, CompuServe was not moderating at all. Prodigy wanted to create a more family-oriented environment, so it was moderating the content; the focus was mostly on sexual and pornographic content and so on.

Both of them got sued by users because of content on their platforms. In court, CompuServe was not held liable, because the judge essentially argued that it was not moderating the content. It was just allowing any content to appear, so it was not responsible. Prodigy was actually held liable because it was actively moderating. The judge said it was acting more like a newspaper, a publisher. So, it was partly its fault that the content showed up.

That didn’t sit well with Congress. Congress thought that if it let this happen, the internet would become a Wild West, because nobody was going to moderate anything. And again, in the 1990s, the concern was that this could hamper the growth of the internet at the time. So, Section 230 came about, around the same time that Congress was debating the Communications Decency Act, a bill to stop or penalize the dissemination of pornographic content to minors.

They added Section 230 to it. That has been the law in the U.S. ever since, and the social media companies have been protected under it, although we see a lot of misinformation disseminated on their platforms. That’s why they have not been held responsible in the U.S.

But it has been changing around the world. In 2022, the European Union came up with a new law called the Digital Services Act. What is very interesting in the Digital Services Act is that it does not provide immunity as broad as the legal immunity Section 230 provides. It provides what they call conditional liability exemptions, meaning that internet service providers — and this is quite broad; it’s not just social media, but also marketplaces for goods, services and content — will not be held accountable for the content on their platforms if they meet three conditions.

The conditions are, first, that they don’t know about the damaging content or misinformation. Second, that they have set up a notice-and-action mechanism to fight illegal content, so the mechanism is in place, but they were not aware that this particular content was on their platform. And third, that they remove or disable access to that content the moment they realize it exists, meaning that they cannot really profit from it, and they must actually show that they have taken action to take it down.
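Purely as an illustration, the three conditions read like a conjunction; here is a toy encoding of that logic (the field names are invented, and real legal analysis under the DSA is far more nuanced than a boolean check):

```python
from dataclasses import dataclass

@dataclass
class ContentIncident:
    knew_beforehand: bool            # condition 1, negated: no prior knowledge
    notice_mechanism_in_place: bool  # condition 2: notice-and-action exists
    removed_upon_awareness: bool     # condition 3: acted once made aware

def liability_exempt(i: ContentIncident) -> bool:
    """Toy reading of the DSA's conditional liability exemption:
    all three conditions must hold for the provider to keep immunity."""
    return (not i.knew_beforehand
            and i.notice_mechanism_in_place
            and i.removed_upon_awareness)

print(liability_exempt(ContentIncident(False, True, True)))   # True: exempt
print(liability_exempt(ContentIncident(False, True, False)))  # False: liable
```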

This is changing the space. Providers still have some protection, because regulators want to protect and encourage moderation, but at the same time the law puts conditions on that immunity in order to encourage more active moderation.

Kind of walking a pretty fine line.

It is. It is a very sensitive topic. So, are they responsible? They are disseminating it. I believe that — and that’s what we discussed in the paper, and there is a lot of research on it as well — without social media, these misinformation and disinformation campaigns wouldn’t have been nearly as effective. You have this really strong tool at your disposal for propagating misinformation. So, yes, the responsibility falls on them from this perspective.

If it’s found that these platforms are responsible for the content posted on them, how can they really police it? That seems like it could be a slippery slope, one that could potentially lead to issues of censorship and free speech violations.

One thing we have to note is that the same Section 230 also protects them from legal liability if they remove or censor content on their platforms, as long as they do it in good faith. For any content they deem damaging, as long as they remove it in good faith, they are not legally liable.

One thing that some of these platforms are doing, and they could do more of it, in my opinion, is crowdsourcing their moderation. They get help from the users. They allow users to flag and report content, and then they responsibly review that content to see whether it is problematic or not. And you can imagine that if misinformation has been posted on social media, somebody would report it.
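A minimal sketch of how such a crowdsourced flagging pipeline might be wired, assuming a simple report threshold that queues content for human review; the threshold and the structure are illustrative, not any platform’s actual policy:

```python
from collections import Counter

REVIEW_THRESHOLD = 5  # hypothetical: reports needed before human review

reports: Counter[str] = Counter()  # post_id -> number of user reports
review_queue: list[str] = []

def report_post(post_id: str) -> None:
    """A user flags a post; enough independent reports queue it for review."""
    reports[post_id] += 1
    if reports[post_id] == REVIEW_THRESHOLD:
        review_queue.append(post_id)

def review(post_id: str, is_misinformation: bool) -> str:
    """Human moderators make the final call; flagged-but-fine posts stay up."""
    return "removed" if is_misinformation else "kept"

for _ in range(5):
    report_post("post-123")
print(review_queue)              # ['post-123']
print(review("post-123", True))  # removed
```

In a design like this, users only surface candidates for review; the removal decision stays with the platform’s reviewers.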

That would help the effectiveness of moderation. I don’t think, at least under the current law, because it encourages moderation, that it would be a violation of the First Amendment. We just need some stronger laws in order to motivate the social media companies to take misinformation more seriously. You should know that this misinformation is designed and written in a way that attracts attention and encourages action. That’s the malicious intention of its creators, and it works well in increasing engagement. We need to encourage the social media companies to take this seriously and to apply fewer of the platform’s engagement mechanisms to it. So, I think that’s where moderation should go.

What responsibility, then, does the public have in all of this? And furthermore, how can we reclaim our power from these platforms?

That’s a very good question. A few points come to my mind. One is really the basics of information literacy: just understanding how we should verify a claim, any claim, including the claims we see on social media platforms, against its facts and against its sources. What are the sources of these facts? Are they credible sources? Cross-examining the claims with other sources.

If there is a claim coming from one particular source and there is no trace of it anywhere else, it’s highly likely that it is not factual, or at least that the facts are not as common or as accepted. These are really the basics of information literacy: being able to verify the information that we decide to engage with, read or believe, and especially the information we share. We definitely have to verify the information that we share. There is research showing that a lot of the dissemination of misinformation and fake news on social media is not the job of bots or automated algorithms or automated accounts. It’s active users. They receive this…

Ah, so we’re the problem.

Yes. That’s it. And it’s not random content. Of course you’ll see that content, because the algorithms know that you’re going to be very interested in it and then share it. This sharing and resharing and resharing is how the dissemination happens.
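To see why resharing, rather than the original post, does the heavy lifting, here is a back-of-the-envelope cascade model. All of the numbers are made up for illustration:

```python
# Toy reshare cascade: each sharer's post is seen by `followers` people,
# and a fraction `reshare_rate` of viewers shares it again.
followers = 100
reshare_rate = 0.05  # hypothetical: 5% of viewers reshare

total_viewers, sharers = 0, 1  # start from a single original poster
for hop in range(1, 6):
    reached = sharers * followers
    total_viewers += reached
    sharers = int(reached * reshare_rate)
    print(f"hop {hop}: reached {reached}, total viewers {total_viewers}")
```

With these numbers, each sharer recruits five new sharers (100 viewers times 5%), so reach grows five-fold per hop: five hops take one post from 100 viewers to roughly 78,000. Drop the reshare rate below 1% and the same cascade fizzles out, which is why user sharing behavior matters so much.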

So, bring in information literacy: verify before sharing, verify before believing, know how to verify and how to get the facts right, and be able to separate the emotions and confirmation biases from the facts that you see in the content. You don’t want to believe a falsehood, regardless of what that falsehood is. You always want to believe the truth, even if you are very emotional about the content. These are really the basics of information literacy that can help us reclaim the truth.

One other thing: with respect to social media platforms, we have to consider that these are addictive platforms. That goes back to other research of mine on addiction to social media platforms. They’re addictive because, again, they have been designed very well. These are some of the best-designed IT artifacts out there. They are designed to increase engagement, and they do it very effectively.

What does engagement mean? It means that you spend more and more time on the platform. It gets to be so much that it fills up your life, and it becomes hard to withdraw from it. That is what behavioral addiction looks like, and that’s why these are addictive platforms. We did the research. We looked at users, their level of addiction to social media platforms, their response to that addiction, how they react to it and their level of psychological well-being.

First of all, we found that 16% of the users we collected data from were very addicted to social media. The profile of these addicted users showed a very low level of psychological well-being, past the threshold for clinical depression, because they have a feeling that they’re trapped within social media. They cannot reclaim their lives. They don’t want to use it anymore, but they have to use it because it has become their life. Some of them have realized that this is not the whole world, but they’re living in it anyway, spending hours and hours every day on social media platforms.

Of course, when you spend a lot of time on social media, there’s a much higher likelihood that you’re going to be exposed to misinformation on it. It’s more likely that you get your news, your information, from it, and then that becomes your world more and more. That’s how confirmation bias starts. It then becomes much more difficult, when you see another piece of misinformation, another claim, to go ahead and verify it, because you already have this strong confirmation bias.

So, understand that it’s addictive. Understand that you have to treat it like an addictive substance. That doesn’t mean that you shouldn’t use it at all, but you should have control over your use: how much you use it, what you use it for, and so on and so forth. That is very important.

The final thing is that we should really never get our news, our truth, our information from social media if we are looking for truth, for a comprehensive representation of the information from all aspects. We want the complete truth, and social media is not designed for that. Social media is not designed to show you all the content, or to show you content at random. It has been designed just to show you the content that you’re going to like. And there are other aspects to a truth, to a phenomenon, to a matter, that I’m not going to like; I may have some biases. So, if you’re looking for news, if you’re looking for truth, social media is not the platform to get them from.

We have to look at it more as an entertainment platform, a hedonic platform, like gaming platforms. Use it for that purpose, or for communicating with our contacts. But if you are looking for news, for truth, for something to base a judgment on, something that is going to affect your beliefs and your actions, social media is not the platform for that.

We should not go to it for the news, even if we are following some news channels. Because, again, if you are following those, you probably do not see every single piece of content that news channels like CNN, Fox or whatever are posting. The algorithms are going to show you the ones you are more likely to engage with and like.

I think these three together would probably create more awareness and would empower users to be more careful about the claims they see. Not everything you see, not everything that your friend or even the most trusted person shares, is necessarily true. When you believe it and you reshare it, you are responsible for what you are doing. So, you must verify it before doing so, and understand that the social media platform is not designed to give you the truth.

So be aware of what you’re watching and where you’re getting your information and leave social media to pictures of the family reunion and cats. Am I right?

That sounds good to me.

Well, thank you so much for your time today. I really appreciate it.

Of course. Thank you so much for having me.

CSU Assistant Professor Hamed Qahri-Saremi researches online social platforms and artificial intelligence systems and how they influence our behaviors. I’m your host, Stacy Nick, and you’re listening to CSU’s The Audit.