
MATSUI OPENING REMARKS AT COMMUNICATIONS AND TECHNOLOGY SUBCOMMITTEE HEARING ON BIG TECH CENSORSHIP

March 28, 2023

WASHINGTON, D.C. – Today, Congresswoman Doris Matsui (CA-07), Ranking Member of the House Energy and Commerce Subcommittee on Communications and Technology, delivered the following opening remarks at the Communications and Technology Subcommittee hearing titled,
“Preserving Free Speech and Reining in Big Tech Censorship.”

Thank you, Chairman Latta.

At last week’s TikTok hearing there was bipartisan concern about the rise in harmful content on the platform. While some of the examples highlighted by Members were jarring, TikTok is by no means unique.

This hearing provides another chance to explore those same concerns across the wider internet ecosystem.

The spread of misinformation, hate speech, and political extremism online has been meteoric.

During the early days of the pandemic, hate speech targeting Chinese and other Asian Americans surged. One study from the AI company L1ght documented a 900 percent increase in the volume of tweets containing hate speech targeting Chinese people and China.

That same study showed that the amount of traffic going to specific posts and “hate sites” targeting Asians increased three-fold over the same period.

But the increase in hate speech wasn’t limited to racial motivations. Young people of all backgrounds have been subjected to some of the most appalling examples of cyberbullying and hate speech.

There was also a 70 percent increase in the number of instances of hate speech between teens and children during the initial months of quarantine.

But that’s not all: political extremism and dangerous conspiracy theories are also on the rise.

A study by DoubleVerify, a digital media analytics company, found that inflammatory and misleading news increased 83 percent, year-over-year, during the 2020 U.S. Presidential Election.

And perhaps most disturbingly, hate speech tripled in the 10 days following the Capitol insurrection compared with the 10 days preceding that violence. The week after the Capitol insurrection, the volume of inflammatory politics and news content increased more than 20 percent week-over-week.

So, across all sectors, the amount of online speech related to political extremism, race-based violence, and the targeting of other protected classes is growing.

This increase is so concerning to me because it rarely stays online.

A 2019 study by New York University analyzed more than 530 million tweets published between 2011 and 2016 to investigate the connection between online hate speech and real-world violence.

Unsurprisingly, the study found that more targeted, discriminatory tweets posted in a city correlated with a higher number of hate crimes there.

This echoes similar findings from studies in the UK and Europe, and it is borne out by the FBI’s own real-world data, which show that the number of hate crimes has only increased.

This escalation isn’t a one-way problem. Social media platforms are taking daily steps to foment it and to ensure it reaches as many people as possible.

The algorithms that match harmful content with the users it will resonate with most have benefited from massive investments in R&D and personnel. In many ways, these platforms are competing over the effectiveness of their respective algorithms.

They represent a conscious choice by online platforms. And one that I believe means they must assume more responsibility and accountability for the content they’re actively choosing to promote.

In a 2020 academic article describing racial bias online, Professor Overton notes that “Through data collection and algorithms that identify which users see suppressive ads, social media companies make a ‘material contribution’ to the illegal racial targeting.”

This point is an important one – online platforms are making regular and conscious contributions to the spread of harmful content.

This isn’t about ideological preferences; it’s about profit. Simply put, online platforms amplify hateful and misleading content because it makes them more money. And without a meaningful reorganization of their priorities, their behavior won’t change.

That’s where this Subcommittee must step in.

On a bipartisan basis, there is widespread agreement that the protections outlined in Section 230 of the Communications Decency Act need to be modernized.

Because continuing to accept the status quo just isn’t an option. Without bipartisan updates to Section 230, it is naïve to think large online platforms will change their behavior. The profit motive is too great and the structural oversight too weak.

The discussion we’ll have at today’s hearing is an important one. And one that I hope serves as a precursor to substantive, bipartisan legislation.

Section 230 needs to be reformed, and I’m ready to get to work.

With that, I yield the remainder of my time. 

# # #