X Corp., the social media company owned by Elon Musk, is challenging a new Minnesota law that bans the use of deepfakes to influence elections. Filed in federal court this week, the lawsuit claims the law violates free speech protections under the First Amendment. The company argues that the law also conflicts with a 1996 federal statute that protects social media platforms from being held liable for content posted by users. X says the law would stifle free speech and unfairly punish platforms for moderating content.
X Corp. Takes Legal Action Against Minnesota Law
X Corp., the parent company of the social media platform formerly known as Twitter, is suing the state of Minnesota over a new law aimed at deepfakes. The law, which went into effect in 2023, makes it a criminal offense to spread deepfake media—videos, images, or audio that appear real but are actually fake—intended to harm candidates or influence election outcomes. The law applies to content shared within 90 days of a political event, such as a primary or general election, and violations can carry penalties, including jail time, when the content is distributed knowingly or with reckless disregard for its falsity.
In its lawsuit, filed earlier this week, X claims the Minnesota law infringes on First Amendment rights, which protect freedom of speech in the United States. The company also argues that the law conflicts with Section 230 of the 1996 Communications Decency Act, which shields social media companies from liability for user-generated content.
Key Points of Minnesota’s Deepfake Ban
The Minnesota law specifically targets content that is so realistic that a reasonable person would believe it is genuine. To qualify as a deepfake, the media must have been created using artificial intelligence or other advanced techniques. The law makes it illegal to distribute deepfake content with the intent to harm a candidate or influence election results.
However, X Corp. argues that the law is overly broad and could stifle legitimate political speech, including satire and humorous content. The company warns that the law could lead to criminal charges for social media platforms that fail to police user-generated content, thereby discouraging free expression.
Reactions from Minnesota Lawmakers
Democratic Minnesota State Senator Erin Maye Quade, who authored the law, criticized X Corp.’s lawsuit, saying the company’s actions show it is unwilling to take responsibility for harmful content on its platform. She suggested that Musk, who played an influential role in the 2024 presidential election, is upset that the Minnesota law prevents the spread of harmful deepfakes.
Minnesota Attorney General Keith Ellison’s office, responsible for defending state laws, said it is reviewing the lawsuit and will respond in due course. Ellison’s office has previously argued that deepfakes pose a significant threat to democracy and free elections, making this law a necessary step to combat the growing problem of misinformation.
Legal Experts Weigh In
Legal scholars are divided on the issue. Alan Rozenshtein, a law professor at the University of Minnesota, believes the lawsuit is likely to succeed. He explained that there is no exception under the First Amendment for false or misleading political speech, even outright lies, which makes a criminal ban on such speech constitutionally vulnerable.
Rozenshtein cautioned that while deepfakes are concerning, the law could have unintended consequences. He said the potential for criminal penalties gives social media platforms an incentive to take down anything that might be a deepfake, leading to over-censorship.
Previous Challenges to Similar Laws
X Corp. is not the only entity challenging laws that regulate deepfake media. In California, a similar law was blocked by a judge earlier this year after being challenged by the platform. X has also criticized other state laws it sees as overreaching, particularly those that impose restrictions on free speech in the name of protecting political integrity.
Despite the controversy surrounding deepfakes, X maintains that its existing safeguards, such as the “Community Notes” feature and its “Authenticity Policy,” are sufficient to combat harmful content. The company’s lawsuit points out that it already takes steps to prevent the spread of misleading media, including using its “Grok AI” tool to flag problematic content.
The Bigger Picture: Misinformation and Democracy
The rise of deepfakes has raised broader questions about the role of social media in modern democracy. Misinformation and manipulated media are seen as major threats to elections and public trust. However, some experts argue that banning deepfakes outright may not be the best solution.
Rozenshtein said that the demand for misinformation is a larger issue that cannot be solved simply by banning deepfakes. He noted that while deepfakes are problematic, addressing the public appetite for false information matters more for the health of democracy.