New York (CNN) —

Facebook, Instagram, Google and YouTube are clamping down on political ads in an effort to combat misinformation that could undermine trust in the results of a contentious election or stir up unrest.

Meta last week began blocking advertisers from creating or running new ads about US social issues, elections or politics across its platforms, including Facebook and Instagram. Meta’s ban on new political advertising was initially set to expire Tuesday night, but on Monday the company extended it until later this week. Google says it will implement a similar, temporary pause on ads related to US elections after the last polls close on Tuesday, set to remain in place for an unspecified period of time. TikTok has not allowed political ads since 2019.

In contrast, X ended its ban on political advertising last year after billionaire Elon Musk took over the platform and has not announced any pause around the election.

The election ad pauses are designed to prevent candidates or their supporters from attempting to sway public sentiment or claim early victory during what could be a days-long period of uncertainty around the results as ballots are counted. But experts say that previous moves by social media companies — such as cutting their own internal safety teams — could undercut their current efforts.

The pauses on some ads come as election officials have already spent weeks trying to combat viral misinformation about the election, including uncorroborated allegations of machines flipping votes and claims of widespread fraud in mail ballots. They also come as federal law enforcement officials have warned that domestic extremists with “election-related grievances,” such as a belief in voter fraud despite a lack of proof, could engage in violence following the election.

Former President Donald Trump and many of his supporters have already made repeated false claims that Democrats are cheating in the election. What’s more, the proliferation of artificial intelligence tools has raised concerns that fake images, video or audio could be used in an effort to lend legitimacy to election rumors and false claims.

Political advertising pauses mark just one of the steps tech platforms say they are taking to safeguard the online information ecosystem during election week. But after the companies cut their own trust and safety teams and walked back election-related policies, experts say it may be too late to stop the flood of misinformation that has already permeated much of the internet from continuing to spread.

That’s especially true now that X (formerly Twitter) and its relatively new owner Musk have become some of the top purveyors of false and misleading election claims. Twitter was considered a leader in combating political misinformation and violent rhetoric before Musk bought it, with bigger players even following its example — like when it blocked then-President Donald Trump following the January 6, 2021, attack on the US Capitol.

“Since the last presidential election, we’ve seen a dramatic backslide in social media companies’ preparedness, enforcement and willingness to protect information online related to the election, related to candidates, politicians,” said Sacha Haworth, executive director of the watchdog group Tech Oversight Project, at an event hosted by the organization Monday. “Platforms are hotbeds for false narratives.”

‘The backslide’

The months leading up to the election brought a flood of misinformation that experts say is likely to undermine confidence in the electoral process.

“Over the last four years, we have had the drip, drip of lies about our electoral process, the operation of democracy and our elections,” Imran Ahmed, CEO of the social media watchdog group Center for Countering Digital Hate, told CNN Monday. “It’s too late.”

In the wake of online interference in the 2016 election and then again following the January 6, 2021, attack on the US Capitol, which was largely organized online, many big platforms beefed up their trust and safety and election integrity teams and policies, including by removing posts and suspending thousands of accounts that spread lies.

But since then, those companies have made cuts to those teams and walked back policies designed to restrict false claims about politics and elections. Last year, they said they would no longer remove false claims that the 2020 election was stolen.

The effects of that pullback — known among some industry watchers as “the backslide” — reached fever pitch over the summer when, following the first attempted assassination of Trump, conspiracy theories ran wild across social media. The spread of false claims about the response to hurricanes Helene and Milton, many of them targeted at the Biden-Harris administration, also threatened to hinder recovery efforts following those disasters, government officials and response agencies said.

On X, Musk’s false and misleading claims about the election — including about immigration and voting, and often in support of Trump, whom the billionaire campaigned for and donated to — generated more than 2 billion views this year, according to an analysis by Ahmed’s CCDH.

Ahmed said that as long as the platforms’ approach to false and misleading content remains the same, a temporary pause on political ads during election week is likely to have little impact.

“Stopping ads on platforms that are algorithmically designed to promote the most contentious information, whether that’s disinformation or hate, because of the high engagement it gets – they don’t need paid reach when they have platforms which organically are designed to promote their claims anyway,” Ahmed said.

Meta and X did not immediately respond to a request for comment regarding concerns that their election-week safety efforts may be inadequate.

TikTok pointed CNN to its “US Elections Integrity Hub,” where it says, “we protect election integrity on TikTok by preventing the spread of harmful content, connecting our community to authoritative information, and partnering with experts.”

A YouTube spokesperson said in a statement that “Over the last few years, YouTube has heavily invested in the policies and systems that allow us to support elections, not just in the U.S. but around the world.”

They added: “Responsibility remains our number one priority, both during election time and year round. Content that misleads viewers or encourages interference in the democratic process is prohibited on YouTube. We quickly remove content that incites violence, encourages hatred, or promotes harmful conspiracy theories.”

What the platforms are doing

Still, the major tech giants say they’re doing more than pausing election ads to safeguard their platforms. Facebook and Instagram, Google and YouTube, X and TikTok all say they have worked to elevate reliable information about the election, including by pointing users to state websites or neutral non-profits for information about voting, candidates and election results.

Most of the major platforms also say they have taken steps to prevent coordinated influence operations that could disrupt the election, including by foreign actors. Russian and Iranian operatives have attempted to sway US voters via online disinformation campaigns in recent months.

Leslie Miller, vice president of government relations at YouTube, which is owned by Google, laid out the platform’s plans for the election in a blog post last December, where she noted that YouTube does not allow content that misleads voters on how to vote or encourages election interference.

“We quickly remove content that incites violence, encourages hatred, promotes harmful conspiracy theories, or threatens election workers,” Miller said in the post.

TikTok says it does not allow content that could “result in voter interference, disrupt the peaceful transfer of power, or lead to off-platform violence,” including unverified or false claims about the final results of an election. The platform says it works with fact-checkers and will label other unverified claims and make them ineligible for promotion in its For You feed, such as “a premature claim that all ballots have been counted or tallied.”

Meta, likewise, says it removes content that could interfere with people’s ability to vote, such as threats about going to an election site to “monitor” and intimidate voters. With other content that its fact-checkers determine to be false, Meta says it will “move it lower in Feed” and label it to provide additional information for viewers.

Meta says it has no specific policy prohibiting non-ad content that declares early victory for a candidate before a vote has formally been called, although such posts could be eligible for a fact-check label. YouTube also will not prohibit videos declaring early victory for candidates, although it said such videos will display an informational panel with election results shared by the Associated Press.

In a statement to CNN, X said its Civic Integrity Policy, which prohibits false claims intended to manipulate or interfere in elections, such as content that could mislead people about how to vote or “lead to offline violence,” has been in effect since August. However, the policy explicitly allows for inaccurate statements about candidates, as well as “organic content that is polarizing, biased, hyperpartisan, or contains controversial viewpoints expressed about elections or politics.”

And, as is often the case with social media platforms’ policies, there is a difference between instituting a policy and enforcing it. Musk took heat in September for an X post seeming to question why “no one is even trying to assassinate Biden/Kamala,” which he later deleted and called a joke. Musk also appeared to violate a separate X policy when he shared a video last month on X that used AI to make it appear that Vice President Kamala Harris had said things she, in fact, did not.