Ahead of the US midterm elections, Google has laid out its plans to counter misinformation. The company said it is prepared for a tsunami of false information around the vote, chiefly by elevating reliable content and making it more visible across services like Search and YouTube.
Google said in a blog post that as part of the initiative, it would soon roll out a new tool spotlighting local and regional media covering elections and campaigns. Searches for “how to vote” will soon surface highlighted information from state election officials in both English and Spanish, including guidance on legal ways to cast a ballot and key dates and deadlines based on users’ locations.
Meanwhile, YouTube announced that it would emphasize reputable news organizations and display labels beneath videos, in both English and Spanish, offering up-to-date election information. YouTube also said it works to prevent its algorithms from recommending “damaging electoral misinformation” to viewers.
The announcement is the latest attempt by a Big Tech platform to persuade the public that it is prepared for a crucial election, one that could fundamentally alter the congressional agenda, including upcoming legislative fights over how the US regulates the platforms themselves.
It comes as many of the underlying problems of the 2020 presidential election, such as baseless claims of voter fraud and false assertions about the results, remain unresolved and are in some cases being echoed by candidates running for office this year. Disinformation experts caution that even though internet companies have vowed to be vigilant, extremists and others seeking to taint the information environment continue to develop their methods, raising the prospect of fresh vulnerabilities the platforms haven’t seen.
According to the company’s blog post, YouTube has already started taking down midterm-related videos that violate its policies by making false claims about the 2020 election.
YouTube stated: “This includes content that violated our election integrity policy by saying there was widespread fraud, mistakes, or faults in the 2020 U.S. presidential election, or by asserting the election was stolen or rigged.”
This strategy differs from the midterm election announcements by Twitter and Meta, the parent company of Facebook and Instagram. Twitter’s civic integrity policy, which is in effect for the midterm elections, forbids statements intended to “undermine public confidence” in the official results. However, the company stopped short of promising to remove tweets that raised doubts about the outcome.
This month, Meta announced that as part of its midterm strategy, it would remove misinformation about who can vote and how, as well as calls for violence related to elections. However, Meta stopped short of outright banning allegations of rigged or corrupt elections, and the company told The Washington Post that such allegations would not be removed.
Both Meta and Twitter will rely on labeling allegations of election tampering, but each platform appears to be taking a different approach. Twitter said that new misinformation labels it tested last year were more effective at halting the spread of false information, suggesting the company may lean even more heavily on labeling. Meta, by contrast, has said it will probably label less than it did in 2020, citing “feedback from consumers that these labels were over-used.”
Beyond acting on false claims and misinformation or boosting genuine information, tech companies still need to fundamentally rethink their core features, according to Karen Kornbluh, director of the Digital Innovation and Democracy Initiative at the German Marshall Fund.
“The system’s design is what promotes incendiary content and permits manipulation of users,” Kornbluh said. “As the Facebook whistleblower demonstrated, we may observe on other sites that algorithms inherently encourage radical organizing. We know that in the run-up to January 6, threat actors organized extremist groups using social media as a CRM system. They collaborate across platforms to plan, create invitation lists, and then produce decentralized new groups of foot soldiers. The platforms must fix these flaws in the design.”