Digital security experts and the FBI are issuing strong warnings to parents about the dangers of posting images of their children on social media platforms.
The proliferation of advanced generative artificial intelligence (AI) programs has made it alarmingly easy for criminals to manipulate publicly posted photos of minors, creating realistic-looking explicit materials.
As the volume of child sexual abuse material (CSAM) continues to rise, child protection organizations and law enforcement agencies find themselves overwhelmed by the surge in related reports. Many experts believe the current law enforcement system is ill-equipped to keep pace with online sexual predators.
Attorneys General Petition for Action
The issue has grown to such an extent that over 50 attorneys general from various states petitioned Congress to establish a national commission dedicated to studying AI-generated CSAM. They are also advocating for updated legislation to better protect children and effectively combat CSAM.
In their letter to Congress, the attorneys general stated, “While internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes prosecution more difficult. We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act.”
Legislative Initiatives
Several legislators across the United States have introduced bills in the past year aimed at safeguarding children from CSAM:
STOP CSAM Act of 2023: Senator Dick Durbin (D-Ill.) introduced this legislation in April; it would allow victims of CSAM to sue social media platforms that host sexually explicit content depicting them.
Child Online Safety Modernization Act of 2023: In August, Representatives Ann Wagner (R-Mo.) and Sylvia Garcia (D-Texas) introduced this act. It aims to improve the quality of information submitted to the National Center for Missing & Exploited Children (NCMEC) tip line; currently, many reports lack sufficient detail to track down child abusers, leading to inefficiencies.
FBI Warnings and Urgent PSA
The FBI issued a public service announcement urging parents and guardians not to post photos of their children online. The agency highlighted the risk of malicious actors creating synthetic content, commonly known as "deepfakes," by manipulating benign photographs or videos to target victims.
According to the FBI, victims, including minors and non-consenting adults, have reported cases where their photos or videos were altered into explicit content. These manipulated images or videos were then widely circulated on social media or adult websites, often as part of harassment or sextortion schemes.
The Harms of Deepfakes and Sextortion
Yaron Litwin, a digital safety expert, emphasized the severe emotional, mental, and physical harm that deepfakes and sextortion can inflict on minors. Cybercriminals can rapidly create deepfakes using generative AI and then threaten to publish the manipulated images to extort money or sexual acts from minors. Victims then face an agonizing choice between public humiliation, complying with the criminal's demands, or even resorting to extreme measures to escape the situation.
The National Center for Missing & Exploited Children (NCMEC) received more than 32 million reports of suspected CSAM in 2022, an average of roughly 87,000 per day. NCMEC also reported that online enticement reports, which include sextortion, surged 82% over 2021.
Recommendations for Parents
Digital safety experts recommend that parents exercise caution when posting images of their children online, particularly on open social media platforms. It is advisable to restrict sharing of such images to closed networks of known and trusted individuals. Moreover, open and direct communication between parents and children about online risks and the posting of photos is crucial to protecting minors from CSAM threats.
The rise in AI-generated CSAM underscores the growing need for responsible online behavior and robust legal frameworks to address this evolving challenge effectively.