Friday, October 18, 2024

AI-Generated Child Sexual Abuse Imagery Rising at an Alarming Rate, Warns Internet Watch Foundation


The prevalence of child sexual abuse imagery created by artificial intelligence tools is reaching alarming levels, according to a recent report by the Internet Watch Foundation (IWF). The organization revealed that the amount of AI-generated illegal content detected over the past six months has already surpassed the total recorded for the entire previous year.

Derek Ray-Hill, the IWF’s interim chief executive, emphasized that nearly all of this content is found in publicly accessible areas of the internet rather than on the dark web, which requires specialized browsers to access. He noted that the sophistication of the imagery suggests the AI tools behind it have been trained on images and videos of real victims, stating, “Recent months show that this problem is not going away and is in fact getting worse.”

An analyst at the IWF warned that the situation is reaching a “tipping point,” where safety watchdogs and law enforcement agencies may struggle to determine whether an image involves a real child in need of assistance. In the six months leading up to September, the IWF responded to 74 reports of AI-generated child sexual abuse material (CSAM) that were realistic enough to violate UK law, compared to 70 reports for the entire previous year. Notably, a single report may include multiple images found on a webpage.

The IWF has encountered various forms of AI-generated content, including “deepfake” videos that manipulate adult pornography to resemble CSAM. The organization has also highlighted disturbing practices where AI is used to create images of celebrities who have been “de-aged” and depicted in sexual abuse scenarios, as well as AI-generated alterations of images of clothed children to depict them inappropriately.

Over half of the flagged AI-generated content was hosted on servers in Russia and the United States, with Japan and the Netherlands also accounting for significant amounts. The IWF compiles a list of URLs containing this imagery and shares it with the tech industry so the content can be blocked and made inaccessible.

Eight out of ten reports of illegal AI-generated imagery came from members of the public who encountered it on publicly accessible sites such as forums and AI galleries.

In response to growing concerns about online safety, Instagram has announced new measures to combat sextortion, a scam in which users are tricked into sharing intimate images and then blackmailed. The platform will introduce a feature that blurs any nude image received via direct message, letting recipients decide whether to view it. When a blurred image arrives, users will also be reminded that they can block the sender and report the conversation to Instagram.

The feature will be switched on by default for teenagers' accounts globally starting this week and will also apply to encrypted messages; for adults it will be opt-in. Instagram will also hide follower and following lists from potential sextortion scammers, who often use those lists to threaten to share victims' intimate images with their friends and followers.
