NC Attorney General one of four AGs to request Congress study AI

Published 12:05 am Wednesday, September 6, 2023

North Carolina Attorney General Josh Stein is one of four state attorneys general asking the U.S. Congress to consider the ways artificial intelligence (AI) may harm children.

Stein said in a release that those who have signed on to the request are specifically concerned that “AI can and is being used to exploit children through child sexual abuse material.” The attorneys general are asking Congress to propose and pass legislation to protect children from these abuses. Stein led the coalition alongside the attorneys general of South Carolina, Mississippi and Oregon. The request has now been signed by attorneys general from 54 states and U.S. territories.

The letter to Congress reads as follows:

“We, the attorneys general of the 54 undersigned states and U.S. territories, are writing to you today to request that an expert commission be established by Congress to study the means and methods of artificial intelligence (AI) used to exploit children specifically, such as through the generation of child sexual abuse material (CSAM), and to propose solutions to deter and address such exploitation in an effort to protect America’s children. As the world has become increasingly aware, AI is rapidly transforming the landscape of what is possible. While AI has the potential to bring about remarkable advances in our society, it also has the potential to inflict serious harms. In recent months, Congress has committed itself to studying AI and to beginning the process of developing a regulatory framework to address some of these harms.

While we commend Congress for these initial efforts, we write to highlight an underreported and understudied aspect of the AI problem — namely, the exploitation of children through AI technology. As Attorneys General of our respective states and territories, we have a deep and grave concern for the safety of the children within our respective jurisdictions. We also have a responsibility to hold accountable those who seek to harm children in our states and territories. And while internet crimes against children are already being actively prosecuted, we are concerned that AI is creating a new frontier for abuse that makes such prosecution more difficult. With these concerns in mind, we write to provide background on this unique problem and urge Congress to take action to address it.

Exploitation of Children Through AI

As we all learn more about the capabilities of AI, it is becoming increasingly apparent that the technology can be used to exploit children in innumerable ways. AI has the potential to be used to identify someone’s location, mimic their voice, and generate “deepfakes.” As a matter of physical safety, using AI tools, images of anyone, including children, can be scoured and tracked across the internet and used to approximate or even anticipate a victim’s location.

As a matter of personal privacy, AI can even study short recordings of a person’s voice, such as from voicemail or social media posts, and convincingly mimic that voice to say things that person never said. This technology has already been used by scammers to fake kidnappings.

Most disturbingly, AI is also being used to generate child sexual abuse material (CSAM).

For example, AI tools can rapidly and easily create “deepfakes” by studying real photographs of abused children to generate new images showing those children in sexual positions. This involves overlaying the face of one person on the body of another.

Deepfakes can also be generated by overlaying photographs of otherwise unvictimized children on the internet with photographs of abused children to create new CSAM involving the previously unharmed children. Additionally, AI can combine data from photographs of both abused and nonabused children to animate new and realistic sexualized images of children who do not exist, but who may resemble actual children. Creating these images is easier than ever, as anyone can download the AI tools to their computer and create images by simply typing in a short description of what the user wants to see. And because many of these AI tools are “open-source,” the tools can be run in an unrestricted and unpoliced way.

Prior to AI, it was possible for skilled photo editors to “photoshop” images by modifying their appearance with computer software tools. However, AI has made it quick and easy for even the least-proficient user to generate deepfake images.

Whether the children in the source photographs for deepfakes are physically abused or not, creation and circulation of sexualized images depicting actual children threatens the physical, psychological, and emotional wellbeing of the children who are victimized by it, as well as that of their parents. Even in situations where the CSAM images generated by AI are not deepfakes but are realistic animations depicting children who do not actually exist, these creations are still problematic for at least four reasons: 1) this AI-generated CSAM is still often based on source images of abused children; 2) even if some of the children in the source photographs have never been abused, the AI-generated CSAM often still resembles actual children, which potentially harms and endangers those otherwise unvictimized children, as well as their parents; 3) even if some AI-generated CSAM images do not ultimately resemble actual children, the images support the growth of the child exploitation market by normalizing child abuse and stoking the appetites of those who seek to sexualize children; and 4) just like deepfakes, these unique images are quick and easy to generate using widely available AI tools.

Call to Action

While we know Congress is aware of concerns surrounding AI, and legislation has been recently proposed at both the state and federal level to regulate AI generally, much of the focus has been on national security and education concerns. And while those interests are worthy of consideration, the safety of children should not fall through the cracks when evaluating the risks of AI.

Conclusion

We believe the following would be a good start in the race to protect our children from the dangers of AI: First, Congress should establish an expert commission to study the means and methods of AI that can be used to exploit children specifically and to propose solutions to deter and address such exploitation. This commission would operate on an ongoing basis due to the rapidly evolving nature of this technology to ensure an up-to-date understanding of the issue. While we are aware that several governmental offices and committees have been established to evaluate AI generally, a working group devoted specifically to the protection of children from AI is necessary to ensure the vulnerable among us are not forgotten. Second, after considering the expert commission’s recommendations, Congress should act to deter and address child exploitation, such as by expanding existing restrictions on CSAM to explicitly cover AI-generated CSAM. This will ensure prosecutors have the tools they need to protect our children. We are engaged in a race against time to protect the children of our country from the dangers of AI. Indeed, the proverbial walls of the city have already been breached. Now is the time to act. We appreciate your consideration.

The four co-sponsors of this letter, the attorneys general of Mississippi, North Carolina, South Carolina, and Oregon, are joined by the undersigned attorneys general across the U.S. states and its territories.”