Principal Deputy Assistant Attorney General Nicole M. Argentieri Delivers Remarks at the Computer Crime and Intellectual Property Section’s Symposium on Artificial Intelligence in the Justice Department at the Center for Strategic and International Studies
Remarks as Prepared for Delivery
Good morning. I’m Nicole Argentieri, Principal Deputy Assistant Attorney General and Head of the Criminal Division. I’d like to thank the Center for Strategic and International Studies and our own Computer Crime and Intellectual Property Section, CCIPS, for the opportunity to address this symposium.
This is a timely discussion. The promises and perils of artificial intelligence are top of mind for the Criminal Division and for the Department of Justice more broadly. As I expect others will discuss today, these new technologies offer important opportunities to enhance our investigations and for law enforcement to become more effective and efficient in protecting the public and upholding the rule of law. The department has already deployed AI to triage reports about potential crimes, connect the dots across large datasets, and identify the origin of seized narcotics.
As with any transformative technology, AI also presents risks for misuse, especially as generative AI makes it easier for criminals to commit crimes and harder for all of us — law enforcement and civilians alike — to know what is real and what is not. For example, criminals can take advantage of the trust we place in a family member’s voice or a celebrity’s image to mislead and defraud people out of their life’s savings. Predators have used AI to turn innocent pictures of real children into child sexual abuse material. And cybercriminals can exploit open-source AI models to create chatbots that write phishing emails and malicious code.
Criminals have always sought to exploit new technology. While their tools may change, our mission does not. The Criminal Division will continue to work closely with its law enforcement partners to aggressively pursue criminals — including cybercriminals — who exploit AI and other emerging technologies and hold them accountable for their misconduct. And as Deputy Attorney General Monaco has announced, where misconduct is made significantly more dangerous by misuse of AI, department prosecutors will seek stiffer sentences.
To reaffirm our commitment and articulate our strategic approach to combatting cybercrime and other offenses enabled by emerging technologies, I am excited to announce that, today, the Criminal Division is releasing a new Strategic Approach to Countering Cybercrime. It’s easy to find on the CCIPS website, cybercrime.gov.
The Strategic Approach to Countering Cybercrime emphasizes the division’s focus on using all tools to disrupt criminal activity and hold criminal actors accountable, developing law and policy to prevent and prosecute cybercrime, and promoting cybersecurity through capacity building and public education. It also highlights the division’s expertise in collecting and using electronic evidence.
CCIPS is a leader in the division’s and the department’s efforts to combat technology-enabled crime — including crime facilitated by artificial intelligence. As set out in the Strategic Approach, CCIPS — working with other offices across the division and the department and with our domestic and international law enforcement partners — is focused on pursuing three critical goals:
Leading the department in deterring and disrupting cybercrime, such as ransomware schemes, through criminal prosecutions, seizures of criminal infrastructure, and other enforcement actions targeting the most significant cyber-criminal activity;
Ensuring the department has effective tools and policies to combat cybercrime, puts victims first, and protects civil rights in its cybercrime enforcement work; and
Promoting national cybersecurity and the government’s ability to address cybercrime through capacity-building, public education, and information sharing.
While the Strategic Approach is new, the Criminal Division has long been focused on combatting cybercrime and crimes enabled by emerging technologies. I would like to take this opportunity to highlight some of the work we have already done to advance each of the Strategic Approach’s three goals.
First, our Strategic Approach emphasizes the importance of targeting the most significant cybercrime actors.
The division has a long track record of successful cyber disruptions and prosecutions against prolific cybercrime groups. In just the past year, the Criminal Division has partnered with the FBI and other investigative agencies to conduct highly successful enforcement activity against ransomware actors, groups that infect victims’ computers with malicious software, and cryptocurrency money-laundering operations.
These actions included disrupting notorious ransomware groups like LockBit and AlphV/Blackcat, which were the world’s first and second most prolific ransomware variants and collectively targeted over 2,000 victims. Working with the U.S. Attorney’s Office for the District of New Jersey, the FBI, and partners around the world, we seized control of LockBit’s infrastructure, charged its creator and administrator, and convicted key LockBit affiliates. In partnership with CCIPS and the U.S. Attorney’s Office for the Southern District of Florida, the FBI developed and implemented a decryption tool that saved multiple victims from ransom demands by AlphV/Blackcat totaling approximately $68 million.
We also dismantled botnets — networks of computers infected with malicious software and, without their owners’ knowledge, controlled by criminal actors. Earlier this year, we worked with the U.S. Attorney’s Office for the Eastern District of Texas and the FBI to dismantle the 911 S5 botnet, which had infected millions of residential computers worldwide. The botnet’s administrator generated millions of dollars by selling access to the infected computers to cybercriminals, who used the computers to commit billions of dollars in fraud, carry out cyberattacks, access child exploitation materials, and make bomb threats.
And, alongside the U.S. Attorney’s Office here in Washington, we prosecuted cryptocurrency money laundering operations, including convicting at trial the operator of Bitcoin Fog, the longest-running bitcoin laundering service on the darknet.
While we are proud of these successes, we know our work is not done. We remain especially vigilant about the next generation of technology that criminals are employing to commit cybercrime. Because bad actors are already exploiting AI for criminal purposes.
Just a few weeks ago, the Criminal Division’s Child Exploitation and Obscenity Section (CEOS) and the U.S. Attorney’s Office in Alaska charged an Army soldier stationed in Anchorage who allegedly used AI to morph pictures of real children — including children he knew — into images depicting their violent sexual abuse. According to court documents, the defendant used online AI chatbots to create these images.
And in May, following a tip from a major social media company to the National Center for Missing and Exploited Children, CEOS indicted a Wisconsin man for producing thousands of illicit images of prepubescent minors with generative AI and distributing them to a minor. According to court documents, the defendant generated these images wholly through AI, using specific, sexually explicit text prompts. Using AI to produce sexually explicit — and increasingly photorealistic — images of children is illegal, and we will not hesitate to hold accountable those who possess, produce, or distribute AI-generated child sexual abuse material.
Second, our Strategic Approach also emphasizes the Criminal Division’s role in ensuring that the department has effective tools to combat cybercrime and that we protect civil rights in cybercrime investigations.
The investigations and prosecutions I’ve just described illustrate that the criminal misuse of AI is not only a domestic problem. It is also a global concern. Cybercrime is increasingly transnational in nature. Criminals, the infrastructure they use to carry out their criminal schemes, evidence of their misconduct, and victims of criminal schemes can be — and often are — located in other countries.
We have always relied on international partnerships in our fight against cybercrime. That is why we have designated an experienced prosecutor in CCIPS to act as the Cybercrime Operations International Liaison, or COIL, to build and strengthen relationships with key foreign partners and coordinate major international cyber operations.
It is also why the Criminal Division was pleased to participate in negotiating the first international agreement encouraging governments to use AI in responsible ways that respect civil rights. The Council of Europe convened these negotiations, which resulted in a first-of-its-kind treaty. The treaty provides an opportunity for rights-respecting governments to set forth a shared baseline for how we will use AI in a way that is consistent with respect for human rights, democracy, and the rule of law. It codifies key principles related to AI such as transparency, accountability, non-discrimination, reliability, and privacy; it establishes minimum risk management practices; and it establishes a forum for like-minded democracies to coordinate on AI’s impact. I could say more, but we are lucky to have Thomas Schneider here with us today to further discuss the treaty and the need for global rules around government use of AI that align with our shared values.
Of course, the AI treaty was not the only recent effort undertaken by the Criminal Division to memorialize global rules that align with civil rights. I also could not be more proud of the Criminal Division’s role in negotiating and drafting the U.N. Convention Against Cybercrime. The Convention was recently advanced by consensus at the United Nations by more than 140 member states. If the General Assembly ultimately adopts the Convention and member states join it, the Convention would significantly advance the department’s ability to fight cybercrime and crimes involving child sexual abuse material, while also providing unprecedented protections for human rights.
The Convention would expand the global fight against cybercrime and crimes against children — and deal a blow to cybercriminals and nations that give them refuge. By expanding the United States’ ability to obtain evidence and extradite defendants from foreign countries, the Convention would make it much harder for Russia, China, and Iran to shield cybercriminals, including state-sponsored hackers, from the U.S. criminal justice system. Under the Convention, we would have better access to evidence located abroad, and cybercriminals would have fewer places to hide. The Convention would also enhance our ability to investigate and prosecute those who possess, distribute, and create child sexual abuse materials, including materials that sexually exploit American children. While our adversaries had hoped for an agreement that would enshrine authoritarian values, the Convention would do the opposite: it would codify in international law a requirement that cooperation under the Convention may take place only with respect to cases that do not suppress human rights or fundamental freedoms.
I look forward to continuing to work with our interagency and international partners in the global fight against cybercrime, which is one of the most pervasive challenges of our time, affecting communities around the world.
Finally, the Strategic Approach’s third goal is focused on promoting cybersecurity through capacity building, public education, and information sharing.
Among the many things the Criminal Division is doing to advance cybersecurity, we are actively building relationships with AI companies to better understand how criminals are exploiting AI and identify ways that the private sector can work with the department to combat criminal activity on their platforms. These relationships are essential — the private sector has a critical role to play in addressing the criminal misuse of AI.
I am encouraged to see that many companies developing AI products are proactively working to do so with safety and security in mind. Last year, the government secured voluntary commitments from major developers of generative AI models to promote the safe, secure, and transparent development and use of their technology. These commitments contemplate investments in cybersecurity and security testing, including facilitating third-party discovery and reporting of vulnerabilities in their AI systems.
The Criminal Division has long recognized the importance of security research, and we have encouraged vulnerability disclosure programs for exactly this reason. Encouraging and facilitating third-party discovery and vulnerability reporting has been an important way of bolstering security. To that end, entities in the public and private sector have adopted and expanded mechanisms for reporting vulnerabilities that are discovered by security researchers and penetration testing companies.
Much of this research, even by third parties, can be lawful — when it is conducted in good faith and without doing harm. That is why we supported CCIPS’s recommendation that the United States Copyright Office consider clarifying its exemption for good-faith security research, to ensure the exemption applies to AI systems and similar algorithmic models. That would include research into bias and other harmful and unlawful outputs of such systems.
Companies that have not signed onto the White House’s voluntary commitments for leading AI companies, which include facilitating third-party discovery and vulnerability reporting, should consider implementing a vulnerability disclosure program or extending existing programs to cover their new AI products. In fact, independent research on the functioning and security of AI systems — often referred to as “AI red-teaming” — will be essential to ensuring the integrity and safety of AI systems, in much the same way that computer security research more broadly has helped to protect the integrity of computer systems and networks.
AI red-teaming has an additional important role to play beyond ensuring the security of AI systems. It can also help protect against discrimination, bias, and other harmful outcomes. As AI becomes more prevalent in our lives, it is critical that we do not let it undermine our shared national principles of fairness and equity. Good-faith research can help identify systems whose operations or outputs are unsafe, inaccurate, or ineffective for their intended uses. And good-faith research can protect against potentially serious harms to individual rights.
When good-faith research is done in the public interest, it should be encouraged and supported. Fostering the responsible use of vulnerability testing and reporting is a crucial step toward mitigating the harms that might arise from AI systems. We stand ready to support it.
That is why I am pleased to announce that the Criminal Division plans to take another step in support of AI research. CCIPS is hard at work preparing a revised Vulnerability Disclosure Framework, which will update advice that we published in 2017 to help public and private organizations create effective vulnerability disclosure programs.
The original framework described how to build vulnerability disclosure programs for IT systems that accounted for issues that might arise under the Computer Fraud and Abuse Act. And it minimized legal jeopardy for security researchers. We are revising our framework to address the reporting of vulnerabilities for AI systems and to contemplate issues that might arise under intellectual property laws as well.
As we work on updating this document, the Criminal Division, along with other department components, will engage with external stakeholders, including researchers and companies, to solicit feedback and hear any concerns they may have about the potential applicability of criminal statutes to good-faith AI red-teaming efforts. Although we believe that good-faith research can coexist with the statutes we enforce in the Criminal Division, we are just at the beginning of the AI revolution. There is much that is still unsettled in the world of AI, from some of the core terminology to the technical details of AI systems and how research into them is most effectively conducted.
Let me conclude where I began. As Deputy Attorney General Monaco has said, AI is a double-edged sword. It can be used to detect, disrupt, and deter criminal activity. But it can also facilitate criminal activity by bad actors who exploit it and lower the barriers to entry for criminals.
We must be vigilant in understanding this rapidly evolving technology. The Criminal Division stands ready to take on this role by leading the department’s efforts to combat the criminal misuse of AI, to uphold the rule of law both at home and abroad when it comes to governments’ use of AI, and to facilitate partnerships with the private sector to ensure that public-interest research advances, while malicious actors are held accountable.
I am glad you are all here for this important discussion. Thank you again to CCIPS, to CSIS, and to all of the speakers and participants for your work on this event. I look forward to the discussion today and in the months and years ahead.