BOULDER — In our modern society, we are surrounded by video. For criminal cases, video evidence is often critical in the courtroom.
But the authors of one local study believe the U.S. criminal justice system is not adequately prepared for the rise of artificial intelligence (AI) and how it could impact video evidence.
The inaugural report from the newly founded Visual Evidence Lab at the University of Colorado Boulder (CU Boulder) grew out of a discussion earlier this year in which experts analyzed the challenges and potential solutions surrounding AI in the courtroom.
The report begins with a look at the scope of the issue, citing a statistic from the Bureau of Justice Assistance at the U.S. Department of Justice that estimated roughly 80% of criminal cases in 2016 included video as evidence. Despite that prevalence, the report's authors said the country has no unified guidance on handling video evidence.
“Video can be critical evidence in civil and criminal trials, but short of consistent guidelines for how we're using the materials, we're risking unfair renderings of justice," said Sandra Ristovska, the founding director of the Visual Evidence Lab and an associate professor of Media Studies at CU Boulder. "There is not one easy, bulletproof solution, right? And the problem needs to be addressed at several layers.”
The report continues to claim that "the legal system’s unregulated approach to video evidence risks discrepant, and even erroneous, interpretations."
The study highlighted three central concerns regarding AI and the criminal justice system: the difficulty of detecting and verifying media created by AI, uncertainty about what kind of technological enhancement is permissible in court, and the possibility that the "deepfake defense" — in which authentic footage is challenged in court as fake — becomes prevalent.
"Now, in a court case, we may see jurors potentially giving little weight or no weight at all to authentic footage that could be crucial evidence," said Ristovska. "We need to have a shared understanding of truth and facts to keep living in a democratic society and to keep really holding on to this important idea about equal and fair justice.”
Ristovska fears the rise of AI could cast more doubt on the reliability and integrity of any image, even authentic ones.
“This is why consistent guidelines are of the utmost importance to ensure that courts recognize and uphold civil rights and human rights in the age of AI," Ristovska said.
Ultimately, that concern is key to the solutions recommended by the report, which identified four priorities: developing an infrastructure that stores and allows access to evidentiary videos, training judges, giving jurors instructions rooted in science, and implementing safeguards for cases in which AI-based evidence is admitted into court.
Joel Zink, chief deputy district attorney in Arapahoe County, agreed that some form of regulation is needed around AI and the criminal justice system.
“Frankly, we're already past the point where we should have done this, but we should have in place policies around the use of AI," Zink said. "I think that that's going to require a lot more collaboration and partnership and bridge-building with state legislatures, federal government, with local law enforcement agencies."
Zink called video evidence one of the best ways for a jury to make a determination about a case, but noted that investigators must sort through hours and hours of such footage in any one case. In that respect, he believes AI can be helpful.
“Any kind of tool that would allow my prosecutors, or me if I'm working on a case, to be able to more quickly identify key points within that video, hours and hours of it, that's something that we have to be looking at to harness," Zink said. "On the other side of that coin, we've seen, unfortunately, and I can't talk about any active cases, but we've seen situations where child predators would create what appears to be very realistic life-like child sexual abuse material, and they're generating these things based on images of faces of real children in our community.”
Within the 18th Judicial District, Zink has not yet seen an attempt to admit AI-altered video into a courtroom as evidence. But he is worried about the "deepfake defense" described in the CU Boulder report.
"We should all be concerned that deepfakes are happening and that that could be misused, but it's also equally concerning to me as a prosecutor that juries would start to not take legitimate video evidence for what it is," said Zink. "It's been some of the strongest evidence in our cases for a very long time. Now, we want to be able to continue to rely on it.”
While Zink believes AI can be incredibly useful within the criminal justice system, he has also seen instances of it being detrimental elsewhere in the country.
“It has happened across the country in a number of places, and with enough frequency that we should all be concerned. As lawyers, people are using large language models to try to generate their motions and briefs for the court, and then, without properly checking all of the citations for a reference to a particular case, they're then submitting those to the court," Zink explained. "The court tries to go check the citation. Turns out it's just not a real case. Like that, the citation doesn't exist... To submit a motion to the court without actually checking your citations is a dereliction of your duties as an attorney.”
Zink believes AI is something the criminal justice system must adapt to — both the good and the bad.
"I really do think that we should be cautious and we should institute rules and regulations. We also shouldn't be afraid of this," Zink said. “If we can find ways to use this as a society, I think we have incredible possibilities ahead of us.”
