
Altered photos from Centennial Airport runway incident spark AI transparency debate

South Metro Fire says edits were for privacy; experts warn disclosure is key

CENTENNIAL, Colo. — Edited photos of a small plane that ran off the runway at Centennial Airport are raising questions about transparency in how public safety agencies share images and whether artificial intelligence played a role.

On Sunday, South Metro Fire Rescue responded to a Cessna 172 that ran off the runway. Crews evaluated the pilot, and the agency later posted several photos of the scene on social media.

Often, public agencies obscure aircraft tail numbers to protect privacy.

But in the images from Sunday’s incident, parts of the plane appear altered beyond simply obscuring numbers, including shapes on the tail and a shadow visible in one image but missing in another.

Watch Jaclyn Allen's full report below on the slippery slope when public safety agencies edit photos.

AI infiltrates public agencies

Christopher Jennings, chair of journalism and media production at Metropolitan State University of Denver, reviewed the images and said they raise questions about transparency.

“We looked at a couple photos that had a shadow compared to others that didn't have a shadow. And I'm guessing that AI just took the liberties to make those changes,” he said.

In a statement, South Metro Fire Rescue denied using artificial intelligence. The agency said “the software Adobe Lightroom was used solely to remove the aircraft’s identifying tail number to protect privacy. No other elements of the image were changed.”

Jennings said sometimes people don’t realize AI features are embedded in common editing programs.

“I don't know if they knew — sometimes people don't realize that AI is being used when they use a program like Lightroom,” Jennings said, adding that while the alterations are minor, the issue is serious. “We often take these things for granted, even minor things can end up being a major problem down the road.”

Similar incidents have made headlines elsewhere.

Last year, a police department in Maine apologized after an officer used an AI tool to add the department’s logo to a photo from a drug arrest, inadvertently altering evidence in the image.

In Connecticut, a state agency posted a hunting photo where AI editing added an extra finger to the subject.

And earlier this year, the White House shared a digitally altered image of a civil rights attorney’s arrest, prompting renewed calls for transparency in the use of AI‑generated visuals.

Casey Fiesler, professor of information science at the University of Colorado Boulder, said incidents like this underscore why disclosure matters.

“Many AI editing tools are basically designed to generate pixels that are visually plausible, but might not represent what is really there. This is similar to how language models like ChatGPT generate probable responses, not necessarily accurate ones,” Fiesler said in an email.

“If AI editing resulted in fabricating visual details, then that’s a factual misrepresentation — presumably on an official public record — even if it was unintentional. This is why it’s important to disclose the use of generative AI even in processes like editing — especially because, yes, I think incidents like this can erode public trust,” she wrote.

Fiesler added that there is “a reasonable chance that whoever used the editing tool had no idea that it was generating new visual content rather than simply obscuring what was there.”

Denver7 Investigates has asked South Metro Fire to explain why the number was removed rather than blurred or blacked out, which would make the edit obvious to viewers.

Below is SMFR's full response:

"South Metro Fire Rescue’s highest priority is protecting the safety and privacy of the community members we serve. We do not use artificial intelligence (AI) tools to digitally alter images or modify media shared with the public. The images from the aircraft emergency at Centennial Airport on April 19, 2026 were not altered using AI. In accordance with our long‑standing communications practices and Health Insurance Portability and Accountability Act (HIPAA) requirements, the software Adobe Lightroom was used solely to remove the aircraft’s identifying tail number to protect privacy. No other elements of the image were changed. We remain committed to transparency in our communications and equally committed to safeguarding patient and citizen privacy on every incident we respond to."

