Kathrine Nero
Chances are within the last few days you’ve seen an image or a video you weren’t quite sure was real. Is it artificial intelligence? A deepfake? Asking those questions is the first step.
We’ve entered the era of synthetic truth, where deepfakes and AI-generated content are muddying the waters between fact and fiction. And while this may sound like the plot of a Black Mirror episode, it’s a very real, very current problem.
The question now is: Can journalism — especially local journalism — keep up?
What Is a Deepfake, Anyway?
Let’s back up. A deepfake is video or audio that has been digitally manipulated to make someone appear to say or do something they didn’t. Thanks to powerful AI tools, creating these fakes no longer requires Hollywood-level tech or expertise. Anyone with the right app and enough motivation can generate a convincing fake in minutes.
In January 2024, a deepfake robocall impersonating Joe Biden made national news. It urged voters in New Hampshire to “stay home” from the primary. The voice sounded like him. The timing was perfect. The goal? Suppress votes through confusion. That wasn’t a fringe stunt. It was a glimpse of what’s coming.
Now imagine that kind of tactic at a local level — a fake video of a Cincinnati mayoral candidate making a controversial statement days before an election. Or a doctored news clip suggesting a city council member said something offensive. Without careful scrutiny and fast correction, damage like that could spread before anyone knows it’s fake.
Journalism vs. Generative Chaos
Here’s the good news: Journalists are adapting.
Some are learning forensic media skills, using tools to spot the tiny glitches and metadata trails that expose a deepfake. Others are working with AI in a responsible way, using it to transcribe meetings faster or analyze public records more efficiently, so they can spend more time investigating.
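For readers curious what “metadata trails” actually means in practice, here is a minimal sketch in plain Python (the function name and sample bytes are illustrative, not a real verification tool). It checks whether a JPEG file even contains an EXIF metadata segment — AI image generators often strip or never write camera metadata, so its absence is one weak signal a forensic reviewer might note, never proof on its own:

```python
def find_exif_segment(data: bytes):
    """Scan JPEG segment markers for an APP1/Exif block.

    Returns the byte offset of the EXIF segment, or None if the file
    has no EXIF data (or isn't a JPEG). A missing block is only a hint:
    screenshots and social-media re-uploads also strip metadata.
    """
    if data[:2] != b"\xff\xd8":          # SOI marker: not a JPEG at all
        return None
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:              # malformed segment structure
            return None
        marker = data[i + 1]
        if marker == 0xD9:               # EOI: end of image, no EXIF found
            return None
        length = int.from_bytes(data[i + 2:i + 4], "big")
        if marker == 0xE1 and data[i + 4:i + 10] == b"Exif\x00\x00":
            return i                     # APP1 segment carrying EXIF
        i += 2 + length                  # skip to the next segment
    return None

# Synthetic byte strings (not real photos): one with a stub EXIF
# segment, one with no metadata segment at all.
with_exif = b"\xff\xd8\xff\xe1\x00\x08Exif\x00\x00\xff\xd9"
without_exif = b"\xff\xd8\xff\xdb\x00\x04\x00\x00\xff\xd9"

print(find_exif_segment(with_exif))      # offset of the EXIF block
print(find_exif_segment(without_exif))   # None: no metadata present
```

Real forensic work goes much further — error-level analysis, lighting inconsistencies, provenance standards like C2PA — but even this toy check illustrates the idea: files carry traces, and tools can read them.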
But the real power lies in journalistic skepticism. The best reporters question everything. They verify, re-verify, and then explain what they’ve found in clear, plain language. This is especially true for local journalists, who know their communities and can spot when something doesn’t add up. They’re the ones who know how a council member speaks, or whether a certain policy proposal sounds like something a candidate would say. That context is everything.
The Role of the Public: Don’t Just Consume — Think
But this isn’t the responsibility of journalists alone. Healthy skepticism can stop misinformation from spreading, and that’s on all of us. As traditional media has morphed into social media, our consumption can’t be blind anymore. We need to ask questions and verify when something doesn’t quite feel right. Bottom line: we have to take responsibility as consumers of information.
Don’t assume a video is real because it looks real. Don’t trust a screenshot just because it came from a friend. Do you know where they got it? Is it being reported anywhere else? If not, why?
Journalists can’t fight this alone. Democracy, after all, depends on a well-informed public. We have the tools right there in the palm of our hand. The very device that brings us sometimes questionable information is also the solution to figuring out if that information is truthful.
And if we don’t support reporters — by reading, subscribing, sharing, and holding them accountable — the deepfakes will win. Not because they’re perfect. But because we stopped asking whether they were real in the first place.
So the next time you see something shocking, ask, “Has anyone credible reported this?”
If not, stop before you share. The truth — and our democracy — might just depend on it.