How To Deal With AI Deep Fakes And Rumor Control
Article
I had a request from a reporter regarding the ability of law enforcement or government to identify artificial-intelligence deep fake photos, video, or audio. I reminded the reporter that deep fakes have been with us for decades. Yes, AI-generated media makes the government’s job of establishing the validity of media placed online harder, but every state has the capacity to accurately gauge what’s offered via social media and give the public the best possible information.
I have 35 years of experience directing public information for national and state criminal justice agencies, including law enforcement. The Maryland Emergency Management Agency was part of the Maryland Department of Public Safety when I served as its director of public information.
The issue isn’t AI, although that makes the mission of providing accurate information more challenging.
There have always been obstacles to providing accurate information to the public. Photoshopped images, commercially available stock video footage, and green-screen technology have been with us for decades. I maintain that anyone with a minimal technology background can create very realistic photos, videos, and audio mimicking television or radio productions and share them on social media platforms.
For example, if the issue is floodwaters threatening a dam, circulating a video simulating the dam bursting via social media can create immense panic.
I have the tools in my home to create a very realistic video using a news set with footage instantly available from commercial sites. I can create massive panic.
The topic or situation doesn’t matter; if I want to create a disturbance or induce hysteria, I can do it.
The Real Issue Is Rumor Control
This is why the public affairs section of any police, criminal justice, or emergency management agency needs to be prepared. Addressing, validating, or dispelling rumors, false videos, fake audio, or doctored photos is a vast undertaking that requires public information officers and experts from a variety of agencies.
As the director of public information for the Maryland Department of Public Safety and the Maryland Emergency Management Agency, I ran FEMA-evaluated practice sessions focusing on nuclear power plant meltdowns or chemical weapons incidents to make sure that the right information was available. But the type of event didn’t matter; the state offered its resources to any requesting agency, and I could call on up to 30 experts from a variety of agencies to serve as my rumor control team.
One person was in charge, and that person’s primary job was to insist on verification of any suspected misinformation.
We monitored the news media and social media for any sign of false information or suspicious media being released.
Once verified or clarified, I, as the primary spokesperson, would release that information to the public after consultation with the agency involved.
We would also send public information officers to the scene of any emergency, so we would have people who could verify information immediately.
Again, a process exists in every state to track down false or misleading information. Yes, AI makes it more challenging, but we had people trained in verifying false images, misleading videos, and audio.
Conclusion
Few in the public understand what states do to verify information and the process of getting verified facts to the public. We know the reporters, and we know how to reach them via the Associated Press or direct contacts.
Any agency in the country has access to its state emergency management function and the resources at its disposal.
In Maryland, we trained the team twice a year, and the primary spokespeople were often FEMA-trained. Within my department, we trained our part-time spokespeople to address emergencies within our 12 divisions, and we also used our part-time PIOs to assist other agencies.
It’s all part of a very deliberate plan to put the right people in the right place to make quick judgments in the public’s best interest. It’s more than possible to identify false AI-generated content, principally through our network of experts or by having people stationed at the scene.
A video or photo of a major dam bursting doesn’t do sustained damage if you have personnel on the scene who can verify that it’s fake.
It’s the same with AI versions of any media.