Leonard Sipes.Com

Public Relations-Media Relations-Emergency Management

Deepfakes: Understanding the Risks and How to Spot Them

Reprinted with the permission of the PIO Toolkit team.

Deepfakes have become increasingly common and are starting to affect every kind of communication. Even if it seems unlikely to touch you or your agency, it’s important to understand what deepfakes are and how they could cause problems. Here are some top-level points that could help you in the future.

There are several reasons why someone might create a deepfake about your agency or a member of your leadership team. A disgruntled citizen may use a deepfake to create false information about your organization or to discredit your leadership team. Alternatively, an activist group or individual might use a deepfake to spread misinformation or propaganda that supports their own agenda.

In some cases, an upset employee or former employee might use a deepfake to seek revenge against your organization or to damage your reputation. By understanding these potential motivations, you can better prepare yourself to identify and respond to any deepfake-related risks that may arise.

What are deepfakes?

A deepfake is a type of artificial intelligence (AI) technology that is used to create realistic-looking videos, images, or audio recordings that are not real. This technology works by using machine learning algorithms to analyze and manipulate existing content, such as video footage or audio recordings, and then to generate new content that looks or sounds authentic.

The potential risks of deepfakes

Deepfakes can be used to spread misinformation or propaganda, to impersonate individuals or organizations, or to create fake news stories. For example, a deepfake video could be used to create a false impression of an event, such as a political rally or a crime scene, or to make it appear as though someone said something they didn’t actually say.

How to spot a deepfake

Fortunately, there are some steps you can take to spot a deepfake. One of the easiest ways to do this is to look for signs of manipulation, such as mismatched shadows or inconsistencies in the lighting or background. You can also look for signs of editing, such as abrupt changes in the camera angle or facial expressions that don’t quite match the audio.

Another way to spot a deepfake is to run the image through a reverse image search tool, such as Google Image Search, to see whether it has appeared elsewhere on the internet. If the image has no history anywhere else online, or if it has been posted in multiple places with conflicting captions, it could be a deepfake.
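To make the idea behind image matching a little more concrete, here is a minimal Python sketch of a perceptual "average hash" comparison, the same basic idea duplicate-detection and reverse-image-search tools use to match near-identical pictures. The tiny pixel grids are made-up stand-ins for real images; loading actual files would require an image library, which is omitted here to keep the sketch self-contained.

```python
def average_hash(pixels):
    """Return a bit list: 1 where a pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(h1, h2):
    """Count differing bits; a low distance means visually similar images."""
    return sum(a != b for a, b in zip(h1, h2))

# Hypothetical 2x2 grayscale "images" (brightness values 0-255)
original  = [[10, 200], [30, 220]]
retouched = [[12, 198], [28, 225]]   # slightly edited copy of the original
unrelated = [[200, 10], [220, 30]]   # a different picture entirely

print(hamming(average_hash(original), average_hash(retouched)))  # 0: near-duplicate
print(hamming(average_hash(original), average_hash(unrelated)))  # 4: different image
```

A real tool works on much larger images scaled down to a small grid, but the principle is the same: a suspicious image whose hash matches no known original, or matches an original published under a different caption, deserves scrutiny.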

What can you do to protect your organization?

As a public information officer, it’s important to take steps to protect your organization from the potential risks of deepfakes. One way to do this is to stay vigilant and to educate your staff about the risks of deepfakes and how to spot them.

You can also consider using digital authentication tools, such as watermarking or digital signatures, to verify the authenticity of images, videos, or audio recordings. Additionally, you can work with your IT team to monitor social media and other online channels for signs of deepfake activity and to take swift action if a deepfake is detected.

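To illustrate what digital authentication means in practice, here is a minimal Python sketch that tags a media file's bytes with an HMAC and later verifies them. The key and the content are hypothetical placeholders; a real deployment would use proper public-key signatures and managed keys rather than a shared secret in code.

```python
import hashlib
import hmac

# Hypothetical signing key held by the agency (illustration only).
SIGNING_KEY = b"agency-signing-key"

def sign(content: bytes) -> str:
    """Produce an HMAC-SHA256 tag to publish alongside the media file."""
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify(content: bytes, signature: str) -> bool:
    """Check the bytes against the published tag (constant-time compare)."""
    return hmac.compare_digest(sign(content), signature)

footage = b"original press-release video bytes"   # stand-in content
tag = sign(footage)

print(verify(footage, tag))                   # True: content is authentic
print(verify(footage + b" edited", tag))      # False: content was altered
```

The point of the sketch is the workflow, not the specific algorithm: if your agency publishes a verification tag with each official release, anyone can check whether a circulating copy has been altered.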
If a deepfake appears in the media, there are several steps you can take to mitigate its impact and protect your organization. Here are some key actions you can take:
  • Act quickly: Time is of the essence when it comes to dealing with deepfakes. The longer a deepfake is allowed to circulate online, the more damage it can cause. As soon as you become aware of a deepfake, take immediate action to address it.
  • Confirm its authenticity: Before taking any action, it’s important to confirm whether the video or image is actually a deepfake. You can do this by analyzing the footage for signs of manipulation or by using digital authentication tools, such as watermarking or digital signatures.
  • Contact the media outlet: If the deepfake has been published by a media outlet, contact them as soon as possible to inform them that the content is fake. Provide them with evidence to back up your claim, such as a statement from an expert in deepfake detection or a comparison with the original footage.
  • Release a statement: If the deepfake has been widely circulated, it may be necessary to release a statement to the public. In this statement, explain that the content is fake and provide evidence to support your claim. Be transparent and open about what steps you are taking to address the issue.
  • Monitor social media: Keep an eye on social media and other online channels for signs of deepfake-related activity. If you see any suspicious activity, take swift action to address it, such as reporting the content to the platform or contacting the user directly.
  • Educate your staff: Finally, it’s important to educate your staff about the risks of deepfakes and how to spot them. By training your team to be vigilant and proactive, you can help prevent deepfakes from causing damage to your organization in the future.

Remember that prevention is key: stay informed and be prepared to deal with deepfake-related risks before they arise.

Resources

Deeptrace: Deeptrace is a website that provides information about deepfakes and their impact on society. The website offers resources such as reports, articles, and case studies that can help you better understand the risks and potential impact of deepfakes.

The Future of Humanity Institute: The Future of Humanity Institute at the University of Oxford has published a research paper that explores the risks of deepfakes and how they can be mitigated. The paper offers insights into the potential impact of deepfakes on society and provides recommendations for policymakers and individuals.

The AI Now Institute: The AI Now Institute is a research institute at New York University that focuses on the social and political implications of artificial intelligence. The institute has published a report that examines the risks of deepfakes and offers recommendations for policymakers, technologists, and the public.

The National Institute of Standards and Technology (NIST): NIST is a U.S. government agency that provides resources and guidance on a range of topics, including cybersecurity and emerging technologies. The agency has published a report on deepfake detection that provides an overview of the current state of the technology and offers recommendations for improving detection capabilities.
