Leveraging Artificial Intelligence in the battle against Artificial Intelligence
As Artificial Intelligence grows in popularity, so does exposure to cyber threats. One reason for this is that AI-powered tools and software are becoming more accessible to the general public.
Tools for AI generation are increasingly sophisticated. According to a Kaspersky blog post, “Each new version allows for the creation of impossibly realistic-looking pictures and extremely convincing audio”. This security threat is known as Deepfake.
What is Deepfake?
Deepfake (a combination of deep learning and fake) is the synthesis of fake images, video, and sound using artificial intelligence. It works by using AI tools to manipulate existing images, audio, and video to produce a fabricated situation.
A news article by Dawn states that “Deepfakes are not only a threat to businesses but also individual users as it spreads misinformation”. Cybercriminals use deepfakes for scams, blackmail, and identity theft, and, consequently, to damage reputations.
An example of a deepfake that went viral was the picture of Pope Francis in a puffer jacket. CBS News reported that it was an artificial intelligence rendering generated using the AI software Midjourney.
Although AI-powered tools are the driving force of this security challenge, they also serve as the best tools for detecting and combating Deepfakes.
Six emerging AI detection tools to be aware of include:
- Sentinel: An AI-powered tool that automatically detects whether digital media is AI-generated. You can upload media through the website or an API; the system analyses it for AI forgery, determines whether the uploaded media is a fake, and shows a visual representation of any manipulation.
- Reality Defender: Reality Defender leverages AI to detect AI-generated threats. This tool uses multiple models to concurrently detect AI generation and manipulation in audio, video, images, and text. Results are probabilistic, rather than deterministic, which means the models don’t require watermarks or prior authentication to test for authenticity.
- Microsoft Video Authenticator: Launched by Microsoft in 2020, Video Authenticator analyzes a still photo or video and provides a percentage chance, or confidence score, that the media has been artificially manipulated.
- Sensity: This AI tool carries out a range of functions, including deepfake detection, liveness checks, fraudulent document detection, face matching, and ID document verification. Sensity’s detection technology is modeled after cybersecurity’s standard of Defense in Depth (DiD).
- Deepware: Deepware is a tool used to scan and detect deepfake videos. Its AI-powered scanner processes an uploaded video or video link to verify its authenticity or determine whether its content has been manipulated. When a “scan” request is sent to the Deepware API, it is added to a queue for processing; the uploaded video is then scanned with the Deepware AI model.
- Hive Moderation: Hive’s detector is used to detect AI-generated content. It can identify AI-generated text from ChatGPT, GPT-3, and other popular engines; detect AI-generated visual media from popular tools like DALL-E, Midjourney, and Stable Diffusion; and detect AI-generated audio files from various sources.
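Several of the tools above share the same basic workflow: a scan request is submitted (via website or API), queued for processing, and eventually scored with a probabilistic confidence value rather than a hard yes/no verdict. The sketch below illustrates that workflow in miniature. It is a hypothetical, self-contained illustration, not the real API of any tool named here; the class names, fields, and the stand-in "model" are all invented for this example.

```python
from collections import deque
from dataclasses import dataclass

# Hypothetical sketch of a queue-based detection service, loosely modeled
# on the workflow described above (submit -> queue -> scan -> score).
# Nothing here corresponds to a real vendor API.

@dataclass
class ScanRequest:
    media_id: str
    status: str = "queued"
    confidence: float = 0.0  # probability the media is manipulated

class DetectionService:
    """Toy stand-in for a deepfake-detection backend (no real model)."""

    def __init__(self, model):
        self.model = model   # callable: media_id -> confidence score
        self.queue = deque() # submitted requests wait here for processing

    def submit(self, media_id: str) -> ScanRequest:
        req = ScanRequest(media_id)
        self.queue.append(req)
        return req

    def process_next(self) -> ScanRequest:
        # Pop the oldest request and score it with the model.
        req = self.queue.popleft()
        req.confidence = self.model(req.media_id)
        req.status = "done"
        return req

# Usage: a fake "model" that flags one known file as likely manipulated.
fake_model = lambda media_id: 0.93 if media_id == "clip.mp4" else 0.04
svc = DetectionService(fake_model)
svc.submit("clip.mp4")
result = svc.process_next()
print(result.status, result.confidence)  # done 0.93
```

The probabilistic score mirrors how tools like Reality Defender and Video Authenticator report results: the caller decides what confidence threshold counts as a detection.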
Conclusion
TechRound in a blog post stated that “Accurately recreating eye movements can be challenging. Users should look out for unnatural eye movements such as not blinking. Additionally, pay attention to signs of poor lip synchronization and peculiar or robotic pronunciation of words, especially if the individual displays jerky or disjointed movements”.
Individuals, businesses, and cybersecurity experts should invest in a proper AI-powered detection tool to stay on top of their game in deepfake detection.