
A “fakeness score” could help people identify AI-generated content


  • New deepfake detection tool helps to crack down on fake content
  • A “fakeness score” helps users spot AI-generated video and audio
  • The tool is free to use to help mitigate the impact of fake content

Deepfake technology uses artificial intelligence to create realistic yet entirely fabricated images, videos, and audio. The manipulated media often imitates famous individuals or ordinary people for fraudulent purposes, including financial scams, political disinformation, and identity theft.

To combat the rise in such scams, security firm CloudSEK has launched a new Deep Fake Detection Technology, designed to give users a way to identify manipulated content.

CloudSEK’s detection tool aims to help organizations identify deepfake content and prevent potential damage to their operations and credibility. It assesses the authenticity of video frames by focusing on facial features and movement inconsistencies that might indicate tampering, such as unnatural transitions between facial expressions and unusual textures on faces and in the background.
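
CloudSEK has not published the internals of its analysis, but to give a rough sense of what a frame-level inconsistency check can look like, the toy sketch below (using the open-source OpenCV library, not CloudSEK’s tool) flags abrupt pixel-level changes inside detected face regions between consecutive frames. Real detectors rely on trained models rather than a simple difference threshold, so treat this purely as an illustration; the function name and threshold value are arbitrary assumptions.

# Toy heuristic, for illustration only: flag frames where the face region
# changes sharply compared with the previous frame. Requires opencv-python.
import cv2
import numpy as np

def face_region_jumps(video_path: str, threshold: float = 30.0) -> list[int]:
    """Return indices of frames whose mean pixel change inside the detected
    face region exceeds `threshold` relative to the previous frame."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
    )
    cap = cv2.VideoCapture(video_path)
    prev_face = None
    suspicious = []
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            # Compare the first detected face against the previous frame's face.
            x, y, w, h = faces[0]
            face = cv2.resize(gray[y:y + h, x:x + w], (128, 128))
            if prev_face is not None:
                diff = np.mean(cv2.absdiff(face, prev_face))
                if diff > threshold:
                    suspicious.append(idx)
            prev_face = face
        idx += 1
    cap.release()
    return suspicious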

The rise of deepfakes, and a possible solution

The tool also analyzes audio, detecting synthetic speech patterns that signal artificially generated voices. It transcribes the audio and summarizes key points as well, allowing users to quickly assess the credibility of the content they are reviewing. The final result is an overall “Fakeness Score,” which indicates the likelihood that the content has been artificially altered.

This score helps users understand the level of potential manipulation, offering insights into whether the content is AI-generated, mixed with deepfake elements, or likely human-generated.

A Fakeness Score of 70% or above indicates AI-generated content, a score between 40% and 70% is dubious and possibly a mix of original and deepfake elements, while a score of 40% or below suggests the content is likely human-generated.
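
Taken at face value, those bands translate into a simple lookup. The short sketch below is a hypothetical illustration, not code from CloudSEK; the function name and category labels are assumptions made only to show how the thresholds line up.

def classify_fakeness(score: float) -> str:
    """Map a Fakeness Score (0-100) to the bands described in the article.
    Labels and boundaries follow the article; the rest is illustrative."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    if score >= 70:
        return "AI-generated"
    if score > 40:
        return "dubious: possible mix of original and deepfake elements"
    return "likely human-generated"

# Example usage
for s in (85, 55, 20):
    print(f"Fakeness Score {s}% -> {classify_fakeness(s)}")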

In the finance sector, deepfakes are being used for fraudulent activities such as manipulating stock prices or tricking customers with fake video-based know-your-customer (KYC) verification processes.

The healthcare sector has also been affected, with deepfakes being used to create false medical records or impersonate doctors, while government entities face threats from election-related deepfakes or falsified evidence.

The media and IT sectors are equally vulnerable, with deepfakes being used to create fake news or damage brand reputations.

“Our mission to predict and prevent cyber threats extends beyond corporations. That’s why we’ve decided to release the Deepfakes Analyzer to the community,” said Bofin Babu, Co-Founder, CloudSEK.
