An article by George Lawton published on TechTarget discusses the growing threat of deepfake attacks in the era of generative AI and offers recommendations for preventing and detecting them, featuring insights from Greg Hatcher, co-founder of White Knight Labs. The article stresses the importance of strong security procedures, such as multistep authentication, and of staying current on the tools and technologies needed to counter increasingly sophisticated deepfakes.
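The article does not spell out a particular mechanism for multistep authentication, but one common pattern is to require an out-of-band one-time code before acting on any voice or email request. The sketch below is a minimal illustration of that idea, assuming a standard TOTP (RFC 6238) second factor; the function names, the placeholder secret, and the approval flow are assumptions for the example, not details from the article.

```python
# Hypothetical sketch: out-of-band one-time-code check (RFC 6238 TOTP)
# before honoring a high-value request received by phone or email.
import hashlib
import hmac
import struct
import time


def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Compute the current TOTP value: HMAC-SHA1 over the time-step counter."""
    counter = int(time.time() // step)
    msg = struct.pack(">Q", counter)                       # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                             # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


def verify_request(caller_code: str, shared_secret: bytes) -> bool:
    """Second authentication step: the requester must supply a valid
    one-time code from a device enrolled before the request was made."""
    return hmac.compare_digest(caller_code, totp(shared_secret))


if __name__ == "__main__":
    SECRET = b"pre-shared-secret-from-enrollment"          # hypothetical placeholder
    print("approved" if verify_request(input("One-time code: "), SECRET) else "rejected")
```

The point of the second step is that a convincing voice alone is never sufficient: approval also depends on something the impersonator cannot clone over a phone line.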
Hatcher points to telltale signs of audio deepfakes, such as choppy sentences, unusual word choices, and abnormal inflection or tone of voice, and recommends forensic analysis and specialized reverse-image-search software to determine whether an image has been manipulated or altered. The article cites real-world examples of deepfake attacks, including a $243,000 bank transfer triggered by an impersonated phone request and a $35 million fraudulent bank transfer timed to coincide with a company acquisition.
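Neither the article nor Hatcher ties the forensic step to a specific tool, but one widely used technique for spotting local alterations is error level analysis (ELA): recompress the file at a known JPEG quality and look for regions whose error level stands out from their surroundings. The sketch below is a minimal illustration of that idea using Pillow; the file names and the crude summary score are assumptions for the example, not anything cited in the article.

```python
# Hypothetical sketch: error level analysis (ELA) with Pillow to flag
# image regions that may have been locally edited or spliced.
import io

from PIL import Image, ImageChops


def error_level_analysis(path: str, quality: int = 90):
    """Re-save the image at a fixed JPEG quality and diff it against the
    original; areas that recompress differently (brighter in the diff)
    can indicate local manipulation."""
    original = Image.open(path).convert("RGB")

    # Recompress to an in-memory JPEG at a known quality.
    buffer = io.BytesIO()
    original.save(buffer, format="JPEG", quality=quality)
    buffer.seek(0)
    recompressed = Image.open(buffer)

    # Per-pixel absolute difference between original and recompressed copy.
    diff = ImageChops.difference(original, recompressed)

    # Crude summary statistic: the largest per-channel difference found.
    extrema = diff.getextrema()                 # one (min, max) pair per channel
    max_diff = max(high for _low, high in extrema)
    return diff, max_diff


if __name__ == "__main__":
    ela_image, score = error_level_analysis("suspect_photo.jpg")   # hypothetical file
    ela_image.save("suspect_photo_ela.png")
    print(f"Max error level: {score} (inspect bright, localized regions in the ELA image)")
```

A single score is not proof of tampering; in practice, analysts inspect the ELA image visually and combine it with other signals such as metadata checks and reverse image searches.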
The article also warns of the likely proliferation of deepfakes as a service and the need for collaboration between the public and private sectors to promote truth in social engagement. By staying vigilant and informed, businesses and individuals can better protect themselves against the deceptive power of deepfakes in this rapidly advancing era of generative AI.