Cobalt Strike may be a double-edged sword, but pentesting tools are invaluable, says expert

In an article by Damien Black published on CyberNews, Greg Hatcher, co-founder of White Knight Labs, discusses the growing concern over cybercriminals using Cobalt Strike, a widely recognized penetration testing tool, for malicious activity. The article highlights the challenges of stopping malware built with Cobalt Strike and explores how advances in artificial intelligence and machine learning may shape malware development. Hatcher also points to other dual-use tools, such as BloodHound and Burp Suite, that can serve both attackers and defenders. He emphasizes the importance of strong cybersecurity practices, such as implementing multifactor authentication and keeping operating systems updated, to prevent cyberattacks. The article further cautions that cybercriminals are increasingly incorporating code from pentesting tools into their custom malware, underscoring the need for constant vigilance across the cybersecurity industry. Read full article
How to prevent deepfakes in the era of generative AI

An article by George Lawton published on TechTarget discusses the growing threat of deepfake attacks in the era of generative AI and offers recommendations for preventing and detecting them, featuring insights from Greg Hatcher, co-founder of White Knight Labs. The article stresses the importance of developing strong security procedures, including multistep authentication processes, and staying current on the latest tools and technologies to thwart increasingly sophisticated deepfakes. Hatcher points to telltale signs of audio deepfakes, such as choppy sentences, unusual word choices, and abnormal inflection or tone of voice, as well as the use of forensic analysis and specialized reverse image search software to detect manipulation or alteration. The article cites examples of deepfake attacks, including a $243,000 bank transfer triggered by an impersonated phone request and a $35 million fraudulent bank transfer timed to coincide with a company acquisition. It also warns of the likely rise of deepfakes as a service and the need for collaboration between the public and private sectors to promote truth in social engagement. By staying vigilant and informed, businesses and individuals can better protect themselves against the deceptive power of deepfakes in this rapidly advancing era of generative AI. Read article