A LastPass employee was the target of a phishing attempt that leveraged Artificial Intelligence (AI) deepfake technology to impersonate the company’s CEO. Deepfakes use generative AI and machine learning to create highly realistic fabricated videos and audio that can be difficult to distinguish from the genuine article.
Screenshot of the attack attempt (Source: LastPass)
According to a blog post by Mike Kosak, a Senior Principal Intelligence Analyst at LastPass, the attack involved a series of calls, texts, and at least one voicemail featuring an audio deepfake of the CEO’s voice contacting the employee via WhatsApp.
“In our case, an employee received a series of calls, texts, and at least one voicemail featuring an audio deepfake from a threat actor impersonating our CEO via WhatsApp,” Kosak explained.
Fortunately, the employee did not fall for the attack, recognizing telltale signs of a social engineering attempt such as forced urgency and communications occurring outside normal business channels. The incident was promptly reported to LastPass’s internal security team.
“To be clear, there was no impact to our company. However, we did want to share this incident to raise awareness that deepfakes are increasingly not only the purview of sophisticated nation-state threat actors and are increasingly being leveraged for executive impersonation fraud campaigns,” Kosak stated.
The rapidly advancing capabilities of deepfake technologies, and their easy accessibility even to those with limited technical skills, have raised alarms about potential abuses ranging from sophisticated disinformation operations to financial fraud schemes and cyber-enabled crime.
Just a few months ago, in January, a robocall campaign in New Hampshire used an artificial voice mimicking President Joe Biden in an apparent voter suppression effort ahead of the state’s presidential primary.
As deepfakes become more realistic and easier to create, cybersecurity experts warn that organizations must ramp up employee training and awareness initiatives to help staff identify these emerging threats, which blur the line between what is real and what is fake.