In a startling development that underscores the growing threat posed by artificial intelligence in cyber warfare, United States Senator Ben Cardin, Chair of the Senate Foreign Relations Committee, was recently targeted by a highly sophisticated deepfake operation.
The incident involved an AI-generated impersonation of former Ukrainian Foreign Minister Dmytro Kuleba during a Zoom video call. Sources close to the investigation revealed that Senator Cardin’s office received an email on September 19 from an individual claiming to be Kuleba, requesting a video conference. During the subsequent Zoom call, the impersonator posed a series of politically charged questions that raised suspicions among the Senator and his staff.
The deepfake was alarming in its “technical sophistication and believability,” according to a Senate security office notice distributed to senior staff members. The notice added that the impersonator was “consistent in appearance and sound to past encounters” between the Senator and the real Kuleba.
The fake Kuleba reportedly asked provocative questions related to the ongoing conflict in Ukraine and the upcoming U.S. presidential election. One such question, as reported by sources, was: “Do you support long-range missiles into Russian territory? I need to know your answer.” These queries appeared designed to elicit potentially compromising responses from the Senator. Suspicious of the line of questioning, Senator Cardin terminated the call and promptly informed the U.S. State Department, which confirmed that the individual on the call was not the real Dmytro Kuleba. The Federal Bureau of Investigation (FBI) has since launched an investigation into the incident.
This latest incident is part of a growing trend of AI-driven cyber threats targeting political processes. The Q2 AI Cyber Insights white paper highlights several significant incidents from early 2024, including a fake robocall campaign mimicking President Biden’s voice that attempted to suppress voter turnout during the New Hampshire Primary in January. In April, a former athletic director used AI-generated audio to impersonate a school principal, broadcasting racist and antisemitic comments. In June, a convincing AI-generated deepfake video of Donald Trump was live streamed on a fake YouTube channel before a U.S. presidential debate, quickly gaining 1.38 million subscribers and promoting fraudulent cryptocurrency donations.
In response to these escalating threats, the Senate’s security office has issued warnings to other congressional offices about an “active social engineering campaign” targeting senators and their staff, aimed at discrediting victims or gaining sensitive information.
As the 2024 U.S. presidential election approaches, these events highlight the urgent need for improved cybersecurity measures and greater awareness among political figures and their staff. They also raise important questions about the future of diplomatic communications in a world where seeing and hearing can no longer be equated with believing.