In July 2024, an executive at luxury sports car manufacturer Ferrari received several messages that appeared to have been sent by CEO Benedetto Vigna on the messaging and calling platform WhatsApp. The messages, which originated from an unfamiliar number, mentioned an impending significant acquisition, urged the executive to sign a nondisclosure agreement immediately, and claimed that Italy’s market regulator and the Italian stock exchange had already been informed about the transaction.
Despite the convincing nature of the messages, which also included a profile picture of Vigna standing in front of the Ferrari logo, the executive grew suspicious. In a follow-up call in which he was again urged to assist with the confidential and urgent financial transaction, the voice mimicked Vigna's southern Italian accent, but the executive noticed slight inconsistencies in tone.
Sensing that something was amiss, the executive asked the caller a question that only Vigna would know the answer to — the title of a book Vigna had recommended days earlier. Unable to answer the question, the scammer abruptly ended the call. The executive’s simple test prevented what could have been a major financial loss and reputational damage for Ferrari.
Understanding Deepfakes
The attempt to exploit Ferrari is an example of a deepfake — a highly realistic video, image, text, or voice that has been fully or partially generated with artificial intelligence, most often using a class of machine learning models called generative adversarial networks, or GANs.
Deloitte’s Center for Financial Services predicts that fraud enabled by generative AI could reach $40 billion in losses in the United States by 2027.
GANs are a type of AI model in which two neural networks compete to produce highly realistic media that mimics real individuals. One network, called the generator, creates the fake media, while the other, called the discriminator, judges how real or fake the generated content looks. This contest continues until the generator produces media so realistic that the discriminator can no longer discern that it's fake.
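The adversarial loop described above can be sketched in miniature. The toy below is an illustrative assumption, not any real deepfake pipeline: the "real data" is a single point on a number line, the generator has one parameter (the point it outputs), and the discriminator is a simple logistic classifier. The two still play the same game — the discriminator learns to score real high and fake low, while the generator shifts its output to fool the discriminator.

```python
import math

# Toy one-dimensional GAN sketch (all names and hyperparameters here are
# illustrative assumptions). Real data: a single point REAL. Generator: a
# single parameter mu, the point it outputs. Discriminator: a logistic
# classifier D(x) = sigmoid(w*x + b) estimating "probability x is real".

REAL = 4.0  # the "real" data point the generator tries to imitate

def sigmoid(z: float) -> float:
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

mu = 0.0         # generator parameter: starts far from the real data
w, b = 0.0, 0.0  # discriminator parameters
lr = 0.05        # learning rate for both players
best_gap = abs(mu - REAL)  # closest the generator ever gets to the real point

for step in range(1500):
    # Discriminator step: gradient ascent on log D(real) + log(1 - D(fake)),
    # i.e., learn to score the real point high and the generated point low.
    d_real = sigmoid(w * REAL + b)
    d_fake = sigmoid(w * mu + b)
    w += lr * ((1.0 - d_real) * REAL - d_fake * mu)
    b += lr * ((1.0 - d_real) - d_fake)

    # Generator step: gradient ascent on log D(fake) (the common
    # "non-saturating" objective), i.e., move mu to fool the discriminator.
    d_fake = sigmoid(w * mu + b)
    mu += lr * (1.0 - d_fake) * w

    best_gap = min(best_gap, abs(mu - REAL))

# As the two networks push against each other, the generator's output is
# pulled toward the real data point; best_gap records how close it got.
```

Real deepfake generators work on images or audio waveforms with millions of parameters rather than one, but the alternating structure — discriminator update, then generator update — is the same.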
Scammers generate deepfakes using large data sets that include photos, audio clips, and videos of the individual they want to impersonate. The more data that’s available, the more realistic the deepfake will appear. For this reason, celebrities, politicians, and public figures with extensive media presence are often impersonated in deepfakes.
“The MIT Sloan Management Review is a research-based magazine and digital platform for business executives published at the MIT Sloan School of Management.”