The problem of deepfakes is real and growing. That is why We4C's UnCognito offers a cutting-edge solution for detecting audio deepfakes in real time and at scale.
Phishing isn't limited to your inbox anymore.
By Lance Whitney, ZDNET, March 4, 2025
Cybercriminals and hackers employ a variety of methods to access and steal sensitive information from individuals and organizations. One increasingly popular approach is vishing, or voice phishing. Here, the attacker tricks someone into sharing account credentials or other information through a simple phone call. According to the latest data from security firm CrowdStrike, these types of attacks have been skyrocketing.
In its 11th annual 2025 CrowdStrike Global Threat Report, the security provider revealed that vishing attacks jumped 442% in the second half of 2024 compared with the first half. Throughout the year, CrowdStrike Intelligence tracked at least six similar but distinct campaigns in which attackers pretending to be IT staffers called employees at different organizations.
In these particular campaigns, the scammers tried to convince their intended victims to set up remote support sessions, typically using the Microsoft Quick Assist tool built into Windows. In many of these, the attackers used Microsoft Teams to make the phone calls. At least four of the campaigns seen by CrowdStrike used spam bombing to send thousands of junk emails to the targeted users as a pretext for the alleged support call.
The type of vishing used in these attacks is often known as help desk social engineering. Here, the cybercriminal posing as a help desk or IT professional stresses the urgency of the call as a response to some made-up threat. In some cases, the attacker requests the person's password or other credentials. In other cases, such as the ones documented in the report, the scammer tries to gain remote access to the victim's computer.
Another tactic seen by CrowdStrike is callback phishing. Here, the criminal sends an email to an individual over some type of urgent but phony matter. This could be a claim for an overdue invoice, a notice that they've subscribed to some service, or an alert that their account has been compromised. The email contains a phone number for the recipient to call. But naturally, that number leads them directly to the scammer, who tries to con them into sharing their credit card details, account credentials, or other information.
Because these attacks are usually aimed at organizations, ransomware is another key component. By gaining access to network resources, user or customer accounts, and other sensitive data, the attackers can hold the stolen information for ransom.
In its report, CrowdStrike identified a few different cybercrime groups that use vishing and callback phishing in their attacks. One group known as Chatty Spider focuses mostly on the legal and insurance industries and has demanded ransoms as high as $8 million. Another group called Plump Spider targeted Brazil-based businesses throughout 2024 and uses vishing calls to direct employees to remote support sites and tools.
"Similar to other social engineering techniques, vishing is effective because it targets human weakness or error rather than a flaw in software or an operating system (OS)," CrowdStrike said in its report. "Malicious activity may not be detected until later in an intrusion, such as during malicious binary execution or hands-on-keyboard activity, which can delay an effective response. This gives the threat actor an advantage and puts the onus on users to recognize potentially malicious behavior."
Other security firms have seen a dramatic rise in vishing attacks. Last October, Zimperium's zLabs research team uncovered a malware strain known as FakeCall, notable for its advanced use of vishing. Here, the scammers use phone calls to try to trick potential victims into sharing sensitive information such as credit card numbers and banking credentials. Once installed on an Android phone, FakeCall works by hijacking the device's call functions.
To protect yourself, your employees, and your organization from vishing attacks and similar threats, CrowdStrike offers a set of practical recommendations in its report.
A couple of security experts also shared their recommendations with ZDNET.
"Taking systems offline as soon as a threat is detected is a vital first step in containment, but it is inadequate on its own," said Patrick Tiquet, vice president of security and architecture at Keeper Security.
"To counteract secondary tactics, such as vishing, security teams should swiftly inform customers and partners about the breach through official channels, providing clear guidance on how to protect themselves against these threats," Tiquet added. "Training sessions for employees and stakeholders on recognizing these attempts and verifying any unsolicited communications before sharing sensitive information are crucial."
Individual users and consumers should also be cautious about unexpected phone calls that sound legitimate.
"When I talk to colleagues, friends, and family, I remind them that if a call is unexpected and asks for personal or financial information, it's time to question everything," said Akhil Mittal, senior manager at security provider Black Duck.
"I also stress the importance of slowing down, verifying who's calling, and never hesitating to hang up. Use the official number from a bank's website or statement to call back and confirm," Mittal added. "Finally, just because a caller knows your address or part of your account number doesn't make them legit; criminals often have that info beforehand. If the caller pressures you to act fast, it's a sign you should stop and verify."
Researchers from TikTok owner ByteDance have demoed a new AI system, OmniHuman-1, that can generate perhaps the most realistic deepfake videos to date.
Deepfaking AI is a commodity. There’s no shortage of apps that can insert someone into a photo, or make a person appear to say something they didn’t actually say. But most deepfakes — and video deepfakes in particular — fail to clear the uncanny valley. There’s usually some tell or obvious sign that AI was involved somewhere.
Not so with OmniHuman-1 — at least from the cherry-picked samples the ByteDance team released.
According to the ByteDance researchers, OmniHuman-1 only needs a single reference image and audio, like speech or vocals, to generate a clip of an arbitrary length. The output video’s aspect ratio is adjustable, as is the subject’s “body proportion” — i.e. how much of their body is shown in the fake footage.
Trained on 19,000 hours of video content from undisclosed sources, OmniHuman-1 can also edit existing videos — even modifying the movements of a person’s limbs. It’s truly astonishing how convincing the result can be.
Granted, OmniHuman-1 isn’t perfect. The ByteDance team says that “low-quality” reference images won’t yield the best videos, and the system seems to struggle with certain poses. Note the weird gestures with the wine glass in one of the sample clips the team released.
Still, OmniHuman-1 is easily head and shoulders above previous deepfake techniques, and it may well be a sign of things to come. While ByteDance hasn’t released the system, the AI community tends not to take long to reverse-engineer models like these.
The implications are worrisome.
Last year, political deepfakes spread like wildfire around the globe. On election day in Taiwan, a Chinese Communist Party-affiliated group posted AI-generated, misleading audio of a politician throwing his support behind a pro-China candidate. In Moldova, deepfake videos depicted the country’s president, Maia Sandu, resigning. And in South Africa, a deepfake of rapper Eminem supporting a South African opposition party circulated ahead of the country’s election.
Deepfakes are also increasingly being used to carry out financial crimes. Consumers are being duped by deepfakes of celebrities offering fraudulent investment opportunities, while corporations are being swindled out of millions by deepfake impersonators. According to Deloitte, AI-generated content contributed to more than $12 billion in fraud losses in 2023, and could reach $40 billion in the U.S. by 2027.
Last February, hundreds in the AI community signed an open letter calling for strict deepfake regulation. In the absence of a law criminalizing deepfakes at the federal level in the U.S., more than 10 states have enacted statutes against AI-aided impersonation. California’s law — currently stalled — would be the first to empower judges to order the posters of deepfakes to take them down or potentially face monetary penalties.
Unfortunately, deepfakes are hard to detect. While some social networks and search engines have taken steps to limit their spread, the volume of deepfake content online continues to grow at an alarmingly fast rate.
In a May 2024 survey from ID verification firm Jumio, 60% of people said they encountered a deepfake in the past year. Seventy-two percent of respondents to the poll said they were worried about being fooled by deepfakes on a daily basis, while a majority supported legislation to address the proliferation of AI-generated fakes.
By Emma Roth, The Verge, Feb 12, 2025
Scarlett Johansson is calling on the government to pass a law limiting the use of AI after a video featuring an AI deepfake of the actress circulated online. In a statement to People, Johansson said, “It is terrifying that the U.S. government is paralyzed when it comes to passing legislation that protects all of its citizens against the imminent dangers of A.I.”
The video in question shows Johansson, along with other Jewish celebrities including Jerry Seinfeld, Mila Kunis, Jack Black, Drake, Jake Gyllenhaal, Adam Sandler, and others, wearing a t-shirt that shows the name “Kanye” along with an image of a middle finger that has the Star of David in the center. Ye (formerly known as Kanye West) returned to X last week to post antisemitic comments. He also began selling shirts with a swastika on his website, which has since been taken down.
“I am a Jewish woman who has no tolerance for antisemitism or hate speech of any kind,” Johansson said, according to People. “But I also firmly believe that the potential for hate speech multiplied by A.I. is a far greater threat than any one person who takes accountability for it. We must call out the misuse of A.I., no matter its messaging, or we risk losing a hold on reality.”
Johansson said that she urges lawmakers “to make the passing of legislation limiting A.I. use a top priority,” adding that “it is a bipartisan issue that enormously affects the immediate future of humanity at large.” Johansson has been outspoken about AI since the technology started becoming more accessible. In 2023, she sued an AI app developer for using her name and likeness in an online ad.
She later called out OpenAI for using a voice that sounded a lot like hers in ChatGPT, leading OpenAI to stop using the voice.
Last year, lawmakers introduced a bill to combat sexually explicit deepfakes, but there has been little movement on other forms of AI regulation. California governor Gavin Newsom vetoed a major AI safety bill in September 2024, while President Donald Trump reversed Joe Biden’s executive order to establish safety guidelines for AI.
This week, the US and UK also declined to sign an international AI declaration that promotes the “ethical” use of the technology.
New online videos recently investigated by VOA's Russian and Ukrainian services show how artificial intelligence is likely being used to try to create provocative deepfakes that target Ukrainian refugees.
In one example, a video appears to be a TV news report about a teenage Ukrainian refugee and her experience studying at a private school in the United States.
But the video then flips to footage of crowded school corridors and packets of crack cocaine, while a voiceover that sounds like the girl calls American public schools dangerous and invokes offensive stereotypes about African Americans.
"I realize it's quite expensive [at private school]," she says. "But it wouldn't be fair if my family was made to pay for my safety. Let Americans do it."
Those statements are total fabrications. Only the first section — footage of the teenager — is real.
The offensive voiceover was likely created using artificial intelligence to realistically copy her voice, resulting in something known as a deepfake.
And it appears to be part of the online Russian information operation called Matryoshka — named for the Russian nesting doll — that is now targeting Ukrainian refugees.
VOA found that the campaign pushed two deepfake videos that aimed to make Ukrainian refugees look greedy and ungrateful, while also spreading deepfakes that appeared to show authoritative Western journalists claiming that Ukraine — and not Russia — was the country spreading falsehoods.
The videos reflect the most recent strategy in Russia's online disinformation campaigns, according to Antibot4Navalny, an X account that researches Russian information operations and has been widely cited by leading Western news outlets.
Russia's willingness to target refugees, including a teenager, shows just how far the Kremlin, which regularly denies having a role in disinformation, is prepared to go in attempting to undermine Western support for Ukraine.
Targeting the victims
A second video targeting Ukrainian refugees begins with real footage from a news report in which a Ukrainian woman expresses gratitude for clothing donations and support that Denmark has provided to refugees.
The video then switches to generic footage and a probable deepfake as the woman's voice begins to complain that Ukrainian refugees are forced to live in small apartments and wear used clothing.
VOA is not sharing either video to protect the identities of the refugees depicted in the deepfakes, but both used stolen footage from reputable international media outlets.
That technique — altering the individual's statements while replicating their voice — is new for Matryoshka, Antibot4Navalny told VOA.
"In the last few weeks, almost all the clips have been built according to this scheme," the research group wrote.
But experts say the underlying strategy of spoofing real media reports and targeting refugees is nothing new.
After Russia's deadly April 2022 missile strike on Ukraine's Kramatorsk railway station, for example, the Kremlin created a phony BBC news report blaming Ukrainians for the strike, according to Roman Osadchuk, a resident fellow at the Atlantic Council's Digital Forensic Research Lab.
During that same period, he noted, Russia also spread disinformation in Moldova aimed at turning the local population against Ukrainian refugees.
"Unfortunately, refugees are a very popular target for Russian disinformation campaigns, not only for attacks on the host community ... but also in Ukraine," Osadchuk told VOA.
When such disinformation operations are geared toward a Ukrainian audience, he added, the goal is often to create a clash between those who left Ukraine and those who stayed behind.
Deepfakes of journalists, however, appear designed to influence public opinion in a different way. One video that purports to contain audio of Bellingcat founder Eliot Higgins, for example, claims that Ukraine's incursion into Russia's Kursk region is just a bluff.
"The whole world is watching Ukraine's death spasms," Higgins appears to say. "There's nothing further to discuss."
In another video, Shayan Sardarizadeh, a senior journalist at BBC Verify, appears to say that "Ukraine creates fakes so that fact-checking organizations blame Russia," something he then describes as part of a "global hoax."
In fact, both videos appear to be deepfakes created according to the same formula as the ones targeting refugees.
Higgins told VOA that the audio impersonating his voice appears to be entirely a deepfake. He suggested the goal of the video was to engage fact-checkers and get them to accidentally boost its viewership.
"I think it's more about boosting their stats so [the disinformation actors] can keep milking the Russian state for money to keep doing it," he told VOA by email.
Sardarizadeh did not respond to a request for comment in time for publication.
Fake video, real harm
The rapid expansion of AI over the past few years has drawn increased attention to the problem of deepfake videos and AI images, particularly when these technologies are used to create nonconsensual, sexually explicit imagery.
Researchers have estimated that over 90% of deepfakes online are sexually explicit. They have been used both against ordinary women and girls and celebrities.
Deepfakes also have been used to target politicians and candidates for public office. It remains unclear, however, whether they have actually influenced public opinion or election outcomes.
Researchers from Microsoft's Threat Analysis Center have found that "fully synthetic" videos of world leaders are often not convincing and are easily debunked. But they also concluded that deepfake audio is often more effective.
The four videos pushed by Matryoshka — which primarily uses deepfake audio — show that the danger of deepfakes isn't restricted to explicit images or impersonations of politicians. And if your image is available online, there isn't much you can do to fully protect yourself.
Today, there's always a risk in "sharing any information publicly, including your voice, appearance or pictures," Osadchuk said.
The damage to individuals can be serious.
Belle Torek, an attorney who specializes in tech policy and civil rights, said that people whose likenesses are used without consent often experience feelings of violation, humiliation, helplessness and fear.
"They tend to report feeling that their trust has been violated. Knowing that their image is being manipulated to spread lies or hate can exacerbate existing trauma," she said. "And in this case here, I think that those effects are going to be amplified for these [refugee] communities, who are already enduring displacement and violence."
How effective are deepfakes?
While it is not difficult to understand the potential harm of deepfakes, it is more challenging to assess their broader reach and impact.
An X post featuring phony videos of refugees received over 55,000 views. That represents significant spread, according to Olga Tokariuk, a senior analyst at the Institute for Strategic Dialogue.
"It is not yet viral content, but it is no longer marginal content," she said.
Antibot4Navalny, on the other hand, believes that Russian disinformation actors are largely amplifying the X posts using other controlled accounts and very few real people are seeing them.
But even if large numbers of real people did view the deepfakes, that doesn't necessarily mean the videos achieved the Kremlin's goals.
"It is always difficult ... to prove with 100% correlation the impact of these disinformation campaigns on politics," Tokariuk said.
Mariia Ulianovska contributed to this report.
Anna McAdams has always kept a close eye on her 15-year-old daughter Elliston Berry's life online. So it was hard to come to terms with what happened 15 months ago on the Monday morning after Homecoming in Aledo, Texas.
A classmate took a picture from Elliston's Instagram, ran it through an artificial intelligence program that appeared to remove her dress and then sent around the digitally altered image on Snapchat.
"She came into our bedroom crying, just going, 'Mom, you won't believe what just happened,'" McAdams said.
Last year, there were more than 21,000 deepfake pornographic videos online — up more than 460% over the year prior. The manipulated content is proliferating on the internet as websites make disturbing pitches — like one service that asks, "Have someone to undress?"
"I had PSAT testing and I had volleyball games," Elliston said. "And the last thing I need to focus and worry about is fake nudes of mine going around the school. Those images were up and floating around Snapchat for nine months."
In San Francisco, Chief Deputy City Attorney Yvonne Mere was starting to hear stories similar to Elliston's — which hit home.
"It could have easily been my daughter," Mere said.
The San Francisco City Attorney's office is now suing the owners of 16 websites that create "deepfake nudes," where artificial intelligence is used to turn non-explicit photos of adults and children into pornography.
"This case is not about tech. It's not about AI. It's sexual abuse," Mere said.
These 16 sites had 200 million visits in just the first six months of the year, according to the lawsuit.
City Attorney David Chiu says the 16 sites in the lawsuit are just the start.
"We're aware of at least 90 of these websites. So this is a large universe and it needs to be stopped," Chiu said.
Republican Texas Sen. Ted Cruz is co-sponsoring another angle of attack with Democratic Minnesota Sen. Amy Klobuchar. The Take It Down Act would require social media companies and websites to remove non-consensual, pornographic images created with AI.
"It puts a legal obligation on any tech platform — you must take it down and take it down immediately," Cruz said.
The bill passed the Senate this month and is now attached to a larger government funding bill awaiting a House vote.
In a statement, a spokesperson for Snap told CBS News: "We care deeply about the safety and well-being of our community. Sharing nude images, including of minors, whether real or AI-generated, is a clear violation of our Community Guidelines. We have efficient mechanisms for reporting this kind of content, which is why we're so disheartened to hear stories from families who felt that their concerns went unattended. We have a zero tolerance policy for such content and, as indicated in our latest transparency report, we act quickly to address it once reported."
Elliston says she's now focused on the present and is urging Congress to pass the bill.
"I can't go back and redo what he did, but instead, I can prevent this from happening to other people," Elliston said.
By Emily Price, PCMag.com, June 24, 2024
Just a few days after rapper 50 Cent’s website and social accounts were used by hackers to promote a fake cryptocurrency, it looks like something similar is happening to Elon Musk.
A YouTube Live stream today showed a video of Musk with an AI-generated version of his voice suggesting that users go to a website to deposit Ethereum, Dogecoin, or Bitcoin, Engadget reports. The clip promised viewers that depositing their cryptocurrency on the site would “automatically send back double the amount of the cryptocurrency you deposited.”
The stream ran for 5 hours, and at one point had over 30,000 concurrent viewers, bringing it to the top of YouTube’s Live Now recommendations. Both the video and the account associated with it have since been removed from YouTube.
It's not surprising that hackers chose to deepfake Musk to promote the site. Tweets from Musk have been known to have a significant impact on the crypto market, especially with meme coins such as Dogecoin, thanks to his dedicated following.
In 2020, Musk was one of several high-profile Twitter users—a group that also included Bill Gates, Barack Obama, and Joe Biden—who was briefly hacked to promote a Bitcoin scam.
June 9, 2024
For the second time in a matter of months, OpenAI has found itself explaining its text-to-audio tool, reminding everyone that it is not, and may never be, widely available.
"It's important that people around the world understand where this technology is headed, whether we ultimately deploy it widely ourselves or not," the company said in a statement posted to its website on Friday. "Which is why we want to explain how the model works, how we use it for research and education, and how we are implementing our safety measures around it.
Late last year, OpenAI shared its Voice Engine, which relies on text inputs and 15-second audio clips of human voices to "generate natural-sounding speech that closely resembles the original speaker," with a small group of users outside the company. The tool can create voices that sound convincingly human in several languages.
At the time, the company said it was choosing to preview the technology but not widely release it to "bolster societal resilience" against the threat of "ever more convincing generative models."
As part of those efforts, OpenAI said it was actively working on phasing out voice-based authentication for accessing bank accounts, exploring policies to protect the use of individuals' voices in AI, educating the public on the risks of AI, and accelerating development of techniques for tracking audiovisual content so users know whether they're interacting with real or synthetic content.
But despite such efforts, fear of the technology persists.
President Joe Biden's AI chief, Bruce Reed, once said that voice cloning is the one thing that keeps him up at night. And the Federal Trade Commission said in March that scammers were using AI to elevate their work, using voice cloning tools that make it harder to distinguish between AI-generated voices and human ones.
In its updated statement on Friday, OpenAI sought to assuage those worries.
"We continue to engage with US and international partners from across government, media, entertainment, education, civil society, and beyond to ensure we are incorporating their feedback as we build," the company said.
It also noted that once Voice Engine is equipped with its latest model, GPT-4o, it'll also pose new threats. Internally, the company said it's "actively red-teaming GPT-4o to identify and address both known and unforeseen risks across various fields such as social psychology, bias and fairness, and misinformation."
The bigger question, of course, is what will happen when the technology is widely released. And it looks like OpenAI might be bracing itself, too.
OpenAI has raised tens of billions of dollars to develop AI technologies that are changing the world.
But there's one glaring problem: it's still struggling to understand how its tech actually works.
During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped after being asked how his company's large language models (LLM) really function under the hood.
"We certainly have not solved interpretability," he said, as quoted by the Observer, essentially saying the company has yet to figure out how to trace back their AI models' often bizarre and inaccurate output and the decisions it made to come to those answers.
When pushed during the event by The Atlantic CEO Nicholas Thompson, who asked if that shouldn't be an "argument to not keep releasing new, more powerful models," Altman was seemingly baffled, countering with a half-hearted reassurance that the AIs are "generally considered safe and robust."
Altman's unsatisfying answer highlights a real problem in the emerging AI space. Researchers have long struggled to explain the freewheeling "thinking" that goes on behind the scenes, with AI chatbots almost magically and effortlessly reacting to any query that's being thrown at them (lies and gaslighting aside).
But try as they might, tracing output back to the original material the AI was trained on has proved extremely difficult. OpenAI, despite its name and origin story, has also kept the data it trains its AIs on close to its chest.
A panel of 75 experts recently concluded in a landmark scientific report commissioned by the UK government that AI developers "understand little about how their systems operate" and that scientific knowledge is "very limited."
"Model explanation and interpretability techniques can improve researchers’ and developers’ understanding of how general-purpose AI systems operate, but this research is nascent," the report reads.
Other AI companies are trying to find new ways to "open the black box" by mapping the artificial neurons of their algorithms. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs called Claude Sonnet as a first step.
"Anthropic has made a significant investment in interpretability research since the company's founding, because we believe that understanding models deeply will help us make them safer," reads a recent blog post.
"But the work has really just begun," the company admitted. "The features we found represent a small subset of all the concepts learned by the model during training, and finding a full set of features using our current techniques would be cost-prohibitive."
"Understanding the representations the model uses doesn't tell us how it uses them; even though we have the features, we still need to find the circuits they are involved in," Anthropic wrote. "And we need to show that the safety-relevant features we have begun to find can actually be used to improve safety."
AI interpretability is an especially pertinent topic, given the heated debate surrounding AI safety and the risks of having an artificial general intelligence go rogue, which to some experts represents an extinction-level danger for humanity.
Altman himself recently dissolved the company's entire so-called "Superalignment" team, which was dedicated to finding ways to "steer and control AI systems much smarter than us" — only to anoint himself as the leader of a replacement "safety and security committee."
Given the embattled CEO's latest comments, the company has a long way to go before it'd be able to rein in any superintelligent AI.
Of course, it's in Altman's best financial interest to keep reassuring investors that the company is dedicated to safety and security — despite having no clue how its core products actually work.
"It does seem to me that the more we can understand what’s happening in these models, the better," he said during last week's conference. "I think that can be part of this cohesive package to how we can make and verify safety claims."
Deepfakes are about to explode in number and sophistication, especially because new generative AI video, audio, and image tools make it easier than ever before to generate and manipulate content.
What’s interesting is that most VCs don’t seem to be paying much attention to the deepfake detection and anti-AI security space. More than $2.7 billion has been invested in consumer generative AI content tools, but only $500 million in deepfake detection, according to PitchBook. That’s surprising, given deepfakes can cost companies millions, and according to one study, fake news cost the global economy $78 billion in 2020.
Are investors right?
Maybe deepfake detection tools simply can’t keep up, so we should just make creators and publishers embed provenance data and call it a day. That’s what C2PA, a joint effort among Adobe, Arm, Intel, Microsoft and Truepic, aims to do with its new technical standard.
To dig more into this, I looked at how startups and incumbents are fighting deepfakes.
There are three major ways that players are addressing deepfakes:
Method #1: Detection tools use various techniques to determine whether an image or video has been manipulated or created by AI. Some of these companies, like BioID, Clarity, and Kroop, use AI models trained on real and fake images to spot the differences.
Others identify specific signs that images, videos, and audio have been manipulated. For example, Intel’s FakeCatcher analyzes patterns of blood flow to detect fake videos. DARPA’s Semantic Forensics (SemaFor) program develops logic-based frameworks to find anomalies, like mismatched earrings. Startups working on this include Attestiv, DeepMedia, Duck Duck Goose, Illuminarty, Reality Defender, and Resemble AI.
ID verification tools are a subset of detection tools built to authenticate personal documents and user profiles. They often combine image analysis with liveness detection (i.e., when you’re asked to take a selfie or make a face). AuthenticID, Hyperverge, Idenfy, iProov, Jumio, and Sensity are some of the companies in this space.
Of course, detection-based approaches are inherently retroactive, so they have to constantly keep up with evolving generative AI models. But many of these tools have 80%+ accuracy rates, compared to only about 60% for humans.
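To make the first family of tools concrete, here is a minimal, hypothetical sketch of what an AI-trained detector looks like under the hood: a standard image backbone with its final layer swapped for a single real-versus-fake score. The file name is a placeholder, the head would need to be fine-tuned on labeled real and fake images, and no vendor's actual pipeline is shown.

```python
# Hypothetical deepfake-image detector: pretrained backbone + binary head.
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 1)  # single "fake" logit
model.eval()  # assume the head was fine-tuned on real/fake examples elsewhere

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def fake_probability(path: str) -> float:
    """Return the model's estimated probability that an image is AI-generated."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        logit = model(img)
    return torch.sigmoid(logit).item()

print(f"P(fake) = {fake_probability('suspect_frame.jpg'):.2f}")  # placeholder file
```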
Method #2: Certification tools, on the other hand, proactively embed provenance data into image and video files, with a record permanently stored on a blockchain. Truepic allows enterprises to add, verify, and view C2PA content credentials, including at the point of capture on smartphone cameras. Similarly, CertifiedTrue allows users to capture, store, and certify photos for legal proceedings. This information is then recorded on a blockchain, which makes it permanent, public, and unalterable.
The upside is that we’re beginning to establish a standard for content authenticity; the downside is that these programs are opt-in. Authenticating all or even most of the content that exists and will be generated will be a major challenge, though some camera makers, like Canon, are working on embedding authentication at the point of capture.
However, with the proliferation of deepfakes, the paradigm is shifting from “real until proven fake” to “fake until proven real”. Authentication at the hardware level will likely become the only way to prove humanity, since publisher- or social media-level authentication only proves where content first appeared, not whether a human made it.
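The mechanics behind content credentials are simpler than they may sound: compute a cryptographic fingerprint of the exact media bytes, attach provenance metadata, sign it, and anchor the record somewhere tamper-evident. The sketch below shows only that basic hash-and-verify idea; it is not the C2PA specification or Truepic's API, the file name is a placeholder, and the signing and blockchain-anchoring steps are deliberately omitted.

```python
# Bare-bones provenance record: hash the media bytes and bind metadata to them.
import hashlib
import json
import time

def make_provenance_record(media_path: str, creator: str, device: str) -> dict:
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # fingerprint of the exact bytes
    return {
        "sha256": digest,
        "creator": creator,
        "capture_device": device,
        "captured_at": int(time.time()),
        # A real system would sign this record with a device or publisher key
        # and anchor it in a public, append-only ledger.
    }

def matches(media_path: str, record: dict) -> bool:
    """Check that the file has not been altered since the record was made."""
    with open(media_path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest() == record["sha256"]

record = make_provenance_record("photo.jpg", creator="Jane Doe", device="Phone-XYZ")
print(json.dumps(record, indent=2))
print("Unaltered:", matches("photo.jpg", record))
```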
Method #3: Lastly, narrative tracking platforms examine how fraud and disinformation spreads through the web, keeping corporations and governments informed of high-risk narratives. This is a bigger-picture approach to fighting deepfakes that tracks the spread of misinformation online and verifies content by examining it in context.
Players include startups like Blackbird.AI and Buster.AI, as well as public-private partnerships like the EU-funded project WeVerify. For example, large companies use Blackbird.AI’s Constellation Dashboard to track online narratives, which are given risk scores, so that they can mitigate misinformation.
There’s not a single tool or strategy that can completely protect against the impact of deepfakes, so individuals, enterprises, and governments will have to rely on a mix of solutions. There’s certainly room for entrants in the deepfake detection and anti-AI security space.
There are key opportunities here for both builders and investors.
There’s no magic formula for defending against deepfakes. But with deepfakes causing financial and reputational harm to people, organizations, and governments, deepfake detection is an area to watch.
Warren Buffett cautioned the tens of thousands of shareholders who packed an arena for his annual meeting that artificial intelligence scams could become "the growth industry of all time."
Doubling down on his cautionary words from last year, Buffett told the throngs he recently came face to face with the downside of AI.
And it looked and sounded just like him. Someone made a fake video of Buffett, apparently convincing enough that the so-called Oracle of Omaha himself said he could imagine it tricking him into sending money overseas.
The billionaire investing guru predicted scammers will seize on the technology, and may do more harm with it than society can wring good. "As someone who doesn't understand a damn thing about it, it has enormous potential for good and enormous potential for harm and I just don't know how that plays out," he said.
The day started early Saturday with Berkshire Hathaway announcing a steep drop in earnings as the paper value of its investments plummeted and it pared its Apple holdings.
The company reported a $12.7 billion profit, or $8,825 per Class A share, in the first quarter, down 64% from $35.5 billion, or $24,377 per A share, a year ago. But Buffett encourages investors to pay more attention to the conglomerate's operating earnings from the companies it actually owns. Those jumped 39% to $11.222 billion, or $7,796.47 per Class A share, led by insurance companies' performance.
None of that got in the way of the fun.
Throngs flooded the arena to buy up Squishmallows of Buffett and former Vice Chairman Charlie Munger, who died last fall. The event attracts investors from all over the world and is unlike any other company meeting. Those attending for the first time are driven by an urgency to get here while the 93-year-old Buffett is still alive.
"This is one of the best events in the world to learn about investing. To learn from the gods of the industry," said Akshay Bhansali, who spent the better part of two days traveling from India to Omaha.
Devotees come from all over the world to vacuum up tidbits of wisdom from Buffett, who famously dubbed the meeting 'Woodstock for Capitalists.' But a key ingredient was missing this year: It was the first meeting since Munger died. The meeting opened with a video tribute highlighting some of his best known quotes, including classic lines like "If people weren't so often wrong, we wouldn't be so rich." The video also featured skits the investors made with Hollywood stars over the years, including a "Desperate Housewives" spoof where one of the women introduced Munger as her boyfriend and another in which actress Jamie Lee Curtis swooned over him.
As the video ended, the arena erupted in a prolonged standing ovation honoring Munger, whom Buffett called "the architect of Berkshire Hathaway." Buffett said Munger remained curious about the world up until the end of his life at 99, hosting dinner parties, meeting with people and holding regular Zoom calls.
"Like his hero Ben Franklin, Charlie wanted to understand everything," Buffett said.
For decades, Munger and Buffett functioned as a classic comedy duo, with Buffett offering lengthy setups to Munger's witty one-liners. He once referred to unproven internet companies as "turds."
Together, the pair transformed Berkshire from a floundering textile mill into a massive conglomerate made up of a variety of interests, from insurance companies such as Geico to BNSF railroad to several major utilities and an assortment of other companies.
Munger often summed up the key to Berkshire's success as "trying to be consistently not stupid, instead of trying to be very intelligent." He and Buffett also were known for sticking to businesses they understood well.
"Warren always did at least 80% of the talking. But Charlie was a great foil," said Stansberry Research analyst Whitney Tilson, who was looking forward to his 27th consecutive meeting.
Next-gen leaders
Munger's absence, however, created space for shareholders to get to know better the two executives who directly oversee Berkshire's companies: Ajit Jain, who manages the insurance units, and Greg Abel, who handles everything else and has been named Buffett's successor. The two shared the main stage with Buffett this year.
The first time Buffett kicked a question to Abel, he mistakenly said "Charlie?" Abel shrugged off the mistake and dove into the challenges utilities face from the increased risk of wildfires and some regulators' reluctance to let them collect a reasonable profit.
Morningstar analyst Greggory Warren said he believes Abel spoke up more Saturday and let shareholders see some of the brilliance Berkshire executives talk about.
Abel offered a twist on Munger's classic "I have nothing to add" line by often starting his answers Saturday by saying "The only thing I would add."
"Greg's a rock star," said Chris Bloomstran, president of Semper Augustus Investments Group. "The bench is deep. He won't have the same humor at the meeting. But I think we all come here to get a reminder every year to be rational."
A look to the future
Buffett has made clear that Abel will be Berkshire's next CEO, but he said Saturday that he had changed his opinion on how the company's investment portfolio should be handled. He had previously said it would fall to two investment managers who handle small chunks of the portfolio now. On Saturday, Buffett endorsed Abel for the gig, as well as overseeing the operating businesses and any acquisitions.
"He understands businesses extremely well. and if you understand businesses, you understand common stocks," Buffett said. Ultimately, it will be up to the board to decide, but the billionaire said he might come back and haunt them if they try to do it differently.
Overall, Buffett said Berkshire's system of having all the noninsurance companies report to Abel and the insurers report to Jain is working well. He himself hardly gets any calls from managers anymore because they get more guidance from Abel and Jain. "This place would work extremely well the next day if something happened to me," Buffett said.
Nevertheless, the best applause line of the day was Buffett's closing remark: "I not only hope that you come next year but I hope that I come next year."
A high school athletic director was arrested after an AI-generated voice recording of his school's principal making racist comments went viral.
Baltimore County Police arrested the former athletic director of Pikesville High School on Thursday, alleging he used an AI voice clone to impersonate the school’s principal, leading the public to believe Principal Eric Eiswert had made racist and antisemitic comments, according to The Baltimore Banner.
Dazhon Darien was stopped at a Baltimore airport on Thursday morning attempting to board a flight to Houston with a gun, according to the Banner. Investigators determined Darien faked Eiswert’s voice using an AI cloning tool. The AI voice recording, which was circulated widely on social media, made disparaging comments about Black students and the Jewish community.
“Based on an extensive investigation, detectives now have conclusive evidence the recording was not authentic,” the Baltimore County Police said in a press release. “As part of their investigation, detectives requested a forensic analyst contracted with the FBI to analyze the recording. The results from that analysis indicated the recording contained traces of AI-generated content.”
This deepfake reportedly led to public outrage, causing Principal Eiswert to receive a wave of hateful messages and forcing his temporary removal from the school. The school's front desk was flooded with calls from concerned parents. The Pikesville school district ultimately arranged for a police presence at the school and Eiswert's house to restore a sense of safety.
Baltimore Police officials say the former athletic director made the AI recording to retaliate against the school’s principal. A month before the recording went viral, The Banner reports that Eiswert launched an investigation into Darien for potential theft of school funds. Darien authorized a $1,916 payment to the school’s JV basketball coach, who was also his roommate, bypassing proper procedures. Darien submitted his resignation earlier in April, according to school documents.
Police say Darien was the first of three teachers to receive the audio clip the night before it went viral. The Banner reports another teacher who received the recording sent it to students, media outlets, and the NAACP. Police wrote in charging documents that Darien used the school network to search for OpenAI tools and use large language models on multiple occasions. However, a lot of people use these AI tools these days. It’s unclear at this time how investigators were able to pinpoint Darien as the creator of this voice recording.
The creation of AI-generated audio deepfakes is an increasingly large problem facing the tech world. The Federal Communications Commission took steps in February to outlaw deepfake robocalls after a Joe Biden deepfake misled New Hampshire voters.
In this case, AI experts were able to identify the alleged audio of the Baltimore principal was a fake. However, this came two months after the audio went viral, and the damage may have already been done. AI deepfakes really need to be stopped early on to minimize harm, but that’s easier said than done.
Deepfakes have long raised concern in social media, elections and the public sector. But now with technology advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.
“There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new,” said Bill Cassidy, chief information officer at New York Life.
Banks and financial services providers are among the first companies to be targeted. “This space is just moving very fast,” said Kyle Kappel, U.S. Leader for Cyber at KPMG.
How fast was demonstrated earlier this month when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about potential risks for misuse.
Among the concerns are that bad actors could use AI-generated audio to game voice-authentication software used by financial services companies to verify customers and grant them access to their accounts. Chase Bank was fooled recently by an AI-generated voice during an experiment. The bank said that to complete transactions and other financial requests, customers must provide additional information.
Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year, according to a recent report by identity verification platform Sumsub.
Companies say they are working to put more guardrails in place to prepare for an incoming wave of generative AI-fueled attackers. For example, Cassidy said he is working with New York Life’s venture-capital group to identify startups and emerging technologies designed to combat deepfakes. “In many cases, the best defense of this generative AI threat is some form of generative AI on the other side,” he said.
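As one hedged illustration of fighting AI with AI, the sketch below scores incoming call audio for signs of a synthetic voice using spectral features and a simple classifier. It is a deliberately simplified, discriminative stand-in for the kind of defense Cassidy describes, not New York Life's or any vendor's system; the file names and labels are placeholders.

```python
# Illustrative synthetic-voice scorer: MFCC features + logistic regression.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def mfcc_features(wav_path: str) -> np.ndarray:
    """Summarize a clip as the mean and variance of its MFCCs."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.var(axis=1)])

# Hypothetical labeled training clips: 0 = genuine voice, 1 = AI-cloned voice.
train_paths = ["genuine_01.wav", "genuine_02.wav", "cloned_01.wav", "cloned_02.wav"]
train_labels = [0, 0, 1, 1]

X = np.stack([mfcc_features(p) for p in train_paths])
clf = LogisticRegression(max_iter=1000).fit(X, np.array(train_labels))

# Score a new call recording before acting on voice instructions.
score = clf.predict_proba(mfcc_features("incoming_call.wav").reshape(1, -1))[0, 1]
print(f"Estimated probability the caller's voice is synthetic: {score:.2f}")
```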
Bad actors could also use AI to generate photos of fake driver’s licenses to set up online accounts, so Alex Carriles, chief digital officer of Simmons Bank, said he is changing some identity verification protocols. Previously, one step in setting up an account online with the bank involved customers uploading photos of driver’s licenses. Now that images of driver’s licenses can be easily generated with AI, the bank is working with security vendor IDScan.net to improve the process.
Rather than uploading a pre-existing picture, Carriles said, customers now must photograph their driver’s licenses through the bank’s app and then take selfies. To avoid a situation where they hold cameras up to a screen with an AI-generated visual of someone else’s face, the app instructs users to look left, right, up or down, as a generic AI deepfake won’t necessarily be prepared to do the same.
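To show why the randomized prompts matter, here is a minimal, hypothetical sketch of the prompt-and-verify loop such an app could run; the pose check is stubbed with simple callables, and this is not Simmons Bank's or IDScan.net's implementation. Because the prompts are drawn at random during the session, a pre-rendered deepfake or replayed clip cannot reliably follow them.

```python
# Hypothetical liveness challenge: random head-pose prompts, verified per round.
import random

CHALLENGES = ["look left", "look right", "look up", "look down"]

def liveness_check(capture_pose, rounds: int = 3) -> bool:
    """Issue random pose prompts; pass only if every observed pose matches."""
    for _ in range(rounds):
        prompt = random.choice(CHALLENGES)
        observed = capture_pose(prompt)  # in a real app: camera frame -> pose model
        if observed != prompt:
            return False
    return True

# Demo: a live user follows instructions; a replayed clip shows a fixed motion.
live_user = lambda prompt: prompt
replayed_clip = lambda prompt: "look left"

print(liveness_check(live_user))      # True
print(liveness_check(replayed_clip))  # almost always False (passes only by chance)
```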
It can be difficult to balance giving users a good experience against making the process so seamless that attackers can coast through, Carriles said.
Not all banks are ringing alarm bells. KeyBank CIO Amy Brady said the bank was a technology laggard when it came to adopting voice authentication software. Now, Brady said, she considers that lucky given the risk of deepfakes.
Brady said she is no longer looking to implement voice authentication software until there are better tools for unmasking impersonations. “Sometimes being a laggard pays off,” she said.
Write to Isabelle Bousquette at isabelle.bousquette@wsj.com
Copyright ©2024 Dow Jones & Company, Inc. All Rights Reserved.
Appeared in the April 4, 2024, print edition as 'Deepfakes Are New Threat To Finance'.