The problem of deepfakes is real and growing, which is why We4C's UnCognito offers a cutting-edge solution for detecting audio deepfakes in real time and at scale.
A French woman who was conned out of €830,000 (£700,000; $850,000) by scammers posing as actor Brad Pitt has faced a huge wave of mockery, leading French broadcaster TF1 to withdraw a programme about her.
The primetime programme, which aired on Sunday, drew national attention to interior designer Anne, 53, who thought she was in a relationship with Pitt for a year and a half.
She has since told a popular French YouTube show that she was not "crazy or a moron": "I just got played, I admit it, and that's why I came forward, because I am not the only one."
A representative for Pitt told US outlet Entertainment Weekly that it was "awful that scammers take advantage of fans' strong connection with celebrities" and that people shouldn't respond to unsolicited online outreach "especially from actors who have no social media presence."
Hundreds of social media users mocked Anne, who the programme said had lost her life's savings and tried to take her own life three times since the scam came to light.
Netflix France put out a post on X advertising "four films with Brad Pitt (for real)", while, in a now-deleted post, Toulouse FC said: "Hi Anne, Brad told us he would be at the stadium on Wednesday... and you?"
The club has since apologised for the post.
On Tuesday, TF1 said it had pulled the segment on Anne after her testimony had sparked "a wave of harassment" - although the programme can still be found online.
In the report, Anne said her ordeal began when she downloaded Instagram in February 2023, when she was still married to a wealthy entrepreneur.
She was immediately contacted by someone who said they were Pitt's mother, Jane Etta, who told Anne her son "needed a woman just like her".
Somebody purporting to be Pitt got in touch the next day, which set off alarm bells for Anne. "But as someone who isn't very used to social media, I didn't really know what was happening to me," she said.
At one point, "Brad Pitt" said he tried to send her luxury gifts but that he was unable to pay customs on them as his bank accounts were frozen due to his divorce proceedings with actor Angelina Jolie, prompting Anne to transfer €9000 to the scammers.
"Like a fool, I paid... Every time I doubted him, he managed to dissipate my doubts," she said.
The requests for money ramped up when the fake Pitt told Anne he needed cash to pay for kidney cancer treatment, sending her multiple AI-generated photos of Brad Pitt in a hospital bed. "I looked those photos up on the internet but couldn't find them so I thought that meant he had taken those selfies just for me," she said.
Meanwhile, Anne and her husband divorced, and she was awarded €775,000 - all of which went to the scammers.
"I told myself I was maybe saving a man's life," Anne said, who is in cancer remission herself.
Anne's daughter, now 22, told TF1 she tried to "get her mother to see reason" for over a year but that her mother was too excited. "It hurt to see how naive she was being," she said.
When images showing the real Brad Pitt with his new girlfriend Ines de Ramon appeared in gossip magazines, arousing Anne's suspicions, the scammers sent her a fake news report in which an AI-generated anchor talked about Pitt's "exclusive relationship with one special individual... who goes by the name of Anne."
The video comforted Anne for a short time, but when the real Brad Pitt and Ines de Ramon made their relationship official in June 2024, Anne decided to end things.
After scammers tried to get more money out of her under the guise of "Special FBI Agent John Smith," Anne contacted the police. An investigation is now under way.
The TF1 programme said the events left Anne broke, and that she has tried to end her life three times.
"Why was I chosen to be hurt this way?," a tearful Anne said. "These people deserve hell. We need to find those scammers, I beg you - please help me find them."
But in the YouTube interview on Tuesday Anne hit back at TF1, saying it had left out details on her repeated doubts over whether she was talking to the real Brad Pitt, and added that anyone could've fallen for the scam if they were told "words that you never heard from your own husband."
Anne said she was now living with a friend: "My whole life is a small room with some boxes. That's all I have left."
While online users overwhelmingly mocked Anne, several took her side.
"I understand the comic effect but we're talking about a woman in her 50s who got conned by deepfakes and AI which your parents and grandparents would be incapable to spot," one popular post on X read.
An op-ed in newspaper Libération said Anne was a "whistleblower": "Life today is paved with cybertraps... and AI progress will only worsen this scenario."
Anna McAdams has always kept a close eye on her 15-year-old daughter Elliston Berry's life online. So it was hard to come to terms with what happened 15 months ago on the Monday morning after Homecoming in Aledo, Texas.
A classmate took a picture from Elliston's Instagram, ran it through an artificial intelligence program that appeared to remove her dress and then sent around the digitally altered image on Snapchat.
"She came into our bedroom crying, just going, 'Mom, you won't believe what just happened,'" McAdams said.
Last year, there were more than 21,000 deepfake pornographic videos online — up more than 460% over the year prior. The manipulated content is proliferating on the internet as websites make disturbing pitches — like one service that asks, "Have someone to undress?"
"I had PSAT testing and I had volleyball games," Elliston said. "And the last thing I need to focus and worry about is fake nudes of mine going around the school. Those images were up and floating around Snapchat for nine months."
In San Francisco, Chief Deputy City Attorney Yvonne Mere was starting to hear stories similar to Elliston's — which hit home.
"It could have easily been my daughter," Mere said.
The San Francisco City Attorney's office is now suing the owners of 16 websites that create "deepfake nudes," where artificial intelligence is used to turn non-explicit photos of adults and children into pornography.
"This case is not about tech. It's not about AI. It's sexual abuse," Mere said.
These 16 sites had 200 million visits in just the first six months of the year, according to the lawsuit.
City Attorney David Chiu says the 16 sites in the lawsuit are just the start.
"We're aware of at least 90 of these websites. So this is a large universe and it needs to be stopped," Chiu said.
Republican Texas Sen. Ted Cruz is co-sponsoring another angle of attack with Democratic Minnesota Sen. Amy Klobuchar. The Take It Down Act would require social media companies and websites to remove non-consensual, pornographic images created with AI.
"It puts a legal obligation on any tech platform — you must take it down and take it down immediately," Cruz said.
The bill passed the Senate this month and is now attached to a larger government funding bill awaiting a House vote.
In a statement, a spokesperson for Snap told CBS News: "We care deeply about the safety and well-being of our community. Sharing nude images, including of minors, whether real or AI-generated, is a clear violation of our Community Guidelines. We have efficient mechanisms for reporting this kind of content, which is why we're so disheartened to hear stories from families who felt that their concerns went unattended. We have a zero tolerance policy for such content and, as indicated in our latest transparency report, we act quickly to address it once reported."
Elliston says she's now focused on the present and is urging Congress to pass the bill.
"I can't go back and redo what he did, but instead, I can prevent this from happening to other people," Elliston said.
By Emily Price, PCMag.com, June 24, 2024
Just a few days after rapper 50 Cent’s website and social accounts were used by hackers to promote a fake cryptocurrency, it looks like something similar is happening to Elon Musk.
A YouTube Live stream today showed a video of Musk with an AI-generated version of his voice suggesting that users go to a website to deposit Ethereum, Dogecoin, or Bitcoin, Engadget reports. The clip promised viewers that depositing their cryptocurrency on the site would “automatically send back double the amount of the cryptocurrency you deposited.”
The stream ran for 5 hours, and at one point had over 30,000 concurrent viewers, bringing it to the top of YouTube’s Live Now recommendations. Both the video and the account associated with it have since been removed from YouTube.
It's not surprising that hackers chose to deepfake Musk to promote the site. Tweets from Musk have been known to have a significant impact on the crypto market, especially with meme coins such as Dogecoin, thanks to his dedicated following.
In 2020, Musk was one of several high-profile Twitter users—a group that also included Bill Gates, Barack Obama, and Joe Biden—who was briefly hacked to promote a Bitcoin scam.
For the second time in a matter of months, OpenAI has found itself explaining its text-to-audio tool, reminding everyone that it is not, and may never be, widely available.
"It's important that people around the world understand where this technology is headed, whether we ultimately deploy it widely ourselves or not," the company said in a statement posted to its website on Friday. "Which is why we want to explain how the model works, how we use it for research and education, and how we are implementing our safety measures around it.
Late last year, OpenAI shared its Voice Engine, which relies on text inputs and 15-second audio clips of human voices to "generate natural-sounding speech that closely resembles the original speaker," with a small group of users outside the company. The tool can create voices that sound convincingly human in several languages.
At the time, the company said it was choosing to preview the technology but not widely release it to "bolster societal resilience" against the threat of "ever more convincing generative models."
As part of those efforts, OpenAI said it was actively working on phasing out voice-based authentication for accessing bank accounts, exploring policies to protect the use of individuals' voices in AI, educating the public on the risks of AI, and accelerating the development of ways to track audiovisual content so users know whether they're interacting with real or synthetic content.
But despite such efforts, fear of the technology persists.
President Joe Biden's AI chief, Bruce Reed, once said that voice cloning is the one thing that keeps him up at night. And the Federal Trade Commission said in March that scammers were using AI to elevate their work, using voice cloning tools that make it harder to distinguish between AI-generated voices and human ones.
In its updated statement on Friday, OpenAI sought to assuage those worries.
"We continue to engage with US and international partners from across government, media, entertainment, education, civil society, and beyond to ensure we are incorporating their feedback as we build," the company said.
It also noted that once Voice Engine is equipped with its latest model, GPT-4o, it'll pose new threats. Internally, the company said it's "actively red-teaming GPT-4o to identify and address both known and unforeseen risks across various fields such as social psychology, bias and fairness, and misinformation."
The bigger question, of course, is what will happen when the technology is widely released. And it looks like OpenAI might be bracing itself, too.
OpenAI has raised tens of billions of dollars to develop AI technologies that are changing the world.
But there's one glaring problem: it's still struggling to understand how its tech actually works.
During last week's International Telecommunication Union AI for Good Global Summit in Geneva, Switzerland, OpenAI CEO Sam Altman was stumped after being asked how his company's large language models (LLMs) really function under the hood.
"We certainly have not solved interpretability," he said, as quoted by the Observer, essentially admitting the company has yet to figure out how to trace back its AI models' often bizarre and inaccurate outputs and the decisions they make to arrive at those answers.
When pushed during the event by The Atlantic CEO Nicholas Thompson, who asked if that shouldn't be an "argument to not keep releasing new, more powerful models," Altman was seemingly baffled, countering with a half-hearted reassurance that the AIs are "generally considered safe and robust."
Altman's unsatisfying answer highlights a real problem in the emerging AI space. Researchers have long struggled to explain the freewheeling "thinking" that goes on behind the scenes, with AI chatbots almost magically and effortlessly reacting to any query that's being thrown at them (lies and gaslighting aside).
But try as they might, tracing the output back to the original material the AI was trained on has proved extremely difficult. OpenAI, despite the company's own name and origin story, has also kept the data it trains its AIs on close to its chest.
A panel of 75 experts recently concluded in a landmark scientific report commissioned by the UK government that AI developers "understand little about how their systems operate" and that scientific knowledge is "very limited."
"Model explanation and interpretability techniques can improve researchers’ and developers’ understanding of how general-purpose AI systems operate, but this research is nascent," the report reads.
Other AI companies are trying to find new ways to "open the black box" by mapping the artificial neurons of their algorithms. For instance, OpenAI competitor Anthropic recently took a detailed look at the inner workings of one of its latest LLMs called Claude Sonnet as a first step.
"Anthropic has made a significant investment in interpretability research since the company's founding, because we believe that understanding models deeply will help us make them safer," reads a recent blog post.
"But the work has really just begun," the company admitted. "The features we found represent a small subset of all the concepts learned by the model during training, and finding a full set of features using our current techniques would be cost-prohibitive."
"Understanding the representations the model uses doesn't tell us how it uses them; even though we have the features, we still need to find the circuits they are involved in," Anthropic wrote. "And we need to show that the safety-relevant features we have begun to find can actually be used to improve safety."
AI interpretability is an especially pertinent topic, given the heated debate surrounding AI safety and the risks of having an artificial general intelligence go rogue, which to some experts represents an extinction-level danger for humanity.
Altman himself recently dissolved the company's entire so-called "Superalignment" team, which was dedicated to finding ways to "steer and control AI systems much smarter than us" — only to anoint himself as the leader of a replacement "safety and security committee."
Given the embattled CEO's latest comments, the company has a long way to go before it'd be able to rein in any superintelligent AI.
Of course, it's in Altman's best financial interest to keep reassuring investors that the company is dedicated to safety and security — despite having no clue how its core products actually work.
"It does seem to me that the more we can understand what’s happening in these models, the better," he said during last week's conference. "I think that can be part of this cohesive package to how we can make and verify safety claims."
Deepfakes are about to explode in number and sophistication, especially because new generative AI video, audio, and image tools make it easier than ever before to generate and manipulate content.
What’s interesting is that most VCs don’t seem to be paying much attention to the deepfake detection and anti-AI security space. More than $2.7 billion has been invested in consumer generative AI content tools, but only $500 million in deepfake detection (PitchBook). That’s surprising, given deepfakes can cost companies millions, and according to one study, fake news cost the global economy $78 billion in 2020.
Are investors right?
Maybe deepfake detection tools simply can’t keep up, so we should just make creators and publishers embed provenance data and call it a day. That’s what C2PA, a joint effort among Adobe, Arm, Intel, Microsoft and Truepic, aims to do with its new technical standard.
To dig more into this, I looked into how startups and incumbents are fighting deepfakes (market map below):
There are three major ways that players are addressing deepfakes:
Method #1: Detection tools use various techniques to determine whether an image or video has been manipulated or created by AI. Some of these companies, like BioID, Clarity, and Kroop, use AI models trained on real and fake images to spot the differences.
Others identify specific signs that images, videos, and audio have been manipulated. For example, Intel’s FakeCatcher analyzes patterns of blood flow to detect fake videos. DARPA’s Semantic Forensics program develops logic-based frameworks to find anomalies, like mismatched earrings. Startups working on this include Attestiv, DeepMedia, Duck Duck Goose, Illuminarty, Reality Defender, and Resemble AI.
ID verification tools are a subset of detection tools built to authenticate personal documents and user profiles. They often combine image analysis with liveness detection (i.e., when you’re asked to take a selfie or make a face). AuthenticID, Hyperverge, Idenfy, iProov, Jumio, and Sensity are some of the companies in this space.
Of course, detection-based approaches are inherently reactive, so they have to constantly keep up with evolving generative AI models. But many of these tools have 80%+ accuracy rates, compared to only about 60% for humans.
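To make the "train a model on real and fake images" idea concrete, here is a minimal sketch in Python. It is illustrative only, not any vendor's actual detector; the folder layout, network size, and hyperparameters are assumptions for demonstration.

```python
# Minimal, illustrative sketch (not any vendor's actual detector) of the
# "train a model on real vs. fake images" approach described above.
# The folder layout, network size, and hyperparameters are assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((128, 128)),
    transforms.ToTensor(),
])

# Hypothetical dataset: data/train/fake/*.jpg and data/train/real/*.jpg
# (ImageFolder assigns labels alphabetically, so fake=0 and real=1).
train_data = datasets.ImageFolder("data/train", transform=transform)
train_loader = DataLoader(train_data, batch_size=32, shuffle=True)

model = nn.Sequential(                    # tiny CNN; production models are far larger
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 32 * 32, 1),           # single logit: how "real" the image looks
)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

for epoch in range(5):
    for images, labels in train_loader:
        logits = model(images).squeeze(1)
        loss = criterion(logits, labels.float())
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```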
Method #2: Certification tools, on the other hand, proactively embed provenance data into image and video files, with a record permanently stored on a blockchain. Truepic allows enterprises to add, verify, and view C2PA content credentials, including at the point of capture on smartphone cameras. Similarly, CertifiedTrue allows users to capture, store, and certify photos for legal proceedings. This information is then recorded on a blockchain, which makes it permanent, public, and unalterable.
The upside is that we’re beginning to establish a standard for content authenticity; the downside is that these programs are opt-in. Authenticating all or even most of the content that exists and will be generated will be a major challenge, though some camera makers, like Canon, are working on embedding authentication at the point of capture.
However, with the proliferation of deepfakes, the paradigm is shifting from “real until proven fake” to “fake until proven real”. Authentication at the hardware level will likely become the only way to prove humanity, since publisher- or social media-level authentication only proves where content first appeared, not whether a human made it.
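As a rough illustration of the capture-and-certify flow described above, the sketch below hashes an image at capture time, signs a small manifest with a device key, and verifies the pair later. It is loosely inspired by the C2PA idea but is not the C2PA specification or any vendor's API; the device key, manifest fields, and the "anchor on a ledger" step are assumptions for demonstration.

```python
# Conceptual sketch of capture-time provenance, loosely inspired by the C2PA
# idea described above. This is NOT the C2PA specification or Truepic's API;
# the device key, manifest fields, and ledger step are illustrative assumptions.
import hashlib
import hmac
import json
import time

DEVICE_KEY = b"per-device-secret-provisioned-at-manufacture"  # assumption

def certify_capture(image_bytes: bytes, device_id: str) -> dict:
    """Create a signed provenance record at the point of capture."""
    manifest = {
        "device_id": device_id,
        "captured_at": int(time.time()),
        "content_sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    payload = json.dumps(manifest, sort_keys=True).encode()
    manifest["signature"] = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return manifest  # a real system would anchor this record on a public ledger

def verify_capture(image_bytes: bytes, manifest: dict) -> bool:
    """Check that an image still matches its signed provenance record."""
    claimed = dict(manifest)
    signature = claimed.pop("signature")
    payload = json.dumps(claimed, sort_keys=True).encode()
    expected = hmac.new(DEVICE_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(signature, expected)
            and hashlib.sha256(image_bytes).hexdigest() == claimed["content_sha256"])
```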
Method #3: Lastly, narrative tracking platforms examine how fraud and disinformation spread through the web, keeping corporations and governments informed of high-risk narratives. This is a bigger-picture approach to fighting deepfakes that tracks the spread of misinformation online and verifies content by examining it in context.
Players include startups like Blackbird.AI and Buster.AI, as well as public-private partnerships like the EU-funded project WeVerify. For example, large companies use Blackbird.AI’s Constellation Dashboard to track online narratives, which are given risk scores, so that they can mitigate misinformation.
There’s not a single tool or strategy that can completely protect against the impact of deepfakes, so individuals, enterprises, and governments will have to rely on a mix of solutions. There’s certainly room for entrants in the deepfake detection and anti-AI security space.
Here are some key opportunities for builders and investors:
There’s no magic formula for defending against deepfakes. But with deepfakes causing financial and reputational harm to people, organizations, and governments, deepfake detection is an area to watch.
Warren Buffett cautioned the tens of thousands of shareholders who packed an arena for his annual meeting that artificial intelligence scams could become "the growth industry of all time."
Doubling down on his cautionary words from last year, Buffett told the throngs he recently came face to face with the downside of AI.
And it looked and sounded just like him. Someone made a fake video of Buffett, apparently convincing enough that the so-called Oracle of Omaha himself said he could imagine it tricking him into sending money overseas.
The billionaire investing guru predicted scammers will seize on the technology, and may do more harm with it than good. "As someone who doesn't understand a damn thing about it, it has enormous potential for good and enormous potential for harm and I just don't know how that plays out," he said.
The day started early Saturday with Berkshire Hathaway announcing a steep drop in earnings as the paper value of its investments plummeted and it pared its Apple holdings.
The company reported a $12.7 billion profit, or $8,825 per Class A share, in the first quarter, down 64% from $35.5 billion, or $24,377 per A share, a year ago. But Buffett encourages investors to pay more attention to the conglomerate's operating earnings from the companies it actually owns. Those jumped 39% to $11.222 billion, or $7,796.47 per Class A share, led by insurance companies' performance.
None of that got in the way of the fun.
Throngs flooded the arena to buy up Squishmallows of Buffett and former Vice Chairman Charlie Munger, who died last fall. The event attracts investors from all over the world and is unlike any other company meeting. Those attending for the first time are driven by an urgency to get here while the 93-year-old Buffett is still alive.
"This is one of the best events in the world to learn about investing. To learn from the gods of the industry," said Akshay Bhansali, who spent the better part of two days traveling from India to Omaha.
Devotees come from all over the world to vacuum up tidbits of wisdom from Buffett, who famously dubbed the meeting "Woodstock for Capitalists." But a key ingredient was missing this year: It was the first meeting since Munger died. The meeting opened with a video tribute highlighting some of his best known quotes, including classic lines like "If people weren't so often wrong, we wouldn't be so rich." The video also featured skits the investors made with Hollywood stars over the years, including a "Desperate Housewives" spoof where one of the women introduced Munger as her boyfriend and another in which actress Jamie Lee Curtis swooned over him.
As the video ended, the arena erupted in a prolonged standing ovation honoring Munger, whom Buffett called "the architect of Berkshire Hathaway." Buffett said Munger remained curious about the world up until the end of his life at 99, hosting dinner parties, meeting with people and holding regular Zoom calls.
"Like his hero Ben Franklin, Charlie wanted to understand everything," Buffett said.
For decades, Munger and Buffett functioned as a classic comedy duo, with Buffett offering lengthy setups to Munger's witty one-liners. He once referred to unproven internet companies as "turds."
Together, the pair transformed Berkshire from a floundering textile mill into a massive conglomerate made up of a variety of interests, from insurance companies such as Geico to BNSF railroad to several major utilities and an assortment of other companies.
Munger often summed up the key to Berkshire's success as "trying to be consistently not stupid, instead of trying to be very intelligent." He and Buffett also were known for sticking to businesses they understood well.
"Warren always did at least 80% of the talking. But Charlie was a great foil," said Stansberry Research analyst Whitney Tilson, who was looking forward to his 27th consecutive meeting.
Next-gen leaders
Munger's absence, however, created space for shareholders to get to know better the two executives who directly oversee Berkshire's companies: Ajit Jain, who manages the insurance units, and Greg Abel, who handles everything else and has been named Buffett's successor. The two shared the main stage with Buffett this year.
The first time Buffett kicked a question to Abel, he mistakenly said "Charlie?" Abel shrugged off the mistake and dove into the challenges utilities face from the increased risk of wildfires and some regulators' reluctance to let them collect a reasonable profit.
Morningstar analyst Greggory Warren said he believes Abel spoke up more Saturday and let shareholders see some of the brilliance Berkshire executives talk about.
Abel offered a twist on Munger's classic "I have nothing to add" line, often starting his answers Saturday with "The only thing I would add."
"Greg's a rock star," said Chris Bloomstran, president of Semper Augustus Investments Group. "The bench is deep. He won't have the same humor at the meeting. But I think we all come here to get a reminder every year to be rational."
A look to the future
Buffett has made clear that Abel will be Berkshire's next CEO, but he said Saturday that he had changed his opinion on how the company's investment portfolio should be handled. He had previously said it would fall to two investment managers who handle small chunks of the portfolio now. On Saturday, Buffett endorsed Abel for the gig, as well as overseeing the operating businesses and any acquisitions.
"He understands businesses extremely well. and if you understand businesses, you understand common stocks," Buffett said. Ultimately, it will be up to the board to decide, but the billionaire said he might come back and haunt them if they try to do it differently.
Overall, Buffett said Berkshire's system of having all the noninsurance companies report to Abel and the insurers report to Jain is working well. He himself hardly gets any calls from managers anymore because they get more guidance from Abel and Jain. "This place would work extremely well the next day if something happened to me," Buffett said.
Nevertheless, the best applause line of the day was Buffett's closing remark: "I not only hope that you come next year but I hope that I come next year."
A high school athletic director was arrested after an AI-generated voice recording of his school's principal making racist comments went viral.
Baltimore County Police arrested the former athletic director of Pikesville High School on Thursday, alleging he used an AI voice clone to impersonate the school’s principal, leading the public to believe Principal Eric Eiswert had made racist and antisemitic comments, according to The Baltimore Banner.
Dazhon Darien was stopped at a Baltimore airport on Thursday morning attempting to board a flight to Houston with a gun, according to the Banner. Investigators determined Darien faked Eiswert’s voice using an AI cloning tool. The AI voice recording, which was circulated widely on social media, made disparaging comments about Black students and the Jewish community.
“Based on an extensive investigation, detectives now have conclusive evidence the recording was not authentic,” the Baltimore County Police said in a press release. “As part of their investigation, detectives requested a forensic analyst contracted with the FBI to analyze the recording. The results from that analysis indicated the recording contained traces of AI-generated content.”
The deepfake reportedly led to public outrage, causing Principal Eiswert to receive a wave of hateful messages and forcing his temporary removal from the school. The school's front desk was flooded with calls from concerned parents. The Pikesville school district ultimately arranged for a police presence at the school and Eiswert's house to restore a sense of safety.
Baltimore Police officials say the former athletic director made the AI recording to retaliate against the school’s principal. A month before the recording went viral, The Banner reports that Eiswert launched an investigation into Darien for potential theft of school funds. Darien authorized a $1,916 payment to the school’s JV basketball coach, who was also his roommate, bypassing proper procedures. Darien submitted his resignation earlier in April, according to school documents.
Police say Darien was the first of three teachers to receive the audio clip the night before it went viral. The Banner reports another teacher who received the recording sent it to students, media outlets, and the NAACP. Police wrote in charging documents that Darien used the school network to search for OpenAI tools and use large language models on multiple occasions. However, a lot of people use these AI tools these days. It’s unclear at this time how investigators were able to pinpoint Darien as the creator of this voice recording.
The creation of AI-generated audio deepfakes is an increasingly large problem facing the tech world. The Federal Communications Commission took steps in February to outlaw deepfake robocalls after a Joe Biden deepfake misled New Hampshire voters.
In this case, AI experts were able to identify that the alleged audio of the Baltimore principal was fake. However, this came two months after the audio went viral, and the damage may have already been done. AI deepfakes need to be stopped early to minimize harm, but that's easier said than done.
Deepfakes have long raised concern in social media, elections and the public sector. But now with technology advances making artificial intelligence-enabled voice and images more lifelike than ever, bad actors armed with deepfakes are coming for the enterprise.
“There were always fraudulent calls coming in. But the ability for these [AI] models now to imitate the actual voice patterns of an individual giving instructions to somebody with the phone to do something—these sorts of risks are brand new,” said Bill Cassidy, chief information officer at New York Life.
Banks and financial services providers are among the first companies to be targeted. “This space is just moving very fast,” said Kyle Kappel, U.S. Leader for Cyber at KPMG.
How fast was demonstrated earlier this month when OpenAI showcased technology that can recreate a human voice from a 15-second clip. OpenAI said it would not release the technology publicly until it knows more about potential risks for misuse.
Among the concerns are that bad actors could use AI-generated audio to game voice-authentication software used by financial services companies to verify customers and grant them access to their accounts. Chase Bank was fooled recently by an AI-generated voice during an experiment. The bank said that to complete transactions and other financial requests, customers must provide additional information.
Deepfake incidents in the fintech sector increased 700% in 2023 from the previous year, according to a recent report by identity verification platform Sumsub.
Companies say they are working to put more guardrails in place to prepare for an incoming wave of generative AI-fueled attackers. For example, Cassidy said he is working with New York Life’s venture-capital group to identify startups and emerging technologies designed to combat deepfakes. “In many cases, the best defense of this generative AI threat is some form of generative AI on the other side,” he said.
Bad actors could also use AI to generate photos of fake driver’s licenses to set up online accounts, so Alex Carriles, chief digital officer of Simmons Bank, said he is changing some identity verification protocols. Previously, one step in setting up an account online with the bank involved customers uploading photos of driver’s licenses. Now that images of driver’s licenses can be easily generated with AI, the bank is working with security vendor IDScan.net to improve the process.
Rather than uploading a pre-existing picture, Carriles said, customers now must photograph their driver’s licenses through the bank’s app and then take selfies. To avoid a situation where they hold cameras up to a screen with an AI-generated visual of someone else’s face, the app instructs users to look left, right, up or down, as a generic AI deepfake won’t necessarily be prepared to do the same.
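The randomized head-movement check described above amounts to a simple challenge-response protocol, sketched below. This is an illustrative outline, not Simmons Bank's or IDScan.net's actual implementation; the pose labels and the verification step are assumptions, and a real system would read head poses from the camera feed with a face-tracking model.

```python
# Illustrative sketch of the randomized liveness challenge described above,
# not Simmons Bank's or IDScan.net's actual implementation. The pose labels
# and verification step are assumptions; a real system would read head poses
# from the camera feed with a face-tracking model.
import random

DIRECTIONS = ["left", "right", "up", "down"]

def new_challenge(length: int = 3) -> list:
    """Pick a random sequence of head movements the user must perform live."""
    return [random.choice(DIRECTIONS) for _ in range(length)]

def passes_liveness(challenge: list, observed_poses: list) -> bool:
    """A pre-rendered deepfake is unlikely to match a sequence chosen at request time."""
    return observed_poses == challenge

# Example: new_challenge() might return ["up", "left", "down"]; the app then
# verifies that the live camera feed produced those head poses in that order.
```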
It can be difficult to balance giving users a good experience against making the process so seamless that attackers can coast through, Carriles said.
Not all banks are ringing alarm bells. KeyBank CIO Amy Brady said the bank was a technology laggard when it came to adopting voice authentication software. Now, Brady said, she considers that lucky given the risk of deepfakes.
Brady said she is no longer looking to implement voice authentication software until there are better tools for unmasking impersonations. “Sometimes being a laggard pays off,” she said.