![Is this the year that AI breaks into our bank accounts?](https://i0.wp.com/images-prod.dazeddigital.com/900/azure/dazed-prod/1340/3/1343494.jpg)
Ask an expert
Biometric security checks – from voice recognition, to face and fingerprint scans – are under threat from artificial intelligence, but what can we do about it?
Text: Thom Waite
You probably use biometric security checks on a daily basis, even if they’re so embedded in our technological landscape that you’re hardly aware you’re doing it. We’re talking about the fingerprint scan you use to unlock your smartphone, the facial recognition that gets you into your bank account and through airport security, or the voice authentication that confirms your identity over the phone. In 2023, these are the invisible locks that secure our phones, our social media feeds, our bank accounts, and the rest of our data – gone are the days of tapping in a PIN or scrawling all of your passwords on a scrap of paper. But what if those locks started to break, all at once?
Thanks to developments in artificial intelligence, this is an immediate concern, according to Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology. In a recent presentation on the “AI dilemma” delivered to leading technologists and decision-makers, they stated: “This is the year that all content-based verification breaks... it just does not work.” And, in even worse news: “None of our institutions [have] thought about it. They’re not able to stand up to it.”
This is a scary prospect, of course, and amid widespread panic about the exponential progress of AI (which has even seen figures like Elon Musk call on developers to slow down) it’s tempting to dismiss it as mere fearmongering. Surely governments and major institutions like banks would have spotted such a massive problem looming on the horizon, right? Surely they would have done something to protect our money and our precious data? Then again, watching politicians grill tech leaders like Mark Zuckerberg or TikTok CEO Shou Zi Chew is notoriously cringeworthy, exposing the ruling class as out-of-touch and unwilling to understand new technologies. Is it possible that all of our institutions are run by the same kinds of people, boomers leading us blindly into a future beyond their comprehension?
“Absolutely,” says Alex Polyakov, the founder and CEO of Adversa AI, a company dedicated to researching trustworthy AI. He also agrees with the Center for Humane Technology’s claim that 2023 will be a pivotal year for AI disruption, adding: “Content-based verification might be one of the first victims of real cyberattacks, because it grants direct access to critical information.”
In fact, we’ve already seen some alarming examples of AI-based cyberattacks against biometric security systems in the real world. By animating faces lifted from ID cards or social media profiles, deepfake videos have been used to pass the “liveness checks” employed by banks, dating apps and crypto companies, and to trick government systems that rely on facial recognition technology. Deepfake audio has been used to hack bank accounts and authorise millions of dollars of fraudulent payments. Even biometric identifiers like fingerprints and eye patterns aren’t safe. “AI algorithms can generate fake fingerprints and iris patterns that might deceive security checks,” says Polyakov. “Such attacks are currently being demonstrated in labs, but we are very close to seeing them in real scenarios, or they may already be happening but remain undetected.”
If it’s possible to sneak by a bank’s security system with the help of AI, then imagine how easy it is to fool the average grandma, who was already on the fence about that email from a Nigerian prince. Unfortunately, more personal examples of AI scams are also well-documented, making use of technology that can simulate someone’s voice based on just three seconds of audio.
On April 10, an Arizona woman received a phone call demanding a ransom for her daughter, Payton Bock, whose voice could be heard by the alleged kidnapper on the other end of the line, even though she was actually safe in her bedroom. “It was completely her voice,” the mother told news outlet WKYT. “It was her inflection. It was the way she would have cried.” Bock herself has explained the scam in a TikTok, saying: “This guy had my voice. I was bawling, saying, ‘Mom, I don’t wanna die’.”
The comments on Bock’s TikTok suggest that voice cloning scams are much more widespread, with followers saying that their own family members have had similar experiences. Back in March, the Washington Post also reported that such scams are on the rise, including an example in which a man’s grandmother was convinced to take thousands of dollars of cash out of the bank to bail him out of jail.
“AI imitates human voices using deep learning techniques, specifically by training on large datasets of human speech,” explains Polyakov. “By learning the nuances of a person’s voice, AI can generate realistic-sounding voice samples.” This technology is useful, of course – it powers accessibility tools, voice assistants, and various forms of entertainment – but undeniably dangerous in the wrong hands. It’s also very difficult to detect. “According to our internal tests at Adversa AI Security Research Lab, only a handful of research techniques can distinguish these imitations,” he adds. “We might soon find ourselves in a situation where it is virtually impossible to tell the difference, unless AI developers inject specific watermarks.” Even then, tech-savvy criminals are likely to find exploits, building their own AI voice cloning tools with increasingly competent technology.
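To make Polyakov’s watermarking idea concrete, here is a deliberately simplified sketch: a faint, fixed-frequency carrier tone is mixed into generated audio, and a detector checks whether the spectrum shows an outsized peak at that frequency. Every detail here – the 7 kHz carrier, the amplitude, the detection threshold – is a made-up assumption for illustration; real audio watermarking schemes used by AI developers are far more sophisticated and harder to strip out.

```python
import numpy as np

SR = 16_000       # sample rate in Hz
WM_FREQ = 7_000   # hypothetical watermark carrier frequency (Hz)
WM_AMP = 0.01     # faint relative to typical speech amplitudes

def embed_watermark(audio: np.ndarray) -> np.ndarray:
    """Mix a low-amplitude sinusoidal carrier at WM_FREQ into the signal."""
    t = np.arange(len(audio)) / SR
    return audio + WM_AMP * np.sin(2 * np.pi * WM_FREQ * t)

def detect_watermark(audio: np.ndarray, threshold: float = 3.0) -> bool:
    """Flag audio whose spectrum has an outsized peak at the carrier bin."""
    spectrum = np.abs(np.fft.rfft(audio))
    freqs = np.fft.rfftfreq(len(audio), d=1 / SR)
    bin_idx = np.argmin(np.abs(freqs - WM_FREQ))
    # Compare the carrier bin against the median spectral magnitude.
    return bool(spectrum[bin_idx] > threshold * np.median(spectrum))

# Toy stand-in for a generated voice clip: one second of band-limited noise.
rng = np.random.default_rng(0)
speech = rng.normal(0, 0.1, SR)
marked = embed_watermark(speech)
```

The obvious weakness – and the one Polyakov hints at – is that anyone running their own cloning model simply never calls `embed_watermark`, which is why watermarks only help against tools whose developers cooperate.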
Needless to say, this is a worrying prospect, and the lacklustre response from regulators isn’t particularly reassuring. So how close are we to the point where it “all breaks”, as described in that Center for Humane Technology presentation? According to Polyakov: “It’s already falling apart.” As evidence, he cites Adversa’s “red team” experiments (which challenge systems and policies by adopting the role of the antagonist). These experiments have seen the company break through “very sophisticated facial recognition algorithms” using tech that goes above and beyond mere deepfakes. It has also developed a pair of glasses that can fool any IRL facial recognition system into thinking you’re Elon Musk, which is obviously quite funny, but also raises some important questions about future criminals concealing their identities.
“The bad guys are usually ahead of the good guys” – Alex Polyakov
At this point you might be wondering: what can we do to prevent AI from cracking open the virtual safe that contains all of our private information? Or how can we stop it being used to scam our closest friends and family? “We can’t trust everything we see, hear, and read – that’s the unfortunate truth,” says Polyakov, recommending that any suspicious messages sent to our personal devices should be double-checked on another platform or communication channel. “For instance, if it was a phone call, check on WhatsApp; if it was an Instagram message, call their direct number, and so on.”
The deception of larger security systems is a bit more complex, of course, and Polyakov says that in AI – as with all cybersecurity issues – “the bad guys are usually ahead of the good guys”. There is hope, however. “Traditional systems always had a trade-off between usability and security,” he adds. “If we build our new systems based on AI, and if [they’re] trained properly, more secure systems can be more robust and accurate.” Basically, we’ll need to train up increasingly sophisticated security systems – a level of progress that can only be achieved with AI – to have a hope of keeping up with the AI tools used to attack them.
More specifically, Polyakov calls for more AI “red teaming” to ensure that existing AI systems can withstand attacks from a range of bad actors, as well as investment in new AI systems to keep up with fast-paced developments. On an individual level, it’s important to raise public awareness of AI security threats and encourage vigilance about AI-generated content (the same way we’ve learned to spot a dodgy link or a spam email). Strong legal frameworks to dissuade the misuse of AI would also be nice, but if we’ve learned to expect anything from our politicians, it’s that these kinds of policies will probably come half a decade too late.