Is this the year that AI breaks into our bank accounts?

Ask an expert

Biometric security checks – from voice recognition, to face and fingerprint scans – are under threat from artificial intelligence, but what can we do about it?

You probably use biometric security checks on a daily basis, even if they’re so embedded in our technological landscape that you’re hardly aware you’re doing it. We’re talking about the fingerprint scan you use to unlock your smartphone, the facial recognition that gets you into your bank account and through airport security, or the voice authentication that’s used to confirm your identity over the phone. In 2023, these are the invisible locks that secure our phones, our social media feeds, our bank accounts, and the rest of our data – gone are the days of tapping in a PIN or scrawling your passwords on a scrap of paper. But what if those locks started to break, all at once?

Thanks to developments in artificial intelligence, this is an immediate concern, according to Tristan Harris and Aza Raskin, co-founders of the Center for Humane Technology. In a recent presentation on the “AI dilemma” delivered to leading technologists and decision-makers, they stated: “This is the year that all content-based verification breaks... it just does not work.” And, in even worse news: “None of our institutions [have] thought about it. They’re not able to stand up to it.”

This is a scary prospect, of course, and amid widespread panic about the exponential progress of AI (which has even seen figures like Elon Musk call on developers to slow down) it’s tempting to dismiss it as mere fearmongering. Surely governments and major institutions like banks would have spotted such a massive problem looming on the horizon, right? Surely they would have done something to protect our money and our precious data? Then again, watching politicians grill tech leaders like Mark Zuckerberg or TikTok CEO Shou Zi Chew is notoriously cringeworthy, exposing the ruling class as out-of-touch and unwilling to understand new technologies. Is it possible that all of our institutions are run by the same kinds of people, boomers leading us blindly into a future beyond their comprehension?

“Absolutely,” says Alex Polyakov, the founder and CEO of Adversa AI, a company dedicated to researching trustworthy AI. He also agrees with the Center for Humane Technology’s claim that 2023 will be a pivotal year for AI disruption, adding: “Content-based verification might be one of the first victims of real cyberattacks, because it grants direct access to critical information.”

In fact, we’ve already seen some alarming examples of AI-based cyberattacks against biometric security systems in the real world. By animating faces lifted from ID cards or social media profiles, deepfake videos have been used to pass the “liveness checks” employed by everything from banks to dating apps and crypto companies, as well as to trick government systems that rely on facial recognition technology. Deepfake audio has been used to hack bank accounts and authorise millions of dollars of fraudulent payments. Even biometric identifiers like fingerprints and eye patterns aren’t safe. “AI algorithms can generate fake fingerprints and iris patterns that might deceive security checks,” says Polyakov. “Such attacks are currently being demonstrated in labs, but we are very close to seeing them in real scenarios, or they may already be happening but remain undetected.”

If it’s possible to sneak by a bank’s security system with the help of AI, then imagine how easy it is to fool the average grandma, who was already on the fence about that email from a Nigerian prince. Unfortunately, more personal examples of AI scams are also well-documented, making use of technology that can simulate someone’s voice based on just three seconds of audio.

On April 10, an Arizona woman received a phone call demanding a ransom for her daughter, Payton Bock, whose voice could be heard on the other end of the line alongside the alleged kidnapper, even though she was actually safe in her bedroom. “It was completely her voice,” the mother told news outlet WKYT. “It was her inflection. It was the way she would have cried.” Bock herself has explained the scam in a TikTok, saying: “This guy had my voice. I was bawling, saying, ‘Mom, I don’t wanna die’.”

The comments on Bock’s TikTok suggest that voice cloning scams are much more widespread, with followers saying that their own family members have had similar experiences. Back in March, the Washington Post also reported that such scams are on the rise, including one example in which a man’s grandmother was convinced to take thousands of dollars in cash out of the bank to bail him out of jail.

“AI imitates human voices using deep learning techniques, specifically by training on large datasets of human speech,” explains Polyakov. “By learning the nuances of a person’s voice, AI can generate realistic-sounding voice samples.” This technology is useful, of course – it powers accessibility tools, voice assistants, and various forms of entertainment – but undeniably dangerous in the wrong hands. It’s also very difficult to detect. “According to our internal tests at Adversa AI Security Research Lab, only a handful of research techniques can distinguish these imitations,” he adds. “We might soon find ourselves in a situation where it is virtually impossible to tell the difference, unless AI developers inject specific watermarks.” Even then, tech-savvy criminals are likely to find exploits, building their own AI voice cloning tools with increasingly competent technology.
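
To give a sense of how low the barrier to entry already is, here is a minimal sketch of voice cloning using the open-source Coqui TTS library and its XTTS v2 model (our illustration, not a tool named by Polyakov); the file paths are placeholders, and the reference clip only needs to be a few seconds long:

```python
# Minimal voice-cloning sketch using Coqui TTS (pip install TTS).
# "reference_clip.wav" is a hypothetical short sample of the target's voice.
from TTS.api import TTS

tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Hi Mum, it's me. I need you to call me back right away.",
    speaker_wav="reference_clip.wav",  # a few seconds of the target's voice
    language="en",
    file_path="cloned_output.wav",     # synthetic speech in that voice
)
```

That a working clone takes a dozen lines of off-the-shelf code is exactly why detection, rather than prevention, is becoming the battleground.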

Needless to say, this is a worrying prospect, and the lacklustre response from regulators isn’t particularly reassuring. So how close are we to the point where it “all breaks”, as described in that Center for Humane Technology presentation? According to Polyakov: “It’s already falling apart.” As evidence, he cites Adversa’s “red team” experiments (which challenge systems and policies by adopting the role of the antagonist). These experiments have seen the company break through “very sophisticated facial recognition algorithms” using tech that goes above and beyond mere deepfakes. It has also developed a pair of glasses that can fool IRL facial recognition systems into thinking you’re Elon Musk, which is obviously quite funny, but also raises some important questions about future criminals concealing their identities.
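
Attacks like these typically rest on what researchers call adversarial examples: tiny, deliberately crafted changes to an image that flip a model’s decision while looking unchanged to a human. As a rough illustration of the principle (a toy classifier and a random image, not Adversa’s actual tooling), here is the classic Fast Gradient Sign Method in a few lines of PyTorch:

```python
# Fast Gradient Sign Method (FGSM) against a toy classifier.
# Illustrative only: a stand-in model and a random "face", not a real system.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 112 * 112, 2))  # toy model
model.eval()

image = torch.rand(1, 3, 112, 112, requires_grad=True)  # stand-in face image
label = torch.tensor([0])                                # its true identity

loss = nn.functional.cross_entropy(model(image), label)
loss.backward()

# Nudge every pixel slightly in the direction that most increases the loss.
epsilon = 0.03  # small enough that a human would see no difference
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()

print(model(adversarial).argmax(dim=1))  # may no longer match the true label
```

The adversarial glasses work broadly the same way, printing a carefully optimised pattern into the physical world so the camera picks it up on every frame.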

“The bad guys are usually ahead of the good guys” – Alex Polyakov

At this point you might be wondering: what can we do to prevent AI from cracking open the virtual safe that contains all of our private information? Or how can we stop it being used to scam our closest friends and family? “We can’t trust everything we see, hear, and read – that’s the unfortunate truth,” says Polyakov, recommending that any suspicious messages sent to our personal devices should be double-checked on another platform or communication channel. “For instance, if it was a phone call, check on WhatsApp; if it was an Instagram message, call their direct number, and so on.”

The deception of larger security systems is a bit more complex, of course, and Polyakov says that in AI – as with all cybersecurity issues – “the bad guys are usually ahead of the good guys”. There is hope, however. “Traditional systems always had a trade-off between usability and security,” he adds. “If we build our new systems based on AI, and if [they’re] trained properly, more secure systems can be more robust and accurate.” Basically, we’ll need to train up increasingly sophisticated security systems – a level of progress that can only be achieved with AI – to have a hope of keeping up with the AI tools used to attack them.

More specifically, Polyakov calls for more AI “red teaming” to ensure that existing AI systems can withstand attacks from a range of bad actors, as well as investment in new AI systems to keep up with fast-paced developments. On an individual level, it’s important to raise public awareness of AI security threats and encourage vigilance about AI-generated content (the same way we’ve learned to spot a dodgy link or a spam email). Strong legal frameworks to deter the misuse of AI would also be nice, but if we’ve learned to expect anything from our politicians, it’s that these kinds of policies will probably come half a decade too late.

FAQs

How will AI disrupt banking?

AI algorithms can help financial institutions combat fraud and other cybersecurity threats by analyzing customer data, including transaction records, to establish behavioral baselines.
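
As a rough illustration of what a “behavioral baseline” means in practice, here is a minimal sketch that flags transactions deviating sharply from a customer’s own history (the data, threshold, and method are simplified for illustration; production systems use far richer features and models):

```python
# Toy behavioral-baseline check: flag transactions far outside a customer's
# usual spending pattern. Illustrative only; real systems are far richer.
import numpy as np

history = np.array([42.5, 18.0, 55.3, 23.9, 31.1, 47.8])  # past amounts
mean, std = history.mean(), history.std()

def is_suspicious(amount: float, z_threshold: float = 3.0) -> bool:
    """Flag amounts more than z_threshold standard deviations from the mean."""
    return abs(amount - mean) > z_threshold * std

print(is_suspicious(39.0))    # False: consistent with past behaviour
print(is_suspicious(5000.0))  # True: far outside the baseline
```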

Which bank is using AI?

Per the report, both Capital One and RBC have shown consistent strength across AI patents, research and partnerships throughout 2023. Notably, Capital One has supplanted JPMorgan Chase in AI development and data engineering talent metrics. Other high-performing banks include Wells Fargo, UBS and CommBank.

What are the disadvantages of AI in banking?

Disadvantages of Artificial Intelligence in Commercial Banking
  • Expensive: Artificial intelligence is a very expensive technology to implement! ...
  • High Cost of Error: Not only is artificial intelligence very expensive to implement, but the cost of errors made by it can also be very large.

Is voice ID safe with AI?

Voice recognition systems in banking rely on a person saying something aloud, such as a unique catchphrase or password. This is vulnerable to exploitation because synthetic AI-generated voice technology has evolved to such an extent that it is indistinguishable from real voices.

Is AI a threat to finance?

However, hallucination, algorithmic bias and vulnerability to data quality issues present risks to the accuracy of AI predictions. If financial entities base their decisions on faulty AI predictions which are not checked, this could lead to outcomes that may result in economic losses or even disorderly market moves.

Will finance be replaced by AI?

AI still has a long way to go before taking all of our jobs, but it is up to us as finance professionals to learn about it, evolve our thoughts, and upskill to prevent that from happening. Financial markets and business conditions can change rapidly, requiring quick adjustments and strategic decision-making.

Which bank has the best AI?

Preliminary research by Evident shows that the banks ranking highest on its AI Index — JPMorgan, Capital One, Royal Bank of Canada, Wells Fargo and UBS — saw an almost 34% year-over-year increase in their share prices.

Does Bank of America use AI?

AI-powered chatbots and digital assistants are nothing new to Bank of America's technology suite. The company launched virtual customer service tool Erica in 2018, added its capabilities to the CashPro commercial banking platform last year and assisted 18 million unique users in Q4, according to Moynihan.

Does Chase use AI?

It employs more than 2,000 machine learning and AI experts and data scientists worldwide, with more than 400 AI use cases already deployed in areas such as marketing, fraud, and risk, as detailed in Dimon's latest letter to shareholders.

Will AI affect investment banking?

AI will change how businesses operate and can transform investment banking, but it won't replace bankers soon. AI may simplify tasks and improve decision-making, but investment banking relies on human perception and connections.

What is the risk of technology in banking?

Banks face technology risk from the use of a computer network system for the conduct of business and the creation of electronic channels for providing off-site services to customers. The vulnerability of the security system in preventing unauthorized use of computers is a significant source of technology risk.

Can someone tell if you use AI?

Can AI content be detected? Yes, Originality.ai, Sapling, and Copyleaks are AI content detectors that identify AI-generated content. Originality.ai is praised for its accuracy in verifying authenticity.

How do I get rid of AI voice?

How to delete Voice.AI
  1. Open the “Start” menu and go to the “Control panel.”
  2. Select “Programs” > “Programs and features.”
  3. Locate Voice.AI, right-click, and select “Uninstall.” Then, follow the on-screen instructions.

Is chatting with AI safe?

Chatbots can be hugely valuable and are typically very safe, whether you're using them online or in your home via a device such as the Amazon Echo Dot. A few telltale signs may indicate a scammy chatbot is targeting you.

How will generative AI affect banking?

Adding gen AI to existing processes helps banks convert customer calls to data, search knowledge repositories, integrate with pricing engines for quotations, apply prompt engineering, and provide real-time audio responses to customers.

How can AI disrupt the economy?

Roughly half the exposed jobs may benefit from AI integration, enhancing productivity. For the other half, AI applications may execute key tasks currently performed by humans, which could lower labor demand, leading to lower wages and reduced hiring. In the most extreme cases, some of these jobs may disappear.

How does AI affect banks' risk management approach?

By generating and improving code to detect suspicious activity and analyze transactions, the tech can improve transaction monitoring. On credit risk, gen AI can help accelerate banks' end-to-end credit process by summarizing customer information (for example, transactions with other banks) to inform credit decisions.
