How to keep up with today’s smartest identity thieves

OSTN Staff

  • Cybercriminals represent some of today’s most prolific tech innovators.
  • Deep fakes and AI audio manipulation are just some of the tactics being used.
  • New and layered identity verification tactics are required to preserve trust and safety.

Meet today’s identity scammers. They aren’t like yesterday’s at all. When Frank Abagnale, the real-life protagonist of “Catch Me If You Can,” forged bank checks, the internet was in its infancy. Today’s ID thieves commit push-button attacks online against victims half a world away.

These tech-savvy cybercriminals always have new tricks in play, and experts at identity verification company AU10TIX have seen them all. “Just as technology has revolutionized our lives for good, it has also advanced the capabilities of bad actors,” Gaby Kozakov, chief technology officer, said. “There is no shortage of unique techniques, and fraudsters never cease to innovate and experiment.” 

The move to online services sparked a flurry of innovation among fraudsters. They replaced physical document forgeries with synthetic digital ones, cutting and pasting a mixture of real and fake identity information into digital images purporting to be photos of physical documents.

When companies noticed this problem, criminals evolved again. They realized their synthetic ID documents were too perfect. A real photograph of a physical ID might show wear and tear, blur from camera shake, or glare from poor lighting, and modern detection algorithms can spot the absence of those imperfections and sound the alarm. “So they applied filters to make the assembled identity look less perfect,” Kozakov said.
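To make that concrete, here is a minimal, hypothetical sketch of one such check in Python. It treats the variance of the Laplacian as a sharpness proxy and the residual after a light blur as a noise estimate, then flags images that look unnaturally sharp and clean. The looks_too_clean helper and its thresholds are illustrative assumptions, not AU10TIX’s actual algorithm.

import cv2  # pip install opencv-python
import numpy as np

def looks_too_clean(image_path: str,
                    sharpness_ceiling: float = 2000.0,
                    noise_floor: float = 1.5) -> bool:
    """Flag ID photos that lack the blur and sensor noise a genuine phone
    photo of a physical document usually has. Thresholds are placeholders."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise ValueError(f"Could not read image: {image_path}")

    # Variance of the Laplacian is a standard sharpness proxy; synthetic
    # cut-and-paste images tend to score unusually high.
    sharpness = cv2.Laplacian(gray, cv2.CV_64F).var()

    # Estimate high-frequency noise as the residual after a light blur;
    # real camera captures are rarely almost noise-free.
    residual = gray.astype(np.float64) - cv2.GaussianBlur(gray, (3, 3), 0)
    noise = residual.std()

    return sharpness > sharpness_ceiling and noise < noise_floor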

Criminals also use other technologies to gain an edge over defenders. Artificial intelligence makes identity fraud more subtle and harder to spot. 

For example, there are deep fakes: photo-realistic pictures of people who never existed, generated with neural networks. Spies are already using these to create fake online profiles. Others use AI-generated audio of real people’s voices in manipulative phone calls, some designed to trick victims into divulging personally identifiable information (PII), others to impersonate high-value targets who can unlock millions of dollars.

Automation is accelerating everything 

Legitimate businesses automate tasks to improve efficiency. Criminal groups are following suit, using software automation techniques, from machine learning to bots, in an attempt to generate illicit revenue and harvest information. This is helping them turn fake IDs into a volume industry.

“We’re monitoring professional-looking websites where you can create and purchase your own synthetic identity card,” Doron Mustafi, VP of customer success at AU10TIX, said. “At the end of the process, there’s a button to take you straight to a crypto exchange or payment platform to open an account with it.” 

Automation inspires the criminal imagination, Mustafi adds. On top of obviously malicious activities, like money laundering, AU10TIX encounters cases where crooks commit seemingly minor frauds, such as submitting a synthetic identity via a web form to illegally claim a $50 gift card reward. Automating that process with bots could let them steal hundreds or thousands of cards.

And the effects of modern fraud aren’t just financial. They affect companies, communities, and customers in new and worrying ways. 

“In newer identity-dependent sectors like the sharing economy, identity fraud can cause physical damage,” Kozakov said. That can mean stolen or destroyed property, or personal injury. Uber had to fight back against user fraud to curb carjackings in Chicago last year.

Fraud can also damage an organization’s reputation. When the state of North Carolina introduced an ID management system to verify applicants for unemployment payments, the enrollment process took too long. Scammers capitalized on the delay by impersonating the ID management service and tricking citizens into handing over their information.

How to fight sophisticated fraud 

How can companies improve their ID verification and fraud detection? One approach is defense in depth, in which you use multiple layers of protection to increase your chances of detection. 

“Examining just an ID itself is limiting,” Kozakov said, adding that the best approach involves combining it with other factors, including selfies. “You need to constantly introduce new processes that aren’t studied or expected.” 
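One way to picture that layered combination, purely as a sketch (the layer names, scores, and 0.7 threshold below are illustrative assumptions, not a description of AU10TIX’s pipeline): treat each check as an independent layer that returns a risk score, and let the worst score drive the decision.

from typing import Callable, Dict

# Each hypothetical layer inspects a submission and returns a risk score in [0, 1].
Layer = Callable[[dict], float]

def overall_risk(submission: dict, layers: Dict[str, Layer]) -> float:
    # Defense in depth: run every independent check and keep the worst score,
    # so a forgery only has to trip one layer to get escalated.
    return max(check(submission) for check in layers.values())

# Placeholder layer implementations, for illustration only.
layers: Dict[str, Layer] = {
    "document_forensics": lambda s: 0.9 if s.get("image_too_clean") else 0.1,
    "selfie_match":       lambda s: 0.8 if not s.get("face_matches_id") else 0.1,
    "behavioral_signals": lambda s: 0.95 if s.get("impossible_travel") else 0.1,
}

submission = {"image_too_clean": False, "face_matches_id": True, "impossible_travel": True}
if overall_risk(submission, layers) > 0.7:
    print("Escalate to manual review or step-up verification")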

Behavioral analysis can also influence your confidence in someone’s identity. If the same person appears to access a service from opposite sides of the world within an hour, an algorithm should get suspicious.
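As a concrete example, a minimal “impossible travel” check could compare consecutive login locations and flag any pair whose implied travel speed beats a commercial flight. The LoginEvent structure and the 900 km/h ceiling below are assumptions made for illustration.

from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class LoginEvent:
    lat: float      # latitude in degrees
    lon: float      # longitude in degrees
    time: datetime  # when the access attempt occurred

def haversine_km(a: LoginEvent, b: LoginEvent) -> float:
    """Great-circle distance between two login locations, in kilometres."""
    dlat = radians(b.lat - a.lat)
    dlon = radians(b.lon - a.lon)
    h = sin(dlat / 2) ** 2 + cos(radians(a.lat)) * cos(radians(b.lat)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(h))  # Earth radius of roughly 6,371 km

def is_impossible_travel(prev: LoginEvent, curr: LoginEvent,
                         max_speed_kmh: float = 900.0) -> bool:
    """Flag the pair if the implied speed exceeds a commercial-flight ceiling."""
    hours = abs((curr.time - prev.time).total_seconds()) / 3600
    if hours == 0:
        return True  # zero elapsed time: avoid division by zero, treat as suspicious
    return haversine_km(prev, curr) / hours > max_speed_kmh

# A login in New York followed an hour later by one in Singapore should be flagged.
ny = LoginEvent(40.71, -74.01, datetime(2024, 1, 1, 12, 0))
sg = LoginEvent(1.35, 103.82, datetime(2024, 1, 1, 13, 0))
print(is_impossible_travel(ny, sg))  # True -> step up verification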

Kozakov also sees power in collaboration. Digital life keeps expanding the number of sources that could share online data and behaviors, and cross-referencing them could help every member of a consortium improve its confidence in an identity.

The future will see organizations putting more control in consumers’ hands, allowing them to redact some information when handing their ID to a merchant. That reduces the risk of ID fraud for the user and the data-handling liability for the merchant. AU10TIX has already released functionality like this for businesses that want to collect only the information they need, rather than everything, during their verification flows.

The company’s experts expect new types of biometrics, along with more granular risk calculations tied to more user touch points, to provide greater context when authenticating identity.

The FBI used shoe-leather detective work to track down Frank Abagnale in the late 1960s. Now, thanks to modern technology, today’s fraud detection is less like “Catch Me If You Can” and more like “Minority Report,” where ID fraud can be spotted as it happens, and sometimes even before.

Find out more about how AU10TIX is staying ahead of this new breed of cybercriminals.

This post was created by Insider Studios with AU10TIX. 

 

