INSIGHT: Deep learning, biometrics, and the battle against financial fraud, by Antony Bream of AimBrain

As technology becomes more important to us, the threat of fraud grows with it. We’re shopping online, we’re sharing more details of our lives on social media than ever before, we’re digitising and automating wherever we can – and it’s leaving us more and more exposed to malicious fraudsters.

A PwC survey revealed that, in 2018, cybercrime supplanted asset theft as the most commonly experienced type of fraud – one that affected almost a third (31 percent) of organisations across the globe. In finance, the Revised Payment Services Directive (PSD2) introduced a requirement for a second factor of authentication (from the options of knowledge, inherence and possession) and, as such, many organisations are turning to biometrics as that second factor. Firstly, because they are becoming aware that authenticating a device is not the same as authenticating a person; secondly, because biometrics lend themselves to a risk-based (rather than yes/no, rules-based) score; and more recently, because of the costs and vulnerabilities of SS7, the signalling network that carries one-time passcodes sent via SMS. On the radar for years yet never quite fixed, SS7's vulnerabilities still provide the infrastructure loophole that lets fraudsters intercept those passcodes.

A prudent approach is to use multiple factors to authenticate a user, since this offers greater assurance that a user is who they say they are. Yet advancements in biometric technology can now add even more value, protecting against fraud as early as the new account enrolment stage.
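
As a rough illustration of the risk-based (rather than yes/no, rules-based) scoring mentioned above, the sketch below combines per-factor confidence scores into a single risk score and maps it to an action. This and the later sketches are in Python; the weights, thresholds and function names are invented for illustration and would in practice be tuned to an organisation's risk appetite.

# Illustrative only: combine per-factor confidence scores into one
# risk score, then map it to an action. All numbers are hypothetical.

def risk_score(device_score: float, biometric_score: float,
               behaviour_score: float) -> float:
    """Each input is a confidence in [0, 1]; returns a risk in [0, 1]."""
    weights = {"device": 0.2, "biometric": 0.5, "behaviour": 0.3}
    confidence = (weights["device"] * device_score
                  + weights["biometric"] * biometric_score
                  + weights["behaviour"] * behaviour_score)
    return 1.0 - confidence  # low confidence means high risk

def decide(risk: float) -> str:
    if risk < 0.2:
        return "approve"   # strong match on all factors
    if risk < 0.6:
        return "step-up"   # ask for an additional factor
    return "block"         # likely fraud; investigate

print(decide(risk_score(0.9, 0.95, 0.8)))  # prints "approve"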

Understanding fraud

In an increasingly digital world, there exists a large and growing range of fraud types, including:

New account fraud: Users create new accounts using stolen credentials with the purpose of building an online persona or committing fraud in the future. Data gathered from these attacks can be used to build models of behaviour patterns indicative of fraud, providing a framework against which new account creation can be assessed.

Account takeovers: Hidden in phishing emails or seemingly harmless applications, malware can be executed to take control of a session once a user has logged in successfully, bypassing security checks such as passwords or 2FA passcodes. Spotting this type of change in behaviour can help an organisation apply a block / investigate / approve approach to security, based on its own risk appetite and the severity of the change.

Automated attacks: Scripted attacks and bot attacks can cause a multitude of problems for organisations, hammering infrastructure and wasting resources. Anomalous behaviour that signifies bots, such as those used to generate spam accounts, can be quickly detected and isolated (see the sketch after this list).

Loyalty fraud: Also known as sign-up fraud, this sees individuals open new accounts with the sole purpose of taking advantage of credits, abusing loyalty points or claiming other self-serving benefits. Knowing the behavioural patterns of this type of fraud, and recognising them at the sign-up stage, can protect against future damage.
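
To make the bot-detection idea concrete, here is a minimal anomaly-detection sketch using scikit-learn's IsolationForest, fitted on a few human sign-up sessions. The features (events per second, paste ratio, seconds spent per field) and every data point are invented for illustration; a production system would use far richer signals.

# Minimal sketch: flag bot-like sign-up sessions as anomalies.
import numpy as np
from sklearn.ensemble import IsolationForest

# Rows: [events_per_second, paste_ratio, mean_seconds_per_field]
human_sessions = np.array([
    [1.2, 0.1, 4.0],
    [0.8, 0.0, 5.5],
    [1.5, 0.2, 3.2],
    [1.0, 0.1, 4.8],
])

detector = IsolationForest(contamination=0.1, random_state=0)
detector.fit(human_sessions)

# A scripted attack typically fills forms far faster, pastes every
# field and spends almost no time on any of them.
bot_session = np.array([[25.0, 1.0, 0.05]])
print(detector.predict(bot_session))  # [-1] flags an anomaly (likely a bot)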

Getting to know fraudsters

Just like legitimate users, fraudsters and bots display trackable online behaviours. Using annotated or ‘labelled’ fraud data, organisations can create models of fraudulent behaviour and continue to train them with every new interaction, using deep learning to continually refine accuracy. This allows them to flag suspect interactions even from a brand-new user, and stop malicious activity before it causes damage.

This information could be as rudimentary as how fluent a user is with an interface or its navigation, their propensity to use advanced keyboard shortcuts, or simply whether information is cut and pasted rather than typed freely. The subtler patterns, invisible to the human eye, are detected via deep learning, helping organisations build an ever more comprehensive profile of attackers, comparing their behaviours to those of legitimate users and invoking extra authentication steps when something doesn’t seem quite right.
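
The sketch below shows how a handful of such signals might be derived from raw interaction events; the event types and fields are assumptions made purely for illustration.

# Hypothetical feature extraction from raw interaction events.
from collections import Counter

def extract_features(events: list) -> dict:
    counts = Counter(e["type"] for e in events)
    total = max(len(events), 1)
    duration = max(events[-1]["t"] - events[0]["t"], 1e-6) if events else 1.0
    return {
        "paste_ratio": counts["paste"] / total,        # pasted vs typed input
        "shortcut_ratio": counts["shortcut"] / total,  # keyboard fluency
        "events_per_second": len(events) / duration,
    }

session = [
    {"type": "keydown", "t": 0.0},
    {"type": "shortcut", "t": 1.1},
    {"type": "paste", "t": 1.2},
    {"type": "keydown", "t": 3.0},
]
print(extract_features(session))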

The biometric wall

Detecting and preventing fraud is a two-step process. Step one: ‘train’ machine learning tools to identify patterns that indicate fraud. The models need a reference point against which to compare any new customer, so that potentially criminal behaviour can be pinpointed before further security steps are triggered.
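
A minimal sketch of this training step, assuming labelled feature vectors like those produced above: a small neural network stands in for the deep models described earlier, and both the data and the labels are invented.

# Step one, sketched: fit a classifier on labelled sessions.
import numpy as np
from sklearn.neural_network import MLPClassifier

# Feature rows: [paste_ratio, shortcut_ratio, events_per_second]
X = np.array([
    [0.10, 0.05, 1.2],   # legitimate
    [0.00, 0.10, 0.9],   # legitimate
    [0.90, 0.00, 20.0],  # labelled fraud (scripted)
    [1.00, 0.00, 15.0],  # labelled fraud (scripted)
])
y = np.array([0, 0, 1, 1])  # 1 = fraud

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

new_session = np.array([[0.95, 0.0, 18.0]])
print(model.predict_proba(new_session)[:, 1])  # estimated probability of fraud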

Step two: Known user authentication. Using behavioural analytics, it’s possible to monitor known users throughout a session, checking for any suspicious changes that might indicate an account takeover. If behaviour isn’t consistent with what the organisation considers acceptable, then again, more security measures can be triggered.
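
A sketch of this second step under the same assumptions: a live session's features are compared against a stored per-user behavioural profile, and a large enough drift triggers step-up authentication. The profile statistics and the threshold are hypothetical.

# Step two, sketched: score a live session against a known user's profile.
import numpy as np

# Per-user profile: running mean and standard deviation of each feature.
profile_mean = np.array([0.10, 0.08, 1.1])
profile_std = np.array([0.05, 0.04, 0.3])

def session_drift(features: np.ndarray) -> float:
    """Mean absolute z-score of the session against the stored profile."""
    return float(np.mean(np.abs((features - profile_mean) / profile_std)))

live_session = np.array([0.85, 0.00, 9.0])  # sudden heavy pasting, fast input
if session_drift(live_session) > 3.0:       # illustrative threshold
    print("behaviour inconsistent with profile: trigger step-up authentication")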

Both steps are particularly useful because they are completely imperceptible to the user and can work across any device, channel or application. Suspicious-behaviour profiles and individual behavioural profiles can be applied comprehensively across web and mobile platforms alike – fostering greater security invisibly.

If fraud can be detected ‘invisibly’, then it can be stopped before it does damage. An organisation’s reputation, bottom line and sensitive data are at stake – and staying one step ahead of fraud will ensure none of them are compromised.

Author: Antony Bream, Global Head of Enterprise and New Business, AimBrain