Imagine the Head of Risk at a Malta-based online gambling company, the Friday before a big Champions League weekend. A known bonus-abuse ring has started probing the platform, so the team tightens the static rules. Over the next three days, they catch the ring – and silently decline a slice of genuine VIP deposits along with it. The drop in revenue won’t show on any dashboard until Tuesday. By then, those customers will have moved to a competitor.
That story is familiar to almost every C-level risk leader in the space. LexisNexis found US merchants now pay an average of $4.61 for every $1 of actual fraud, once fines, labour, and chargeback recovery are included. The quieter number is worse. J.P. Morgan’s payments research shows false positives account for roughly 19% of the total cost of fraud, while actual fraud losses are closer to 7%. Put plainly – blocking good customers costs more than the criminals do.
This is why fraud detection is quickly becoming one of the most useful sources of business intelligence for modern operators. The teams that figure this out first tend to pull ahead of the ones that don’t.
The old fraud stack was built to say “no”
For a long time, fraud tooling had one job. Catch the bad ones, let the good ones through. Teams tracked three or four numbers – the chargeback ratio, the false positive rate, and the time to decide – and called it a day. The output was binary: yes or no. The data behind that decision was discarded and never looked at again.
That framing has aged badly. The Merchant Risk Council’s 2024 Global eCommerce Payments & Fraud Report, run by Cybersource, found that most merchants sit at a 2–10% false-positive rate, with one in five reporting rates above 10%. ClearSale’s research adds the tail – 41% of customers who experience a false decline never shop with that brand again, and roughly 65% of declined transactions turn out to be legitimate.
That is both a revenue problem and a strategic one. The signals that indicate whether someone is a fraudster are the same signals that indicate whether someone is about to churn, whether they are your best customer, or whether an affiliate is delivering real users. Most fraud stacks delete that information the moment a decision is made.
What changed, and why now
Three shifts arrived at roughly the same time and showed what a modern fraud system can do.
The first was regulation. AML, KYC, PSD2, and now PSD3 in Europe have made operators collect and structure far more behavioural data than they used to. Audit-ready fraud logs look remarkably like audit-ready customer analytics. The tables are the same; only the readers differ.
The second was device fingerprinting and behavioural biometrics. Modern fingerprint scripts gather hundreds of signals per session – hardware configuration, network path, typing cadence, the way the cursor moves across the screen. The result looks more like a product analytics tool than a traditional rulebook.
The third was dynamic scoring. The old rule said: block if the deposit is more than X. The new one says: block if this deposit is unusual for this user and this group, right now. To know what’s unusual, the system first has to know what’s normal – and once it does, it has already done most of the work of segmenting customers.
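The idea can be sketched in a few lines. Everything below is illustrative – the function names, the z-score cut-off of 3, and the synthetic deposit histories are assumptions for the sketch, not anything a real platform prescribes.

```python
from statistics import mean, stdev

def deposit_anomaly(user_deposits, cohort_deposits, new_amount, z_cut=3.0):
    """Flag a deposit only if it is unusual for BOTH this user and their cohort.

    A static rule ("block if amount > X") ignores context; this sketch scores
    the new amount against two behavioural baselines instead.
    """
    def zscore(history, x):
        if len(history) < 2:
            return 0.0                      # not enough history: no opinion
        s = stdev(history)
        return 0.0 if s == 0 else (x - mean(history)) / s

    user_z = zscore(user_deposits, new_amount)
    cohort_z = zscore(cohort_deposits, new_amount)
    return min(user_z, cohort_z) > z_cut    # unusual for user AND cohort

# A €900 deposit is routine for a high-roller, extreme for a casual player:
vip_history = [800, 950, 1100, 900, 1000]
casual_history = [20, 35, 50, 25, 40]
casual_cohort = [20, 35, 50, 25, 40, 30]
print(deposit_anomaly(vip_history, casual_cohort, 900))     # False – normal for this user
print(deposit_anomaly(casual_history, casual_cohort, 900))  # True – anomalous on both baselines
```

Note that the €900 deposit passes for the VIP because the decision is contextual, not threshold-based – exactly the behaviour a static rule cannot express.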
The pressure is real. Sumsub’s 2024 iGaming Fraud Report found that fraud in online gaming grew 64% year-over-year between 2022 and 2024, with bonus abuse, affiliate fraud, chargebacks, money laundering, and account takeover making up the top five schemes. Industry analysis from Gambling IQ puts bonus abuse alone at close to 66% of online casino fraud. A static rulebook written in 2021 cannot keep up.
The same signal, two different questions
Once you start treating fraud data as dual-purpose, the examples are everywhere.
Take multi-accounting detection. The rules that flag a user creating five accounts from one device can be inverted: a user accessing one account from multiple devices isn’t necessarily suspicious. They work across devices. They want your app on their phone, tablet, and laptop. That’s a marketing signal, not a risk signal.
Or geo mismatches. A card issued in Warsaw and used in Lisbon at 2 a.m. can signal card theft. It can also mean the user is travelling. The same pattern, read against device history and past behaviour, tells you either about risk or about the traveller segment in your base. Those people deserve a localised checkout, not a blocked transaction.
Or a wave of account takeovers concentrated in one region. The fraud team sees an attack. The product team, if it’s in the loop, sees signs of a weak authentication UX – a recovery flow that’s easy to abuse, a password policy that pushes users to reuse credentials. One signal, two conversations, one actual fix.
Where black-box AI quietly breaks down
It’s tempting to treat AI as the answer to all of this. That’s the wrong framing – and when high-stakes decisions ride on it, a dangerous one.
A fully automated model is a black box. If a viral campaign brings in a new demographic that behaves differently from historical data, an unsupervised system can interpret the spike as a coordinated attack and start quietly declining transactions. By the time it appears in the dashboards, the conversion for the whole campaign has disappeared.
This isn’t hypothetical – the LexisNexis 2025 study found poor user experience is the primary driver of abandonment at new account creation for 36% of retail and 37% of ecommerce merchants in the US, much of it traceable to over-cautious automated controls.
The useful version of AI here is the kind that highlights unusual activity, suggests limits, and explains its own thinking – not the kind that quietly changes its rules at 3 a.m. If you can’t answer “why did the system block this user?” in under a minute, you’re not in control of your risk strategy. You’re renting it from a model.
The architecture that makes it possible
None of this works while detection and analytics live in separate systems. The shift requires a platform that treats detection, investigation, and insight as one process.
A modern fraud detection platform handles this in a few ways. Dynamic rules spot unusual behaviour while continuously defining what “normal” looks like for every segment – behavioural segmentation as a by-product. The same linking that traces fraud rings also shows which affiliate partners deliver real users, which customers refer valuable ones, and which CRM records are duplicates of each other. Device fingerprinting keeps a consistent identity across sessions even after cookies expire – increasingly important as browsers get better at blocking trackers.
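The fingerprinting piece is conceptually simple: derive a stable identifier from signals the device itself exhibits, rather than from anything stored client-side. A minimal sketch, with hypothetical signal names – real scripts collect far more, and production systems use fuzzy matching rather than an exact hash:

```python
import hashlib

def device_fingerprint(signals: dict) -> str:
    """Derive a stable device identifier from per-session signals.

    Because the ID comes from hardware and environment traits, it survives
    cleared cookies, new email addresses, and fresh account sign-ups.
    """
    # Canonicalise: sorted keys so the same device always hashes identically
    canonical = "|".join(f"{k}={signals[k]}" for k in sorted(signals))
    return hashlib.sha256(canonical.encode()).hexdigest()[:16]

session_a = {"gpu": "Adreno 740", "screen": "1440x3200",
             "tz": "Europe/Valletta", "fonts_hash": "9f2c"}
session_b = dict(session_a)   # same device, new cookie jar, new email
assert device_fingerprint(session_a) == device_fingerprint(session_b)
```

The same hash appearing behind five “different” welcome-bonus sign-ups is the multi-accounting signal discussed earlier.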
The outputs go in two directions. Risk events route to the fraud team. Customer intelligence – cleaned, structured, and scored – routes to the teams that can act on it. The gate becomes a data layer, and losses avoided become insight earned, benefiting the whole organisation, not just the risk department.
Three questions your fraud data can already answer
If you want to test quickly whether this is real for your business, pick one of these and time how long a good answer takes.
Which acquisition channels actually work? Every affiliate channel leaves a behavioural trail. Shared device configurations, mass drop-off before the second deposit, and zero support tickets are red flags, not signs of good marketing. Affiliate fraud sits in Sumsub’s top five iGaming schemes for a reason, and fraud data can answer the traffic-quality question faster than attribution software.
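Those three red flags translate directly into a traffic-quality check. A minimal sketch – the field names and thresholds are illustrative assumptions, not industry standards:

```python
def affiliate_red_flags(cohort):
    """Score an affiliate's referred-user cohort for fraud-shaped patterns.

    cohort: list of dicts, one per referred user, with hypothetical fields
    'device_hash', 'deposits', and 'support_tickets'.
    """
    n = len(cohort)
    if n == 0:
        return []
    flags = []
    # Many "users" arriving on identical device configurations
    unique_devices = len({u["device_hash"] for u in cohort})
    if unique_devices / n < 0.5:
        flags.append("shared_devices")
    # Mass drop-off before the second deposit
    second_deposit_rate = sum(u["deposits"] >= 2 for u in cohort) / n
    if second_deposit_rate < 0.05:
        flags.append("no_second_deposit")
    # Real customers generate some support contact; bot farms rarely do
    if n >= 50 and sum(u["support_tickets"] for u in cohort) == 0:
        flags.append("zero_support_contact")
    return flags

bot_farm = [{"device_hash": "d0", "deposits": 1, "support_tickets": 0}
            for _ in range(50)]
print(affiliate_red_flags(bot_farm))   # all three flags fire
```

Run the same function over every affiliate’s cohort and the ranking doubles as a traffic-quality report.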
Who are your most valuable customers? Most VIP programmes reward the users who make the most noise. Fraud scoring, inverted, finds a different group – users with consistent, clean behaviour, long tenure, and durable lifetime value. Teams that read fraud data this way can spot that value months before it would surface on a conventional loyalty dashboard.
Where is your grey zone? Most users sit in the middle: bonus tourists, mild abusers, users one regulatory change away from becoming a problem. Fraud scoring that outputs a probability – not a verdict – maps this grey zone precisely, and whether to tighten, loosen, or educate then becomes a business decision.
Who should care
iGaming operators feel this most directly. The global gambling industry sat at roughly $540B in 2023 and is projected to hit $744.8B by 2028, with fraud costs running near $1 billion a year. Intelligence on bonus budgets, affiliate spend, and retention matters to marketing and product, not just to risk review. Fintech and payment providers get an AML layer that doubles as customer understanding. E-commerce teams can separate serial-return abusers from demanding but legitimate customers.
If your fraud team only reports to the CRO and not the CPO, CMO or COO, the data is probably not being used as well as it could be.
The question worth asking
The change from fraud-as-gate to fraud-as-data-layer doesn’t require ripping anything out. It requires pointing the outputs at more than one door.
If your fraud system stopped sending alerts and only provided customer intelligence, would the rest of the business still find it useful? If the answer is no, you have a fraud tool. If the answer is yes, you have something more useful – and you’re probably already one step ahead of everyone else in your market.
Frequently asked questions
What is a false positive in fraud detection? A real transaction or user that a fraud system incorrectly blocks or marks as suspicious. J.P. Morgan’s research into payments shows that false positive losses are around 19% of the total cost of fraud – almost three times the cost of the actual fraud itself.
How is dynamic scoring different from static rules? A static rule applies the same threshold to everyone: deposit more than €500 and you’re blocked. Dynamic scoring builds a behavioural baseline for each user and group, and only triggers when activity deviates from that specific norm. The result is fewer false positives and a lighter manual-review queue.
Why is device fingerprinting important in iGaming? It creates a stable identifier that survives password changes, VPNs, and new emails. In a sector where most fraud is caused by people having multiple accounts and abusing bonuses, it’s the difference between catching a fraudster and paying them a welcome bonus twice.
What is graph-based forensic analysis used for? It links accounts through shared devices, IP addresses, payment methods, or withdrawal destinations, exposing hidden fraud rings. The same technique also maps affiliate traffic quality, which is why many risk teams use graphs for attribution, not just investigation.
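At its core this is connected-components over a graph of shared attributes. A minimal union-find sketch – the account IDs and attribute format are made up for illustration, and production systems would use a graph database rather than in-memory dicts:

```python
from collections import defaultdict

def fraud_rings(accounts):
    """Group accounts that share any attribute (device, IP, card, payout wallet).

    accounts: {account_id: set of attribute strings}. Two accounts sharing
    even one attribute end up in the same component - a candidate ring.
    """
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    first_seen = {}                          # attribute -> first account with it
    for acc, attrs in accounts.items():
        find(acc)                            # register the account
        for attr in attrs:
            if attr in first_seen:
                union(acc, first_seen[attr])
            else:
                first_seen[attr] = acc

    rings = defaultdict(set)
    for acc in accounts:
        rings[find(acc)].add(acc)
    return [ring for ring in rings.values() if len(ring) > 1]

demo = {
    "u1": {"dev:abc", "ip:1.2.3.4"},
    "u2": {"dev:abc", "card:9911"},   # shares a device with u1
    "u3": {"card:9911"},              # shares a card with u2 -> same ring
    "u4": {"dev:zzz"},                # unrelated
}
print(fraud_rings(demo))              # one candidate ring containing u1, u2, u3
```

Swap the attribute sets for affiliate-tagged sign-ups and the same components answer the traffic-quality question mentioned above.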
How can AI reduce the workload for risk teams? It filters out obviously legitimate traffic automatically and surfaces genuine anomalies for human review. Done right, it cuts the manual-review queue substantially and lets analysts focus on rare but important cases. Done without human oversight or transparency, it can quietly block a lot of good customers.