Of every five biometric fraud attempts on Entrust's identity systems last year, one involved a deepfake. The figure comes from the firm's 2026 Identity Fraud Report, drawn from over a billion verifications across 195 countries between September 2024 and September 2025. In payments, 82% of fraud attempts now target the authentication process. Deepfaked selfies are up; injection attacks, where synthetic media is fed straight into the verification system rather than held up to a camera, are up roughly 40% year on year.
The headline number is the easy part. The meaning is more interesting. SCA gives us three factor categories: knowledge, possession, and inherence. They are not equivalent. They fail in different ways, at different costs, and against different attackers. Inherence has always been the awkward one. The Entrust figures are the clearest signal yet that treating it as the strong factor in a two-factor stack is no longer defensible.
Why inherence is different
Knowledge factors fail when a secret leaks. Possession factors fail when a device is stolen or cloned. In both cases, the defender knows what to harden. Inherence is different. The defender is not protecting a secret or a token, but an assumption: that the biometric pattern in front of the sensor genuinely came from the registered user. That assumption used to be cheap to defend, because impersonating a face required either a real face or a serviceable mask. Generative video has collapsed the cost of both routes at once.
The deeper problem is that the most damaging attacks in the Entrust data do not fool the camera. They bypass it. Injection attacks feed manipulated frames directly into the authentication system, behind whatever sensor the user is supposed to be standing in front of. Anti-spoofing measures designed to detect a printed photograph or a screen replay have nothing to say to an attack that never goes near a physical lens. The defence has to move further down the stack, into device attestation, signal provenance, and integrity checks on the capture environment itself.
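One way to make "signal provenance" concrete is challenge-bound attestation of the capture itself. The sketch below is illustrative only: it uses a shared HMAC key as a stand-in for hardware-backed attestation (real deployments rely on platform mechanisms such as Android Key Attestation or Apple's App Attest, not a symmetric key), and the function names are invented. The point it demonstrates is that a tag computed in the capture path over a fresh server nonce leaves an injected frame stream with no valid provenance, however convincing the pixels are.

```python
import hashlib
import hmac
import os

# Simplified stand-in for hardware-backed attestation: a per-device key
# provisioned at enrolment. Real systems would keep this in secure hardware.
DEVICE_KEY = os.urandom(32)

def issue_challenge() -> bytes:
    """Server side: a fresh nonce, so a pre-recorded capture cannot be replayed."""
    return os.urandom(16)

def attest_capture(frames: bytes, nonce: bytes) -> bytes:
    """Device side: MAC over the raw frames plus the nonce, computed inside the
    capture path, so frames injected behind the sensor never acquire a tag."""
    digest = hashlib.sha256(frames).digest()
    return hmac.new(DEVICE_KEY, nonce + digest, hashlib.sha256).digest()

def verify_capture(frames: bytes, nonce: bytes, tag: bytes) -> bool:
    """Server side: reject anything whose tag does not match the nonce and the
    exact bytes received. An injection attack fails here even if the synthetic
    video would pass every visual liveness check."""
    digest = hashlib.sha256(frames).digest()
    expected = hmac.new(DEVICE_KEY, nonce + digest, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

nonce = issue_challenge()
tag = attest_capture(b"camera-frames", nonce)
assert verify_capture(b"camera-frames", nonce, tag)
assert not verify_capture(b"injected-deepfake-frames", nonce, tag)
```

The asymmetry is the point: anti-spoofing inspects the content of the frames, attestation inspects their origin, and only the second survives an attacker who never goes near the lens.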
The PSR has picked the right carve-out
The trilogue-approved PSR text makes this conversation operationally urgent. Under PSD2 and the SCA RTS, two SCA elements have to come from two different categories. The PSR keeps that default, with a single exception: Article 85(12) makes inherence the only category from which a payment service provider may draw both elements, provided it can demonstrate to its national competent authority that the independence of the elements is at all times fully preserved and that the authentication procedure ensures at all times a high level of security. EBA guidelines under Article 16, setting out how to assess that independence, are due within 18 months of entry into force.
The legislator has, in effect, anticipated face-plus-fingerprint authentication and singled out the case that matters:
the only category where two-element stacking is permitted is also the one most exposed to generative AI.
A PSP operating under that exception will need to demonstrate that the failure modes of its two inherence elements are uncorrelated. The Entrust data sharpens the empirical side of that legal test: if one generative pipeline can defeat both elements, their independence holds on paper only.
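What "uncorrelated failure modes" might look like as evidence is a statistical question, and a simple sketch makes it concrete. The approach below is not drawn from the PSR text or any EBA guidance; it is one hypothetical way a PSP could quantify independence, using the phi coefficient over logged red-team outcomes for two inherence checks. The counts are invented for illustration.

```python
from math import sqrt

def phi(n11: int, n10: int, n01: int, n00: int) -> float:
    """Phi coefficient for two binary events.
    n11: attacks that defeated both elements, n10/n01: defeated one,
    n00: defeated neither. 0 means independent failures, the property
    a same-category exception would need to evidence; values near 1
    mean the two checks fall to the same attack."""
    num = n11 * n00 - n10 * n01
    den = sqrt((n11 + n10) * (n01 + n00) * (n11 + n01) * (n10 + n00))
    return num / den if den else 0.0

# Two face-adjacent checks that one deepfake pipeline tends to beat together:
correlated = phi(n11=40, n10=5, n01=5, n00=50)

# Two elements whose failures barely co-occur:
independent = phi(n11=7, n10=38, n01=8, n00=47)

print(f"correlated pair: {correlated:.2f}, independent pair: {independent:.2f}")
```

A regulator asking for proof of independence is, in effect, asking for the second number rather than the first, measured against a current attack set rather than last year's.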
Liveness is the floor, not the ceiling
The industry response is converging on liveness detection, but the version of it that beats deepfakes is not the passive one. Passive liveness reads micro-movements and depth signals without prompting the user, and it handles the camera-facing attack reasonably well. It does less for an injection attack. Active liveness, which uses randomised motion prompts the attacker cannot pre-script, performs substantially better in Entrust's own data: the firm reports a fraud rate below 0.1% against its active-liveness product, and the share of deepfake attempts against that product fell in 2025 even as it climbed against passive selfie verification.
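The structural reason active liveness resists pre-scripted deepfakes can be shown in a few lines. This is a hedged sketch, not any vendor's protocol: the prompt names, the three-prompt challenge, and the ten-second window are invented, and the per-frame action classifier is assumed to exist upstream. What it demonstrates is the unpredictability that matters: the sequence is drawn fresh per session, so an attacker must synthesise matching motion in real time rather than replay prepared footage.

```python
import random
import time

PROMPTS = ["turn_left", "turn_right", "look_up", "blink", "smile"]

def issue_session() -> dict:
    """Server side: a randomised three-prompt challenge the attacker
    cannot know in advance, plus an issue timestamp."""
    rng = random.SystemRandom()  # CSPRNG, so the sequence is unguessable
    return {"challenge": rng.sample(PROMPTS, 3), "issued_at": time.monotonic()}

def verify_session(session: dict, observed: list, max_seconds: float = 10.0) -> bool:
    """Server side: the action labels recovered from the video (observed)
    must reproduce the challenge in order, within the time window, which
    rules out both replays and slow offline synthesis."""
    fresh = time.monotonic() - session["issued_at"] <= max_seconds
    return fresh and observed == session["challenge"]

session = issue_session()
assert verify_session(session, session["challenge"])          # live user follows prompts
assert not verify_session(session, list(reversed(session["challenge"])))  # canned replay
```

Passive liveness has no equivalent of that nonce: whatever signals it reads are the same on every attempt, which is exactly what a pre-rendered deepfake can be tuned against.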
CEN/TS 18099, the European technical specification published in 2024, now provides the testing methodology, and independent labs are issuing certifications against it at Substantial and High assurance levels. The vendor question for 2026 is no longer "do you do liveness?" It is: which liveness, against which threat model, certified against which standard.
What this means for payments teams
If your fraud strategy treats biometric authentication as the strong leg of a two-factor stack, you are relying on a factor that an organised attacker can now reach for the cost of a consumer GPU and a weekend. The answer is not to abandon inherence. The answer is to stop treating it as a sealed black box and start asking the same questions of it that you would ask of any other security control: what does the threat model assume, how is the assumption defended, and how do you know the defence is still holding. If the PSR lands as drafted, the burden of proving genuine independence between two biometric factors will sit with the PSP, not the regulator. The vendors who have already answered that question cleanly will earn a different kind of conversation in 2026.