Working with biometrics is inherently tricky. With passwords, authentication is simple: they either match or they don't. But when the "password" is part of a user's body, be it a face scan, an iris match, or a plain old fingerprint, the system has to anticipate and allow a little bit of wiggle room. After all, it is not acceptable for a face scan to fail because of a pimple, or for a fingerprint scan to fail because the user touched the sensor at a slightly different angle. A new attack exploits exactly this flexibility, built in for the user's convenience, to generate fake "universal" fingerprints.
These synthetic fingerprints, dubbed DeepMasterPrints, were created by feeding a neural network images of real fingerprints until it could generate convincing fingerprints of its own. The generated prints were then scored by the same kind of verification algorithms used by the scanners in our phones, and tweaked over and over in subtle ways until they passed, even though they were not a true match. By repeating this process over a large dataset, the team produced fingerprint images that share enough features with the average person's prints to trick scanners into a false positive. Crucially, a DeepMasterPrint is not tailored to a single victim; it is designed to work against many users at once.
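The overall shape of the attack, generate a candidate, score it against a matcher, keep mutations that score higher, can be sketched as a simple hill-climbing loop. Everything below is a toy stand-in: `generate` replaces the trained GAN generator, `match_score` replaces a real fingerprint matcher, and `TARGET` is a hypothetical "average" feature vector, none of which appear in the original research.

```python
import random

# Assumption: a hypothetical "average" feature vector standing in for
# the statistics of real fingerprint features.
TARGET = [0.2, 0.8, 0.5, 0.3]

def generate(latent):
    # Stand-in for a GAN generator: maps a latent vector in [-1, 1]
    # to a feature vector in [0, 1].
    return [(z + 1) / 2 for z in latent]

def match_score(features):
    # Stand-in for a verification algorithm: returns a similarity in
    # [0, 1], higher when the features resemble the target.
    return 1 - sum(abs(f - t) for f, t in zip(features, TARGET)) / len(TARGET)

def evolve(threshold=0.95, steps=5000, seed=0):
    # Hill-climb in latent space: mutate, keep only improvements,
    # stop once the generated print passes the matcher's threshold.
    rng = random.Random(seed)
    latent = [rng.uniform(-1, 1) for _ in range(len(TARGET))]
    best = match_score(generate(latent))
    for _ in range(steps):
        candidate = [z + rng.gauss(0, 0.05) for z in latent]
        score = match_score(generate(candidate))
        if score > best:
            latent, best = candidate, score
        if best >= threshold:
            break
    return latent, best
```

The actual work used a more sophisticated evolutionary search over the generator's latent space, but the principle is the same: the matcher itself acts as the fitness function, so any tolerance it allows can be searched for and exploited.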
How serious this is depends on how strict the scanner being fooled is. Every fingerprint scanner tolerates some rate of false positives, cases where an unauthorized fingerprint is mistakenly accepted as an authorized one. A lenient scanner with a false-positive rate of about 1.0% on real fingerprints can be fooled by DeepMasterPrints roughly 77% of the time, which is startling.
Stricter scanners with false-positive rates around 0.1% could still be tricked by DeepMasterPrints more than 22% of the time, and those at 0.01% were fooled only about 1% of the time.
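The relationship between a scanner's strictness and its false-positive rate can be illustrated with a small simulation. This is a toy model, not a real matcher: "prints" are random feature vectors, `match_score` is an assumed similarity measure, and the specific rates it produces have nothing to do with the figures above. It only demonstrates the direction of the tradeoff: the looser the matching threshold, the more impostor prints slip through.

```python
import random

def match_score(a, b):
    # Toy similarity between two "prints" (feature vectors in [0, 1]).
    return 1 - sum(abs(x - y) for x, y in zip(a, b)) / len(a)

def false_match_rate(threshold, trials=20000, seed=1):
    # Estimate how often two unrelated random prints are accepted
    # as a match at the given threshold.
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        a = [rng.random() for _ in range(8)]
        b = [rng.random() for _ in range(8)]
        if match_score(a, b) >= threshold:
            hits += 1
    return hits / trials
```

Running `false_match_rate` at a loose threshold and again at a strict one shows the loose setting admits strictly more impostors, which is exactly the slack a dictionary attack like DeepMasterPrints feeds on.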
This is an eye-opening look at how much security is being traded away for the sake of convenience. With luck, future devices will be built with attacks like DeepMasterPrints in mind and offer more robust false-positive rejection.