Password fail: what to do about stolen biometrics?
Data breaches give pause for thought about the consequences of our identifying credentials – biometric data included – falling into the hands of fraudsters. And with every new information system that holds our data (think hotel and flight bookings, credit card applications, loyalty programs, and more), the risk of that information ending up in the wrong hands creeps up. All the more so if one of those data stores happens to be the kind of low-hanging fruit that proves so frustratingly effective at attracting bad actors.
Governments have long since grown fed up with this state of affairs and passed legislation such as the GDPR in Europe and a raft of privacy laws in the US and elsewhere. The thinking is that companies – motivated to avoid large fines – will raise their security game over time, making our personally identifiable information (PII) a little harder to reach. But governments can be targets too. Reminders of this include the widely reported theft, in 2015, of more than 5 million fingerprints (plus tens of millions of other records) from the US Office of Personnel Management (OPM) – the arm of the federal government responsible for managing the civil service.
Irreplaceable identifiers
The disclosure of biometric data is particularly troubling – as we’ll discover in more detail soon – because recovering from it is not the same as starting over with a replacement credit card (with a different number and security code). Victims can’t request a new set of fingerprints, a new pair of irises, or a new face. And even the brave souls who hazard plastic surgery as an option have to admit that it’s a drastic solution to a data breach – and who would foot the bill for an incident affecting millions?
At the time of the OPM data breach, officials admitted that while the ability to misuse fingerprint data was limited, that could change over time as technology evolves. And – as if adversaries didn’t already have access to enough biometric data – security researchers showed in 2019 that other databases were vulnerable too. The warning bell this time around was for a South Korean security provider, whose web-based biometric platform is reportedly used by thousands of customers globally, including police forces. Naturally, when such security holes are found, concerns are raised about the ability of providers to guard their biometric assets. But perhaps a wiser question is whether biometric data is actually up to the job in the first place.
Renowned security expert Bruce Schneier summed it up well back in 1998 – pointing out that while biometrics are useful as unique identifiers, they need to be built on a trusted path from reader to verifier. And, noting a common pitfall, if you’re relying on biometrics to possess the characteristics of a key – to be secret, to be random, and to be updatable or resettable – then you’re staring at a major security problem. Biometrics fall down quickly as soon as you start to lean on them as secrets, and worse still when you need those secrets to be replaceable.
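To make that distinction concrete, here is a minimal Python sketch – illustrative only, with invented class and function names rather than any vendor’s API – contrasting the anti-pattern of treating a biometric template as a password with a design where the biometric merely unlocks a replaceable, device-bound key (roughly in the spirit of challenge-response schemes such as FIDO2/WebAuthn):

```python
# A minimal sketch (illustrative names only, not any vendor's API) contrasting
# two ways of using a biometric in authentication.
import hashlib
import hmac
import secrets
from typing import Optional


# --- Anti-pattern: the biometric *is* the secret ---------------------------
# If the "password" is just a hash of a fingerprint template, a breach of the
# template database is permanent damage: the user cannot rotate a finger.
def verify_biometric_as_password(stored_hash: str, presented_template: bytes) -> bool:
    return hmac.compare_digest(
        stored_hash, hashlib.sha256(presented_template).hexdigest()
    )


# --- Preferred pattern: biometric as identifier, key as the secret ---------
# The biometric match happens locally, on a trusted reader/device, and merely
# unlocks a replaceable, device-bound key. The server only ever sees a signed
# challenge, so a leaked template cannot be replayed to authenticate, and the
# key can be revoked and re-issued after a breach.
class Device:
    def __init__(self) -> None:
        self.key = secrets.token_bytes(32)  # the replaceable secret

    def sign_challenge(self, challenge: bytes, biometric_match_ok: bool) -> Optional[bytes]:
        # 'biometric_match_ok' stands in for an on-device template comparison.
        if not biometric_match_ok:
            return None
        return hmac.new(self.key, challenge, hashlib.sha256).digest()


class Server:
    def __init__(self, registered_key: bytes) -> None:
        self.registered_key = registered_key  # can be rotated at any time

    def verify(self, challenge: bytes, response: bytes) -> bool:
        expected = hmac.new(self.registered_key, challenge, hashlib.sha256).digest()
        return hmac.compare_digest(expected, response)


if __name__ == "__main__":
    device = Device()
    server = Server(registered_key=device.key)  # one-time registration step
    challenge = secrets.token_bytes(16)
    response = device.sign_challenge(challenge, biometric_match_ok=True)
    print("authenticated:", response is not None and server.verify(challenge, response))
```

The point of the second design is that the server never stores anything derived from the biometric itself; if the registered key leaks, it can simply be revoked and re-issued – exactly what a fingerprint cannot be.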
Liveness-as-a-service
That’s not to say that designers need to give up on the convenience of biometric data – rather, product teams need to make sure that the information is being applied in the right context. And today, that includes exploring ways of building ‘liveness’ into the data capture process. US firm ID R&D is developing analytical methods ‘to determine if a biometric sample is being captured from a living subject who is present at the point of capture’.
The New York-based company aims to stop fraudsters from succeeding with so-called presentation attacks (for example, where an adversary unlocks a device by holding up a photo to mimic a real face) and has security solutions for facial, document, and voice data. Its developers are working with clients across a range of applications, including helping telecommunications firms to prevent subscriber fraud by incorporating liveness information into customer onboarding.
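By way of illustration, the sketch below shows what an ‘active’ liveness check during capture might look like – a hypothetical flow, not ID R&D’s actual method – in which the system issues a random challenge (blink, turn your head) and only accepts the sample if the live response matches. The detect_blink and detect_head_turn functions are placeholders for real computer-vision models:

```python
# Illustrative sketch of an 'active' liveness check: the capture flow issues a
# random challenge and only accepts the sample if the live response matches.
# detect_blink() and detect_head_turn() are hypothetical placeholders for real
# computer-vision models.
import random
from typing import Callable, Dict, List, Optional

Frame = bytes  # stand-in for a single camera frame


def detect_blink(frames: List[Frame]) -> bool:
    raise NotImplementedError("plug in an eye-aperture / blink-detection model")


def detect_head_turn(frames: List[Frame]) -> bool:
    raise NotImplementedError("plug in a head-pose estimation model")


CHALLENGES: Dict[str, Callable[[List[Frame]], bool]] = {
    "please blink twice": detect_blink,
    "please turn your head to the left": detect_head_turn,
}


def capture_with_liveness(capture_clip: Callable[[str], List[Frame]]) -> Optional[List[Frame]]:
    """Issue a random challenge; return the frames only if the challenge was met."""
    prompt, check = random.choice(list(CHALLENGES.items()))
    frames = capture_clip(prompt)  # show the prompt and record a short clip
    if check(frames):              # liveness confirmed
        return frames              # safe to pass on for face matching
    return None                    # reject: possible presentation attack
```

Randomising the challenge per session is the key design choice: a printed photo – and even a pre-recorded video – cannot respond to a prompt it has never seen.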
Other firms working in this space include Authenteq – based in Berlin, Germany – which began by providing a solution for verifying online personas and validating web reviews, and has expanded its activities to include use cases in finance, property management, mobility, marketplaces, and the sharing economy. Backers of the company include the European Innovation Council – a flagship program designed to support ‘visionary entrepreneurs’. Authenteq’s AI-powered solution has been trained to validate over 5,000 different government-issued identity documents – a data set sourced from almost 250 countries and featuring over 80 languages, according to information on the company’s website.
Refreshingly, this new wave of firms openly acknowledges that biometric data is not a secret, and offers solutions that build on multiple streams of information rather than relying on single-source strategies that can quickly unravel. That’s not to say that all of the issues have been solved. Adversaries will raise their game too – such is the cat-and-mouse nature of security – for example, by deploying voice-changing software to try to fool biometric algorithms. Adversarial AI methods are another potential concern. But good security design is a reassuringly strong platform to build on.
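As a toy example of what ‘multiple streams of information’ can look like in practice, the sketch below fuses a face-match score, a liveness score, a document check, and a second biometric stream into a single onboarding decision – with weights and thresholds invented purely for illustration:

```python
# A toy sketch of fusing several onboarding signals rather than trusting any
# one of them. The weights and thresholds are invented for illustration, not
# tuned or recommended values.
from dataclasses import dataclass


@dataclass
class OnboardingSignals:
    face_match_score: float       # 0..1, similarity to the document photo
    liveness_score: float         # 0..1, from a presentation-attack detector
    document_check_passed: bool   # e.g. security features / MRZ consistency
    voice_match_score: float      # 0..1, a second biometric stream


def decide(s: OnboardingSignals) -> str:
    # Hard failures first: a failed document check or a clear spoof ends the flow.
    if not s.document_check_passed or s.liveness_score < 0.3:
        return "reject"
    # Weighted combination of the remaining evidence.
    combined = (0.5 * s.face_match_score
                + 0.3 * s.liveness_score
                + 0.2 * s.voice_match_score)
    if combined >= 0.8:
        return "accept"
    return "manual_review"        # borderline cases go to a human


print(decide(OnboardingSignals(0.92, 0.85, True, 0.75)))  # -> accept
```

No single signal is decisive here: a convincing spoofed voice, say, still has to clear the document and liveness checks, and borderline cases fall back to human review.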