Vietnamese government starts collecting biometrics
https://techhq.com/2024/02/biometric-data-id-cards-vietnam-government-dna-too/ (Wed, 21 Feb 2024)

• Vietnam is set to collect an enormous amount of biometric data from its citizens.
• Security in the system will obviously be paramount.
• The development seems likely to generate whole new waves of crime by bad actors – biocrime.

Biometric data is increasingly used in technological security systems, yet retina scans and voice recognition still call to mind the hi-tech lairs of fictional villains. Face ID seems a lot less glam when you’re trying to pay for a bus ticket with your phone.

Biometric data is the key to many a sci-fi smash.

Minority Report speculates on surveillance systems in 2054. Tom Cruise is there, too.

In Vietnam, citizens can now expect to give the government a slew of their biometric data, per the request of Prime Minister Pham Minh Chinh. Collection of biometric data will begin in July this year following an amendment to the Law of Citizen Identification passed in November 2023.

The amendment allows the collection of biometric data, along with records of blood type and other related information.

The Ministry of Public Security will collect the data, working with other areas of government to merge the new identification system into the national database. The new identification system will use iris scans, voice recordings and even DNA samples.

Vietnamese citizens’ sensitive data will be stored in a national database and shared across agencies to allow them to “perform their functions and tasks.” We’re sure the sharing of highly personal data won’t encounter any issues – accidental or otherwise.

Regarding the method of collection, the amended law says:

“Biometric information on DNA and voice is collected when voluntarily provided by the people or the agency conducting criminal proceedings, or the agency managing the person to whom administrative measures are applied in the process of settling the case according to their functions and duties whether to solicit assessment or collect biometric information on DNA, people’s voices are shared with identity management agencies for updating and adjusting to the identity database.”

Well, obviously.

Chairman of the National Defense and Security Committee, Le Tan Toi, has expressed the belief that a person’s iris is suitable for identification as it does not change over time and would serve as a basis for authenticating an identity.

As things currently stand, ID cards are issued to citizens older than 14, and aren’t mandatory for the six to 14 age range – though they can be issued if necessary. The new ID cards will look much the same but undergo several changes, not least the addition of holders’ biometric data.

They’ll incorporate the functions of some other ID documents too, including driver’s licenses, birth and marriage certificates, and health and social insurance documents. All of your personal information stored in the same place… What could go wrong?

Biometric data must be secured

Fingerprints on the ID card will be replaced by a QR code linked to the holder’s biometric and identifying data.

There are roughly 70 million adults in Vietnam, so collecting this huge amount of data from all of them will be no mean feat. In case you hadn’t got there yet: security will be paramount. The data on citizens is a prime target for identity theft; we might expect to see an increase in bad actor activity, including skimming to collect fingerprints from ATMs.

Technology is always evolving, but it’s not necessarily guaranteed to evolve for the better. A group of researchers from China and America recently outlined a new attack surface, proposing a side-channel attack on the Automatic Fingerprint Identification system: “finger-swiping friction sounds can be captured by attackers online with a high possibility.”

Ensuring that the personal information of Vietnamese citizens is secure at every level is a responsibility the government must be prepared to take on.

There’s also the sticky issue of government surveillance that almost doesn’t bear thinking about. We’ll leave the tinfoil hat within reach.

Windows Hello says goodbye to laptop security in testing
https://techhq.com/2023/12/why-are-manufacturers-failing-their-windows-hello-implementations/ (Tue, 05 Dec 2023)

• Windows Hello – the fingerprint-based biometric security system – has been cracked by vulnerability investigators.
• Windows Hello depends on two key systems, the MOC sensor and the SDCP protocol.
• Investigators from Blackwing found device manufacturers were not diligent in maximizing the protection of laptops.

Hubris is a wonderfully entertaining thing to watch play out on a stage. It’s significantly less wonderful if your laptop’s super-duper biometric security system comes with false pride built in. This does not appear to have occurred to the manufacturers of several laptops protected by Windows Hello – the fingerprint scanner that’s supposed to deliver personalized, biometric security for all your sins, secrets, and sales figures.

At least, it didn’t necessarily occur to them at the appropriate time.

The ideal time for comprehensive bug-testing and security audits is before you release a piece of hardware into the wild. But when Microsoft’s Offensive Research and Security Engineering (MORSE) asked Blackwing Intelligence to test out the security on a number of new laptops that were advertised as being compatible with Windows Hello, it’s probably safe to say they expected a glowing report.

If so, what followed was probably quite a gloomy day at MORSE headquarters.

MORSE gave Blackwing three of the most popular laptops that claim to be compatible with the Windows Hello fingerprint security protocols: the Dell Inspiron 15; the Lenovo ThinkPad T14; and the Microsoft Surface Pro Type Cover with Fingerprint ID (for Surface Pro 8/X).

Blackwing ran a vulnerability search process, including significant reverse engineering of both hardware and software, cracking cryptographic flaws, and deciphering and reimplementing proprietary protocols. And while all of those processes took the experience away from the Mission Impossible idea of rendering top-notch security invalid with a paperclip and a ballpoint pen, the end result of Blackwing’s efforts was a full bypass of Windows Hello.

On all three laptops.

While the number of processes through which Blackwing put the laptops to thoroughly diagnose their vulnerabilities was extensive, there were two core elements of the machines that allowed the investigators to entirely bypass Windows Hello.

They were 1) the match on chip sensors, and 2) the secure device connection protocol.

MOC the weak?

Match on chip describes the kind of sensors used in Windows Hello. There’s an alternative, match in sensor, but critically, Windows Hello doesn’t work with those sensors, so any machine that says it’s compatible with Windows Hello will have match on chip sensors.

That’s what bad actors call a single point of vulnerability, and it’s why, for instance, Blackwing was able to perform the same sort of bypass of Windows Hello across three entirely different machines made and branded by three different manufacturers.

Match on chip sensors (MOC sensors) contain a microprocessor and storage built into their chip (the clue is in the name). That setup means fingerprint matching is done locally too, the scanned print being checked against an on-chip database of templates (which you set up when you start to use Windows Hello).

In what is in fact fairly sound engineering theory, that should make for safer biometrics. The prints never leave the chip in the scanner, so the risk of them being stolen is vanishingly small.
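The match-on-chip arrangement can be sketched roughly as follows. This is a deliberately minimal illustration of the idea, not any vendor’s actual firmware; the class and method names are our own, and a hash comparison stands in for the fuzzy template matching real sensors perform.

```python
import hashlib

class MatchOnChipSensor:
    """Toy model of a match-on-chip sensor: fingerprint templates live
    on the sensor and never cross to the host; the host only ever
    receives a match/no-match verdict."""

    def __init__(self):
        self._templates = []  # on-chip storage, invisible to the host

    def enroll(self, fingerprint_scan: bytes) -> None:
        # Real sensors store a derived template, not the raw scan;
        # a hash stands in for that derivation here.
        self._templates.append(hashlib.sha256(fingerprint_scan).digest())

    def verify(self, fingerprint_scan: bytes) -> bool:
        # Matching happens locally, on the chip itself. Real matching
        # is a fuzzy comparison; exact hash equality is a stand-in.
        candidate = hashlib.sha256(fingerprint_scan).digest()
        return candidate in self._templates

sensor = MatchOnChipSensor()
sensor.enroll(b"alice-fingerprint")
print(sensor.verify(b"alice-fingerprint"))    # True
print(sensor.verify(b"mallory-fingerprint"))  # False
```

Note that the host sees nothing but that final boolean, which is precisely the weakness discussed below: an unauthenticated “True” on the wire looks the same whether it came from the genuine sensor or an impostor.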

Windows Hello uses a particular kind of sensor.

Fingerprint profiles are stored on chip in Windows Hello-compatible machines.

That theoretical extra safety measure is why Windows Hello requires the use of this kind of scanner.

So far, so good, right?

The second line of defense.

Yes, as far as it goes. Sadly, with a devious enough mind and something a touch more sophisticated than a screwdriver, it is still possible to get a malicious sensor to spoof a genuine one in its communications with the host and persuade the system that it’s been verified when it hasn’t.

Spoofing is equivalent to sliding into someone’s DMs… and then stealing their identity, their protocols, their privileges, and probably their house to boot. The malicious sensor sends communications to the host to say everything’s just fine and dandy, and it should allow the verification of whatever it’s being asked to verify, while the innocent sensor is, by contrast, baffled and silent, locked in the electronic equivalent of the basement.

So while match on chip sensors are technically extra secure, they do have a fairly well understood weakness.

The point being that Microsoft… knows that.

In fact, it knows it so thoroughly that it created a whole protocol to ensure the security of the connection between the sensor and the host, to effectively lock out any malicious sensors and make sure the host is communicating strictly, securely, with the innocent sensor with the match on chip.

The protocol is called the secure device connection protocol, or SDCP.

It was the second element of vulnerability that allowed Blackwing to leave Windows Hello weeping, in pieces, at the feet of all three laptops.

Windows Hello – not as vulnerable to hackers as it is to inattentive manufacturers.

Again, in perfectly sound engineering and computing theory, the SDCP exists to do one thing – to make communication between the sensor and the host reliably secure.

To do that, it needs to make sure the fingerprint device is trusted and healthy, and it needs to ensure the security of the communication between the sensor and the host.

To achieve that, it needs to answer three questions about the fingerprint sensor: how can the host be certain it’s talking to a trusted device and not a malicious device?; how can it be sure the device hasn’t been compromised?; and how is the raw data protected and authenticated?

If it can answer all three of those questions, then in theory, it should be able to operate with a degree of communication certainty that would make Windows Hello as safe as it needs to be. And, crucially, as safe as it’s marketed to be, as a biometric security system to keep systems like laptops as private as we want in this supposedly vulnerability-conscious age.
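In spirit, a protocol answering those questions resembles a keyed challenge-response: the host accepts a match verdict only if it is bound to a fresh challenge and authenticated with a key that only the genuine sensor holds. The sketch below is our own simplification of that idea, not Microsoft’s actual SDCP; in reality the key material is provisioned and attested at manufacture rather than generated at runtime.

```python
import hmac, hashlib, os

# Stands in for the secret provisioned in the genuine sensor at the factory.
SHARED_KEY = os.urandom(32)

def sensor_respond(key: bytes, challenge: bytes, match_ok: bool) -> tuple[bytes, bytes]:
    """Genuine sensor: returns the verdict plus a MAC binding it to the challenge."""
    verdict = b"\x01" if match_ok else b"\x00"
    mac = hmac.new(key, challenge + verdict, hashlib.sha256).digest()
    return verdict, mac

def host_verify(key: bytes, challenge: bytes, verdict: bytes, mac: bytes) -> bool:
    """Host accepts the verdict only if the MAC checks out."""
    expected = hmac.new(key, challenge + verdict, hashlib.sha256).digest()
    return hmac.compare_digest(mac, expected)

challenge = os.urandom(16)  # fresh per authentication attempt, defeats replay

# Genuine sensor: its authenticated verdict is accepted.
verdict, mac = sensor_respond(SHARED_KEY, challenge, match_ok=True)
print(host_verify(SHARED_KEY, challenge, verdict, mac))  # True

# Malicious sensor without the key: its forged "match ok" is rejected.
forged_mac = os.urandom(32)
print(host_verify(SHARED_KEY, challenge, b"\x01", forged_mac))  # False
```

The key point is the second call: without the shared secret, a spoofing sensor can shout “match ok” all it likes and the host simply ignores it. Which makes what Blackwing found all the more remarkable.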

Three professional months to crack Windows Hello.

As with the MOC technology, the SDCP is actually a more-than-reasonably clever way of shutting down the sneaky operators who would try and get past even the most modern of security systems.

But Blackwing managed it. It managed it with a 100% reliable bypass rate, across three different machines.

What’s the takeaway? That Windows Hello is fatally flawed and not worth the silicon it’s written on?

Absolutely not. It’s worth noting that Blackwing does this sort of thing for a living, and it took its investigators a solid three months of daily access to work out how to compromise the system. Once it’s been done once – or indeed, three times – of course, the process can be sped up and streamlined, but still, the weakness appears not to be in Windows Hello itself.

In explaining the functions of the MOC and the SDCP, we’ve only taken you to the gates of the problem. If you appreciate extreme technological cleverness, reading the original Blackwing report on the process of breaking Windows Hello will make you boggle and chuckle in equal measure.

Shields up, red alert! No, really – raise the shields!

Windows Hello has two lines of initial defense. One of them wasn't switched on in two out of three cases.

One of these things is not like the other… Device manufacturers would do well to tell them apart.

The point, as Blackwing concludes having spent three months on the problem, is not that Windows Hello is particularly weak, but that device manufacturers either don’t understand it, or don’t do a sufficient amount of configuration and testing before their machines are sent out into the world. Sensor encryption generally used poor-quality code, and there’s a likelihood that the sensors used by manufacturers are subject to memory corruption.

But the big spoiler is one that will make bad actors belly laugh.

In two out of the three devices, Blackwing found that the SDCP – the protocol designed specifically to establish secure communication between the sensor and the host, and so close out the loophole in MOC sensors – wasn’t switched on by default.

There are reasons why system engineers twitch whenever anyone dares use the word “foolproof.”

Blackwing is hoping soon to turn its attention to Apple, Android, and Linux systems.

Watch this gaping hole in security protocols for more as we get it…

 

Quantum designs bury face biometrics under smartphone screens
https://techhq.com/2023/08/quantum-designs-bury-face-biometrics-under-smartphone-screens/ (Wed, 30 Aug 2023)

High-end smartphones are works of art, packing many times the computing power needed to put humans on the moon into a sleek form factor topped with a pin-sharp high-resolution color display. However, some might argue that modern devices aren’t as beautiful as they once were. Bezel-less designs engineered to maximize screen size force developers to place front-facing camera hardware and face biometrics into notches and pill-shaped cut-outs.

Apple’s designers have reconciled themselves, for the time being, to hiding the image sensor array – which features a regular camera, plus an infrared (IR) projector and camera – using cleverly designed graphics. Adjacent portions of the display are switched to black to mask the cutout sections of the screen – a feature dubbed ‘dynamic island’ as it includes app-specific information, which launched with the iPhone 14 Pro.

Prospects for under panel cameras and sensors

A more elegant solution would be to hide all of those components under the display, paving the way for full-screen notchless iPhones and other smart devices featuring face biometrics and front-facing cameras. But that’s easier said than done. Firstly, when it comes to cameras, burying the optics under the screen is a step backward in image quality, with less light hitting the sensor.

And things become particularly tricky when you examine how smartphones capture facial biometrics. Considering Apple’s Face ID, the iPhone’s front-facing image sensor array projects a pattern of structured light onto the face of the viewer and builds – using the built-in IR camera – a depth map of the target, which can be compared to a stored user profile.

The issue, as Jacky Qiu – VP of Strategy at OTI Lumionics – told TechHQ, concerns the display electrodes, which sandwich the smartphone screen’s light-emitting red, green, and blue pixels. “Metals are a good absorber of IR light,” he points out. “So if you put a camera behind the screen, the IR signal is attenuated.”

In fact, the attenuation factor can be as high as 2000, which doesn’t leave much IR information for the detector to work with. And so, based on that evidence, the prospects for burying face biometrics under smartphones don’t look promising. However, there are tiny windows of opportunity – literally.
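For a sense of scale: treating that factor as a power ratio, an attenuation of 2,000x corresponds to roughly 33 dB of signal loss.

```python
import math

# Attenuation of the IR signal through the display electrodes,
# per the figure quoted by OTI Lumionics.
attenuation_factor = 2000

# Convert the power ratio to decibels: 10 * log10(ratio).
loss_db = 10 * math.log10(attenuation_factor)
print(f"{loss_db:.1f} dB")  # 33.0 dB
```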

Display makers are using self-assembling molecules produced by OTI Lumionics – a technology that the Canadian firm dubs CPM patterning – to engineer IR transmission through the display electrodes. “You open up many small windows in the range of a few microns,” said Qiu.

OTI Lumionics, which was spun out of the University of Toronto in 2011, has been busy finding a killer application for its advanced materials technology, exploring a variety of use cases along the way. “You need to find one or two application spaces to get a footing and then climb from there,” Qiu, a member of OTI’s co-founding team, explains.

There are countless examples of bright ideas that shine in the lab, but struggle to find their niche in the market. And Qiu advises tech start-ups to be conscious of the bigger manufacturing picture. “Always be in touch with the end consumers,” he said. “And consider their roadmaps and their needs.”

Endorsement from display makers

In 2022, OTI Lumionics announced that it had raised $55 million in series B funding to enable under display cameras and sensors. And the investor list includes venture arms of big-name display makers such as LG Technology Ventures, Samsung, and Universal Display Corporation (UDC).

It’s worth adding that while the focus may be on enabling full-screen high-end smartphones thanks to under panel front-facing cameras and face biometrics, OTI’s patterning technology can serve other industry sectors too. The firm lists semiconductor, energy, battery electrodes, meta materials, catalysts, communications, and medical devices as other applicable markets beyond displays.

And helping the company in its search to find the right molecule has been the rise in quantum computing methods. Quantum computing holds great promise for being able to simulate materials behavior. And while current machines lack sufficient numbers of qubits to represent the long molecules used in display electrode patterning, it turns out that running quantum-inspired algorithms on classical machines ended up playing a vital role in materials discovery.

Display manufacturing is a huge endeavor, involving patterning large sheets of glass the size of an automobile, which are then cut down to device sizes. And trial and error is too expensive when it comes to getting materials approved for manufacturing. Suppliers such as OTI need to narrow down the number of candidates to a manageable number.

Artificial intelligence methods have merits, but only when fed with large amounts of data. However, that takes developers back to the problem of how to generate sample information affordably, which is where synthetic data fits in. And running quantum-inspired algorithms on classical computers has provided a major boost in generating synthetic data that’s a level above relying purely on density functional theory (DFT) – a common route to calculating the electronic structure and properties of molecules.

Given the progress being made by OTI Lumionics, its display partners, and other stakeholders in the smartphone supply chain, the big question is when will we see high-end smartphones with truly full-screen displays?

Apple has sent out invitations this week for its latest event on 12 September 2023, when many expect the tech giant to announce its new flagship iPhone 15 lineup. But considering the views of display industry analysts, it appears unlikely that consumers will see under panel Face ID until late 2024 for Pro models, and 2025 for standard iPhones.

And waiting to purchase a new mobile device is a good way for consumers to conserve materials and limit their environmental footprint, as smartphone circular economy studies have shown.

However, users should factor in the security benefits of using face biometrics. And if your current smartphone doesn’t have a feature such as Face ID or an equivalent, then it may be worth considering making a purchase – which could be of a refurbished model.

In the interview with TechHQ, Qiu points out that IR-based face biometrics have advantages over simply using images in the visual spectrum. Some mobile users post pictures of themselves to social media every day, and this is just one source that, combined with generative AI techniques capable of creating ever more believable deepfakes, could be exploited to undermine the security of regular camera-based authentication systems.

Liveness testing using regular images is being threatened by developments in generative AI.

Apple puts the odds of a random person in the population having a face that can unlock another user’s iPhone at 1 in a million. However, the probability increases for twins and siblings that look alike. Device makers are always pushing the envelope, however.

Last year, Samsung Display revealed that it’s working on technology that sits underneath an OLED screen, which can read multiple fingerprints simultaneously (IMID 2022 keynote address on YouTube). And the screen manufacturer believes that using three fingerprints simultaneously for authentication is billions of times more secure than relying on the impression gathered from a single digit.
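The arithmetic behind that “billions of times” claim is simple compounding: if a false accept on a single fingerprint has some small probability p, and the three readings are treated as independent, an attacker must get lucky three times, so the combined probability is p^3. The single-finger rate below is a hypothetical figure for illustration, not Samsung’s own number.

```python
import math

p_single = 1e-5           # hypothetical single-fingerprint false-accept rate
p_triple = p_single ** 3  # all three independent fingerprints must falsely match

# How many times less likely a false accept becomes with three fingers:
improvement = p_single / p_triple
print(f"{improvement:.0e}")  # ~1e+10, i.e. ten billion times fewer false accepts
```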

The topic of digital IDs is a growing one. Smartphones are used to authenticate to various services, using face biometrics and other methods. And while some may be concerned about putting passport details, driving licenses, and other PII in the digital domain, there are plus points for consumers.

For example, in the future users may be able to share just the necessary portions of their ID, such as whether they meet age requirements, rather than having to hand over their address and other personal information as well, which may be irrelevant to a particular ID check.

Biometric standards and testing build trust in passwordless future
https://techhq.com/2023/05/biometric-standards-and-testing-build-trust-in-passwordless-future/ (Thu, 11 May 2023)

• Users can view algorithm performance comparisons on the web
• Vectors don’t contain real-world data to boost security
• Biometric standards and testing provide a foundation for developers

Passwords are a pain. If you pick something memorable, there’s a good chance that the secret will be easy for bad actors to guess – for example, by looking for the name of your pets or relatives (and other commonly used password prompts) on social media. Password manager apps allow users to apply stronger policies and randomly generate expressions. But extra tools add to the workflow and can put everything in one basket. Consumers want a simpler way to prove their identity, which is why biometric approaches are growing in popularity. But there are important differences between biometrics and passwords. And the unique nature of biometrics makes it important for vendors to follow industry standards and for products to undergo rigorous testing.

A password is (or should be) a secret, but faces, voices, and even fingerprints are everywhere and can’t be protected in the same way. Biometric identity solutions need to be able to differentiate between a copy and the real thing. And because biometrics can’t be reset, users need to be protected from having their identifiers stolen and used elsewhere. “It’s very important to choose biometric providers that use certified and tested technology,” Mikel Sánchez, Business Intelligence and Innovation Director at Veridas, told TechHQ.

Biometric testing and standards

Vendors can submit their algorithms for testing as part of official programs such as the Face Recognition Vendor Test (FRVT) operated by the US National Institute of Standards and Technology (NIST). The program examines how reliable facial recognition algorithms are at recognizing the same person and identifying imposters. And the ongoing work has recently been expanded to include challenges such as distinguishing between twins, which further raises the bar.

Having access to test results strengthens user confidence in the reliability of biometric solutions, and today’s algorithms perform well against common industry benchmarks. Most biometric engines will be capable of comparing two images of the same person and classifying them correctly 99% of the time, which surpasses human performance, according to research. And when it comes to distinguishing between different people, the error rate is just 1 in a million.
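Those headline numbers are worth translating into operational terms, because in a one-to-many search the per-comparison error rate gets multiplied by the size of the gallery being searched. A back-of-the-envelope check, with an illustrative gallery size:

```python
# One false match per million comparisons, per the benchmark figure above.
false_match_odds = 1_000_000   # 1 in a million
gallery_size = 10_000_000      # hypothetical database of 10 million identities

# Expected false matches when one probe is searched against the whole gallery:
expected_false_matches = gallery_size / false_match_odds
print(expected_false_matches)  # 10.0
```

In other words, a rate that is vanishingly small for a one-to-one check can still produce a handful of false candidates at population scale, which is why deployment context matters as much as the raw benchmark score.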

Biometric standards and testing help to bolster security by defining the performance assessment of presentation attack detection (PAD) mechanisms. Vendors need to incorporate PAD mechanisms into their biometric products to prevent bad actors from spoofing user data – for example, by showing a photograph rather than presenting the real face. And a huge amount of work has been carried out by vendors to incorporate so-called liveness detection into their systems.

Security labs such as iBeta carry out PAD testing in accordance with ISO/IEC 30107-3:2023 – the international standard that defines how mechanisms should be evaluated and reported on. And examinations account for different kinds of adversaries, from beginners with little experience to specialist attackers with access to more sophisticated equipment such as a 3D printer and latex masks.
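ISO/IEC 30107-3 reports PAD performance using two headline error rates: APCER, the proportion of attack presentations wrongly classified as bona fide, and BPCER, the proportion of bona fide presentations wrongly classified as attacks. In simplified form, the calculation looks like this (the outcome lists are made up for illustration):

```python
def apcer(attack_outcomes):
    """Fraction of presentation attacks the PAD system failed to flag."""
    return sum(1 for accepted in attack_outcomes if accepted) / len(attack_outcomes)

def bpcer(bona_fide_outcomes):
    """Fraction of genuine presentations the PAD system wrongly rejected."""
    return sum(1 for accepted in bona_fide_outcomes if not accepted) / len(bona_fide_outcomes)

# True means the PAD system classified the presentation as bona fide.
attacks = [False, False, True, False]      # one spoof slipped through
genuine = [True, True, True, True, False]  # one real user was rejected

print(apcer(attacks))   # 0.25
print(bpcer(genuine))   # 0.2
```

The two rates pull against each other: tightening the system to drive APCER down tends to push BPCER up, which is why the standard requires both to be reported.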

And as trust in a passwordless future grows, so do the number of applications for digital identity verification and biometrics authentication. For example, Veridas’ clients include financial services providers who use customer voice characteristics to enable a quick and reliable security check.

Customers first enroll their voice by calling a telephone number and speaking for just a few seconds, and then the system generates a biometric vector that’s unique to them. Then, when customers next speak to their bank, the system can automatically authenticate them – avoiding agents having to spend extra time asking traditional security questions.

“The system is text and language independent,” explains Sánchez. “Callers don’t need to repeat the same phrase for enrolment and authentication.” Today’s algorithms are so effective that a customer could enroll speaking in one language and would still be correctly identified if they switched to another for authentication. “You can enroll in English and authenticate in Spanish,” Sánchez confirms. “The system compares the inner feature of your voice, not the actual words.”
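Under the hood, text- and language-independent speaker verification typically reduces each utterance to a fixed-length embedding vector and compares vectors with a similarity score such as cosine similarity. A stripped-down sketch of that comparison step follows; the four-dimensional vectors and the threshold are made up for illustration, where real systems use embeddings with hundreds of dimensions produced by a neural network.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

enrolled = [0.12, -0.45, 0.88, 0.31]        # stored at enrollment
same_speaker = [0.10, -0.44, 0.90, 0.29]    # same voice, different words/language
other_speaker = [-0.70, 0.20, 0.10, -0.65]  # a different caller

THRESHOLD = 0.8  # tuned on evaluation data in a real deployment
print(cosine_similarity(enrolled, same_speaker) > THRESHOLD)   # True
print(cosine_similarity(enrolled, other_speaker) > THRESHOLD)  # False
```

Because the score is computed on the “inner features” of the voice rather than on the words themselves, the same comparison works whether the caller enrolled in English and authenticates in Spanish, or vice versa.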

The biometric vectors that are created have a number of important properties, which protect users. For example, they don’t contain any real-world data, which means that operators can’t recover voices or facial images from the information. Also, each vector is tied to specific services through digital certificates so that enrolled biometrics are only valid for selected activities and can’t be misused elsewhere.

Today, biometric solutions are being used to rent vehicles, protect minors from gambling, allow retirees to receive their pensions remotely, speed up airport security, and allow fans into sporting events – to give just a few examples. “Biometrics offer a better user experience,” said Sánchez.

Combatting deepfakes

Just as artificial intelligence (AI) plays a role in extracting key biometrics such as voice features, it can also help to combat deepfake attacks – a rising potential threat as voice cloning and other tools become more widely available. One tell comes from the way that an attacker uses a synthetic voice to impersonate someone. “We can detect that the audio is coming from a loudspeaker and not someone’s vocal cords,” Sánchez comments.

There are other giveaways, too, that the audio has been generated synthetically and isn’t being spoken by a human. The characteristics of our vocal tracts put a limit on how quickly spoken sounds can change. But machines don’t follow these same rules and can generate vocal snippets that would be impossible for humans to reproduce.

The combination of security defenses together with biometric standards and testing is building trust in a passwordless future.

Biometric Information Privacy Act (BIPA) – a data protection fail?
https://techhq.com/2023/03/biometric-information-privacy-act-bipa-a-data-protection-fail/ (Fri, 24 Mar 2023)

Biometrics – fingerprints, retina scans, iris patterns, voiceprints, facial recognition features, and other uniquely identifiable human attributes – give developers the option to improve device security. As anyone who has Touch ID or Face ID services enabled on their smartphone will know, biometrics are convenient for unlocking your device or authorizing a payment. Users don’t need to remember a fingerprint, for example, and modern data capture methods are quick and easy to use. But, as Illinois’ Biometric Information Privacy Act (BIPA) points out, biometrics are different from other unique identifiers such as passwords or social security numbers.

Users can’t easily reset their biological information. And when the Biometric Information Privacy Act was passed in 2008, concern was growing about what would happen if biometric data was compromised and fell into the hands of bad actors. At the time, individuals had no recourse if their biometric data was stolen, and state legislators agreed that some form of protection should be put in place. Pilot studies showed that biometrics are successful in combating fraud, and the use of finger-scanning technologies was growing to secure financial transactions. But the fear was that if users had no recourse for incidents of identity theft, they would withdraw from using biometric systems and progress in fighting financial crime and other fraudulent activity could stall.

And before we dig into the weeds of where things went wrong, it’s worth celebrating the progress that’s been made in the use of biometrics. On TechHQ, we’ve highlighted how voiceprints that incorporate hundreds of other identity signals, such as the cadence at which users enter their details on a keypad, can out-perform conventional knowledge-based authentication (KBA) screening questions used widely by contact centers to secure customer accounts.
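As an illustrative sketch only – not CyberLink’s, any contact center’s, or any real vendor’s scoring model – combining a voiceprint match with behavioral signals such as keypad cadence into a single acceptance score might look like this (the signal names, weights, and threshold are all invented):

```python
# Toy multi-signal identity check: each signal yields a similarity in [0, 1],
# and a weighted average decides whether the caller is accepted. The signal
# names, weights, and threshold are invented for illustration.

def identity_score(signals: dict, weights: dict) -> float:
    """Weighted average over whichever signals are available."""
    total = sum(weights[name] for name in signals)
    return sum(signals[name] * weights[name] for name in signals) / total

WEIGHTS = {"voiceprint": 0.6, "keypad_cadence": 0.25, "device_history": 0.15}
THRESHOLD = 0.85

caller = {"voiceprint": 0.92, "keypad_cadence": 0.80, "device_history": 1.0}
score = identity_score(caller, WEIGHTS)
print(round(score, 3), score >= THRESHOLD)  # 0.902 True
```

The design point is that no single signal needs to be decisive: a merely-good voiceprint match can still clear the bar when the behavioral signals agree, which is how such systems out-perform a single knowledge-based question.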

Biometrics are useful authentication tools, but they are not secrets, and developers should take that into account when incorporating them into designs. And, as we’ve written previously, if you’re relying on biometrics to possess the characteristics of a key – to be secret, to be random, have the ability to be updated, or reset – then you’re staring at a major security problem.

Class action concerns

The issue with Illinois’ Biometric Information Privacy Act (BIPA) turned out not to be a security problem, as examples of biometric data falling into the hands of bad actors and being misused are hard to find. What happened instead is that the legislation has backfired. Rather than nurture the adoption of biometrics to combat fraud, the act, in effect, deters companies from using biometric technology. And the reason for firms to hesitate before implementing biometrics is the rise of multi-million dollar class actions such as those faced by Google and Facebook.

BIPA is unusual in that a private cause of action can be brought for violations, making it relatively straightforward for owners of biometric data to seek damages. Class actions, which group claimants who believe that their biometric data has been mishandled, can send the potential damages faced by firms sky-high. And companies that use biometric technology could be subjected to fines totaling hundreds of millions of dollars.

Pushing up the number of claims was a key decision made by the Illinois Supreme Court in 2019, which held that victims didn’t have to show that any harm had been caused through mishandling of their biometric data. The test case involved a child who’d had his thumbprint scanned and stored by an amusement park to ride the various attractions using a ticketless pass.

Illinois’ Biometric Information Privacy Act (BIPA) requires consent from subjects, or their legally authorized representatives, for the collection and storage of biometric data. And operators must make it clear to subjects – for example, customers or employees – that a biometric identifier or biometric information is being collected or stored, and the specific reason for its use, as well as the length of time that the data will be stored.
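Those statutory requirements – notice, a specific purpose, a disclosed retention period, and consent – can be summarized as a simple pre-collection checklist. The sketch below is purely illustrative (not legal advice, and not any real product’s compliance logic):

```python
# Sketch of a BIPA-style pre-collection gate: before storing a biometric
# identifier, check notice, a stated purpose, a retention limit, and a
# written release. Illustrative only - not legal advice.
from dataclasses import dataclass

@dataclass
class BiometricConsent:
    subject_notified: bool   # subject told the data is being collected/stored
    purpose: str             # the specific reason for its use
    retention_days: int      # how long the data will be stored
    release_signed: bool     # written release from subject or representative

def may_collect(c: BiometricConsent) -> bool:
    return (c.subject_notified and bool(c.purpose)
            and c.retention_days > 0 and c.release_signed)

ok = BiometricConsent(True, "employee timekeeping at clock-in", 365, True)
no_release = BiometricConsent(True, "park entry pass", 90, False)
print(may_collect(ok), may_collect(no_release))  # True False
```

Under BIPA, the second record – a thumbprint scanned for a park pass without a signed release – is exactly the fact pattern of the 2019 test case.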

Facebook was judged to have fallen foul of the legislation when a class action was brought against the social media giant, which ‘claimed Facebook collected and stored the biometric data of Facebook users in Illinois without the proper notice and consent in violation of Illinois law as part of its “Tag Suggestions” feature and other features involving facial recognition technology’. Facebook, which denies it violated any law, agreed to pay USD 550 million to settle the privacy lawsuit.

Time to reconsider

And, if Facebook’s experience didn’t make companies think twice about using biometric technology, then the prospect of even larger fines could be the final straw. Businesses are urging the Illinois Supreme Court to reconsider a recent decision that appears to pave the way for claimants to pursue separate cases for each time that biometric data is collected or transmitted. Separating each fingerprint scan, for example, into a separate claim would, as observers have noted, result in “annihilative liability” for businesses.

“These results are absurd,” Lauren Daming, an attorney at US law firm Greensfelder, told TechHQ. “It’s time for legislators to step up and make some changes.” BIPA was intended to protect consumers from biometrics getting into the hands of bad actors, not to drive companies out of business. What’s worse is that the legislation is a pathfinder for biometric information privacy. Currently, only two other states – Texas and Washington – have biometric privacy laws, so it’s important for these issues to be resolved before BIPA serves as a blueprint more widely.

Can face recognition biometrics be trusted? Millennials, Gen Zs think so https://techhq.com/2022/11/can-face-recognition-biometrics-be-trusted-millennials-gen-zs-think-so/ Fri, 11 Nov 2022 16:23:46 +0000 https://techhq.com/?p=219335


The post Can face recognition biometrics be trusted? Millennials, Gen Zs think so appeared first on TechHQ.


The use of biometrics has exploded in recent years, particularly fingerprint and face recognition on mobile devices, as a secure means of passwordless authentication to grant access. But concerns about the actual security of such methods have been increasing, even as incidents of data breaches and intrusions on critical business and personal systems have risen sharply in the past couple of years.

But with the uptrend in mobile-based e-commerce and the proliferation of smartphones, face and fingerprint recognition systems to verify identities and unlock connected devices are very much here for the foreseeable future. But are they a reliable means of password-free identity verification?

There are ethical and technical concerns about the safety of face recognition algorithms. Privacy advocates assert that facial recognition databases can be exploited for nefarious purposes, and there are also doubts about the legal use of facial biometrics by law enforcement agencies – would it be legally binding?

A recent regulatory framework published by the World Economic Forum, in cooperation with INTERPOL and the United Nations Interregional Crime and Justice Research Institute (UNICRI), concluded that, in the face of changing policing strategies, facial recognition algorithms could indeed be used as an investigative lead, if not as evidence in a court of law.

How does the average consumer feel about face recognition technology as a security measure? Millennials, Gen Zs like it

A customer tries out the face recognition feature on an iPhone X smartphone during its launch in Singapore. (Photo by ROSLAN RAHMAN / AFP)

But how does the average consumer feel about face recognition technology as a biometric security measure? A recent survey by facial recognition firm CyberLink, carried out by the third-party research firm YouGov, of 2,455 US adults aged 18 and above, uncovered that around four in 10 Americans use facial biometrics at least once a day with a mobile app.

68% of respondents use facial recognition to unlock their smartphone, laptop or other personal devices, while 51% apply it to log in to a phone app. Of these users, 18-to-24-year-olds (popularly referred to as ‘Gen Z’) and those up to 34 years old (‘millennials’) are the biggest user group by age – three-quarters (75%) of them regularly unlock their devices using facial recognition.

“The explosion of mobile apps, the password nightmare they generated, and the face login solution that followed drove initial adoption in the mass market,” commented CyberLink CEO Jau Huang, highlighting how facial biometrics are now seen as a trusted alternative to increasingly distrusted passwords.

Even among individuals reluctant to adopt face biometrics, the survey found that more than half (52%) would still use it at a commercial outlet like a store or restaurant, if there were assurances that their personal information and other sensitive data would be protected. And 42% would consider face technologies to improve safety at residences and workplaces.

Convenience and ease of use would also convince reluctant users. For instance, nearly half (45%) of those surveyed in the CyberLink study said they would use face recognition if it reduced waiting times in lines. Another 43% would adopt biometrics if it made purchasing items faster and easier than traditional means.


A passenger walks through a ticket gate equipped with a facial recognition fare payment system at Turgenevskaya metro station. (Photo by Natalia KOLESNIKOVA / AFP)

“There’s this perception that people aren’t ready for facial recognition technology, yet almost all of us are using it every day in one way or another,” pointed out CyberLink’s Huang. “New use cases for AI-based computer vision and facial recognition are constantly emerging.”

Some of those use cases are already gaining mass adoption, with the report highlighting use of face recognition technologies in more than half of airports (55%), banks (54%) and medical offices (53%) in the US-centric survey.

Huang said biometric solutions powered by artificial intelligence (AI) could supply a dependable alternative to deal with the talent shortage afflicting many sectors during the pandemic, when a lot of service-level staff were laid off to cut back on costs as businesses digitalized their operations.

“Many see AI-based automation as a key solution to the current labor crisis,” he added. “Traditional and online businesses are using facial recognition to automate a wide set of activities, ranging from security and access control to self-service, statistics and the many facets of customer experience.”

Other aspects uncovered in the research were that current facial recognition naysayers would consider adopting the technology for safety reasons post-pandemic, such as ensuring proper mask usage (23%) and reducing or eliminating human contact altogether (20%). Another 20% would consider taking up facial recognition solutions if they afforded a more premium experience, such as a VIP express-lane checkout in e-commerce.

Clearview AI – No more controversial facial rec tool for US private companies https://techhq.com/2022/05/clearview-ai-no-more-controversial-facial-rec-tool-for-us-private-companies/ Tue, 10 May 2022 07:00:53 +0000 http://dev.techhq.com/?p=215581


The post Clearview AI – No more controversial facial rec tool for US private companies appeared first on TechHQ.

  • Clearview AI reached an agreement for the lawsuit filed against them in Illinois state court two years ago by the ACLU and several other nonprofits
  • The company also agreed to not offer free trials of its software to individual police officers without a sign-off from their superiors

In 2020, the American Civil Liberties Union (ACLU) filed a suit against Clearview AI Inc, alleging that the company violated an Illinois law requiring companies to get consent from people before collecting or using their biometric information. After two long years, the company has finally agreed to settle the litigation, agreeing not to sell its facial recognition technology to most private firms in the US.

In fact, the New York-based company has agreed to a set of restrictions to ensure it is in alignment with Illinois’ Biometric Information Privacy Act (BIPA), one of just a few biometric privacy laws that exist in the States. The central provision of the settlement restricts Clearview from selling its faceprint database not just in Illinois, but across the US, according to the ACLU.

“Among the provisions in the binding settlement, which will become final when approved by the court, Clearview is permanently banned, nationwide, from making its faceprint database available to most businesses and other private entities. The company will also cease selling access to its database to any entity in Illinois, including state and local police, for five years,” the Union said in a blog posting yesterday.

How significant is their database?

The company had recently announced that it was on track to have 100 billion face prints in its database within a year, enough to ensure “almost everyone in the world will be identifiable.” As the ACLU puts it, those images — equivalent to 14 photos for each of the seven billion people on Earth — would enable covert and remote surveillance of Americans on a scale unlike anything seen before.

So far, Clearview AI claims to have scraped over 20 billion photos from the internet, including photos from popular social media platforms, news websites, mugshot websites, and a variety of other sites. That makes Clearview AI’s collection larger than any other known database of its kind. To recall, the company came under the spotlight in 2020 when its database containing billions of faces was breached. In the same year, the ACLU filed a lawsuit against Clearview AI, stating that it had collected all those images without people’s consent.

The ACLU said that was in violation of Illinois’ BIPA, which was considered groundbreaking legislation at a time when many Americans were growing worried about such tech. Prior to the lawsuit, however, buyers of the technology included the Chicago Police Department and the office of the Illinois Secretary of State. In fact, according to Clearview AI, its current customers comprise more than 3,100 US agencies, including the FBI and the Department of Homeland Security.

To top it off, based on a June 2021 report from the US Government Accountability Office, a survey of 42 federal agencies shows that 10 agencies used Clearview AI between April 2018 and March 2020, including the FBI, Secret Service, DEA, and US Postal Inspection Service. 

ACLU’s director of speech, privacy and technology projects Nathan Freed Wessler reckons that “by requiring Clearview to comply with Illinois’ pathbreaking biometric privacy law not just in the state, but across the country, this settlement demonstrates that strong privacy laws can provide real protections against abuse.”

He even believes that Clearview should no longer treat people’s unique biometric identifiers as an unrestricted source of profit. “Other companies would be wise to take note, and other states should follow Illinois’ lead in enacting strong biometric privacy laws,” he added.

How else is Clearview AI restricted?

Under the settlement agreement filed in Illinois state court yesterday, Clearview AI will be permanently banned from granting paid or free access to its gargantuan face recognition database to private entities, both companies and individuals nationwide. Besides that, the company will be banned from granting access to its database to any state or local government entity in Illinois, including law enforcement, for a period of five years. 

“This means that within Illinois, Clearview cannot take advantage of BIPA’s exception for government contractors during that time,” the ACLU noted. The facial recognition company will also be banned from granting access to its database to any private entity in Illinois for five years, even under any of BIPA’s exceptions. The company must also maintain an opt-out request form on its website, allowing Illinois residents to upload a photo and fill out a form to ensure their faceprints will be blocked from appearing in Clearview’s search results, including for Clearview’s law enforcement users.

Additionally, the settlement requires the company to commit US$50,000 in internet ads to publicize the opt-out mechanism, and Clearview AI is prohibited from using the photos people upload as part of this opt-out process for any purpose other than effectuating the opt-out program.

Clearview is also required to end its practice of offering free trial accounts to individual police officers without the knowledge or approval of their employers. Finally, for the next five years, Clearview AI is required to continue its current measures to attempt to filter out photographs that were taken in or uploaded from Illinois.

Do sweeping Europol data mandates place EU data freedoms at risk? https://techhq.com/2021/11/do-broader-europol-data-mandates-place-eu-data-freedoms-at-risk/ Mon, 15 Nov 2021 10:50:00 +0000 http://dev.techhq.com/?p=210414


The post Do sweeping Europol data mandates place EU data freedoms at risk? appeared first on TechHQ.


Last month, the European Parliament voted favorably on a resolution to make it easier for continental crime agency Europol to exchange data with private firms, and to cooperate on policing innovations powered by artificial intelligence (AI).

The proposed mandate was passed with a large majority of 538 votes to 151 and will allow Europol to process data from any private entity, including the ‘Big Tech’ giants, along with any third-party countries that agree to submit their data for processing.

The proposal was tabled by the Parliament’s Committee on Civil Liberties, Justice and Home Affairs (LIBE) to broaden the scope of Europol powers when it comes to three critical areas concerning data sovereignty and management. Besides cooperating with private enterprises, Europol will be empowered to process personal data that can be used in criminal proceedings.

Lastly, alongside becoming a processor for extremely large volumes of data, Europol will also use its newly expanded powers to identify needs and themes for future EU research projects, particularly in AI-led fields where algorithms and machine learning tools can be trained to learn and improve for use by law enforcement.

But critics of the latest proposal say that the expansive data abilities outlined represent a far-ranging reach for police and authorities to develop AI systems that can impede personal data privacy freedoms. In early October, the European Parliament had approved a different report from the same LIBE committee that allowed for the use of AI by European police, while limiting use of the tech in certain aspects.

This included tapping AI to “predict” criminal behavior in people, as well as a sweeping ban on biometric surveillance applications in general. The LIBE report also confirmed an oft-repeated finding: to date, many AI-enabled ID methods have been found to “disproportionately misidentify and misclassify” individuals of different ethnic groups, genders, and age groups.

The bias repeatedly displayed by advanced AI systems has brought about calls for outright bans on applying AI-based technologies in matters of regulation and judicial decision-making. Beyond AI, controversy has also arisen over the use of biometric mass surveillance in public spaces.

But the latest vote appears to have done away with any partial reckoning of these controversial terms. Liberal MEPs have noted that broadly enforcing sweeping data and tech-driven surveillance mandates would appear to go against the EU’s commitment to basic human rights aims.

Some, such as the non-governmental organization Fair Trials, have called for “accountability and meaningful oversight” of all law enforcement agencies, with Fair Trials legal and policy director Laure Baudrihaye-Gérard pointing out to ComputerWeekly.com that Europol has been given specific exemptions from several pieces of legislation surrounding AI, and will therefore not be subject to any stringent safeguards against abuses.

This includes the Artificial Intelligence Act (AIA), which is still before the European Parliament and has already been criticized by civil rights experts for opening the door to what are considered high-risk use cases – with an emphasis on technical workflows and risk mitigation, rather than upholding human rights and data privacy expectations.

Baudrihaye-Gérard points out that Europol is specifically excluded from the AIA by name, providing the security and law enforcement agency with little to no expectation of oversight when it comes to implementing AI and biometric surveillance in its policing.

“Considering today’s vote, we are going down a path in which Europol is allowed to operate with little accountability or oversight,” she said. “No one is asking questions. No one is holding the agency to account. It is deeply worrying for fundamental rights, including the right to a fair trial and the presumption of innocence.”

Defenders of the bill, however, claim that reforming Europol’s mandate has long been on the European Parliament’s agenda. Furthermore, supporters believe that as the criminal element becomes increasingly digitally savvy, the capabilities of Europol also need to modernize to keep up with evolving security threats.

What’s the Meta with Facebook’s facial recognition systems? https://techhq.com/2021/11/whats-the-meta-with-facebooks-facial-recognition-systems/ Wed, 03 Nov 2021 08:50:31 +0000 http://dev.techhq.com/?p=210213

Citing regulatory and privacy concerns, Facebook will no longer use its facial recognition system to identify faces in photographs and videos.

The post What’s the Meta with Facebook’s facial recognition systems? appeared first on TechHQ.


On a worldwide scale, demand for facial recognition systems has shot up across various use cases. While the technology was initially designed for surveillance and security purposes, today facial recognition is being tapped across different verticals and industries.

For instance, some enterprises use facial recognition as a form of biometric security to grant employees access to critical data and workloads. Mobile phone makers have also built facial recognition tools into devices so that users can unlock their phones. Immigration departments and border checkpoints are also using facial recognition systems to scan passengers moving in and out of the country.

Despite this, there are some concerns about bias that can arise from such tech. Facial recognition systems work by matching a human face from a digital image or a video frame against a database of stored faces or face templates. Using AI algorithms, the system pinpoints and measures facial features from a given source.
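Under the hood, modern systems usually do this matching on numeric face embeddings rather than raw pixels. A minimal sketch, with made-up 4-dimensional vectors standing in for real learned embeddings of hundreds of dimensions:

```python
# Sketch of embedding-based face matching: each enrolled face is a numeric
# vector, and a probe face matches whoever is most similar above a threshold.
# The 4-dimensional vectors here are made up; real embeddings are learned
# and have hundreds of dimensions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

DATABASE = {
    "alice": [0.9, 0.1, 0.3, 0.4],
    "bob":   [0.1, 0.8, 0.5, 0.2],
}

def identify(probe, threshold=0.95):
    name, score = max(((n, cosine(probe, v)) for n, v in DATABASE.items()),
                      key=lambda item: item[1])
    return name if score >= threshold else None

print(identify([0.88, 0.12, 0.31, 0.38]))  # close to alice -> "alice"
print(identify([0.0, 0.0, 1.0, 0.0]))      # no confident match -> None
```

The threshold is where bias enters in practice: if the embedding model produces systematically lower similarity scores for some demographic groups, a fixed threshold will misidentify or reject those groups more often.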

The problem is that the technology can at times make wrong judgments when recognizing people or objects. This AI bias has caused real problems in society, particularly in categorizing and stereotyping certain individuals based on inherent characteristics such as race and gender.

The BBC reports that in 2019, a US government study suggested facial recognition algorithms were far less accurate at identifying African-American and Asian faces compared to Caucasian faces. African-American women were even more likely to be misidentified, according to the study conducted by the National Institute of Standards and Technology.

In social media, facial recognition systems work by recognizing images of a particular person. While the technology has been in use for several years, recent privacy concerns have made social media companies rethink their strategies for using it on their platforms.

And surprisingly enough, Facebook – one of the biggest advocates of facial recognition tech – has announced the shutdown of its facial recognition system, which automatically identifies users in photos and videos for tagging purposes.

No more auto-tagging on Facebook

According to Jerome Pesenti, VP of Artificial Intelligence at Facebook, the shutdown is part of a company-wide move to limit the use of facial recognition in their products. As part of this change, users who opted for Facebook’s Face Recognition setting will no longer be automatically recognized in photos and videos, and the template used to identify them will be deleted.

Pesenti said the change will represent one of the largest shifts in facial recognition usage in the technology’s history. Pesenti also pointed out that more than a third of Facebook’s daily active users have opted into Facebook’s Face Recognition setting and can be recognized, and its removal will result in the deletion of more than a billion people’s individual facial recognition templates.

“Looking ahead, we still see facial recognition technology as a powerful tool, for example, for people needing to verify their identity or to prevent fraud and impersonation. We believe facial recognition can help for products like these with privacy, transparency, and control in place, so you decide if and how your face is used. We will continue working on these technologies and engaging outside experts,” explained Pesenti in a blog post.

(Photo by Chris DELMAS / AFP)

At the same time, Pesenti admits that there have been growing concerns about the technology as a whole, with questions surfacing about the place of facial recognition technology in society. Regulators are also still in the process of providing a clear set of rules governing its use.

Other tech firms such as Amazon and Microsoft have both suspended facial recognition product sales to police as the application of it to indiscriminately identify suspects has become more controversial.

“Amid this ongoing uncertainty, we believe that limiting the use of facial recognition to a narrow set of use cases is appropriate. This includes services that help people gain access to a locked account, verify their identity in financial products or unlock a personal device,” continued Pesenti. “These are places where facial recognition is both broadly valuable to people and socially acceptable when deployed with care. While we will continue working on use cases like these, we will ensure people have transparency and control over whether they are automatically recognized.”

The Meta of facial recognition systems

While the technology is being removed from the world’s most populous social media platform, the metaverse concept that Facebook’s newly renamed parent company is pursuing may yet require some form of facial recognition algorithm. This could include recognizing and tracking facial features and movements in order to animate an avatar in the metaverse. In fact, Pesenti highlighted the potential to drive positive use cases in the future that maintain privacy, control, and transparency.

“It’s an approach we’ll continue to explore as we consider how our future computing platforms and devices can best serve people’s needs. For potential future applications of technologies like this, we’ll continue to be public about intended use, how people can have control over these systems and their personal data, and how we’re living up to our responsible innovation framework,” added Pesenti.

The reality, though, is that by removing facial recognition from its social media platform, Facebook may be providing some semblance of privacy to users. But with the company redirecting most of its focus to the metaverse, some form of recognition tool will still be needed to keep users secure.

For many use cases, the technology is still being improved to ensure it works properly; to power facial recognition for biometric security purposes, the AI will only improve as more data is generated. While there is still room for improvement on social platforms, Facebook may have been too ambitious and far-reaching with the technology in the first place. Only time will tell when, and if, the company will implement the technology again.

Is now the time finally for passwordless authentication? https://techhq.com/2021/10/passwordless-authentication/ Fri, 08 Oct 2021 13:50:21 +0000 http://dev.techhq.com/?p=209481

As more tech providers look to passwordless authentication methods, it’s only a matter of time before passwords are no longer a primary authentication method.

The post Is now the time finally for passwordless authentication? appeared first on TechHQ.


Despite passwords being an integral part of everyday life, passwordless authentication has been quickly catching fire. Be it for operating mobile phones, banking, checking emails, or even entering homes, passwords remain the most-used authentication method. With more and more passwords required for ever more digital platforms and services, many users reuse the same password everywhere because it’s easier to remember.

However, using the same password for multiple accounts only makes things easier for cybercriminals. Hence, users were asked to create stronger passwords. What started as a string of a few digits soon required more complex characters to ensure security. Despite this, many users still recycle the same passwords, just adding a few new characters.

Not surprisingly, cybercriminals could easily compromise these passwords; even large corporations were breached because of weak passwords. As such, multi-factor authentication was introduced. But even then, cybercriminals were still able to intercept the codes normally sent via text message to mobile devices.

With passwords increasingly easy to compromise, biometric authentication is now proving to be the safest bet for organizations looking to safeguard their employees and company. Biometric authentication uses the biometric characteristics of users – normally fingerprints, retina scans, palm prints, and the like – to grant access.

Several organizations have been testing and advocating biometric authentication, with Microsoft concluding that it is a more secure authentication method than passwords. Microsoft has since announced that it is doing away with passwords in some of its products, such as email, and allowing users biometric-only access.

TechHQ caught up with Andrew Shikiar, Managing Director of FIDO Alliance, to find out if a passwordless authentication future is legitimately possible. Here’s what he had to say.

Why are people still struggling to create strong passwords today?

I think the real question is: why are people still using passwords today? No password can be secured. The fundamental problem with passwords is that they are a human-readable shared secret that sits on a server. And anything that sits on the server can, and eventually most likely will, be stolen. It’s a historical dependence we have on knowledge-based authentication, which is where users log in to services based on what they know. Attacks won’t stop until we break our dependence on knowledge-based authentication.
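The danger Shikiar describes can be made concrete: once a server-side credential database leaks, even hashed passwords are open to offline guessing. A toy sketch (unsalted SHA-256 and invented usernames, purely to show the attack shape – real systems should at minimum use salted, deliberately slow hashes):

```python
# Sketch of why server-held secrets are dangerous: if a hash database leaks,
# an attacker can guess common passwords offline. Unsalted SHA-256 and
# invented users keep the illustration short; real systems need salted,
# deliberately slow hashes (or, better, no server-side secret at all).
import hashlib

def h(password: str) -> str:
    return hashlib.sha256(password.encode()).hexdigest()

leaked_db = {"carol": h("hunter2"),
             "dave": h("correct horse battery staple")}

wordlist = ["123456", "password", "hunter2", "qwerty"]
cracked = {user: guess
           for user, stored in leaked_db.items()
           for guess in wordlist
           if h(guess) == stored}
print(cracked)  # {'carol': 'hunter2'} - the common password falls instantly
```

Note that the attacker never contacts the server: the guessing happens entirely against the stolen copy, which is why strong server-side defenses cannot save a weak shared secret.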

(Photo by PAUL J. RICHARDS / AFP)

Is biometric authentication the best alternative to reduce dependence on passwords?

We need to move away from this old model of centralized shared-secret authentication. The question is what we move to. Big tech companies like Google, Microsoft, Samsung, Intel, and Qualcomm, and service providers like Amazon, Facebook, and Twitter, are working on it from a requirements standpoint.

From a technical standpoint, the approach we believe we should take is a user-friendly application of asymmetric public-key cryptography. The difference between a public-key cryptography approach and a knowledge-based approach is that instead of having a knowledge-based credential sitting on a server, you have a cryptographic key pair: a public key on the server, and a private key that needs to match precisely, sitting on the user's device.

So now, to log in, instead of trying to remember what I told the server, which could be intercepted by any sort of hacker, I just need to prove that I'm in possession of my device. I can prove that possession by literally touching something, by entering a PIN, or by using a biometric. What this does is change the playing field dramatically for hackers: there's really nothing left to take. You can't steal those credentials, and you can't sell them.
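The possession proof described above can be sketched as a challenge-response protocol. The toy below uses textbook RSA with tiny fixed primes purely for illustration; real FIDO authenticators use vetted implementations of algorithms such as ECDSA or Ed25519 at proper key sizes. The point is the shape of the flow: the server stores only a public key, and the device signs a fresh random challenge locally.

```python
import hashlib
import secrets

# Toy challenge-response login with textbook RSA (stdlib only).
# The server holds only the PUBLIC key (n, e); the private exponent d
# never leaves the user's device. Tiny parameters are NOT secure and
# are used here only to keep the sketch self-contained.

p, q = 32749, 32719                  # small demo primes
n = p * q
e = 65537
d = pow(e, -1, (p - 1) * (q - 1))    # private exponent, stays on the device

def sign(message: bytes) -> int:
    """Done locally on the device, e.g. after a biometric or PIN check."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)

def verify(message: bytes, signature: int) -> bool:
    """Server-side check using only the public key (n, e)."""
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h

# The server issues a fresh random challenge; the device signs it.
challenge = secrets.token_bytes(32)
assert verify(challenge, sign(challenge))
assert not verify(b"a different challenge", sign(challenge))
```

Breaching the server yields only `(n, e)`, which is public anyway; there is no stored secret to replay against the account, which is what removes the scalable attack.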

I think something like biometrics is the direction that we will head in to see user-friendly passwordless authentication happen at scale.

Today, most services allow you to log in with your device via biometrics. This is good, as it changes user behavior. That being said, by taking the password out of the user's brain, it also allows them to move to a more complex password.

If you use a password manager, a key chain, or something like that, it allows you to do a very complex password that is harder to hack. And that’s an important step behaviourally but, ultimately, this is a transitory step to moving beyond passwords entirely, where instead of having a complex password, you actually have a public key.
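The password-manager step mentioned above can be as simple as drawing a long random string from a large alphabet. A minimal sketch using Python's `secrets` module (the length and character set here are illustrative choices):

```python
import secrets
import string

# A password manager's core trick: generate a long random password the
# user never has to remember, then store it in an encrypted vault.

ALPHABET = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 24) -> str:
    """Return a random password drawn uniformly from ~94 printable symbols."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

vault_entry = generate_password()
```

A 24-character password over roughly 94 symbols carries on the order of 157 bits of entropy, well beyond offline-guessing reach, yet the user never has to remember it; as the interview notes, this is a behavioral stepping stone toward dropping the password entirely.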

So yes, I think biometrics and possession-based authentication are the future. You prove that possession by who you are, by what you have, or by some combination thereof, and that's the key to stemming the data breaches and other hacks that are related to passwords.

Is using the same biometric a concern for users?

A lot of education needs to happen for this to grow and scale. The fundamental thing people need to understand about biometrics is that there are different ways to do biometric authentication.

For FIDO and most banks, the authentication is done locally on the device. Your biometrics never go to a central server, so there is no biometric store to hack; no one can steal your actual thumbprint or a visual representation of it. This makes it impossible for hackers to mount a scalable biometric attack.

With possession-based authentication, it's a one-at-a-time approach, where a hacker literally has to be with you to spoof your face or your fingerprints. It takes away the high-value, high-damage attacks. Of course, when someone puts a gun to your head and forces you to log in, that's a whole different scenario and really out of scope for any sort of technology.

The core thing is for users to authenticate locally to their device, so that the authentication data, or anything else valuable, is never transmitted over the internet, because that's where phishing and man-in-the-middle attacks happen.


(Photo by Dimitar DILKOFF / AFP)

Is biometrics technology expensive for implementation?

The good thing about biometric technology is that it’s built into most devices that we have right now. Most Windows machines and mobile devices have a biometric reader on them.

Another way companies are implementing passwordless authentication is with security keys. Hardware tokens cost anywhere between $20 and $60 per user and can be distributed to employees to log in with as a second factor or primary factor.

The nice thing about these devices is that they prevent phishing and password resets. Password resets can cost companies millions of dollars each year. Most of the technology is already built into devices, and all businesses need to do is deploy it.

Technology is also getting better. For example, facial recognition can now detect the liveness of the subject. The technology is improving, and we're helping set standards for biometric certification along with other standards organizations.

We don’t specify the biometric modality, so it can be a finger, retinal, veins pulse, or voice sounds on a local device.

As an industry, we’re learning and we’re establishing best practices. We’re trying to tackle things like account recovery to see what happens when you lose your account, or you need to re-enroll a new account to make that process a little smoother.

Will we eventually get rid of passwords in the future?

We can’t fully get rid of passwords. But I do think a couple of things will happen, especially from a user experience standpoint. More consumer services will offer passwordless authentication and logins and rely on device biometrics.

But eventually, more of these will offer true passwordless authentication, where you don't have a password at all. Ultimately, there will be less friction when you log in. Part of possession-based authentication is knowing what you're doing, and initially it's going to take some adjusting.

For example, users will wonder why it is suddenly so easy to log in to a bank account. Is a thumbprint secure enough? People are used to friction, so it's going to take some time to get used to this.

The challenges are part technical, part education, and part maturity. We need to make progress in all three areas to advance this and make it common practice.

The post Is now the time finally for passwordless authentication? appeared first on TechHQ.
