Passive Behavioral Biometrics: Identifying the Fallacies and Dispelling the Myths

Reading time: 9 mins

In today’s ever-expanding technology market, it’s easy to get caught up in the latest buzzwords and noise. Start talking about biometrics and you’ll likely hear someone throw out terms like “frictionless,” “passive,” and “behavioral.” But what does this all mean, and better yet, how does it impact you?

For starters, companies that advertise themselves as behavioral biometrics providers are really talking about behavioral patterns only, which they then combine with keystroke or face biometrics in order to claim they are using behavioral biometrics.

A short primer will help explain. There are two types of biometrics: physical/static biometrics (face, fingerprint, voice) and behavioral biometrics (gait, or signature/gestures). To be a biometric, the technology must create a template at enrollment. If a vendor is capturing movement metrics (like phone or body movements) and calling them biometrics, you need to know three important things:

  1. What are the false positive and false negative rates from independent testing? If your vendor cannot produce independent, published test results, start walking away. Vendors will often say, “Well, we did our own testing,” but this is meaningless. They could be selling you a biometric that has no value in ID authentication and will produce many false positives, which defeats the purpose and creates a bad user experience.
  2. Have them explain how they obtain a template. A template is crucial for any biometric and is created before any measurements are compared, usually at enrollment. It provides the data against which all of the user’s future logins or access activities are compared. ALL biometrics must have a template; otherwise there is nothing to compare against. Best practice is to always involve the user. Some companies avoid user participation, which means they must gather a lot of data to “create” a template, and this takes time (for example, if you only visit your bank twice a week, it’s difficult to collect any meaningful biometrics until a month or so has passed). If the technology collects the way you tap or text, it is very hard to see differences between any two users in a short time, and it’s likely you can never get a unique score. A lot of damage to your accounts can happen in minutes, and without a solid template, many false positives can occur, letting the bad guys in. NOT GOOD.
  3. Have them explain how their templates get updated. Just like software, templates need continual updating, since the movements they measure are affected by changes in our lives such as stress, alcohol, injury, drug use, and a host of other normal occurrences. If they are not updating this information, start walking. Other behaviors captured might include what time you sign in to your bank, what transactions you complete, geolocation, MAC address, ISP, device, etc. But these don’t tell us who you are, just that someone with your account info has logged in.
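The error rates in point 1 above come straight from test counts. A minimal sketch of how false positive and false negative rates are computed — the counts here are hypothetical illustration, not any vendor’s published data:

```python
# False positive rate (FPR): imposter attempts wrongly accepted.
# False negative rate (FNR): genuine users wrongly rejected.

def error_rates(false_accepts, imposter_attempts,
                false_rejects, genuine_attempts):
    """Return (FPR, FNR) as fractions of the respective attempt counts."""
    fpr = false_accepts / imposter_attempts
    fnr = false_rejects / genuine_attempts
    return fpr, fnr

# Hypothetical test run: 1,000 imposter and 1,000 genuine attempts.
fpr, fnr = error_rates(false_accepts=30, imposter_attempts=1000,
                       false_rejects=50, genuine_attempts=1000)
print(f"FPR: {fpr:.1%}, FNR: {fnr:.1%}")  # FPR: 3.0%, FNR: 5.0%
```

This is exactly why independent testing matters: without published attempt counts, a vendor’s claimed accuracy cannot be checked.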

An example of Artificial Intelligence (AI) learning is the following: a product like BioSig-ID, a gesture biometric, has built-in algorithms that “learn” and adjust the biometric template on every login. This keeps the template current, so re-enrollments are not required. Most companies use behavior patterns as a “rating system” only. They are simply not using enough biometrics, or keeping templates current enough, to perform true identity authentication, so they have to combine many other variables just to create a rating index. For example, if you capture 200 indices and each index contributes, say, 0.01%, the sum is still a small percentage that is not effective enough to authenticate a person. This means you never really know who the user is; it’s only an informed guess. If an informed guess is good enough for you, stop reading and good luck.
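The rating-index arithmetic above is worth spelling out. Using the article’s own hypothetical numbers (200 indices at 0.01% each):

```python
# Hypothetical rating index: 200 behavioral data points, each
# contributing roughly 0.01% toward an identity score.
num_indices = 200
per_index_contribution = 0.0001  # 0.01% expressed as a fraction

combined = num_indices * per_index_contribution
print(f"Combined contribution: {combined:.0%}")  # Combined contribution: 2%
```

Even summed, the signals reach only about 2% — a rating, not an authentication.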

Other words used include “frictionless” and “passive.” In most cases, these also signal a lack of privacy and spell liability issues for you going forward. The BIPA law in Illinois is ground zero for the class action lawsuits against Facebook, Google, Shutterfly, and dozens of others. Facebook reportedly is keeping $5B handy to cover potential fines resulting from privacy issues. The problem is that these companies have been “passively” collecting facial images without users’ consent, and they have not provided use statements or obtained consent agreements from users, all of which is illegal. So much so, in fact, that anyone collecting this type of information is subject to a fine of up to $1,000 per violation. Here is a snippet of the BIPA law.

  • (740 ILCS 14/10) Sec. 10. Definitions. In this Act:
    “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or scan of hand or face geometry. Biometric identifiers do not include writing samples, written signatures, …

Facebook’s defense is that they did no harm, they collected pictures and this collection was not injurious.

However, in a landmark ruling, Rosenbach v. Six Flags, this same defense failed. The court held that plaintiffs did not have to show harm, stating that when companies violate the law, “the injury is real and significant.” As a result, Facebook and others are now in jeopardy of paying heavy fines.

So much for “frictionless” and “passive” when collecting physical biometrics! Did you notice that the BIPA law stated that written signatures were not included in the list of biometric identifiers?

Consider biometrics using written signatures like BioSig-ID and save yourself the headache and liability.

Here are some other questions to ask companies who claim they use biometric behaviors for ID authentication:

  1. Keystroke: Solutions that use typing patterns (keystroke or texting) are not useful across all devices because of keyboard differences, such as mobile versus PC. Keystroke ranks as one of the weakest biometrics because it takes a lot of typing/texting to determine a unique pattern. In independent tests, BioSig-ID was 27X more accurate and 9X better at recognizing registered users (i.e., the case where you are in a hurry, enter your password, and the message says “wrong password” or “does not recognize user”). One large school in Texas stopped using typing analysis after finding it was not effective for ID authentication.
  2. Time to collect patterns: Fraud detection and prevention in consumer, financial services, enterprise, or government applications is crucial, but offers little to no benefit if not performed in real time. Creating a pattern can also take months. For example, a user logs in to their healthcare account once a week, so you need many weeks just to develop a pattern with enough history to be useful for comparison.
  3. Not useful for standalone authentication: Typically, data points are collected and analyzed to confirm that the person entering the username/password is, in fact, the authorized user and not someone using stolen credentials. In such cases, behavioral patterns don’t provide unique, standalone identification. Looking at how someone moves or behaves without a current template is not a viable authentication solution, and even if geolocation and other factors are thrown in, they don’t add up to true ID authentication. The software then makes an accept/reject decision based on a minimum score value, and if that minimum is set low enough, you may find imposters getting access. This is why you need false positive testing, so you can determine assurance levels and make informed decisions about the technology and the scoring methodology offered.
  4. User experience: Passively or “frictionlessly” captured behavioral patterns create a bad user experience, with inherent privacy flaws, once the collection is brought to users’ attention. How do you explain to someone that they’re being monitored or that their information is being collected? Behavioral biometrics companies claim to look at anywhere between 100 and 2,000 data points to determine a person’s identity, but never divulge what data is being collected and analyzed. Why not? Maybe it’s because you might laugh.
  5. Real-time reporting: Behavioral applications are primarily used for post-transaction forensic analysis, rather than for analyzing, identifying, and authenticating in real time. The whole point of fraud detection is to monitor for suspicious activity as it’s happening and put a stop to it. If a provider uses a behavioral application to examine months’ worth of historical user data after an incident has occurred, chances are the bad actor has already entered and exited the network and stolen from you.
  6. Sensors limit use: The future performance of behavioral patterns is limited by the capabilities of the sensors available to collect behavior data and of the AI available to analyze it. If these tools are compromised or become outdated, the technology becomes useless. It also requires a team of humans to verify the results of any machine learning: it is unwise to let AI run algorithms unchecked, because without human intervention the results may not make sense and could produce false positives that negate their use.
  7. New privacy laws: The privacy law in Illinois (BIPA) and other laws will force companies to disclose which biometrics they are collecting and using. You will quickly find that companies advertising biometric behaviors will admit they don’t really use biometrics and instead collect simple patterns of behavior.
    1. They just don’t want the liability that comes with capturing faces or fingerprints.
    2. They would have to obtain consent from all users they monitor, provide a use case, and promise not to sell the data; there goes the passive, frictionless experience you wanted for your users.
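The threshold risk described in point 3 can be sketched simply. The scores and thresholds below are hypothetical, but they show how a score-based system “set to minimum” admits imposters:

```python
# Hypothetical score-threshold decision: a lax minimum lets an
# imposter's combined behavioral score clear the bar.

def is_authenticated(score: float, threshold: float) -> bool:
    """Accept the session if the behavioral score meets the threshold."""
    return score >= threshold

imposter_score = 0.45  # someone logging in with stolen credentials
genuine_score = 0.82   # the real account holder

lax_threshold = 0.40     # "set to minimum"
strict_threshold = 0.75

print(is_authenticated(imposter_score, lax_threshold))     # True: imposter admitted
print(is_authenticated(imposter_score, strict_threshold))  # False
print(is_authenticated(genuine_score, strict_threshold))   # True
```

Raising the threshold keeps the imposter out here, but only published false positive testing tells you where that threshold actually needs to sit.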

When it comes to passive or behavioral biometrics, don’t believe the hype: many are not suitable for practical ID authentication. Have your vendor explain exactly which biometrics they are using.

You can’t afford to gamble on unknown, unreliable solutions that could directly impact your personal information or control who has access to your security network, company portal, and more.

Criminals are getting smarter by the day, and the privacy watchdogs are getting ready to pounce. You don’t need fancy, and with the new rules and current lawsuits, there is no longer any such thing as “passive” or “frictionless.”

When looking for a secure, reliable, privacy-driven biometric that won’t leave you liable, look at BioSig-ID. We’ve re-invented the password, which means you keep the architecture you are already comfortable with, but gain a highly secure authentication solution, simply by drawing four characters.

Turn your passwords into a highly secure authenticator that stops password sharing and keeps imposters out of your accounts and devices. Protect yourself. Stop being misled by other companies’ claims. Check us out, then call us at 877-700-1611.