THE trouble with passwords is that it is difficult to come up with one that is both easy to remember and difficult to crack, leading most people to invent words or phrases that mischief-makers can work out with relative ease. An alternative, supported by many security researchers, is to switch to biometrics, in which symbols are replaced with an individual’s unique bodily characteristics, like fingerprints, the retina or voice.
Of these, voice requires no extra kit beyond mics built into phones and computers, just clever software. But the data that describe a voice are just as vulnerable to being pilfered as any other authentication bits stored in a central repository. This worried Bhiksha Raj, from Carnegie Mellon University in Pittsburgh, because voice, unlike passwords and fingerprints, reveals much personal information, like ethnicity, education and levels of stress. And voice data associated with a user’s account might be matched against recordings hosted on YouTube or even surveillance tapes to pinpoint an individual. This information might in turn be used to target ads, identify a person at a rally or impersonate a user.
This led Dr Raj, a computer scientist who specialises in voice, and three colleagues to develop ways to encrypt voice-authentication data and match it against a record held in a central repository without the need to store a sample of the speaker’s voice or any data derived from it. They will present a paper outlining their latest method to a security conference in Germany on September 21st.
Existing voice-authentication systems require a speaker first to record voice samples that are then crunched into a template against which future vocal logins are measured. Both the voice samples and template may be stored on the server. When a user returns to log in by voice, the new speech is also sent, crunched and compared against the template.
Earlier proposals to keep voice data private relied on secure multiparty computation (SMC). SMC was devised as a solution to “the millionaire problem”, in which two millionaires want to find out which one of them is the richer without divulging the size of their fortunes to each other. It also allows two parties—a user’s device and a server, say—to compare values held by each without revealing what these are to the other.
The easy way to solve the millionaire problem is, of course, to enlist a trusted third party to compare the two fortunes. But that introduces precisely the sort of vulnerability authentication researchers wish to avoid. SMC is an alternative, but in the case of voice authentication it is fiendishly complicated, since the central server never stores the registration samples, only encrypted bits of them. As a result, it is a drain on the processing power of the devices involved. Dr Raj estimates that it might take an ordinary computer up to 14 hours to check a four-second snippet against a stored voice registration using SMC. So he and his team have come up with a radically simpler approach that turns the analysis of a voice into a sort of password which can be passed to and stored by the server without revealing its contents, while still preventing the user from effectively probing the server.
Humdrum passwords tend to be encrypted using so-called “hashing” algorithms, which take a piece of text and transform it into a string of seeming gibberish called a “hash” that is then stored in the central repository. Any given bit of text yields a distinct hash; change the original password, even by a single character, and the resulting hash will be wildly different. Moreover, because of the algorithm’s complexity, it is practically impossible to work back from the hash to the password. (Password cracking, such as that performed on the leaked LinkedIn database, relies on hashing common words and sequences and matching the results against the stolen hashes, rather than reversing the hash itself.)
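The two properties described above can be seen in a few lines of Python. This is a minimal sketch using the standard library’s SHA-256; it is chosen purely for illustration, since real password stores should use a deliberately slow hash such as bcrypt or scrypt.

```python
import hashlib

def hash_password(password: str) -> str:
    # SHA-256 stands in for whatever hash a real system uses;
    # it maps any text to a fixed-length 64-character hex string.
    return hashlib.sha256(password.encode("utf-8")).hexdigest()

h1 = hash_password("correct horse battery staple")
h2 = hash_password("correct horse battery staplf")  # one character changed
print(h1)
print(h2)
# The two hashes bear no obvious resemblance to each other, and
# neither can feasibly be reversed to recover the original text.
```

Changing a single character of the input scrambles the entire output, which is why a stolen hash reveals nothing about how close a guess came.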
Such one-to-one correspondence, Dr Raj notes, is impossible with voice analysis, where a close match is the best that can be hoped for. In January Dr Raj and a colleague described software that takes voice samples recorded at registration, generates a digital description of them and incorporates an acceptable range of variation. Those numeric values are then obscured in three stages. A private key, generated by the user’s device and never sent to the server, encrypts the starting value. A random string of text, called a salt and also known only to the user, is appended to the key-encrypted text. Then the lot is hashed. Ten samples might have 100 encrypted hashes each, giving 1,000 chunks of voice-matching data. The private key is needed because without it, an attacker with a recording of the speaker might run the speech through the same algorithm and produce a matching hash. The salt makes the result harder still to crack.
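The three stages might be sketched as follows. The paper does not specify the cipher, so an HMAC stands in here for the private-key encryption step; the function name and inputs are illustrative, not Dr Raj’s actual code.

```python
import hashlib
import hmac
import os

def obscure(feature_chunk: bytes, private_key: bytes, salt: bytes) -> str:
    # Stage 1: scramble the voice-feature value with the device's private
    # key. An HMAC is used as a stand-in for the unspecified cipher.
    keyed = hmac.new(private_key, feature_chunk, hashlib.sha256).digest()
    # Stage 2: append the user's secret salt.
    salted = keyed + salt
    # Stage 3: hash the lot before it is sent to the server.
    return hashlib.sha256(salted).hexdigest()

key = os.urandom(32)   # generated on the device, never sent to the server
salt = os.urandom(16)  # known only to the user
print(obscure(b"voice-feature-chunk-01", key, salt))
```

Because the key and salt never leave the device, the server holds only opaque hashes; an eavesdropper with a recording of the speaker cannot reproduce them.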
When a user logs in, he utters an arbitrary phrase, which is processed through the variability-generating algorithm to create a range of digital descriptions, each of which is encrypted with his private key, salted and hashed. The resulting passwords, reflecting the range, are sent to the server and verified. If enough of these overlap with the stored ones, access is granted. (Dr Raj says the precise value of “enough” could be tuned to trade false accepts against false rejects, depending on how tight security needs to be.)
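The server’s side of that check amounts to counting overlaps between two sets of hashes. A minimal sketch, in which the 30% threshold is an illustrative assumption rather than a figure from the paper:

```python
def verify(login_hashes: set, stored_hashes: set, threshold: float = 0.3) -> bool:
    # Grant access if enough of the login hashes also appear in the
    # stored set. The threshold is the tunable "enough": raising it
    # rejects more impostors but also more legitimate users.
    if not login_hashes:
        return False
    overlap = len(login_hashes & stored_hashes)
    return overlap / len(login_hashes) >= threshold

stored = {"h1", "h2", "h3", "h4", "h5", "h6"}
print(verify({"h1", "h2", "h9", "h10"}, stored))   # 2 of 4 match: True
print(verify({"h8", "h9", "h10", "h11"}, stored))  # 0 of 4 match: False
```

The server sees only which opaque hashes coincide, never the voice features that produced them.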
For a single user, his device (be it a smartphone or a computer) can perform all this encrypting and hashing of hundreds of password possibilities in a jiffy, and a server can carry out the matching equally briskly. But thousands of jiffies, as with simultaneous logins at a popular online service, add up to a considerable drain on a server’s processing power. So Dr Raj and colleagues have refined the method to create a single hash at login that can be matched against the range generated from the original voice samples.
To do this the authors used a different form of hash, known as secured binary embeddings (SBEs). SBEs have the odd but useful property of producing encrypted hashes that vary only slightly for inputs that are themselves alike. The earlier method calculates acceptable ranges from samples and encodes those as hashes, of which a percentage must match for verification. With SBE, all the voice samples are compiled into a single SBE which can be compared to the SBE derived from a voice login. If the login voice is similar to the stored samples, the two SBEs will have values that are also close; the server software can easily spot when they aren’t.
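The similarity-preserving property can be imitated with random projections: take the sign of a voice-feature vector projected onto random hyperplanes, and nearby inputs yield bit-strings that differ in only a few places. This is a simplified, locality-sensitive sketch of the idea, not the actual SBE construction, which adds security-critical quantisation steps; all the vectors below are made up.

```python
import random

def binary_embedding(features, planes):
    # One bit per random hyperplane: the sign of the projection.
    # Similar feature vectors flip few bits; dissimilar ones flip many.
    return [1 if sum(f * p for f, p in zip(features, plane)) >= 0 else 0
            for plane in planes]

def hamming(a, b):
    # Number of positions at which two bit-strings differ.
    return sum(x != y for x, y in zip(a, b))

rng = random.Random(42)
# The random planes play the role of a per-user secret in the sketch.
planes = [[rng.gauss(0, 1) for _ in range(8)] for _ in range(64)]

enrolled  = [0.9, -0.2, 0.4, 1.1, -0.5, 0.3, 0.1, 0.7]
similar   = [f + 0.05 for f in enrolled]  # same speaker, slight variation
different = [-f for f in enrolled]        # an unrelated voice

sim_dist  = hamming(binary_embedding(enrolled, planes),
                    binary_embedding(similar, planes))
diff_dist = hamming(binary_embedding(enrolled, planes),
                    binary_embedding(different, planes))
print(sim_dist, diff_dist)  # small distance for the match, large otherwise
```

The server need only check whether the Hamming distance between the stored and login embeddings falls below a cutoff, without ever seeing the underlying voice features.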
SBEs may be generated just as quickly as the multiple-password method on the device used to record samples and to log in. Crucially, instead of having to cope with thousands of values at each login, the server need only perform a single, relatively simple comparison of values, much as in an ordinary text-password check. In testing, Dr Raj’s method has been as accurate as conventional systems that make no attempt to disguise users’ identities.
From the Economist, published under license. The original article can be found on www.economist.com