

Advancing Machine Learning at AIS

In the summer of 2017, while performing research for the Defense Advanced Research Projects Agency (DARPA), Dr. Robert Wright, Principal Research Scientist; Jacob Baldwin, Research Scientist; and Rob Dora, Project Manager II, along with former AIS interns Ryan Burnham and Andrew Meyer, began writing a paper titled “Beyond Speech: Generalizing D-Vectors for Biometric Verification.”

“The goal of this research is to develop robust means of verifying the identity of individuals based on how they regularly interact with devices,” said Wright. “Whether the person is using a cell phone or computer, we needed to find a way to successfully do this and do it passively, without disrupting the user in any way.”

The group was inspired by Deep Vectors (D-Vectors)*, an approach used for speaker verification, and was curious to see whether it could be applied to other verification domains.

“From the original D-Vectors work, we were able to generalize the process of training models to automatically extract features from datasets that differentiate between individuals,” said Wright. “We have proven that this framework applies to far more than just speaker verification.”

The result of this research was an entirely new framework that can be applied to any biometric verification problem while outperforming modern state-of-the-art methods. In the paper, the group applies the novel process to keystroke and mobile gait analysis, successfully identifying a person based on how they type on a keyboard and how they physically move while carrying a mobile device.
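To make the verification step concrete, here is a minimal, purely illustrative sketch of how a d-vector-style pipeline typically works: a trained network (not shown, and not the team's actual model) maps each behavioral sample, such as a keystroke burst or a gait window, to a fixed-length embedding; enrollment averages a user's embeddings into a template; and verification thresholds the cosine similarity between the template and a new probe embedding. All function names, vectors, and the threshold value below are assumptions for illustration, not details from the paper.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def enroll(embeddings):
    """Average a user's sample embeddings into a single template d-vector."""
    dim = len(embeddings[0])
    return [sum(e[i] for e in embeddings) / len(embeddings) for i in range(dim)]

def verify(template, probe, threshold=0.8):
    """Accept the probe if it is close enough to the enrolled template."""
    return cosine_similarity(template, probe) >= threshold

# Toy 3-dimensional embeddings standing in for real network outputs.
alice_samples = [[0.9, 0.1, 0.2], [1.0, 0.2, 0.1]]
template = enroll(alice_samples)

print(verify(template, [0.95, 0.15, 0.15]))  # genuine-looking probe -> True
print(verify(template, [0.1, 0.9, 0.8]))     # impostor-looking probe -> False
```

In a real system, the embedding network and the acceptance threshold would be tuned on held-out data; the sketch only captures the enrollment-and-compare pattern common to d-vector approaches.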

“Our framework generalizes beyond speaker verification and is passive, unlike D-Vectors,” said Wright. “We require less data and perform with higher accuracies, which will enable further advancements in numerous biometric verification domains.”

Turning such an achievement into a paper was a challenge. Ensuring the technical information was clear, while also emphasizing its significance, was a priority as the group developed the paper. In addition to contributing to the technical effort, Dora helped the group meet these goals by serving as editor, focusing on readability.

“We needed to make sure the paper contained enough technical details for a reader to be able to follow and understand our technique,” said Baldwin. “We also had to make sure that the reviewers could see the big picture of our contributions to the biometrics field.”

After sending the paper to the Association for the Advancement of Artificial Intelligence (AAAI) and receiving feedback, the group strengthened the paper's motivation and expanded on their research. Once this process was complete, the group submitted the final paper to AAAI, and it was one of only 1,147 papers accepted from over 7,700 submissions to the 33rd Annual AAAI Conference.

“It was rewarding to have the research acknowledged at one of the top-tier AI conferences,” said Dora. “The effort was the first undertaken by several young scientists, so having them actualize their ideas in such an outstanding way and earn recognition for it from AAAI was incredible.”

Wright and Baldwin attended the AAAI Conference on January 27, 2019 in Honolulu, Hawaii. There, they had the opportunity to present a poster on their research and major contributions to the field. They also got to enjoy the full experience of the AAAI conference, which included technical paper presentations, invited speakers, workshops, tutorials, poster sessions, senior member presentations, competitions and exhibit programs.

The group is continuing the momentum from this research by expanding the applications of their framework and finalizing the patent process for their findings.

“Two things I am very excited to do are to try our models on mouse data and to rework our models to include explicit time dependence by using a technique called attention,” said Baldwin. “We are looking for additional funding for this work and we have a demo ready to show off our keystroke model to potential customers and end-user groups.”

Anne Hartman, Operations Manager, recognizes the significance of the research communicated in the paper.

“We are very excited by the amazingly accurate results we’re achieving in identifying a specific end user by employing keystroke or gait data the size of a tweet instead of data the size of a book,” said Hartman. “Selection of their paper by AAAI is a testament to the innovation, creativity and talent routinely shown by Jacob Baldwin, Bob Wright, Rob Dora and their entire team as we push forward in Machine Learning missions for AIS.”


*Variani, Ehsan, et al. “Deep neural networks for small footprint text-dependent speaker verification.” 2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2014.

Want to Learn More?

Click to read the full research paper
