LipNet

LipNet is a deep neural network for audio-visual speech recognition (AVSR). It was created by University of Oxford researchers Yannis Assael, Brendan Shillingford, Shimon Whiteson, and Nando de Freitas.[1] Audio-visual speech recognition has enormous practical potential, with applications such as improved hearing aids, better recovery and wellbeing for critically ill patients,[2] and speech recognition in noisy environments,[3] implemented, for example, in Nvidia's autonomous vehicles.[4]

References

  1. ^ Assael, Yannis M.; Shillingford, Brendan; Whiteson, Shimon; de Freitas, Nando (2016-12-16). "LipNet: End-to-End Sentence-level Lipreading". arXiv:1611.01599 [cs.LG].
  2. ^ "Home Elementor". Liopa.
  3. ^ Vincent, James (November 7, 2016). "Can deep learning help solve lip reading?". The Verge.
  4. ^ Quach, Katyanna. "Revealed: How Nvidia's 'backseat driver' AI learned to read lips". The Register.