Implementation of hands-free gesture and voice control for system interfacing

The project consists of two main components: Gesture and Voice. The Gesture Recognition component uses two algorithms, Lucas-Kanade optical flow and the Haar classifier. The Voice component consists of two sub-components: a voice recognizer and a voice synthesizer. The overall architecture is built on the core concepts of OpenCV (Open Source Computer Vision Library). The architecture and component distribution look simple and systematic but grow more complicated as the implementation proceeds. The detailed description below should make this non-trivial system easier to understand.

Author: 
Tanvish Thakker, Vaibhav Chopda, Shrenik Ashar and Nasim Shah
Journal Name: 
Int J Inf Res Rev
Volume No: 
04
Issue No: 
04
Year: 
2017
Paper Number: 
1887
