The LEAP Motion API can detect four main built-in gestures: the circle gesture, swipe gesture, screen tap, and key tap. The API also gives readings or measurements based on the hand/palm/fingertip positions, angles with certain axes, and other useful data.
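For reference, this is roughly how we read those values today (a minimal sketch against the Python bindings of the LEAP SDK v2, following the listener pattern from that SDK's sample code):

```python
import sys
import Leap

class SignListener(Leap.Listener):
    def on_connect(self, controller):
        # Enable the four built-in gesture recognizers.
        controller.enable_gesture(Leap.Gesture.TYPE_CIRCLE)
        controller.enable_gesture(Leap.Gesture.TYPE_SWIPE)
        controller.enable_gesture(Leap.Gesture.TYPE_SCREEN_TAP)
        controller.enable_gesture(Leap.Gesture.TYPE_KEY_TAP)

    def on_frame(self, controller):
        frame = controller.frame()
        # Built-in gestures recognized in this frame.
        for gesture in frame.gestures():
            print("gesture type: %d" % gesture.type)
        # Raw measurements: palm position (mm), palm orientation
        # (radians), and the five fingertip positions.
        for hand in frame.hands:
            print("palm: %s" % hand.palm_position)
            print("pitch/roll/yaw: %f %f %f" % (hand.direction.pitch,
                  hand.palm_normal.roll, hand.direction.yaw))
            for finger in hand.fingers:
                print("tip: %s" % finger.tip_position)

def main():
    listener = SignListener()
    controller = Leap.Controller()
    controller.add_listener(listener)
    print("Press Enter to quit...")
    sys.stdin.readline()
    controller.remove_listener(listener)

if __name__ == "__main__":
    main()
```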
We are focusing on converting 20 ASL sign-language gestures/sentences into text & voice. Each sentence is currently handled and converted individually through its own executable file. Our next step is to combine all of these sentences into one executable program, and the challenge we need to overcome is that this single program has to decide which set of gestures corresponds to which of the 20 sentences.
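To make that concrete, this is the decision point we are trying to build in the combined program (a hypothetical sketch; SENTENCES, classify_window, and handle_window are our own placeholder names, not LEAP API calls):

```python
# Hypothetical structure of the single combined program.
SENTENCES = [
    "WHERE IS THE BATHROOM?",
    "HOW MUCH DOES IT COST?",
    # ... 18 more sentences
]

def classify_window(frames):
    """Given a window of recorded frames, return the index of the
    sentence being signed (0..19), or None if nothing matches.
    This is exactly the decision we do not yet know how to make."""
    raise NotImplementedError

def handle_window(frames):
    idx = classify_window(frames)
    if idx is not None:
        print(SENTENCES[idx])  # then hand the text off to text-to-speech
```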
It is important to note that each of these ASL signs is more complex than a simple circle/swipe/tap. We have attached a couple of links to give you a better idea and visualization:
http://www.signingsavvy.com/sign/BATHROOM/38/1
http://www.signingsavvy.com/sign/HOW%20MUCH/3624/1
So, as you can see, simply detecting a circle/swipe/tap does not suffice, since each ASL gesture may involve a combination of readings. Moreover, even by taking readings of the different parameters that the LEAP API provides (like fingertip position or angle with an axis), we have not been able to devise a set of conditions or a decision boundary in code that detects an individual gesture without uncertainty. There is also inconsistency in the data when the same person makes the same gesture more than once.
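For concreteness, these are the kinds of per-frame readings we log, flattened into a feature vector (a sketch assuming the Python bindings of SDK v2; the 21-value layout is our own choice, not something the API prescribes):

```python
import Leap

def frame_features(frame):
    """Flatten one Leap frame's readings into a 21-value feature vector:
    palm position (3), palm pitch/roll/yaw (3), five fingertip
    positions (15). Zeros when no hand is visible."""
    feats = [0.0] * 21
    if not frame.hands.is_empty:
        hand = frame.hands[0]
        feats[0:3] = [hand.palm_position.x,
                      hand.palm_position.y,
                      hand.palm_position.z]
        feats[3:6] = [hand.direction.pitch,
                      hand.palm_normal.roll,
                      hand.direction.yaw]
        for i, finger in enumerate(hand.fingers):
            tip = finger.tip_position
            feats[6 + 3 * i:9 + 3 * i] = [tip.x, tip.y, tip.z]
    return feats

# Usage: collect one vector per frame while the sign is performed.
controller = Leap.Controller()
features = frame_features(controller.frame())
```

Recording this vector for the same sign performed twice by the same person gives visibly different values frame by frame, which is why fixed thresholds on any single parameter have not worked for us.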
In this situation, what would be the most reliable approach to identifying the gestures and making that decision based on the data we get from the controller? Lastly, is the LEAP API generating data that is both sufficient and accurate enough to help us distinguish gestures such as those in the links above? Or is there perhaps a version update coming that addresses some of these observations?
Looking forward to some guidance.
- The BabyBaxi Team -