Hello everyone,
I am attempting to define several custom gestures and run onFrame()-based detection to identify them alongside the built-in SDK gestures. My original plan was to reverse-engineer the existing gestures and inherit whatever classes and methods I could from the SDK; however, I've read this isn't possible (although that didn't stop me from trying anyway).
From what I've gathered, a lot of you choose to implement your own gesture detection using logical blocks inside onFrame(), but I was wondering if there is a more elegant/efficient/accurate way of handling this, especially for complex gestures and for projects that require many different gestures.
My current idea is to create an abstract CustomGesture class and extend it to define a class for each of my custom gestures. Each gesture class will have start, midpoint(s), and end methods that compare the current frame against the accepted conditions at that point in the gesture, plus a counter to keep track of how much of the gesture has been detected so far (which resets on a time interval and when a full gesture is actually detected).
So on each onFrame() iteration my implementation would check whether the current frame data matches the start point of each gesture, unless a gesture's start point had recently been detected, in which case it would compare the frame against that object's midpoint(s) instead.
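To make the question concrete, here is a rough sketch of that state-machine idea. Everything in it is hypothetical, not SDK API: frames are plain dicts instead of Leap Frame objects, and the names (CustomGesture, advance, LeftToRightSwipe) and the 0.5 s stage timeout are just illustrative placeholders.

```python
import time
from abc import ABC, abstractmethod


class CustomGesture(ABC):
    """Base class for multi-stage gestures.

    Subclasses supply an ordered list of stage predicates; each predicate
    takes a frame and returns True when that stage's conditions are met.
    advance() walks the stages in order and resets the gesture if too much
    time passes between consecutive stages.
    """

    TIMEOUT = 0.5  # seconds allowed between consecutive stages (arbitrary)

    def __init__(self):
        self._stage = 0        # how much of the gesture has been seen so far
        self._last_hit = None  # time the most recent stage was matched

    @property
    @abstractmethod
    def stages(self):
        """Ordered predicates: [start, *midpoints, end]."""

    def reset(self):
        self._stage = 0
        self._last_hit = None

    def advance(self, frame, now=None):
        """Feed one frame; return True iff the full gesture just completed."""
        now = time.monotonic() if now is None else now
        # Reset if the gesture stalled between stages.
        if self._stage and now - self._last_hit > self.TIMEOUT:
            self.reset()
        if self.stages[self._stage](frame):
            self._stage += 1
            self._last_hit = now
            if self._stage == len(self.stages):
                self.reset()
                return True
        return False


class LeftToRightSwipe(CustomGesture):
    """Toy gesture: hand crosses x < -100, then the centre, then x > +100."""

    @property
    def stages(self):
        return [
            lambda f: f["x"] < -100,      # start: left of the workspace
            lambda f: -50 < f["x"] < 50,  # midpoint: near the centre
            lambda f: f["x"] > 100,       # end: right of the workspace
        ]


def on_frame(frame, gestures):
    """Per-frame dispatch: return the gestures that completed this frame."""
    return [g for g in gestures if g.advance(frame)]
```

With this shape, each onFrame() call just forwards the frame to on_frame(), and every gesture object tracks its own progress internally, so the dispatch code never needs to know which stage any gesture is in.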
Has anyone tried a similar approach? Any suggestions or tips? Is there a better way of doing this?
All feedback is much appreciated.
-Matt