Given that we haven't released an ARM SDK yet, nor committed to when we might do so, it is probably unwise to base your thesis on its availability. Also, there may be other components of the system that are important to performance beyond the CPU, so it would be difficult to predict whether one monitor would work better than another.
A safer route would be to do the hand tracking on a laptop (or desktop) computer and send control signals from that computer to the patient monitor. That would let you explore the human interaction aspects of the project. If a suitable ARM SDK became available in time, you could then integrate everything into a single device.
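To make the split concrete, here is a minimal sketch of the laptop-to-monitor link: one JSON command per line over a TCP socket. The message format, field names, and the loopback receiver standing in for the monitor are all assumptions for illustration; a real monitor would define its own control protocol.

```python
# Sketch: laptop sends gesture-derived control commands to the monitor
# over TCP, one JSON message per line. Protocol details are hypothetical.
import json
import socket
import threading

def send_control(host: str, port: int, command: dict) -> None:
    """Encode a control command as one JSON line and send it."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall((json.dumps(command) + "\n").encode("utf-8"))

def run_receiver(server_sock: socket.socket, received: list) -> None:
    """Accept one connection and record the decoded command
    (stands in for the patient monitor in this demo)."""
    conn, _ = server_sock.accept()
    with conn:
        line = conn.makefile().readline()
        received.append(json.loads(line))

# Loopback demonstration on one machine:
server = socket.socket()
server.bind(("127.0.0.1", 0))       # let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

received = []
t = threading.Thread(target=run_receiver, args=(server, received))
t.start()

# A gesture recognized on the laptop becomes a control message:
send_control("127.0.0.1", port, {"gesture": "swipe_left", "action": "next_screen"})
t.join()
server.close()
print(received[0]["action"])
```

The point of the separation is that the hand-tracking code and the message format stay the same if you later move everything onto the monitor; only the transport would change.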