Hand Gestures for Object Movements: A Comprehensive Exploration

Balakrishnan, G. and Mangaiyarkarasi, T. and Sangeetha, K. and Rathika, M. (2025) Hand Gestures for Object Movements: A Comprehensive Exploration. International Journal of Innovative Science and Research Technology, 10 (7): 25jul535. pp. 1391-1396. ISSN 2456-2165

Abstract

Hand gestures are revolutionizing Human-Computer Interaction (HCI), moving beyond traditional interfaces to offer a natural and intuitive means of controlling objects across diverse applications, from virtual reality to robotics. This shift is driven by the demand for more immersive and accessible interaction: as a powerful non-verbal communication modality, hand gestures offer naturalness, accessibility, and hands-free control, which are crucial for enhancing user experience in fields such as VR/AR and smart homes.

Gestures can be broadly categorized into static gestures (fixed hand shapes for discrete commands such as "stop" or "grab") and dynamic gestures (sequences of movements for continuous actions such as "move" or "rotate"). Specific manipulations include translation, rotation, scaling, grabbing/releasing, and selection/deselection. Applications are vast, spanning immersive VR/AR environments, remote control of robotic arms, intuitive smart home device control, medical rehabilitation, and in-car infotainment systems. However, challenges persist, including variation in hand shape, lighting conditions, occlusion, computational complexity, and user fatigue. Opportunities lie in advances in sensor technology, deep learning, hybrid approaches, and the potential for gesture standardization.

A typical hand gesture recognition system for object movement involves several key stages: data acquisition, preprocessing, hand detection/segmentation, feature extraction, gesture classification/recognition, and object control mapping. Data acquisition employs various sensing modalities. Vision-based approaches use RGB cameras (cost-effective but sensitive to lighting), depth cameras (providing 3D information and robust to lighting, but potentially more expensive), and infrared (IR) cameras (effective in low light). Sensor-based approaches rely on wearable devices such as data gloves (highly accurate but intrusive), Inertial Measurement Units (IMUs) (less intrusive, capturing orientation and movement), and Electromyography (EMG) sensors (detecting muscle activity, but requiring direct skin contact and complex processing). Hybrid approaches combine these modalities to leverage their respective strengths, enhancing overall robustness and accuracy.
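The six pipeline stages named in the abstract (data acquisition through object control mapping) can be illustrated with a minimal toy sketch. All function names, the thresholds, and the rule-based "classifier" below are illustrative placeholders chosen for this sketch, not the paper's method or any real library API:

```python
import numpy as np

def acquire_frame(rng):
    """Stage 1, data acquisition: stand-in for an RGB/depth camera frame."""
    return rng.random((120, 160))  # synthetic grayscale frame

def preprocess(frame):
    """Stage 2, preprocessing: normalize intensities to [0, 1]."""
    lo, hi = frame.min(), frame.max()
    return (frame - lo) / (hi - lo + 1e-9)

def segment_hand(frame, threshold=0.5):
    """Stage 3, hand detection/segmentation: toy foreground thresholding."""
    return frame > threshold

def extract_features(mask):
    """Stage 4, feature extraction: area ratio and centroid of the region."""
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return np.zeros(3)
    area = xs.size / mask.size
    return np.array([area, ys.mean() / mask.shape[0], xs.mean() / mask.shape[1]])

def classify(features):
    """Stage 5, classification: toy rule reading a large hand area as 'grab'."""
    return "grab" if features[0] > 0.4 else "release"

def map_to_object_control(gesture):
    """Stage 6, object control mapping: gesture label to a control command."""
    return {"grab": "attach_object", "release": "detach_object"}[gesture]

rng = np.random.default_rng(0)
frame = acquire_frame(rng)
gesture = classify(extract_features(segment_hand(preprocess(frame))))
command = map_to_object_control(gesture)
print(gesture, command)
```

In a real system each stage would be replaced by a substantially more capable component (e.g. a learned segmentation model and a deep classifier), but the staged data flow is the same.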
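The object manipulations listed in the abstract (translation, rotation, scaling) are commonly applied as 4x4 homogeneous transforms once a gesture has been recognized. The mapping below from gesture to transform parameters is an assumed illustration, not taken from the paper:

```python
import numpy as np

def translation(dx, dy, dz):
    """Homogeneous translation by (dx, dy, dz)."""
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def rotation_z(theta):
    """Homogeneous rotation by theta radians about the z-axis."""
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(4)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

def scaling(s):
    """Homogeneous uniform scaling by factor s."""
    S = np.eye(4)
    S[:3, :3] *= s
    return S

# A dynamic "move" gesture might yield a displacement, a "rotate" gesture
# an angle, and a pinch gesture a scale factor (assumed mapping):
pose = np.eye(4)
pose = translation(1.0, 0.0, 0.0) @ pose   # "move" one unit along x
pose = rotation_z(np.pi / 2) @ pose        # "rotate" 90 degrees
pose = scaling(2.0) @ pose                 # pinch outward to enlarge

point = pose @ np.array([0.0, 0.0, 0.0, 1.0])
print(np.round(point, 6))
```

Composing the transforms by left-multiplication applies them in gesture order, which keeps continuous dynamic gestures (a stream of small deltas) easy to accumulate into a single object pose.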
