Volume: 4
Issue: 1
Citations: 0

Abstract:
Emotion recognition in Virtual Reality (VR) has become increasingly relevant for enhancing immersive user experiences and enabling emotionally responsive interactions. Traditional approaches that rely on facial expressions or vocal cues often face limitations in VR environments due to occlusion by head-mounted displays and restricted audio inputs. This study aims to develop an emotion recognition model based on body gestures using a hybrid deep learning architecture combining Convolutional Neural Networks (CNN) and Long Short-Term Memory (LSTM). The CNN component extracts spatial features from skeletal data, while the LSTM processes the temporal dynamics of the gestures. The proposed model was trained and evaluated on a benchmark VR gesture-emotion dataset annotated with five distinct emotional states: happy, sad, angry, neutral, and surprised. Experimental results show that the CNN-LSTM model achieved an overall accuracy of 89.4%, with precision and recall scores of 88.7% and 87.9%, respectively. These findings demonstrate the model's ability to generalize across varied gesture patterns with high reliability. The integration of spatial and temporal features proves effective in capturing subtle emotional expressions conveyed through movement. The contribution of this research lies in offering a robust, non-intrusive method for emotion detection tailored to immersive VR settings. The model opens potential applications in virtual therapy, training simulations, and affective gaming, where real-time emotional feedback can significantly enhance system adaptiveness and user engagement. Future work will explore real-time implementation, multimodal sensor fusion, and advanced architectures, such as attention mechanisms, for further performance improvements.
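To make the described architecture concrete, the sketch below shows one plausible way to wire a CNN spatial extractor to an LSTM temporal model over skeletal gesture sequences, written in PyTorch. It is an illustrative assumption, not the authors' implementation: all dimensions (25 joints, xyz coordinates, 60-frame sequences, channel and hidden sizes) and the class name `CnnLstmEmotionNet` are hypothetical; only the CNN-to-LSTM structure and the five emotion classes come from the abstract.

```python
# Minimal sketch of a CNN-LSTM gesture-emotion classifier, assuming
# skeletal input of shape (batch, time, joints, coords). All sizes
# below are illustrative, not values reported in the paper.
import torch
import torch.nn as nn

class CnnLstmEmotionNet(nn.Module):
    def __init__(self, num_joints=25, coords=3, cnn_channels=64,
                 lstm_hidden=128, num_classes=5):
        super().__init__()
        # Per-frame spatial extractor: 1D convolutions over the joint
        # axis of each skeleton frame.
        self.cnn = nn.Sequential(
            nn.Conv1d(coords, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(cnn_channels, cnn_channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over joints -> one vector per frame
        )
        # Temporal model: LSTM over the sequence of per-frame CNN features.
        self.lstm = nn.LSTM(cnn_channels, lstm_hidden, batch_first=True)
        # Classifier over the five emotion classes:
        # happy, sad, angry, neutral, surprised.
        self.fc = nn.Linear(lstm_hidden, num_classes)

    def forward(self, x):
        # x: (batch, time, joints, coords) skeletal sequences
        b, t, j, c = x.shape
        x = x.view(b * t, j, c).transpose(1, 2)  # (b*t, coords, joints)
        feats = self.cnn(x).squeeze(-1)          # (b*t, cnn_channels)
        feats = feats.view(b, t, -1)             # (batch, time, features)
        out, _ = self.lstm(feats)
        return self.fc(out[:, -1])               # logits from last timestep

# Example: a batch of 8 sequences, 60 frames, 25 joints, xyz coordinates.
logits = CnnLstmEmotionNet()(torch.randn(8, 60, 25, 3))
print(logits.shape)  # torch.Size([8, 5])
```

Taking the LSTM output at the final timestep is one common design choice for sequence classification; pooling over all timesteps, or the attention mechanisms the abstract mentions as future work, are natural alternatives.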