Kiruthiga, V. and Priya, K. Lakshmi (2025) Comparison of Multimodal vs. Unimodal Learning for Privacy-Aware Mental Health Prediction. International Journal of Innovative Science and Research Technology, 10 (10): 25oct657. pp. 1021-1027. ISSN 2456-2165
Mental health conditions such as depression and anxiety are rising worldwide, and detecting them early helps people receive timely care and support. Artificial Intelligence (AI) systems can analyze how people speak, write, and express emotion to identify early warning signs of these conditions. This study compares two learning approaches: unimodal learning, which uses a single data type such as text or voice, and multimodal learning, which combines several types such as text, voice, and facial expressions. Both approaches are evaluated with privacy-aware AI techniques, namely Federated Learning and Differential Privacy, which protect user data from being shared or misused. The system was tested on public datasets such as DAIC-WOZ and WESAD. The results show that multimodal learning achieves higher accuracy than unimodal learning (about 10-12% higher) but demands more processing power and more careful privacy protection. This comparison helps researchers understand the trade-off between accuracy, privacy, and efficiency when designing AI tools for mental health support.
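To make the comparison concrete, the sketch below shows one way such an evaluation could be set up: a unimodal classifier over a single feature stream and a simple late-fusion multimodal classifier, both trained with differentially private gradient updates. The feature dimensions, model sizes, noise scale, and random stand-in data are illustrative assumptions, not the authors' implementation, and the batch-level gradient clipping is a simplified stand-in for true per-example DP-SGD.

```python
# Minimal sketch, assuming illustrative feature dimensions and DP parameters;
# not the authors' implementation.
import torch
import torch.nn as nn

TEXT_DIM, AUDIO_DIM, FACE_DIM, HIDDEN, N_CLASSES = 768, 128, 64, 32, 2

class UnimodalModel(nn.Module):
    """Classifier over a single modality (e.g., text embeddings)."""
    def __init__(self, in_dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, HIDDEN), nn.ReLU(),
                                 nn.Linear(HIDDEN, N_CLASSES))

    def forward(self, x):
        return self.net(x)

class MultimodalModel(nn.Module):
    """Late-fusion classifier: encode each modality, concatenate, classify."""
    def __init__(self):
        super().__init__()
        self.text = nn.Linear(TEXT_DIM, HIDDEN)
        self.audio = nn.Linear(AUDIO_DIM, HIDDEN)
        self.face = nn.Linear(FACE_DIM, HIDDEN)
        self.head = nn.Linear(3 * HIDDEN, N_CLASSES)

    def forward(self, text, audio, face):
        fused = torch.cat([torch.relu(self.text(text)),
                           torch.relu(self.audio(audio)),
                           torch.relu(self.face(face))], dim=-1)
        return self.head(fused)

def dp_sgd_step(model, loss, lr=0.05, clip_norm=1.0, noise_mult=1.0):
    """One privacy-aware update: clip the (batch-level) gradient norm and add
    Gaussian noise. Real DP-SGD clips per-example gradients instead."""
    model.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), clip_norm)
    with torch.no_grad():
        for p in model.parameters():
            noisy_grad = p.grad + noise_mult * clip_norm * torch.randn_like(p.grad)
            p -= lr * noisy_grad

# Toy comparison on random stand-in features (real features would come from
# DAIC-WOZ / WESAD preprocessing).
torch.manual_seed(0)
y = torch.randint(0, N_CLASSES, (16,))
text = torch.randn(16, TEXT_DIM)
audio = torch.randn(16, AUDIO_DIM)
face = torch.randn(16, FACE_DIM)

uni, multi = UnimodalModel(TEXT_DIM), MultimodalModel()
loss_fn = nn.CrossEntropyLoss()
for _ in range(10):
    dp_sgd_step(uni, loss_fn(uni(text), y))
    dp_sgd_step(multi, loss_fn(multi(text, audio, face), y))
```

In a federated setting such as the one the abstract describes, updates like these would be computed locally on each client's device and only the (noised) model updates would be aggregated by a server, so raw text, audio, and facial data never leave the device.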