Yasser, Mohammed (2025) Investigating the Harmful Implications of Generative AI in the Military Field. International Journal of Innovative Science and Research Technology, 10 (9): 25sep1357. pp. 2046-2048. ISSN 2456-2165
Generative Artificial Intelligence (AI) has rapidly emerged as both a technological innovation and a global security concern. Its application in the military domain raises unique ethical, legal, and strategic challenges. This paper examines the harmful implications of generative AI in warfare, supported by published data from surveys and policy reports. Ipsos (2023) found that 69% of respondents globally are concerned about autonomous weapons and 73% about surveillance misuse. Similarly, a UK government survey highlighted that 45% of respondents fear job displacement, 35% worry about loss of human creativity, and 34% fear losing control over AI. Meanwhile, Brookings (2018) found that only 30% of respondents support AI use in warfare, although support rises to 45% if adversaries are already using AI-based weapons. These statistics reflect widespread societal concern about the destabilizing consequences of AI militarization. This paper analyzes these concerns across five domains: misinformation, autonomous weapons, accountability, bias in decision support, and adversarial vulnerabilities. It argues that generative AI may exacerbate the risks of misinformation campaigns, unlawful targeting, biased decision-making, and loss of accountability, and that these risks demand urgent international regulation.