Jan 14 / Naz

Should we ban AI in military and warfare?

The Intercept's article on OpenAI quietly removing the ban on military use from ChatGPT's usage policy raises concerns, particularly given the board's lack of diversity and some ethically questionable members.
The slow, ineffective pace of current regulation, a consequence of decentralized decision-making, adds to these worries.
Just as I believe deepfakes should be banned for the risks they pose, AI in military use should also be strictly prohibited. The ethical risks and potential for misuse are too great to ignore.
After a quick exploration (assisted by ChatGPT), I've learned about current efforts to regulate military AI. Initiatives like the Political Declaration on Responsible Military Use of AI guide state behavior in line with international law, and the U.S. Department of Defense has implemented an AI Adoption Strategy for responsible AI integration in military operations.
Despite these steps, the global policy framework is still evolving. Most countries lack clear guidelines for military AI, and there are no binding international regulations specific to AI in military use. This regulatory gap makes the need for comprehensive, universally accepted norms in AI use in warfare even more critical.
I'd like to think we could quickly establish a diverse, ethical, and effective regulatory framework that ensures responsible AI deployment in military contexts and prevents misuse. However, I find it hard to believe this can be achieved swiftly.
Given this skepticism, I feel it's better to outright ban the use of AI in military applications.