In a recent update to its usage policies page, OpenAI removed its explicit ban on the use of its technology for “military and warfare” purposes.
The change, first noticed by The Intercept, was made on January 10 as part of the company’s effort to provide clearer and more specific guidance.
While OpenAI still prohibits the use of its large language models (LLMs) for activities that cause harm, it has dropped the explicit mention of “military and warfare.”
The modification comes at a time when military agencies around the world are increasingly exploring applications of AI.
The former inclusion of “military and warfare” in the list of prohibited uses suggested that OpenAI would not work with government agencies such as the Department of Defense, which offers lucrative contracts to technology vendors.
Although OpenAI offers no product designed to cause direct harm, concerns remain that its technology could be used for military-related tasks, such as writing code or processing procurement orders for potentially harmful purposes.
Sarah Myers West, managing director of the AI Now Institute, highlighted the timing of the policy change, pointing to the use of AI systems in the targeting of civilians in Gaza. The change raises questions about how far OpenAI’s technology could be drawn into military applications beyond weapons development.
OpenAI spokesperson Niko Felix said the company aimed to establish universal principles that are easy to remember and apply, emphasizing that a principle like “Don’t harm others” is broad yet applicable in many contexts.
Although Felix cited weapons and injury to others as clear examples of prohibited harm, he did not explicitly say whether the prohibition extends to all forms of military use beyond weapons development.