Tokyo, 7 June, /AJMEDIA/
Japan has set guidelines for the safe development of artificial intelligence-controlled defense systems, Defense Minister Gen Nakatani said Friday, aiming to address ethical concerns over weapons that can operate without direct human involvement.
The guidelines outline steps to be followed in the research and development of such defense equipment, calling for careful classification of the systems, legal and policy reviews to ensure compliance, and technical evaluations of operational reliability.
Nakatani said the guidelines are intended to “reduce risks of using AI while maximizing its benefits,” adding they are expected to “provide predictability” for the private sector, with his ministry to “promote research and development activities in a responsible way.”
Global concerns over autonomous weapons that use AI are mounting, as the deployment of combat drones has become commonplace in the war between Russia and Ukraine and in conflicts in the Middle East.
The Defense Ministry will conduct reviews to check whether systems meet requirements such as clear human accountability and operational safety, while categorizing such weaponry as “high” or “low” risk.
Systems will be deemed high risk when AI influences their destructive capabilities. For such equipment, the ministry will assess whether it complies with international and domestic law, remains under human control, and does not constitute a fully autonomous lethal weapon.
The ministry unveiled its first-ever basic policy for the promotion of AI use last July, focusing on seven fields including detection and identification of military targets, command and control, and logistical support.
Last May, the Foreign Ministry submitted a paper on Japan’s stance on lethal autonomous weapons systems, or LAWS, to the United Nations, stating that a “human-centric” principle should be maintained and emerging technologies must be developed and used “in a responsible manner.”