The United States is at the vanguard of autonomous weapons technology development. A government-appointed commission in the United States has even stated that developing artificial intelligence-driven weaponry is a moral duty for the government. While the morality of that claim can be contested, there is no denying that the technological advances of the world's most powerful military deserve close attention.
Consider recent experiments by the United States Air Force Research Laboratory (AFRL) that demonstrate a step toward autonomous collaborative weapons: weapons with pre-defined Rules of Engagement (ROE) that can communicate with one another and engage targets based on pre-programmed datasets, all without a human pilot. The research reflects a push toward ever greater autonomy in weapon systems, and a blurring of the lines around how much human control such systems require. It is also taking place in a regulatory vacuum, since international deliberations on autonomous weapons have stalled.
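To make the policy question concrete, the sketch below is a purely illustrative toy, not a description of any actual AFRL system: it shows, under assumed and hypothetical names (Target, RulesOfEngagement, may_engage), how a pre-defined ROE gate and a human-approval requirement might be represented as explicit conditions rather than left implicit in the system's design.

```python
from dataclasses import dataclass

@dataclass
class Target:
    """Hypothetical target record drawn from a pre-programmed dataset."""
    identifier: str
    category: str                 # e.g. "radar_site", "vehicle"
    inside_engagement_zone: bool

@dataclass
class RulesOfEngagement:
    """Pre-defined constraints fixed before launch (illustrative only)."""
    permitted_categories: set
    require_human_approval: bool  # the policy question the debate turns on

def may_engage(target: Target, roe: RulesOfEngagement, human_approved: bool) -> bool:
    """Return True only if every ROE condition is satisfied.

    If the ROE demand human approval, a missing approval blocks engagement
    regardless of what the on-board logic concludes on its own.
    """
    if target.category not in roe.permitted_categories:
        return False
    if not target.inside_engagement_zone:
        return False
    if roe.require_human_approval and not human_approved:
        return False
    return True

if __name__ == "__main__":
    roe = RulesOfEngagement(permitted_categories={"radar_site"},
                            require_human_approval=True)
    t = Target(identifier="T-01", category="radar_site",
               inside_engagement_zone=True)
    # Without explicit human approval, the gate refuses the engagement.
    print(may_engage(t, roe, human_approved=False))  # False
    print(may_engage(t, roe, human_approved=True))   # True
```

Whether a flag like require_human_approval is set at all, and who decides that it must be, is exactly the kind of question that currently has no agreed international answer.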
Much will depend on technological developments and on how autonomous these systems can genuinely become. Yet such judgments cannot be made without explicit guidance on the degree of human control required over target selection and engagement. The discussions on autonomous weapons under the Convention on Certain Conventional Weapons (CCW) had begun to articulate the key elements of the human involvement required. These international conversations must be restarted in order to clarify who is in control.