
Lethal Autonomous Weapons Systems (LAWS) — weapons that can select and engage targets without human involvement — represent one of the most serious AI ethics challenges.
Removing human judgment from decisions to use lethal force violates the principle of meaningful human control and creates an accountability gap: a machine cannot be held responsible under international humanitarian law, and responsibility among its designers, commanders, and operators becomes diffuse.