Autonomous Weapons & Morality

Though not directly related to moral injury, the discussion of ethics in warfare comes close. The article seems slanted toward ‘normalizing’ the use of autonomous weapons by asking the rhetorical question ‘what if autonomous weapons are more ethical than humans?’ That’s a flawed starting point: machines only have the ethics of their designers. The article goes on to quote several people, this being a true gem:

Shin-Shin Hua of the Centre for the Study of Existential Risk argues that programmers can easily develop “prophylactic measures to ensure that machine learning weapons can comply with IHL rules in the first place.” If this is true, machines can, indeed, be more ethical than humans.

https://extranewsfeed.com/what-if-autonomous-weapons-are-more-ethical-than-humans-5369890659cf

It’s not as cut and dried as the article’s author would have you believe. Machine learning requires ‘training,’ yet the actual way the resulting model makes a decision is opaque to its designers. What this means is that the programmers of such an algorithm can only test how accurate the model’s predictions are; they can’t tell you how the model actually arrived at a given prediction. What the article is doing is conflating machine learning with ‘dumb’ conformance, or conventional rule-based, programming. Yes, a weapon can be programmed to recognize and fire on certain targets autonomously in conformance with some rule set, but that doesn’t make the system ‘ethical,’ nor is it possible for the system to act ethically.

Moreover, the inputs defining ‘appropriate behaviors’ are themselves murky. Take, for instance, guerrilla warfare and the Geneva Conventions. The Geneva Conventions state unequivocally that the general conventions on land warfare only apply to ‘state actors who share an easily identifiable uniform’; anyone else who takes up arms (in their blue jeans and t-shirt) can be shot on sight as a spy and has no right to surrender. So what does the ‘dumb machine’ program do if it sees someone shooting who’s not in uniform? Under the Geneva Conventions, it could lawfully shoot that person on sight and still be in compliance. Yet what if that person was actually shooting at someone raping a child in a war zone? Machines do not understand nuance.
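To make that distinction concrete, here is a minimal, hypothetical Python sketch of my own (the `conforms_to_rule_set` check and `OpaqueClassifier` model are invented for illustration and come from nothing in the article): a hand-written rule set can be read and audited line by line, while a trained model only produces a score whose internal reasoning its own designers cannot inspect.

```python
# Hypothetical illustration only: contrasting 'dumb' conformance programming
# with an opaque learned model. Nothing here is a real weapons system.

def conforms_to_rule_set(target):
    """Rule-based conformance: every condition is explicit and auditable.
    A human can read this and argue about whether the rules themselves are ethical."""
    return (
        target["is_armed"]
        and target["wearing_uniform"]          # the uniform criterion discussed above
        and not target["near_protected_site"]  # e.g. a hospital or school
    )

class OpaqueClassifier:
    """Stand-in for a trained machine-learning model. Its designers can measure
    its accuracy on test data, but the learned weights do not explain *why*
    any single prediction was made."""

    def __init__(self, weights):
        self.weights = weights  # millions of learned parameters in a real model

    def predict_threat_score(self, sensor_features):
        # In practice this would be a deep network; the mapping from inputs to
        # output is not something a programmer can read off and justify case by case.
        return sum(w * x for w, x in zip(self.weights, sensor_features))

# Testing that the classifier's score correlates with 'lawful target' is not the
# same as knowing its reasoning, and neither check amounts to the system acting ethically.
```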

The article also offers this up (which is very true):

Human soldiers tend to form harmful psychological links with enemies. Patterson cites research concluding that “propensity for moral injury increases as the human warrior develops a more intimate knowledge of a potential target,” an intimacy unachievable by (and uninteresting to) the autonomous machine.

Unintentionally, the author negates his whole thesis that autonomous weapons can be ‘more ethical’ than humans. As the quote makes obvious, the more one knows one’s ‘enemy’ on a human-to-human basis, the more difficult it is to pull the trigger, because our built-in ethical and moral structures tell us it is wrong to kill others. These psychological links are only ‘harmful’ inasmuch as the humans involved actually embrace their humanity! A machine, as the quote points out, wouldn’t give a rat’s ass about the human at the end of its gun. It is this moral pause, and the killing of others in spite of it, that causes us psychic pain and, yes, moral injury if we follow through. That is the whole point of this site.

In the end, will we end up like the humans in Terminator, hunted by the machines we made to fight wars ‘ethically’? Will these weapons be like WOPR, just trying to win ‘the game’? I am not that pessimistic, but pretending that ever more clever autonomous weapons systems are ‘more ethical’ is a non-starter in my book.
