Kantian Terminators!  

November 18, 2019

OK maybe not, but this DoD report on the "Principles on the Ethical Use of Artificial Intelligence by the Department of Defense" is interesting.  A lot to process here.


A couple of quick comments:

In general, this doesn't really undercut my reflexive humanist's cynicism; in fact it makes me somewhat more concerned.  I was able to find a list of who they consulted for this, and while some of the names are redacted, I could only find two names with any training in ethics--Seth Lazar and Alex London.  There may be more, and I don't know Lazar, but I do know London; he's smart.  But there are a lot of engineers, a lot of corporate people, and a lot of lawyers.  What does that say about the academic scholarship of "ethics"?  What does it say about the DoD's perception, or the tech world's perception, of "ethics"?

There's one nice move: They differentiate between AI and "autonomy" in weapons system performance.  So, not directly about Terminators.  But that makes me wonder, will they offer a distinct set of principles for "autonomous" weapons?  Or will this be the closest they get to that?

Then, when they get to their "principles," they are astonishingly (well, not really astonishingly) anodyne: AI should be "Responsible, Equitable, Traceable, Reliable, and Governable."  Well, I like "traceable"--that's potentially interesting.  But honestly, in what way are "reliable" and "governable" counted as ethical principles here?  And the definition of "equitable" sounds like a generalization of the principle of discrimination.  In fact I'd say that the central principle should be "responsible."  But then that should be unpacked in a lot of different ways and in a lot of different directions.

Finally, they produce a set of recommendations, and it's clear that they think they've got "the ethics bit" down now.  There is no attempt to incorporate an independent ethical assessment in this process, nor any effort to ask basic questions of self-critique (as one would hope any complex institution, facing a new kind of problem, would want to employ), nor any suggestion that they might want to support ethical reflection on AI going forward.  Basically this sounds to me like Google's "Don't Be Evil" motto.  And we all know how powerfully directive that principle is.  (Thomas Aquinas actually gets as close to mockery as he ever does when considering whether this kind of stratospheric statement is at all ethically useful, in the Summa Theologiae (Ia-IIae, 94.2).)

A more global observation is this: this kind of document suggests not so much the reassertion of some "military-industrial complex," as cynics might have it, but something perhaps even more troubling: the corporatization of a military mindset.  The whole thing seems run by people trained up in the PR-and-consultant business of ethical "value statements" and the general mobilization of a language of ethics in a way meant to put an especially brilliant sheen of polish on this particular turd.  (This is a big problem in our world today: the way that people think that using a language of "ethics" is a solution to problems, when what it often does is actually further obscure the problems they will concretely face.)  I would have appreciated more direct attention to the way in which war, in all its various modes and anticipatory preparations, gets out of hand, escapes mission statements, proclamations of principles, expressions of concern, and the like.

If the history of the past century teaches us anything about war, it is this: the armies that the United States took to war (in 1917, 1941, 1950, 1965, 2001) had not fully grappled with the brutality and cruelty that they would witness, would countenance, and would participate in, in the coming years.  That's a generalization, but I think it's a fair one.  This is an old story: dulce bellum inexpertis.  I suspect those who actually serve in combat or in combat-proximate positions are the most desperately in need of all the help they can get on what to do in these situations.  And that help, to be (minimally) helpful at all, must have a head start in getting going, years before it must be called upon.  This document seems to me not to engage with that reality at all.

So maybe in the end my cynicism wins out.  What I see here is not a serious attempt to think about the ethics of AI, but an institutional effort to "check the ethics box" and move on.  But you never move on from ethics, I'm afraid.  I fear that with documents like this, we'll all learn this lesson, again, in coming decades.