Bureaucratic Agency

A hugely popular recent Ars Technica article was very well headlined: “The NSA’s SKYNET program may be killing thousands of innocent people.”

Well headlined, that is, for sensationalist attention-getting. The same can be said of the article’s tone: it goes into great detail about how a leaked presentation exposes the NSA’s machine learning program as an imperfect tool of analysis, walking through those details with a single data scientist from a human rights organization (one I greatly admire). The inevitable conclusion for most readers is that the US is using flawed algorithms to direct its killer robots.

So I am very glad to see the Guardian take it down a notch with their overview, which points out:

  • the approach is pretty normal for the task of sifting through large datasets to highlight candidates (a rough sketch follows this list),
  • these candidates are leads for tracking organization activity, not likely targets themselves, and
  • the entire program is clearly presented as an experiment, not yet operationalized (at least at the time of the leaked presentation).
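
To make that first point concrete, here is a minimal sketch of that kind of candidate-sifting, using a random forest like the one named in the leaked slides. Everything in it (the features, the labels, the scale) is invented for illustration; it shows the general pattern, not the NSA’s pipeline.

```python
# Hypothetical illustration only: the features and numbers are invented,
# not taken from the leaked presentation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in metadata features (call patterns, travel, SIM swaps, etc.)
X_labeled = rng.random((1_000, 10))        # records with known labels
y_labeled = rng.integers(0, 2, 1_000)      # 1 = known courier, 0 = not
X_population = rng.random((100_000, 10))   # the unlabeled population

forest = RandomForestClassifier(n_estimators=100, random_state=0)
forest.fit(X_labeled, y_labeled)

# Score every record and surface the highest-scoring ones as leads for
# human analysts. The model highlights candidates; it decides nothing.
scores = forest.predict_proba(X_population)[:, 1]
leads = np.argsort(scores)[::-1][:100]
print("top leads:", leads[:5], scores[leads[:5]])
```

The point of the sketch is its ordinariness: this is the same pattern used to rank search results or flag fraudulent credit card charges, which is why reading it as an autonomous assassin misunderstands what it does.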

Left unaddressed, though, is how the Ars Technica article implied the automation of drone targeting, which is the message most readers received.

The flurry of SKYNET tweets I saw this week is well represented by Alper’s assertion of “automatic” assassination, which gives too much agency to the machines. I don’t mean to pick on Alper, who is a very reasonable person.

Bureaucracy is necessary scaffolding for civilization, though we love to vilify it for dehumanizing individuals. Meanwhile, cultural awareness of information technologies is increasing, and algorithms are becoming celebrities. Google’s PageRank, Facebook’s News Feed filter, the nebulous suite of tools IBM names Watson — they have become superhuman entities we discuss in ways once reserved for bureaucratic institutions. So now we vilify not only organizations of people for their inhuman decisions, but also the tools they use (and sometimes the people who wield them).

To ascribe the decision-making of Amazon’s pricing models to “just software” is naive but harmless. Complaining that the assassination decisions of the US government are made by inadequate algorithms is dangerous: it implies that preventing the killing of innocent people is merely a matter of retooling.

More than whether random forests are the best approach to sifting through mountains of SIGINT data, let us worry about the bureaucracy. Are remote killings morally sound? Are decision-making processes adequately accounting for risk to innocent lives? Are our elected representatives savvy enough to understand any of it?

Before getting outraged at the (naive) thought that the NSA is analyzing its data poorly, consider whether that problem is core to your concerns about the government’s actions.

Update: the heuristics used by humans appear to be even more explicitly questionable, but the story that killing up to 10 civilians is an acceptable risk didn’t seem nearly as popular.

Meanwhile, Google says “we clearly bear some responsibility” for their robot running into a bus, and that with more training “our cars will more deeply understand” similar situations and “we hope to handle situations like this more gracefully in the future.”

Who is “we”? Are the robot cars included? Alphabet shareholders?

I posted this on medium.com in February 2016 during week 2189.
