Stuxnet and the Poisons that Open Your Eyes

Playwright August Strindberg wrote, “…There are poisons that blind you, and poisons that open your eyes.”

We've been blinded for decades by complacency and stupidity, as well as by our own trust. Most Americans still naively believe that our government as a whole acts responsibly and effectively (though not necessarily that its individual parts do).

By effectively, I mean Americans believed their government would not deliberately launch a military attack that could harm civilians — including Americans — as collateral damage; any such toll would be substantially minimized. Yesterday's celebration of the P5+1 interim agreement on Iran's nuclear development program will lull most Americans into deeper complacency. The existing system worked, right?

But U.S. cyber warfare to date proves otherwise. The government has chosen to deliberately poison the digital waters so that all are contaminated, far beyond the intended initial target.

There’s very little chance of escaping the poison, either. The ubiquity of U.S. standards in hardware and software technology has ensured this. The entire framework — the stack of computing and communications from network to user applications — has been affected.

• Network: Communications pathways have been tapped, either to obtain specific content or to mirror all content traveling through them. It matters not whether the pathway is a telecom network or an internal enterprise network.

• Security Layer: Gatekeeping encryption has been undermined by backdoors and weakened standards, as have the security certificates that provide handshake validation between systems.

• Operating Systems: Backdoors have been obtained, with or without the knowledge of OS developers, by exploiting vulnerabilities and design flaws. Not even Linux can be trusted at this point (Linux progenitor Linus Torvalds has not been smart enough to offer a dead man's switch notification).

• User Applications: Malware has embedded itself in applications, with or without the knowledge of app developers.

End-to-end, top-to-bottom and back again, everything digital has been touched in one layer of the framework or another, under the guise of defending us against terrorism and cyber warfare.

Further, the government watchdogs entrusted to prevent or repair damage have become part and parcel of the problem, in such a way that they can no longer credibly be seen to defend the public's interests, whether those of individual citizens or corporations. The National Institute of Standards and Technology, for example, has overseen the establishment and implementation of weak encryption standards; it has also taken testimony [PDF] from the hardware and software providers that build the computing and communications framework, in essence hearing where the continued weak spots will be for future compromise.

The fox is watching the hen house, in other words, asking for testimony pointing out the weakest patches installed on the hen house door.

The dispersion of cyber poison was restricted only in the most cursory fashion.

• Stuxnet's key target appears to have been the SCADA equipment at Iran's Natanz nuclear facility, but it spread far beyond that, into the private sector, as Chevron has disclosed. The only protection against it is the specificity of its end target, which renders the rest of the injected malware inert. It's still out there.

• Duqu, a “sibling” cyber weapon, was intended for widespread distribution, and its aims were two-fold: it delivered attack payload capability, but it also delivered espionage capability.

• Ditto for Flame, yet another “sibling” cyber weapon, likewise intended for widespread distribution, with attack payload and espionage capability.

There could be more of these, still waiting to be discovered.

In the case of both Duqu and Flame, there is a command-and-control network of servers still in operation, still communicating with instances of these two malware cyber weapons. The servers’ locations are global — yet another indicator of the planners’/developers’ intention that these weapons be dispersed widely.

Poison everything, everywhere.

But our eyes are open now. We can see the poisoners' fingerprints on the work they've done, and the work they intend to do. Read more

NSA Denies Their Existing Domestic Cyberdefensive Efforts, Again

James Risen and Laura Poitras have teamed up to analyze a 4-year plan the NSA wrote in 2012, in the wake of being told its collection of some US person content in the US was illegal. I’ll discuss the document itself in more depth later. But for the moment I want to look at the denials anonymous senior intelligence officials (SIOs) gave Risen and Poitras about their domestic cyberdefensive efforts.

As a reminder, since before 2008, the government has been collecting bulk Internet data from switches located in the US by searching on selectors in the content. Some of that collection searches on identifiers of people (for example, searching for people sharing Anwar al-Awlaki’s email in the body of a message). But the collection also searches on other identifiers not tied to people. This collection almost certainly includes code, in an effort to find malware and other signs of cyberattacks.
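To make the mechanics concrete, here is a minimal, hypothetical sketch of what selector-based content scanning amounts to: every intercepted payload is checked against a list of selectors, some of which identify people and some of which are just byte patterns. Nothing here reflects the NSA's actual systems; the selectors, payloads, and function names are invented for illustration.

```python
# Hypothetical illustration only: a conceptual sketch of selector-based
# content scanning, NOT a depiction of any actual NSA system.

TARGET_SELECTORS = [
    b"target@example.com",      # a "person" selector (an email address)
    b"\x4d\x5a\x90\x00\x03",    # a "non-person" selector (a code/malware byte signature)
]

def scan_payload(payload: bytes) -> list:
    """Return the selectors found anywhere in a message body or packet payload."""
    return [s for s in TARGET_SELECTORS if s in payload]

if __name__ == "__main__":
    intercepted = [
        b"From: alice\r\nTo: bob\r\nBody: please forward to target@example.com",
        b"ordinary traffic with nothing of interest",
    ]
    for i, payload in enumerate(intercepted):
        hits = scan_payload(payload)
        if hits:
            # In "about" collection, the whole communication is retained on a hit,
            # even if neither endpoint is the target.
            print(f"payload {i}: retained, matched {hits}")
```

The second selector above is the point about code: a malware byte signature is just another string to match, which is why such collection almost certainly sweeps in far more than communications of targeted people.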

We know that's true, in part, because the Leahy-Sensenbrenner bill not only restricts that bulk domestic collection to actually targeted people, but also limits such collection to terrorism and counterproliferation, thereby silently prohibiting its use for cybersecurity. The bill gives NSA 6 months to stop doing these two things — collecting on non-person selectors and collecting for cybersecurity — so it's clear such collection is currently going on.

So in 2012, just months after John Bates told NSA that when it collected domestic communications using such searches, it was violating the Constitution (the NSA contemplated appealing that decision), the NSA said (among other things),

The interpretation and guidelines for applying our authorities, and in some cases the authorities themselves, have not kept pace with the complexity of the technology and target environments, or the operational expectations levied on NSA’s mission.

The document then laid out a plan to expand its involvement in cybersecurity, citing such goals as,

Integrate the SIGINT system into a national network of sensors which interactively sense, respond, and alert one another at machine speed

Cyberdefense and offense are not the only goals mapped out in this document. Much of it is geared towards cryptanalysis, which is crucial for many targets. But it only mentions “non-state actors” once (and does not mention terrorists specifically at all) amid a much heavier focus on cyberattacks and after a description of power moving from West to East (that is, to China).

Which is why the SIO denials to Risen and Poitras ring so hollow.

Read more

Lavabit and The Definition of US Government Hubris

Graphic by Darth

Well, you know, if you do not WANT the United States Government sniffing in your and your family’s underwear, it is YOUR fault. Silly American citizens with your outdated stupid piece of paper you call the Constitution.

Really, get out if you are a citizen, or an American communication provider, that actually respects American citizens' rights. These trivialities the American ethos was founded on are "no longer operative" in the minds of the surveillance officers who claim to live to protect us.

Do not even think about trying to protect your private communications with something as anti-American as the privacy-enabling encryption that Lavabit, only weakly at best, even deigned to supply.

Any encryption capable of protecting an American citizen's private communications (or even participation in the Tor network) is treated as inherently criminal, and is cause for potentially being designated a "selector", if not a target, of any number of searches, whether controlled domestically by the one-sided, ex parte FISA Court, hidden under Executive Order 12333, or conducted under foreign collection status and deemed "incidental". Lavabit's Ladar Levison knows.

Which brings us to where we are today. Let Josh Gerstein set the stage:

A former e-mail provider for National Security Agency leaker Edward Snowden, Lavabit LLC, filed a legal brief Thursday detailing the firm’s offers to provide information about what appear to have been Snowden’s communications as part of a last-ditch offer that prosecutors rejected as inadequate.

The disagreement detailed in a brief filed Thursday with the U.S. Court of Appeals for the Fourth Circuit resulted in Lavabit turning over its encryption keys to the federal government and then shutting down the firm’s secure e-mail service altogether after viewing it as unacceptably tainted by the FBI’s possession of the keys.

I have a different take on the key language from Lavabit's argument in their appellate brief, though; here is mine:

First, the government is bereft of any statutory authority to command the production of Lavabit’s private keys. The Pen Register Statute requires only that a company provide the government with technical assistance in the installation of a pen-trap device; providing encryption keys does not aid in the device’s installation at all, but rather in its use. Moreover, providing private keys is not “unobtrusive,” as the statute requires, and results in interference with Lavabit’s services, which the statute forbids. Nor does the Stored Communications Act authorize the government to seize a company’s private keys. It permits seizure of the contents of an electronic communication (which private keys are not), or information pertaining to a subscriber (which private keys are also, by definition, not). And at any rate it does not authorize the government to impose undue burdens on the innocent target business, which the government’s course of conduct here surely did.

Second, the Fourth Amendment independently prohibited what the government did here. The Fourth Amendment requires a warrant to be founded on probable cause that a search will uncover fruits, instrumentalities, or evidence of a crime. But Lavabit’s private keys are none of those things: they are lawful to possess and use, they were known only to Lavabit and never used by the company to commit a crime, and they do not prove that any crime occurred. In addition, the government’s proposal to examine the correspondence of all of Lavabit’s customers as it searched for information about its target was both beyond the scope of the probable cause it demonstrated and inconsistent with the Fourth Amendment’s particularity requirement, and it completely undermines Lavabit’s lawful business model. General rummaging through all of an innocent business’ communications with all of its customers is at the very core of what the Fourth Amendment prohibits.

The legal niceties of Lavabit’s arguments are thus:

The Pen Register Statute does not come close. An anodyne mandate to provide information needed merely for the “unobtrusive installation” of a device will not do. If there is any doubt, this Court should construe the statute in light of the serious constitutional concerns discussed below, to give effect to the “principle of constitutional avoidance” that requires this Court to avoid constructions of statutes that raise colorable constitutional difficulties. Norfolk S. Ry. Co. v. City of Alexandria, 608 F.3d 150, 156–57 (4th Cir. 2010).

And, later in the pleading:

By those lights, this is a very easy case. Lavabit’s private keys are not connected with criminal activity in the slightest—the government has never accused Lavabit of being a co-conspirator, for example. The target of the government’s investigation never had access to those private keys. Nor did anyone, in fact, other than Lavabit. Given that Lavabit is not suspected or accused of any crime, it is quite impossible for information known only to Lavabit to be evidence that a crime has occurred. The government will not introduce Lavabit’s private keys in its case against its target, and it will not use Lavabit’s private keys to impeach its target at trial. Lavabit’s private keys are not the fruit of any crime, and no one has ever used them to commit any crime. Under those circumstances, absent any connection between the private keys and a crime, the “conclusion[] necessary to the issuance of the warrant” was totally absent. Zurcher, 436 U.S., at 557 n.6 (quoting, with approval, Comment, 28 U. Chi. L. Rev. 664, 687 (1961)).

What this boils down to is, essentially, that the government thinks the keys to Lavabit's encryption for its customers belong not just to Lavabit and those customers, but to the United States government itself.
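The technical reason the keys matter so much is simple: one service-wide private key unwraps every customer's session, not just the target's. Here is a toy, textbook-RSA sketch of that point, with deliberately tiny numbers; it is not Lavabit's actual cryptography, and the names and values are invented.

```python
# Toy illustration only: textbook RSA with tiny, insecure numbers, NOT Lavabit's
# actual cryptography. It shows why one service-wide private key unlocks every
# customer's session, which is the technical heart of the "general rummaging"
# objection in the brief.

# The service's single long-term keypair (absurdly small, for illustration).
p, q = 61, 53
n = p * q                             # public modulus
e = 17                                # public exponent
d = pow(e, -1, (p - 1) * (q - 1))     # private exponent -- stands in for the key Lavabit was ordered to hand over

def wrap_session_key(session_key: int) -> int:
    """What every customer's client does: encrypt its session key to the service's public key."""
    return pow(session_key, e, n)

def unwrap_session_key(wrapped: int) -> int:
    """What anyone holding the private key can do, for every customer at once."""
    return pow(wrapped, d, n)

if __name__ == "__main__":
    customers = {"alice": 1234, "bob": 2024, "the_actual_target": 777}
    wrapped = {name: wrap_session_key(k) for name, k in customers.items()}

    # The government sought one customer's communications, but the single
    # private exponent d recovers the session key of *every* customer:
    for name, w in wrapped.items():
        print(name, unwrap_session_key(w) == customers[name])
```

To the extent connections used forward-secret ciphersuites the long-term key would matter less for recorded traffic, but the brief's point stands: handing over the key tainted every customer, not one target.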

Your private information cannot be private in the face of the United States Government. Not just Edward Snowden, but anybody, and everybody, is theirs if they want it. That is the definition of bullshit.

[Okay, big thanks to Darth, who generously agreed to let us use the killer Strangelovian graphic above. Please follow Darth on Twitter]

You Were Warned: Cybersecurity Expert Edition — Now with Space Stations

Over the last handful of days breathless reports may have crossed your media streams about Stuxnet infecting the International Space Station.

The reports were conflations or misinterpretations of cybersecurity expert Eugene Kaspersky’s recent comments before the Australian Press Club in Canberra. Here’s an excerpt from his remarks, which you can enjoy in full in the video embedded above:

[26:03] “…[government] departments which are responsible for the national security for national defense, they’re scared to death. They don’t know what to do. They do understand the scenarios. They do understand it is possible to shut down power plants, power grids, space stations. They don’t know what to do. Uh, departments which are responsible for offense, they see it as an opportunity. They don’t understand that in cyberspace, everything you do is [a] boomerang. It will get back to you.

[26:39] Stuxnet, which was, I don’t know, if you believe American media, it was written, it was developed by American and Israel secret services, Stuxnet, against Iran to damage Iranian nuclear program. How many computers, how many enterprises were hit by Stuxnet in the United States, do you know? I don’t know, but many.

Last year for example, Chevron, they agreed that they were badly infected by Stuxnet. A friend of mine, work in Russian nuclear power plant, once during this Stuxnet time, sent a message that their nuclear plant network, which is disconnected from the internet, in Russia there’s all that this [cutting gestures, garbled], so the man sent the message that their internal network is badly infected with Stuxnet.

[27:50] Unfortunately these people who are responsible for offensive technologies, they recognize cyber weapons as an opportunity. And a third category of the politicians of the government, they don’t care. So there are three types of people: scared to death, opportunity, don’t care.”

He didn't actually say the ISS was infected with Stuxnet; he only suggested it's possible Stuxnet could infect devices on board. Malware infection has happened before, when a Russian astronaut brought an infected device to the station and it was used on the WinXP machines there.

But the Chevron example is accurate, and we'll have to take the anecdote about a Russian nuclear power plant as fact. We don't know how many facilities here in the U.S. or abroad have been infected and negatively impacted, as only Chevron to date has openly admitted exposure. It's not a stretch to assume Stuxnet could exist in every manner of facility using SCADA equipment combined with Windows PCs; even the air-gapped Russian nuclear plant, cut off from the internet as Kaspersky indicates, was infected.

The only thing that may have kept Stuxnet from inflicting damage upon infection is the specificity of the encrypted payload contained in the versions released in order to take out Iran’s Natanz nuclear facility. Were the payload(s) injected with modified code to adapt to their host environs, there surely would have been more obvious enterprise disruptions.
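For a sense of how a payload can be made target-specific, here is a conceptual sketch: the implant fires only if a fingerprint of the host environment matches one precomputed for the intended victim, and stays inert everywhere else. This illustrates the general technique, not Stuxnet's actual trigger logic (which keyed on particular Siemens PLC configurations); the identifiers and values are illustrative.

```python
# Conceptual sketch only -- not Stuxnet's actual trigger logic. It illustrates
# how a payload can be gated on a fingerprint of one specific environment, so
# that on every other infected host it remains inert.
import hashlib
import hmac

def environment_fingerprint(plc_model: str, config_blocks: list) -> bytes:
    """Hash of host attributes the attacker expects to find only at the target."""
    h = hashlib.sha256()
    h.update(plc_model.encode())
    for block in sorted(config_blocks):
        h.update(block.encode())
    return h.digest()

# The attacker precomputes the target's fingerprint at build time; the implant
# carries only the expected digest, never a description of the target itself.
EXPECTED = hashlib.sha256(b"precomputed-at-build-time").digest()  # placeholder value

def payload_should_fire(observed: bytes, expected: bytes = EXPECTED) -> bool:
    return hmac.compare_digest(observed, expected)

if __name__ == "__main__":
    here = environment_fingerprint("S7-315", ["FB1869", "DB890"])  # illustrative identifiers
    print("fire" if payload_should_fire(here) else "stay inert")   # prints "stay inert" on a non-target host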

In other words, Stuxnet remains a ticking time bomb threatening energy and manufacturing production at a minimum, and other systems like those of the ISS in the worst case. Read more

Tapping the Oil Industry

Remember when it was outrageous that the Iranians had (allegedly) hacked Aramco? In addition to wiping hard drives (though in ways that left the computers recoverable), they also took and threatened to release documents.

In news that I earlier predicted, NSA and GCHQ have hacked OPEC, including Saudi Arabia’s OPEC Minister (though NSA managed to detask him when he came to the US).

Spiegel doesn’t provide much detail of what they’ve gotten — just a tantalizing overview, particularly given the likelihood that the speculation claim pertains to the skyrocketing prices in 2008, which (among other things) the Saudis used to get us into a new security cooperation agreement.

None of this is surprising. But as we try to fearmonger new wars based on one party hacking another, it’s probably safe to assume we got there first.

It stated that OPEC officials were trying to cast the blame for high oil prices on speculators. A look at files in the OPEC legal department revealed how the organization was preparing itself for an antitrust suit in the United States. And a review of the section reserved for the OPEC secretary general documented that the Saudis were using underhanded tactics, even within the organization. According to the NSA analysts, Riyadh had tried to keep an increase in oil production a secret for as long as possible.

Our TCA with Saudi Arabia (and the fact that we (Booz, in fact!) are now providing it with cybersecurity) may well be one reason it is no longer a top NSA target.

OPEC appears in the “National Intelligence Priorities Framework,” which the White House issues to the US intelligence community. Although the organization is still listed as an intelligence target in the April 2013 list, it is no longer a high-priority target.

Who needs to hack when you’re in charge of cybersecurity?

And guess which company has a lot of that business? Edward Snowden’s former employer, Booz.

The Intelligence Community’s Wide Open, Unprotected Back Door to All Your Content

PCLOB has posted the transcript from the first part of its hearing on Monday. So I want to return to the issue I raised here: both Director of National Intelligence General Counsel Robert Litt and NSA General Counsel Raj De admit that there are almost no limits on Intelligence Community searches of incidentally collected US person data (we know that FBI, NSA, and CIA have this authority, and I suspect National Counterterrorism Center does as well).

This discussion starts when PCLOB Chair David Medine asks whether the IC would consider getting a warrant before searching on incidentally collected data.

MR. MEDINE: And so turning to the protections for U.S. persons, as I understand it under the 702 program when you may target a non-U.S. person overseas you may capture communications where a U.S. person in the United States is on the other end of the communication. Would you be open to a warrant requirement for searching that data when your focus is on the U.S. person on the theory that they would be entitled to Fourth Amendment rights for the search of information about that U.S. person?

MR. DE: Do you want me to take this?

MR. LITT: Thanks, Raj. Raj is always easy, he raises his hands for all the easy ones.

MR. DE: I can speak for NSA but this obviously has implications beyond just NSA as well.

MR. LITT: I think that's really an unusual and extraordinary step to take with respect to information that has been lawfully acquired.

I mean I started out as a prosecutor. There were all sorts of circumstances in which information is lawfully acquired that relates to persons who are not the subject of investigations. You can be overheard on a Title III wiretap, you can be overheard on a Title I FISA wiretap. Somebody's computer can be seized and there may be information about you on it.

The general rule and premise has been that information that’s lawfully acquired can be used by the government in the proper exercise of authorities.

Now we do have rules that limit our ability to collect, retain and disseminate information about U.S. persons. Those rules, as you know, are fairly detailed. But generally speaking, we can't do that except for foreign intelligence purposes, or when there's evidence of a crime, or so on and so forth. But what we can't do under Section 702 is go out and affirmatively use the collection authority for the purpose of getting information about U.S. persons. Once we have that information I don't think it makes sense to say, you know, a year later if something comes up we need to go back and get a warrant to search that information. [my emphasis]

Litt compares finding incidental information on a laptop, presumably seized using a warrant, with searching for incidental information in a digital collection acquired with very few limits on specificity. Remember, NSA can and has claimed a targeted "facility" may mean all the Internet traffic from a particular country, or at least a region of a country. This is petabytes of data obtained with a directive, not gigabytes obtained with a specific warrant.
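For a rough sense of that scale gap, a back-of-the-envelope comparison (the figures are assumptions for illustration, not reported numbers):

```python
# Illustrative arithmetic only; the figures are assumptions, not reported numbers.
seized_laptop_gb = 500                            # a generously sized single hard drive
upstream_take_pb = 1                              # suppose a "facility" yields one petabyte
upstream_take_gb = upstream_take_pb * 1_000_000   # 1 PB = 1,000,000 GB (decimal units)

print(upstream_take_gb / seized_laptop_gb)        # 2000.0 -- two thousand laptops' worth
```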

Read more

Leahy-Sensenbrenner Would Shut the Section 702 Cybersecurity Loophole

Section 702 Reporting Highlight

I'm going to have a few posts on the Leahy-Sensenbrenner bill, which is the most likely way we'll be able to rein in NSA spying. In addition to several sections stopping bulk collection, it has a section on the collection of US person data under the FISA Amendments Act (I'll return to the back-door loophole later).

But I'm particularly interested in what it does with upstream collection. It basically adds a paragraph to subsection (d) of Section 702 that limits upstream collection to two uses: international terrorism and WMD proliferation.

(C) limit the acquisition of the contents of any communication to those communications—

(i) to which any party is a target of  the acquisition; or

(ii) that contain an account identifier of a target of an acquisition, only if such communications are acquired to protect against international terrorism or the international proliferation of weapons of mass destruction.;

And adds a definition for “account identifier” limiting it to identifiers of people.

(1) ACCOUNT IDENTIFIER.—The term ‘account identifier’ means a telephone or instrument number, other subscriber number, email address, or  username used to uniquely identify an account.

I believe the effect of this is to prevent NSA from using Section 702 to conduct cyberdefense in the US.

As I have noted, there are reasons to believe that NSA uses Section 702 for just 3 kinds of targets:

  • International terrorism
  • WMD proliferation
  • Cybersecurity

There are many reasons to believe one primary use of Section 702 for cybersecurity involves upstream collection targeted on actual pieces of code (that is, the identifier for a cyberattack, rather than the identifier of a user). As an example, the slide above, which I discuss in more detail here, explains that one of the biggest Section 702 successes involves preventing an attacker from exfiltrating 150 Gigs of data from a defense contractor. The success involved both PRISM and STORMBREW, the latter of which is upstream collection in the US.

In other words, the government has been conducting upstream collection within the US to search for malicious code (I'm not sure how they determine whether the code originated in a foreign country, though given that they refuse to count domestic communications collected via upstream collection, I doubt they care).

So what these two sections of Leahy-Sensenbrenner would do is 1) limit the use of upstream collection to terrorists and proliferators, thereby prohibiting its use for cybersecurity, and 2) define “account identifier” to exclude something like malicious code.
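Here is a sketch of what the "account identifier" limit would mean in practice, treating the statutory definition as a filter on selectors. The patterns and examples below are invented for illustration, not drawn from any actual tasking system.

```python
# Illustrative sketch of the bill's "account identifier" definition as a filter;
# the regexes and examples are invented, not drawn from any actual tasking system.
import re

EMAIL = re.compile(r"^[^@\s]+@[^@\s]+\.[^@\s]+$")
PHONE = re.compile(r"^\+?\d{7,15}$")
USERNAME = re.compile(r"^[A-Za-z0-9_.-]{3,32}$")

def is_account_identifier(selector: str) -> bool:
    """Crude syntactic test for the categories the bill would allow:
    email address, phone/subscriber number, or username."""
    return bool(EMAIL.match(selector) or PHONE.match(selector) or USERNAME.match(selector))

if __name__ == "__main__":
    candidates = [
        "target@example.com",        # permitted: an email address
        "+14155550100",              # permitted: a phone number
        "MZ\x90\x00\x03",            # excluded: leading bytes of a Windows executable
        "GET /cmd.php?beacon=",      # excluded: a string from attack traffic
    ]
    for s in candidates:
        print(repr(s), "->", "taskable" if is_account_identifier(s) else "not an account identifier")
```

Anything that is not a phone number, subscriber number, email address, or username (malicious code above all) would fall outside what upstream collection could be tasked against.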

There’s one more interesting aspect of this fix. Unlike many other sections of the bill, it doesn’t go into effect right away.

EFFECTIVE DATE.—The amendments made by subsections (a) and (b) shall take effect on the date that is 180 days after the date of the enactment of this Act.

The bill gives the Executive 6 months to find an alternative to this use of Section 702 — presumably, to pass a cybersecurity bill explicitly labeled as such.

Keith Alexander and others have long talked about the need to scan domestic traffic to protect against cyberattacks. But it appears — especially given the 6-month effective date on these changes — that they're already doing that, all in the name of foreign intelligence.

The Stalker Outside Your Window: The NSA and a Belated Horror Story

[photo: Gwen’s River City Images via Flickr]

It's a shame Halloween has already come and gone. The reaction to Monday's Washington Post The Switch blog post reminds me of a particularly scary horror story, in which a young woman alone in a home receives vicious, threatening calls.

There’s a sense of security vested in the idea that the caller is outside the house and the woman is tucked safely in the bosom of her home. Phew, she’s safe; nothing to see here, move along…

In reality the caller is camped directly outside the woman’s window, watching every move she makes even as she assures herself that everything is fine.

After a tepid reaction to the initial reporting last week, most media and their audiences took very little notice of the Washington Post's follow-up piece — what a pity, as it was the singular voice confirming that the threat sits immediately outside the window.

Your window, as it were, if you have an account with either Yahoo or Google and use their products. The National Security Agency has access to users' content inside the corporate fenceline of each of these social media firms, greasy nose pressed to the glass while peering in the users' windows.

There's more to the story, one might suspect, which has yet to be reported. The disclosure that the NSA's slides reflected Remote Procedure Calls (RPCs) unique to Google's and Yahoo's internal systems is only part of the picture, though that part should be quite frightening as it is.

Access to proprietary RPCs means — at a minimum — that the NSA has:

1) Access to content and commands moving in and out of Google’s and Yahoo’s servers, between their own servers — the closest thing to actually being inside these corporations’ servers.

2) With these RPCs, the NSA has the ability to construct remote login access to the servers without the businesses’ awareness. RPCs by their nature require remote access login permissions.

3) Reverse engineering of the proprietary RPCs could have been performed without any other governmental body's awareness, assuming the committees responsible for oversight did not explicitly authorize access to and use of these RPCs during the engineering of MUSCULAR/SERENDIPITY/MARINA and other related tapping/monitoring/collection applications.

4) All users' login requests are a form of RPC — every single account holder's login may have been gathered (see the sketch below). This includes government employees and elected officials, as well as journalists who may have alternate accounts in either Gmail or Yahoo mail that they use as a backup in case their primary government/business account fails or, in the case of journalists, as a backchannel for handling news tips. Read more
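Here, as promised above, is a toy sketch of why cleartext RPC traffic on a tapped inter-datacenter link is so valuable to a passive observer. The frame format, method name, and field names are all invented; Google's and Yahoo's actual internal protocols are proprietary and are not depicted here.

```python
# Invented frame format for illustration; Google's and Yahoo's internal RPC
# protocols are proprietary, and nothing here depicts them. The point is only
# that serialized RPC traffic crossing a tapped, unencrypted inter-datacenter
# link is readable by a passive observer.
import json
import struct

def build_frame(method: str, body: dict) -> bytes:
    """A toy length-prefixed RPC frame: 4-byte length, method name, JSON body."""
    payload = method.encode() + b"\x00" + json.dumps(body).encode()
    return struct.pack(">I", len(payload)) + payload

def passive_read(wire: bytes):
    """What a tap on the link can recover without any credentials."""
    (length,) = struct.unpack(">I", wire[:4])
    method, _, body = wire[4:4 + length].partition(b"\x00")
    return method.decode(), json.loads(body)

if __name__ == "__main__":
    # A hypothetical login-related call replicated between datacenters.
    captured = build_frame("AuthService.CheckSession",
                           {"account": "reporter@example.com", "session": "abc123"})
    print(passive_read(captured))
```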

US Getting Its Cyber-Ass Handed to It

David Sanger has early reporting on a report that will be sure to affect the NSA debate, though it has nothing to do with Edward Snowden. The National Commission for the Review of the Research and Development Programs of the United States Intelligence Community, which has been reviewing our cybercapabilities for two years, has found that we’re losing any edge we have.

The problems?

  • [In-Q-Tel founder Gilman] Louie also said the intelligence agencies were heavily focused on the development of offensive cyberweapons because “it is easier and more intellectually interesting to play offense than defense.” “Defense is where we are losing the ballgame,” he said.
  • The leader of science and technology for [the Director of National Intelligence] office, commission members said Tuesday, was not aware of some of the most classified research and development programs. They also found that intelligence agencies were duplicating efforts by pursuing similar projects at the same time, but because operations were compartmentalized, few researchers were aware of their colleagues’ work.
  • Shirley Ann Jackson, the president of Rensselaer Polytechnic Institute, found particular fault with the intelligence agencies’ approach, “which involves gathering more data than you need.”

Again, these panel members have come to this conclusion completely independently of the Snowden revelations, but their findings should well fuel the very questions his disclosures have been driving, because they, like Snowden, show that aggressive Big Data, badly organized, won't keep our country safe.

In related news, there are reports that NSA will be reorganized with Keith Alexander's departure, by splitting CYBERCOM off from NSA.

Senior military officials are leaning toward removing the National Security Agency director’s authority over U.S. Cyber Command, according to a former high-ranking administration official familiar with internal discussions.

[snip]

No formal decision has been made yet, but the Pentagon has already drawn up a list of possible civilian candidates for the next NSA director, the former official told The Hill. A separate military officer would head up Cyber Command, a team of military hackers that trains for offensive cyberattacks and protects U.S. computer systems.

I think this is the wrong solution (and the anonymous leaks here sound as much like Generals making a bid for turf as they do a real decision).

One of several big problems with our cyber stature is that there is no champion for defending (rather than policing) the US. That means we’ve committed to the same kind of approach we use with terrorists, trying to inflame terrorists we’ve found hints of so we can demobilize them, rather than just trying to harden our vulnerabilities to make it very difficult or unrewarding to attack.

And in inflaming and spying, we’ve been relying on weakening security, so we can see them, which makes the cyberattackers’ job easier.

Moreover there are a lot more real cyberattackers than real terrorists out there, and they can do far more damage than any but the very lucky 9/11 team could pull off. Which means if you miss here, you miss big. Whereas if we spent money on defense, we might be better able to withstand these attacks.

So I still say we need a very well-funded cyberdefense entity (I said put it in DHS, not because DHS is functional, but because that agency should, but doesn't, operate under a different paradigm) that will be held responsible for successful attacks.

NSA Apologists Now Blaming Snowden for NSA’s Own Cyberdefense Failures

Read this claim about NSA spying, but don’t laugh.

“None of what the U.S. is doing is benefiting American business.”

Did you manage not to laugh at the notion that the US is spending $70 billion a year on spying and none of it — not one little bit of it! — benefits American businesses?

Didn’t think so.

That quote, from Mandiant Chief Security Officer Richard Bejtlich, is just one of the utter absurdities built into this Kurt Eichenwald piece attempting to blame Edward Snowden for our failure to stop Chinese hacking of the US.

Here’s the logic.

In May, [Tom] Donilon flew to Beijing to meet senior government officials there and set the framework for a summit between Obama and Chinese President Xi Jinping; Donilon and other American officials made it clear they would demand that hacking be a prime topic of conversation. By finally taking the step of putting public – and, most likely, international – pressure on the Chinese to rein in their cyber tactics, the administration believed it was about to take a critical step in taming one of the biggest threats to America’s economic security.

But it didn’t happen. The administration’s attempt to curb China’s assault on American business and government was crippled – perhaps forever, experts say – by a then-unknown National Security Agency contractor named Edward Snowden.

Snowden’s clandestine efforts to disclose thousands of classified documents about NSA surveillance emerged as the push against Chinese hacking intensified. He reached out to reporters after the public revelations about China’s surveillance of the Times‘s computers and the years of hacking by Unit 61398 into networks used by American businesses and government agencies. On May 24, in an email from Hong Kong, Snowden informed a Washington Post reporter to whom he had given documents that the paper had 72 hours to publish them or he would take them elsewhere; had the Post complied, its story about American computer spying would have run on the day Donilon landed in Beijing to push for Chinese hacking to be on the agenda for the presidential summit.

The first report based on Snowden’s documents finally appeared in The Guardian on June 5, two days before the Obama-Xi meeting, revealing the existence of a top-secret NSA program that swept up untold amounts of data on phone calls and Internet activity. When Obama raised the topic of hacking, administration officials say, Xi again denied that China engaged in such actions, then cited The Guardian report as proof that America should not be lecturing Beijing about abusive surveillance. [my emphasis]

Let’s review what Eichenwald has done here.

First, he has taken the Administration at its word that publicly shaming China, and then negotiating with them, would have slowed their cybertheft.

Next, he has insinuated — though not provided evidence — that both Snowden’s initial leaks and the timing of their release (which, after all, took place at different times) were all intentionally rather than coincidentally linked to the US effort to rein in Chinese hacking, and done at the direction of Snowden (that may be the case, but he hasn’t presented it, and if that were Snowden’s real intent, you would think he would have leaked specifics about our attacks on China weeks before he did).

He has highlighted an email (did he somehow get the content of an Edward Snowden email to Barton Gellman? Because I can't imagine Gellman sharing this) threatening to take his documents somewhere else, without thinking through what it means that Snowden had already gone somewhere else, or considering other reasons (he was holed up in a hotel room, for example) why Snowden might have had some urgency about publishing. [Update: Here's where that claim came from.]

And then he has Xi's comments on America's own hacking, which Eichenwald suggests were a response to the Section 215 and PRISM disclosures – the "top-secret NSA program that swept up untold amounts of data on phone calls and Internet activity."

With me so far?

Curiously, Eichenwald makes no mention of the document that might actually bolster his case and which almost certainly was the reference Xi intended: the Presidential Policy Directive on cyberwar, which was released just hours before Obama’s meetings with Xi started in CA.

But that would require painting a very different picture of what the US does in cyberspace than this one. Read more