Admiral Mike Rogers Virtually Confirms OPM Was Not on Counterintelligence Radar

For some time, those following the OPM hack have been asking where the intelligence community’s counterintelligence folks were. Were they aware of what a CI bonanza the database would present for foreign governments?

Lawfare’s Ben Wittes has been asking that question for a while. Ron Wyden got more specific in a letter to the head of the National Counterintelligence and Security Center last month.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

And Steven Aftergood, analyzing a 2013 Intelligence Community Directive released recently, noted that the OPM database should have been considered a critical counterintelligence asset.

A critical asset is “Any asset (person, group, relationship, instrument, installation, process, or supply at the disposition of an organization for use in an operational or support role) whose loss or compromise would have a negative impact on the capability of a department or agency to carry out its mission; or may have a negative impact on the ability of another U.S. Government department or agency to conduct its mission; or could result in substantial economic loss; or which may have a negative impact on the national security of the U.S.”

By any reasonable definition, the Office of Personnel Management database of security clearance background investigations for federal employees and contractors that was recently compromised by a foreign adversary would appear to qualify as a “critical asset.” But since OPM is not a member or an element of the Intelligence Community, it appears to fall outside the scope of this directive.

But in a private event at the Wilson Center last night, NSA Director Mike Rogers described NSA being brought in to help OPM — but only after OPM had identified the hack.

After the intrusion, “as we started more broadly to realize the implications of OPM, to be quite honest, we were starting to work with OPM about how could we apply DOD capability, if that is what you require,” Rogers said at an invitation-only Wilson Center event, referring to his role leading CYBERCOM.

NSA, meanwhile, provided “a significant amount of people and expertise to OPM to try to help them identify what had happened, how it happened and how we should structure the network for the future,” Rogers added.

That “as we started more broadly to realize the implications of OPM” is the real tell, though. It sure sounds like the Chinese were better able to understand the value of a database containing the security clearance portfolios on many government personnel than our own counterintelligence people were.

Oops.

The Continued Belief in Unicorn Cyber Deterrence

For some reason, people continue to believe Administration leaks claiming it will retaliate against China (and Russia!) for cyberattacks — beyond whatever retaliatory moves have probably already been taken.

I think Jack Goldsmith’s uncharacteristically snarky take is probably right. After cataloging the many past leaks about sanctions that have come to no public fruition, Goldsmith talks about the cost of this public hand-wringing.

As I have explained before, figuring out how to sanction China for its cyber intrusions is hard because (among other reasons) (i) the USG cannot coherently sanction China for its intrusions into US public sector (DOD, OPM, etc.) networks since the USG is at least as aggressive in China’s government networks, and (ii) the USG cannot respond effectively to China’s cyber intrusions in the private sector because US firms and the US economy have more to lose than gain (or at least a whole lot to lose) from escalation—especially now, given China’s suddenly precarious economic situation.

But even if sanctions themselves are hard to figure out, the public hand-wringing about whether and how to sanction China is harmful.  It is quite possible that more is happening in secret.  “One of the conclusions we’ve reached is that we need to be a bit more public about our responses, and one reason is deterrence,” a senior administration official in an “aha” moment told Sanger last month.  One certainly hopes the USG is doing more in secret than in public to deter China’s cybertheft.   Moreover, one can never know what cross-cutting machinations by USG officials lie behind the mostly anonymous leaks that undergird the years of stories about indecisiveness.

This performance seems to be directed at domestic politics, because the Chinese aren’t impressed.

A still crazier take, though, is this one, which claims DOJ thought indicting five PLA-connected hackers last year would have a deterrent effect.

But nearly a year and a half after that indictment was unveiled, the five PLA soldiers named in the indictment are no closer to seeing the inside of a federal courtroom, and China’s campaign of economic espionage against U.S. firms continues. With Chinese President Xi Jinping set to arrive in Washington for a high-profile summit with President Barack Obama later this month, the question of how — and, indeed, if — the United States can deter China from pilfering American corporate secrets remains very much open. The indictment of the PLA hackers now stands out as a watershed moment in the escalating campaign by the U.S. government to deter China from its aggressive actions in cyberspace — both as an example of the creative ways in which the United States is trying to fight back and the limits of its ability to actually influence Chinese behavior.

[snip]

In hindsight, the indictment seems less like an exercise in law enforcement than a diplomatic signal to China. That’s an argument the prosecutor behind the case, U.S. Attorney David Hickton, resents. “I believe that’s absolute nonsense,” Hickton told Foreign Policy. “It was not the intention, when we brought this indictment, to at the same time say, ‘We do not intend to bring these people to justice.’”

But it’s unclear exactly what has happened to the five men since Hickton brought charges against them. Their unit suspended some operations in the aftermath of the indictment, but experts like Weedon say the group is still active. “The group is not operating in the same way it was before,” she said. “It seems to have taken new shape.”

Hickton, whose office has made the prosecution of cybersecurity cases a priority, says he considers the law enforcement effort against hackers to be a long-term one and likens it to indictments issued in Florida against South American drug kingpins during the height of the drug war. Then, as now, skeptics wondered what was the point of bringing cases against individuals who seemed all but certainly beyond the reach of U.S. law enforcement. Today, Hickton points out, U.S. prisons are filled with drug traffickers. Left unsaid, of course, is that drugs continue to flow across the border.

That take is crazier because it fundamentally misunderstands what the five hackers got indicted for.

This indictment was not, as claimed, for stealing corporate secrets. It was mostly not for economic espionage, which we claim not to do.

Rather — as I noted at the time — it was for stealing information during ongoing trade disputes.

But the other interesting aspect of this indictment coming out of Pittsburgh is that — at least judging from the charged crimes — there is far less of the straight out IP theft we always complain about with China.

In fact, much of the charged activity involves stealing information about trade disputes — the same thing NSA engages in all the time. Here are the charged crimes committed against US Steel and the United Steelworkers, for example.

In 2010, U.S. Steel was participating in trade cases with Chinese steel companies, including one particular state-owned enterprise (SOE-2).  Shortly before the scheduled release of a preliminary determination in one such litigation, Sun sent spearphishing e-mails to U.S. Steel employees, some of whom were in a division associated with the litigation.  Some of these e-mails resulted in the installation of malware on U.S. Steel computers.  Three days later, Wang stole hostnames and descriptions of U.S. Steel computers (including those that controlled physical access to company facilities and mobile device access to company networks).  Wang thereafter took steps to identify and exploit vulnerable servers on that list.

[snip]

In 2012, USW was involved in public disputes over Chinese trade practices in at least two industries.  At or about the time USW issued public statements regarding those trade disputes and related legislative proposals, Wen stole e-mails from senior USW employees containing sensitive, non-public, and deliberative information about USW strategies, including strategies related to pending trade disputes.  USW’s computers continued to beacon to the conspiracy’s infrastructure until at least early 2013.

This is solidly within the ambit of what NSA does in other countries. (Recall, for example, how we partnered with the Australians to obtain information to help us in a clove cigarette trade dispute.)

I in no way mean to minimize the impact of this spying on USS and USW. I also suspect they were targeted because the two organizations partner on an increasingly successful manufacturing organization. That would still constitute a fair spying target, but it is also one in which China has acute interests.

But that still doesn’t make it different from what the US does when it engages in spearphishing — or worse — to steal information to help us in trade negotiations or disputes.

We’ve just criminalized something the NSA does all the time.

The reason this matters is that all the people spotting unicorn cyber-retaliation don’t even understand what they’re seeing, or why. I mean, Hickton (who as I suggested may well run for public office) may have reasons to want to insist he’s championing the rights of Alcoa, US Steel, and the Steelworkers. But he’s not implementing a sound deterrence strategy because — as Goldsmith argues — it’s hard to imagine one that we could implement, much less one that wouldn’t cause more blowback than good.

Before people start investing belief in unicorn cyber deterrence, they’d do well to understand why it presents us such a tough problem.

 

The Lessons NSA Teaches When It Conflates Use of Encryption with Terrorism

[graphic: NSA slide, screen shot 2013-08-01]
Just a few days after our Egyptian allies sentenced 3 Al Jazeera journalists to 3 years in prison, Turkey joined the club, charging 2 UK Vice employees and their Turkey-based fixer with terrorism. Today, Al Jazeera explained why the Vice journalists got charged: because the fixer uses an encryption technique that members of ISIS also use.

Three staff members from Vice News were charged with “engaging in terrorist activity” because one of the men was using an encryption system on his personal computer which is often used by the Islamic State of Iraq and the Levant (ISIL), a senior press official in the Turkish government has told Al Jazeera.

Two UK journalists, Jake Hanrahan and Philip Pendlebury, along with their Turkey-based Iraqi fixer and a driver, were arrested on Thursday in Diyarbakir while filming clashes between security forces and youth members of the outlawed and armed Kurdistan Workers’ Party (PKK).

On Monday, the three men were charged by a Turkish judge in Diyarbakir with “engaging in terrorist activity” on behalf of ISIL, the driver was released without charge.

The Turkish official, who spoke on condition of anonymity, told Al Jazeera: “The main issue seems to be that the fixer uses a complex encryption system on his personal computer that a lot of ISIL militants also utilise for strategic communications.”

Note that the Vice journalists were reporting on the PKK, not ISIS, but it wouldn’t be the first time Turkey used ISIS as cover for its war against the PKK.

A lot of people are treating this as a crazy expression of rising Turkish repression: it conflates use of encryption — even a particular kind of encryption! — with membership in ISIS.

But Turkey isn’t the only one to make that conflation. As the slide above — like some other documents released by Snowden — makes clear, NSA does the same. How do you find terrorists without other information, the slide asks? Simple! You find someone using encryption.
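To make concrete just how thin that logic is, here is a minimal sketch of the kind of selector rule the slide implies; the field names, signature list, and sample records are all hypothetical, invented purely for illustration.

```python
# A minimal, hypothetical sketch of the selector logic the slide implies:
# absent any other information, the mere presence of an encryption signature
# becomes the flag. Field names and sample records are invented.

ENCRYPTION_SIGNATURES = {"pgp", "otr", "truecrypt", "custom-vpn"}

def flag_suspects(traffic_records):
    """Return user IDs whose sessions show any known encryption signature."""
    flagged = set()
    for record in traffic_records:
        if record.get("protocol_signature") in ENCRYPTION_SIGNATURES:
            flagged.add(record["user_id"])
    return flagged

sample = [
    {"user_id": "journalist-1", "protocol_signature": "pgp"},
    {"user_id": "student-2", "protocol_signature": "http"},
]

print(flag_suspects(sample))  # {'journalist-1'}: a journalist, not a terrorist
```

The obvious problem, and the point of the criticism, is that a rule like this flags journalists, activists, and ordinary privacy-conscious users just as readily as it flags ISIS members.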

While the US might not arrest people based on such evidence (though it did hold Al Jazeera journalist Sami al-Hajj for years without charge), it certainly makes the same baseless connection.

Did China and Russia Really Need Our Help Targeting Spook Techies?

LAT has a story describing what a slew of others — including me — have already laid out: the OPM hack will enable China to cross-reference a bunch of databases to target our spooks. Aside from laying all that out again (which is worthwhile, because not many people are publicly discussing it yet), LAT notes Russia is doing the same.
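For readers who haven’t followed that earlier commentary, here is a toy sketch of what “cross-referencing a bunch of databases” amounts to in practice; the records, field names, and identifiers below are wholly invented for illustration.

```python
# A toy illustration (wholly invented records) of cross-referencing breached
# datasets: join stolen clearance files against another stolen dataset on a
# shared identifier to build a richer profile of each person.

clearance_records = {
    "123-45-6789": {"name": "A. Smith", "agency": "State", "foreign_contacts": 3},
}

travel_records = {
    "123-45-6789": {"trips": ["Vienna", "Dubai"], "frequent_solo_travel": True},
}

profiles = {}
for ssn, clearance in clearance_records.items():
    profile = dict(clearance)
    profile.update(travel_records.get(ssn, {}))  # enrich with the second dataset
    profiles[ssn] = profile

print(profiles["123-45-6789"])
```

Nothing here is sophisticated; the point is that once the clearance files are stolen, the joining itself is trivial.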

But setting aside that rehash (and some false claims the US doesn’t do the same, including working with contractors and “criminal” hackers) and a review of the dubiously legal Junaid Hussain drone killing, LAT includes one piece of actual news.

At least one clandestine network of American engineers and scientists who provide technical assistance to U.S. undercover operatives and agents overseas has been compromised as a result, according to two U.S. officials.

I would be unsurprised if China were rolling up actual HUMINT spies in China as a result of the OPM breach (which would explain why we’d be doing the same in response, if that’s what we’re doing). But the LAT says China (and/or Russia) is targeting “engineers and scientists who provide technical assistance” to spooks — one step removed from the people recruiting Chinese (or Russian) nationals to share their country’s secrets.

I find that description rather curious because of the way it resembles the claims CIA contractor whistleblower John Reidy made in his appeal of the CIA Inspector General’s denial of his whistleblower complaint. (Marisa Taylor first reported on Reidy’s case.) As I extrapolated from redactions some weeks ago, it looks like Reidy reported CIA’s reporting system getting hacked at least as early as 2007, but the contractors whose system (apparently) got hacked got him fired and CIA suppressed his complaints, only to have the problem get worse in the following years until CIA finally started doing something about it — with incomplete information — starting in 2010.

Reidy describes playing three roles in 2005: facilitating the dissemination of intelligence reporting to the Intelligence Community, identifying Human Intelligence (HUMINT) targets of interest for exploitation, and (because of resource shortages) handling the daily administrative functions of running a human asset. In the second of those three roles, he was “assigned the telecommunications and information operations account” (which is not surprising, because that’s the kind of service SAIC provides to the intelligence community). In other words, he seems to have worked at the intersection of human assets and electronic reporting on those assets.

Whatever role he played, he described what by 2010 had become a “catastrophic intelligence failure[]” in which “upwards of 70% of our operations had been compromised.” The problem appears to have arisen because “the US communications infrastructure was under siege,” which sounds like CIA may have gotten hacked. At least by 2007, he had warned that several of the CIA’s operations had been compromised, with some sources stopping all communications suddenly and others providing reports that were clearly false, or “atmospherics” submitted as solid reporting to fluff reporting numbers. By 2011 the government had appointed a Task Force to deal with the problem he had identified years earlier, though some on that Task Force didn’t even know how long the problem had existed or that Reidy had tried to alert the CIA and Congress to the problem.

All that seems to point to the possibility that tech contractors had set up a reporting system that had been compromised by adversaries, a guess that is reinforced by his stated desire for a “qui tam lawsuit brought against CIA contractors for providing products whose maintenance and design are inherently flawed and yet they are still charging the government for the products.” In his complaint, he describes Raytheon employees being reassigned, suggesting that contracting giant may be one of the culprits, but all three named contractors (SAIC, Raytheon, and Mantech) have had their lapses; remember that SAIC was the lead contractor that Thomas Drake and friends exposed.

Reidy’s appeal makes it clear that one of the things that exacerbated this problem was overlapping jurisdiction, with a functional unit apparently taking over control from a geographic unit. While that in no way rules out China, it sounded as much like the conflict between CIA’s Middle East and Counterterrorism groups that has surfaced in other areas as anything else.

The reason I raise Reidy is because — whether or not the engineers targeted as described in the LAT story are the same as the ones Reidy seems to describe — Reidy’s appeal suggests the problem he described arose from contractor incompetence and cover-ups.

I guess you could say the same about the OPM hack (though it was also OPM’s incompetence). Except in the earlier case, you’re talking about far more significant intelligence contractors — including SAIC and Raytheon, who both do a lot of cybersecurity contracting on top of their intelligence contracting — and a years-long cover-up with the assistance of the agency in question.

All while assets were being exposed, apparently because of insecure computer systems.

China’s hacking is a real threat to the identities of those who recruit human sources (and therefore of the human sources themselves).

But if Reidy’s complaint is true, then it’s not clear how much work China really needs to do to compromise these identities.

Under CISA, Would Wyndham Be Able To Pre-empt FTC Action?

The Third Circuit just issued an important ruling holding that the Federal Trade Commission could sue Wyndham Hotels for having cybersecurity practices that did not deliver what their privacy policies promised. The opinion, written by Clinton appointee Thomas Ambro, laid out just how bad Wyndham’s cybersecurity was, even after it had been hacked twice. Ambro upheld the District Court’s decision that FTC could claim that Wyndham had unfairly exposed its customers.

The Federal Trade Commission Act prohibits “unfair or deceptive acts or practices in or affecting commerce.” 15 U.S.C. § 45(a). In 2005 the Federal Trade Commission began bringing administrative actions under this provision against companies with allegedly deficient cybersecurity that failed to protect consumer data against hackers. The vast majority of these cases have ended in settlement.

On three occasions in 2008 and 2009 hackers successfully accessed Wyndham Worldwide Corporation’s computer systems. In total, they stole personal and financial information for hundreds of thousands of consumers leading to over $10.6 million dollars in fraudulent charges. The FTC filed suit in federal District Court, alleging that Wyndham’s conduct was an unfair practice and that its privacy policy was deceptive. The District Court denied Wyndham’s motion to dismiss, and we granted interlocutory appeal on two issues: whether the FTC has authority to regulate cybersecurity under the unfairness prong of § 45(a); and, if so, whether Wyndham had fair notice its specific cybersecurity practices could fall short of that provision.1 We affirm the District Court.

[snip]

Wyndham’s as-applied challenge falls well short given the allegations in the FTC’s complaint. As the FTC points out in its brief, the complaint does not allege that Wyndham used weak firewalls, IP address restrictions, encryption software, and passwords. Rather, it alleges that Wyndham failed to use any firewall at critical network points, Compl. at ¶ 24(a), did not restrict specific IP addresses at all, id. at ¶ 24(j), did not use any encryption for certain customer files, id. at ¶ 24(b), and did not require some users to change their default or factory-setting passwords at all, id. at ¶ 24(f). Wyndham did not respond to this argument in its reply brief.

Wyndham’s as-applied challenge is even weaker given it was hacked not one or two, but three, times. At least after the second attack, it should have been painfully clear to Wyndham that a court could find its conduct failed the cost-benefit analysis. That said, we leave for another day whether Wyndham’s alleged cybersecurity practices do in fact fail, an issue the parties did not brief. We merely note that certainly after the second time Wyndham was hacked, it was on notice of the possibility that a court could find that its practices fail the cost-benefit analysis.
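For a sense of how basic the alleged failures were, here is a minimal, hypothetical audit sketch. The device list, credentials, and segment names are invented; this illustrates the kinds of gaps the complaint describes (factory-default passwords, network segments with no firewall at all), not Wyndham’s actual environment.

```python
# A hypothetical audit sketch (invented data, not Wyndham's actual network):
# flag devices still using factory-default credentials and devices sitting on
# network segments with no firewall rule in front of them.

DEFAULT_CREDENTIALS = {("admin", "admin"), ("admin", "password"), ("root", "root")}

devices = [
    {"name": "pms-server-1", "segment": "property-mgmt", "login": ("admin", "admin")},
    {"name": "corp-db-1", "segment": "corporate", "login": ("dba", "S3cure!pass")},
]

firewalled_segments = {"corporate"}  # segments that have any firewall rule at all

for device in devices:
    if device["login"] in DEFAULT_CREDENTIALS:
        print(f"{device['name']}: still using factory-default credentials")
    if device["segment"] not in firewalled_segments:
        print(f"{device['name']}: segment '{device['segment']}' has no firewall protection")
```

Checks this simple are the baseline the court is describing; the complaint alleges Wyndham skipped even these.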

The ruling holds out the possibility that threats of such actions by the FTC, which has been hiring superb security people in the last several years, might get corporations to adopt better cybersecurity and thereby make us all safer.

Which brings me to an issue I’ve been asking lots of lawyers about, without satisfactory answer, in other contexts.

The Cybersecurity Information Sharing Act prevents the federal government, as a whole, from using the cyber threat indicators and defensive measures (or lack thereof!) that companies voluntarily turn over under the act to bring enforcement actions against them.

(D) FEDERAL REGULATORY AUTHORITY.—

(i) IN GENERAL.—Except as provided in clause (ii), cyber threat indicators and defensive measures provided to the Federal Government under this Act shall not be directly used by any Federal, State, tribal, or local government to regulate, including an enforcement action, the lawful activities of any entity, including activities relating to monitoring, operating defensive measures, or sharing cyber threat indicators.

(ii) EXCEPTIONS.—

(I) REGULATORY AUTHORITY SPECIFICALLY RELATING TO PREVENTION OR MITIGATION OF CYBERSECURITY THREATS.—Cyber threat indicators and defensive measures provided to the Federal Government under this Act may, consistent with Federal or State regulatory authority specifically relating to the prevention or mitigation of cybersecurity threats to information systems, inform the development or implementation of regulations relating to such information systems.

(II) PROCEDURES DEVELOPED AND IMPLEMENTED UNDER THIS ACT.—Clause (i) shall not apply to procedures developed and implemented under this Act.

Given this precedent, could Wyndham — and other negligent companies — pre-empt any such FTC actions simply by sharing promiscuously as soon as they discovered the hack?

Could the FTC still sue Wyndham on the theory that it broke the law by claiming its “operating defensive measures” were more than they really were? Or would such suits — by any federal agency — be precluded under CISA, assuming companies shared the cyberattack data? Or would CISA close off this promising new avenue for forcing companies to provide at least minimal cybersecurity?

Update: Paul Rosenzweig’s post on the FTC decision is worth reading. Like him, I agree that FTC doesn’t yet have the resources to be the police on this matter, though I do think they have the smarts on security, unlike most other agencies.

How Does Duty to Warn Extend to Cyberattacks?

Steve Aftergood has posted a new directive from James Clapper mandating that Intelligence Community members warn individuals (be they corporate or natural persons) of a threat of death or serious bodily harm.

This Directive establishes in policy a consistent, coordinated approach for how the Intelligence Community (IC) will provide warning regarding threats to specific individuals or groups of intentional killing, serious bodily injury, and kidnapping.

The fine print on it is quite interesting. For example, if you’re a drug dealer, someone involved in violent crime, or at risk solely because you’re involved in an insurgency, the IC is not obliged to give you notice. Remember, the FBI did not alert members of Occupy Wall Street that someone was plotting to assassinate them. Did it (then) not do so because it considered Occupy an “insurgency”? Would it consider them one going forward?

But I’m most interested in what this should mean for hacking.

Here’s how the directive defines “serious bodily injury.”

Serious Bodily Injury means an injury which creates a substantial risk of death or which causes serious, permanent disfigurement or impairment.

As I have noted, NSA has secretly defined “serious bodily harm” to include threat to property — that is, threats to property constitute threats of bodily harm.

If so, a serious hack would represent a threat of bodily harm (and under NSA’s minimization procedures it could share this data). While much of the rest of the Directive talks about how to accomplish this bureaucratically (and the sources and methods excuses for not giving notice), this should suggest that if a company like Sony is at risk of a major hack, NSA would have to tell it (and the Directive states that the obligation applies to US persons and non-US persons alike, though Sony is in this context a US person).

So shouldn’t this amount to a mandate for cybersharing, all without the legal immunity offered corporations under CISA?

 

The Questions the NCSC Doesn’t Want to Answer

A few days ago the WaPo published a story on the OPM hack, focusing (as some earlier commentary already has) on the possibility China will alter intelligence records as a way to infiltrate agents or increase distrust.

It’s notable because it relies on the Director of the National Counterintelligence and Security Center, Bill Evanina. The article first presents his comments about that nightmare scenario — altered records.

“The breach itself is issue A,” said William “Bill” Evanina, director of the federal National Counterintelligence and Security Center. But what the thieves do with the information is another question.

“Certainly we are concerned about the destruction of data versus the theft of data,” he said. “It’s a different type of bad situation.” Destroyed or altered records would make a security clearance hard to keep or get.

And only then does it relay Evanina’s comments on the more general counterintelligence concern raised by the heist: that China will use the data to target people for recruitment. Evanina explains he’s more worried about those without extensive operational security training than about those overseas who have that experience.

While dangers from the breach for intelligence community workers posted abroad have “the highest risk equation,” Evanina said “they also have the best training to prevent nefarious activity against them. It’s the individuals who don’t have that solid background and training that we’re most concerned with, initially, to provide them with awareness training of what can happen from a foreign intelligence service to them and what to look out for.”

Using stolen personal information to compromise intelligence community members is always a worry.

“That’s a concern we take seriously,” he said.

Curiously, given his concern about those individuals without a solid CI background, Evanina provides no hint of an answer to the questions posed to him in a Ron Wyden letter last week.

  1. Did the NCSC identify OPM’s security clearance database as a counterintelligence vulnerability prior to these security incidents?
  2. Did the NCSC provide OPM with any recommendations to secure this information?
  3. At least one official has said that the background investigation information compromised in the second OPM hack included information on individuals as far back as 1985. Has the NCSC evaluated whether the retention requirements for background investigation information should be reduced to mitigate the vulnerability of maintaining personal information for a significant period of time? If not, please explain why existing retention periods are necessary?

Evanina has asserted he’s particularly worried about the kind of people who would have clearance but not be in one of the better protected (CIA) databases. But was he particularly worried about those people — and therefore OPM’s databases — before the hack?

Air Travel, Disrupted: Welcome to the New Normal

[graphic: Live radar from 15-AUG-2015, via @FlightRadar24]


Air travelers along the U.S. east coast experienced flight cancellations and delays this past Saturday, due to initially unspecified “technical issues” attributed to the air traffic control system.

Beginning sometime in the late morning, hundreds of flights were affected by the problem. The FAA’s service was restored around 4:00 p.m. EDT, though it would take hours longer for the airlines to reschedule flights and flyers.

Although 492 flights were delayed and 476 flights were canceled, the FAA’s Twitter account did not mention the outage or mass flight disruptions until 4:06 p.m., when it said service had been restored.

In a tweet issued long after the outage began, the Federal Aviation Administration said, “The FAA is continuing its root cause analysis to determine what caused the problem and is working closely with the airlines to minimize impacts to travelers.”

The FAA’s Safety Briefing Twitter account made no mention at all of the outage, though it has advised of GPS system testing at various locations across the country.

Various news outlets offered conflicting accounts: first the airports were blamed, then the FAA, and the public knew nothing at all except that they were stuck for an indeterminate period.

Get used to this. There’s no sign the FAA will change its communications methodology, even after several air travel disruptions this year alone “due to technical issues” or whatever catchy nondescript phrase the airlines/airports/government choose to use.

Is this acceptable? Hell no. Just read the last version of WaPo’s article about the outage; the lack of communication causes as much difficulty as the loss of service. How can travelers make alternative plans when they hear nothing at all about the underlying problem? They’re stuck wherever they are, held hostage by crappy practices if not policies.

It doesn’t help that the media struggles to cover what appears to be a technology problem. The Washington Post went back and forth as to the underlying cause. The final version of its article about this disruption is clean of any mention of the FAA’s En Route Automation Modernization (ERAM) system, though earlier versions mentioned an upgrade to or component of that system as suspect.

Tim Pawlenty Makes It Clear Banks Want Immunity for Negligence

The business community is launching a big push for the Cybersecurity Information Sharing Act over the recess, with the Chamber of Commerce pushing hard and the Financial Services Roundtable’s Tim Pawlenty weighing in today.

Pawlenty is fairly explicit about why banks want the bill: so that if they’re attacked and share data with the government, they cannot be sued for negligent maintenance of data.

“If I think you’ve attacked me and I turn that information over to the government, is that going to be subject to the Freedom of Information Act?” he said, highlighting a major issue for senators concerned about privacy.

“If so, are the trial lawyers going to get it and sue my company for negligent maintenance of data or cyber defenses?” Pawlenty continued. “Are my regulators going to get it and come back and throw me in jail, or fine me or sanction me? Is the public going to have access to it? Are my competitors going to have access to it? Are they going to be able to see my proprietary cyber systems in a way that will give up competitive advantage?”

CISA has been poorly framed, he explained.

“It should be called the cyber teamwork bill,” Pawlenty said.

As I’ve pointed out repeatedly, what the banks would get here is far more than they get under the Bank Secrecy Act, where they get immunity for sharing data, but are required to do certain things to protect against financial crimes.

Here, banks (and other corporations, but never natural people) get immunity without having to have done a damn thing to keep their customers safe.

Which is why CISA is counterproductive for cybersecurity.

How Would Microsoft’s User Agreement Work with CISA?

When Jim Comey talks about wanting back doors into Apple products, he often claims that some software providers have managed to put back doors into allegedly secure products.

I keep thinking of that claim when I hear about the many privacy problems with Windows 10 — including the most recent report that it will send data to Microsoft even if you’ve disabled some of the spy features on the operating system. Is this the kind of thing Comey had in mind?

I’m even more intrigued given the report that Microsoft changed its Services Agreement to permit it to scan your machine looking for counterfeits.

Sometimes you’ll need software updates to keep using the Services. We may automatically check your version of the software and download software updates or configuration changes, including those that prevent you from accessing the Services, playing counterfeit games, or using unauthorized hardware peripheral devices. You may also be required to update the software to continue using the Services.

Add that to this part of the Services Agreement, which permits Microsoft to retain, transmit, and reformat your content, in part “to protect you and the Services.”

To the extent necessary to provide the Services to you and others, to protect you and the Services, and to improve Microsoft products and services, you grant to Microsoft a worldwide and royalty-free intellectual property license to use Your Content, for example, to make copies of, retain, transmit, reformat, display, and distribute via communication tools Your Content on the Services.

The two together seem to broadly protect not just Microsoft’s sharing of data with the government under CISA, but also its deployment of defensive measures, as permitted under the bill.

(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, operate a defensive measure that is applied to—

(A) an information system of such private entity in order to protect the rights or property of the private entity;

(B) an information system of another entity upon written consent of such entity for operation of such defensive measure to protect the rights or property of such entity; and

This Services Agreement would seem to imply consent for automatic updates, including those that disable what gets called a cybercrime under the bill (that is, counterfeit software), and a general consent to let Microsoft do what it needs to in order to “protect you and the Services.”

To be fair, the counterfeit clause is just one adopted from Xbox so it may not reflect anything new at all.

But given the presumption that some form of CISA will pass after Congress returns next month, I wonder how these clauses will work under CISA.