Posts

Richard Burr Wants to Prevent Congress from Learning if CISA Is a Domestic Spying Bill

As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get FISA to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.

So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.

The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.

Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.

Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.

Prevent Congress from learning how often CISA data is used for law enforcement

Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way they currently refuse to. It would also create a record of investigative source that, in the unlikely event that a defendant actually got a judge to support demands for discovery on such things, would make it very difficult to use parallel construction to hide CISA-sourced data.

In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.

Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.

Prevent Congress from learning how often CISA sharing falls short of the current NSA minimization standard

Tester also asked for reporting (see clause V) on how often personal information, or information identifying a specific person, was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” That “necessary to describe or mitigate” language is quite close to the standard NSA currently must meet before it can share US person identities (NSA can share such data if it is necessary to understand the intelligence; Tester’s amendment, though, would apply to all people, not just US persons).

But Tester’s standard is different from the standard of sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected in connection with a cybersecurity threat, even victim data (including the data a hacker was trying to steal), is “related to” that threat.

Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got such notices. But those notices will go out only if the government affirmatively learns that data unrelated to a threat got shared anyway, and that will almost never happen. So the number will be close to zero, instead of the tens of thousands, at least, that Tester’s measure would likely have shown.

So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.

Prevent Congress from learning how well the privacy strips — at both private sector and government — are working

Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data containing personal information that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown, at each stage of the data sharing, how much personal data was getting shared.

Burr and DiFi stripped that entirely.

Prevent Congress from learning how often “defensive measures” cause damage

Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.

Burr and DiFi stripped that really critical measure.

Prevent Congress from learning whether companies are bypassing the preferred sharing method

Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.

Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.

That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things most important for judging whether corporations and agencies are abusing this bill; they don’t want measures that would reveal whether this bill does harm.

Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, shared only with the Intelligence Committee unless someone asks nicely for it).

But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.

Read more

Is CISA the Upstream Cyber Certificate NSA Wanted But Didn’t Really Get?

I’ve been wracking my brain to understand why the Intel Community has been pushing CISA so aggressively.

I get why the Chamber of Commerce is pushing it: it sets up a regime under which businesses will get broad regulatory immunity in exchange for voluntarily sharing their customers’ data, even if they’re utterly negligent from a security standpoint, while also making it less likely that information their customers could use to sue them would become public. For the companies, it’s about sharply curtailing the risk of (charitably) having imperfect network security or (more realistically, in some cases) being outright negligent. CISA will minimize some of the business costs of operating in an insecure environment.

But why — given that it makes it more likely businesses will wallow in negligence — is the IC so determined to have it, especially when generalized sharing of cyber threat signatures has proven ineffective in preventing attacks, and when there are far more urgent things the IC should be doing to protect themselves and the country?

Richard Burr and Dianne Feinstein’s move the other day, which in the guise of ensuring DHS gets to continue to scrub data on intake instead gives the rest of the IC veto power over that scrub (and which almost certainly means the bill is substantially a means of eliminating the privacy role DHS currently plays), leads me to believe the IC plans to use this as it might have used (or might be using) a cyber certificate under upstream 702.

Other accounts of upstream 702 and CISA don’t account for John Bates’ 2011 ruling

Since NYT and ProPublica caught up to my much earlier reporting on the use of upstream 702 for cyber, people have assumed that CISA would work with upstream 702 authority to magnify the way upstream 702 works. Jonathan Mayer described how this might work.

This understanding of the NSA’s domestic cybersecurity authority leads to, in my view, a more persuasive set of privacy objections. Information sharing legislation would create a concerning surveillance dividend for the agency.

Because this flow of information is indirect, it prevents businesses from acting as privacy gatekeepers. Even if firms carefully screen personal information out of their threat reports, the NSA can nevertheless intercept that information on the Internet backbone.

Note that Mayer’s model assumes the Googles and Verizons of the world make an effort to strip private information, after which NSA would use the signature turned over to the government under CISA to go get the private information just stripped out. But neither Mayer’s model nor the ProPublica/NYT story considered how the 2011 John Bates ruling on upstream collection might hinder that model, particularly as it pertains to domestically collected data.

As I laid out back in June, NSA’s optimistic predictions they’d soon get an upstream 702 certificate for cyber came in the wake of John Bates’ October 3, 2011 ruling that the NSA had illegally collected US person data. Of crucial importance, Bates judged that data obtained in response to a particular selector was intentionally, not incidentally, collected (even though the IC and its overseers like to falsely claim otherwise), even data that just happened to be collected in the same transaction. Pointing back to his July 2010 opinion on the Internet dragnet, Bates said that disclosing such information, even just to the court or internally, would be a violation of 50 USC 1809(a), which he used as leverage to make the government identify and protect any US person data collected using upstream collection before otherwise using it. I believe this decision established a precedent for upstream 702 that would make it very difficult for FISC to permit the use of cyber signatures that happened to be collected domestically (which would count as intentional domestic collection) without rigorous minimization procedures.

The government, at a time when it badly wanted a cyber certificate, considered appealing his decision, but ultimately did not. Instead, they destroyed the data they had illegally collected and — in what was almost certainly a related decision — destroyed all the PATRIOT-authorized Internet dragnet data at the same time, December 2011. Bates did permit the government to keep collecting upstream data, but only under more restrictive minimization procedures.

Did FISC approve a cyber certificate but with sharp restrictions on retention and dissemination?

Neither ProPublica/NYT nor Mayer claimed NSA had obtained an upstream cyber certificate (though many other people have assumed it did). We actually don’t know, and the evidence is mixed.

Even as the government was scrambling to implement new upstream minimization procedures to satisfy Bates’ order, NSA had another upstream violation. That might reflect the government informing Bates, for the first time, that it had been using upstream to collect on cyber signatures (there’s no sign they informed him during the 2011 discussion, though the 2011 minimization procedures may reflect that they already had), or it might represent some other kind of illegal upstream collection. When the government got Congress to reauthorize FAA that year, it did not inform Congress that it was using, or intended to use, upstream collection to collect cyber signatures. Significantly, even as it debated FAA, Congress considered but rejected the first of the predecessor bills to CISA.

My guess is that the FISC did approve cyber collection, but did so with some significant limitations, akin to, or perhaps even more restrictive than, the restrictions on multiple communication transactions (MCTs) required in 2011. I say that, in part, because of language in USA F-ReDux (section 301) permitting the government to use information improperly collected under Section 702 if the FISA Court imposed new minimization procedures. While that might have just referred back to the hypothetical 2011 example (in which the government had to destroy all the data), I think it as likely that Congress was trying to permit the government to retain data questioned later.

More significantly, the 2014 NSA, FBI, and CIA minimization procedures contain some version of this language, which appears to be new from the 2011 procedures.

Additionally, nothing in these procedures shall restrict NSA’s ability to conduct vulnerability or network assessments using information acquired pursuant to section 702 of the Act in order to ensure that NSA systems are not or have not been compromised. Notwithstanding any other section in these procedures, information used by NSA to conduct vulnerability or network assessments may be retained for one year solely for that limited purpose. Any information retained for this purpose may be disseminated only in accordance with the applicable provisions of these procedures.

That is, the FISC approved new procedures that permit the retention of vulnerability information for use domestically, but it placed even more restrictions on it (retention for just one year, retention solely for the defense of that agency’s network, which presumably prohibits its use for criminal prosecution, not to mention its dissemination to other agencies, other governments, and corporations) than it had on MCTs in 2011.

To be sure, there is language in both 2011 and 2014 NSA MPs that permits the agency to retain and disseminate domestic communications if it is necessary to understand a communications security vulnerability.

the communication is reasonably believed to contain technical data base information, as defined in Section 2(i), or information necessary to understand or assess a communications security vulnerability. Such communication may be provided to the FBI and/or disseminated to other elements of the United States Government. Such communications may be retained for a period sufficient to allow a thorough exploitation and to permit access to data that are, or are reasonably believed likely to become, relevant to a current or future foreign intelligence requirement. Sufficient duration may vary with the nature of the exploitation.

But at least on its face, that language is about retaining information to exploit (offensively) a communications vulnerability. Whereas the more recent language — which is far more restrictive — appears to address retention and use of data for defensive purposes.

The 2011 ruling strongly suggested that FISC would interpret Section 702 to prohibit much of what Mayer envisioned in his model. And the addition to the 2014 minimization procedures leads me to believe FISC did approve very limited use of Section 702 for cyber security, but with such significant limitations on it (again, presumably stemming from 50 USC 1809(a)’s prohibition on disclosing data intentionally collected domestically) that the IC wanted to find another way. In other words, I suspect NSA (and FBI, which was working closely with NSA to get such a certificate in 2012) got their cyber certificate, only to discover it didn’t legally permit them to do what they wanted to do.

CISA is the new and improved cyber-FISA

And while I’m not certain, I believe that in ensuring that DHS’ scrubs get dismantled, CISA gives the IC a way to do what it would have liked to with a FISA 702 cyber certificate.

Let’s go back to Mayer’s model of what the IC would probably like to do: a private company finds a threat and removes private data, leaving just a selector, after which NSA deploys the selector on backbone traffic, which then reproduces the private data, presumably on whatever parts of the Internet backbone NSA has access to via its upstream collection (which is understood to be infrastructure owned by the telecoms).

But in fact, Step 4 of Mayer’s model — NSA deploys the signature as a selector on the Internet backbone — is not done by the NSA. It is done by the telecoms (that’s the Section 702 cooperation part). So his model would really be private business > DHS > NSA > private business > NSA > treatment under NSA’s minimization procedures if the data were handled under upstream 702. Ultimately, the backbone operator is still going to be the one scanning the Internet for more instances of that selector; the question is just how much data gets sucked in with it and what the government can do once it gets it.

And that’s important because CISA codifies private companies’ authority to do that scan.

For all the discussion of CISA and its definition, there has been little discussion of what might happen at the private entities. But the bill affirmatively authorizes private entities to monitor their systems, broadly defined, for cybersecurity purposes.

(a) AUTHORIZATION FOR MONITORING.—

(1) IN GENERAL.—Notwithstanding any other provision of law, a private entity may, for cybersecurity purposes, monitor—

(A) an information system of such private entity;

(B) an information system of another entity, upon the authorization and written consent of such other entity;

(C) an information system of a Federal entity, upon the authorization and written consent of an authorized representative of the Federal entity; and

(D) information that is stored on, processed by, or transiting an information system monitored by the private entity under this paragraph.

(2) CONSTRUCTION.—Nothing in this subsection shall be construed—

(A) to authorize the monitoring of an information system, or the use of any information obtained through such monitoring, other than as provided in this title; or

(B) to limit otherwise lawful activity.

Defining monitor this way:

(14) MONITOR.—The term ‘‘monitor’’ means to acquire, identify, or scan, or to possess, information that is stored on, processed by, or transiting an information system.

That is, CISA affirmatively permits private companies to scan, identify, and possess cybersecurity threat information transiting or stored on their systems. It permits private companies to conduct precisely the same kinds of scans the government currently obligates telecoms to do under upstream 702, covering data both transiting their systems (which for the telecoms would be the backbone) and stored on their systems (so cloud storage). To be sure, big telecom and Internet companies do that anyway for their own protection, though this bill may extend the authority into cloud servers and competing tech companies’ content that transits the telecom backbone. And it specifically does so in anticipation of sharing the results with the government, with very limited requirements to scrub the data beforehand.
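To make concrete what this kind of “monitoring” amounts to in practice, here is a minimal, purely hypothetical sketch (every name and value is mine, drawn neither from CISA nor from any real system): matching a known threat signature against data transiting or stored on a network, where a hit returns the entire payload, personal data and all.

```python
# Hypothetical sketch of signature-based "monitoring": scan payloads
# (transiting traffic or stored data) for a known threat indicator.
# All names and values here are illustrative, not from CISA or any
# real implementation.

SIGNATURE = b"\xde\xad\xbe\xef"  # a hypothetical cyber threat indicator

def monitor(payloads):
    """Return every payload containing the signature.

    Note that a hit returns the whole payload -- including any victim
    or personal data traveling with it -- not just the signature.
    """
    return [p for p in payloads if SIGNATURE in p]

traffic = [
    b"GET /index.html HTTP/1.1",                      # innocuous
    b"user=alice&card=4111111111111111" + SIGNATURE,  # hit, with personal data
]

print(len(monitor(traffic)))  # 1
```

The point of the sketch is the asymmetry: the scan is keyed on a narrow indicator, but what it collects, and what the bill allows to be shared onward, is the full surrounding data.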

Thus, CISA permits the telecoms to do the kinds of scans they currently do for foreign intelligence purposes for cybersecurity purposes, in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to continue doing so, only now the results can be turned over to and used by the government regardless of whether the signature has a foreign tie. Unlike FISA, CISA permits the government to collect entirely domestic data.

Of course, there’s no requirement that the telecoms scan for every signature the government shares with them, or that they share the results with the government. But both Verizon and AT&T have a significant chunk of federal business (which just got put out for rebid on a contract that will amount to $50 billion), and they surely would be asked to scan the networks supporting federal traffic for those signatures (remember, this entire model of scanning domestic backbone traffic got implicated in Qwest losing a federal bid, which led to Joe Nacchio’s prosecution). So they’ll be scanning some part of the networks they operate with the signatures. CISA just makes it clear they can also scan their non-federal backbone if they want to. And the telecoms are outspoken supporters of CISA, so we should presume they plan to share promiscuously under this bill.

Assuming they do so, CISA offers several more improvements over FISA.

First, and perhaps most important for the government, there are no pesky judges. The FISC gets a lot of shit for being a rubber stamp, but for years its judges have tried to keep the government operating in the vicinity of the Fourth Amendment through their role in reviewing minimization procedures. Even John Bates, who was largely a pushover for the IC, succeeded in getting the government to agree that it can’t disseminate domestic data it intentionally collected. And if I’m right that the FISC gave the government a cyber certificate but sharply limited how it could use that data, then it did so on precisely this issue. Significantly, CISA continues a trend we already saw in USA F-ReDux, wherein the Attorney General, rather than a judge, gets to decide whether privacy procedures (no longer named minimization procedures!) are adequate. Equally significant, while CISA permits the use of CISA-collected data for a range of prosecutions, unlike FISA it requires no notice to defendants of where the government obtained that data.

In lieu of judges, CISA envisions PCLOB and Inspectors General conducting the oversight (as well as audits being possible though not mandated). As I’ll show in a follow-up post, there are some telling things left out of those reviews. Plus, the history of DOJ’s Inspector General’s efforts to exercise oversight over such activities offers little hope these entities, no matter how well-intentioned, will be able to restrain any problematic practices. After all, DOJ’s IG called out the FBI in 2008 for not complying with a 2006 PATRIOT Act Reauthorization requirement to have minimization procedures specific to Section 215, but it took until 2013, with three years of intercession from FISC and leaks from Edward Snowden, before FBI finally complied with that 2006 mandate. And that came before FBI’s current practice of withholding data from its IG and even some information in IG reports from Congress.

In short, given what we know of the IC’s behavior when there was a judge with some leverage over its actions, there is absolutely zero reason to believe that any abuses would be stopped under a system without any judicial oversight. The Executive Branch cannot police itself.

Finally, there’s the question of what happens at DHS. No matter what you think about NSA’s minimization procedures (and they do have flaws), they do ensure that data that comes in through NSA doesn’t get broadly circulated in a way that identifies US persons. The IC has increasingly bypassed this control since 2007 by putting FBI at the front of data collection, which means data can be shared broadly even outside of the government. But FISC never permitted the IC to do this with upstream collection. So any content (metadata was different) on US persons collected under upstream collection would be subjected to minimization procedures.

This CISA model eliminates that control too. After all, CISA, as written, would let FBI and NSA veto any scrub (including of content) at DHS. And incoming data (again, probably including content) would be shared immediately not only with FBI (which has been the vehicle for sharing NSA data broadly) but also Treasury and ODNI, which are both veritable black holes from a due process perspective. And what few protections for US persons are tied to a relevance standard that would be met by virtue of a tie to that selector. Thus, CISA would permit the immediate sharing, with virtually no minimization, of US person content across the government (and from there to private sector and local governments).

I welcome corrections to this model; I presume I’ve overstated how much of an improvement over FISA this program would be. But if this analysis is correct, then CISA would give the IC everything it would have wanted from a cybersecurity certificate under Section 702, with none of the limits, inadequate as they were, that such a certificate would have had (and may in fact have). CISA would provide an administrative way to spy on US person (domestic) content, all without any judicial review.

All of which brings me back to why the IC wants this so badly. In at least one case, the IC did manage to use a combination of upstream and PRISM collection to stop an attempt to steal large amounts of data from a defense contractor. That doesn’t mean it’ll be able to do so at scale, but if by offering various kinds of immunity it can get all backbone providers to play along, it might be able to improve on that performance.

But CISA isn’t so much a cybersecurity bill as it is an Internet domestic spying bill, with permission to spy on a range of nefarious activities in cyberspace, including kiddie porn and IP theft. This bill, because it permits the spying on US person content, may be far more useful for that purpose than preventing actual hacks. That is, it won’t fix the hacking problem (it may make it worse by gutting Federal authority to regulate corporate cyber hygiene). But it will help police other kinds of activity.

If I’m right, the IC’s insistence that it needs CISA, in the name of a cybersecurity it does not necessarily intend to accomplish, makes more sense.

Update: This post has been tweaked for clarity.

Update, November 5: I should have written this post before I wrote this one. In it, I point to language in the August 26, 2014 Thomas Hogan opinion reflecting earlier approval, at least in the FBI minimization procedures, to share cyber signatures with private entities. The first approval was on September 20, 2012. The FISC approved the version still active in 2014 on August 30, 2013. (See footnote 19.) That certainly suggests FISC approved cyber sharing more broadly than the 2011 opinion might have suggested, though I suspect it still included more restrictions than CISA would. Moreover, if the language only got approved for the FBI minimization procedures, it would apply just to PRISM production, given that the FBI does not (or at least didn’t used to) get unminimized upstream production.

The Pro-Scrub Language Added to CISA Is Designed to Eliminate DHS’ Scrub

I’ve been comparing the Manager’s Amendment (MA) Richard Burr and Dianne Feinstein introduced Wednesday with the old bill.

A key change, one Burr and Feinstein have highlighted in their comments on the floor, is the even more central integration of DHS into the data intake process. Just as one example, the MA adds the Secretary of Homeland Security to the process of setting up the procedures about information sharing.

Not later than 60 days after the date of the enactment of this Act, the Attorney General and the Secretary of Homeland Security shall, in coordination with the heads of the appropriate Federal entities, develop and submit to Congress interim policies and procedures relating to the receipt of cyber threat indicators and defensive measures by the Federal Government. [my emphasis]

That change is applied throughout.

But there’s one area where adding more DHS involvement appears to be just a show: where the bill permits DHS to conduct a scrub of the data on intake (as Feinstein described, this was an attempt to integrate Tom Carper’s and Chris Coons’ amendments doing just that).

This is also an issue DHS raised in response to Al Franken’s concerns about how CISA would affect their current intake procedure.

To require sharing in “real time” and “not subject to any delay [or] modification” raises concerns relating to operational analysis and privacy.

First, it is important for the NCCIC to be able to apply a privacy scrub to incoming data, to ensure that personally identifiable information unrelated to a cyber threat has not been included. If DHS distributes information that is not scrubbed for privacy concerns, DHS would fail to mitigate and in fact would contribute to the compromise of personally identifiable information by spreading it further. While DHS aims to conduct a privacy scrub quickly so that data can be shared in close to real time, the language as currently written would complicate efforts to do so. DHS needs to apply business rules, workflows and data labeling (potentially masking data depending on the receiver) to avoid this problem.

Second, customers may receive more information than they are capable of handling, and are likely to receive large amounts of unnecessary information. If there is no layer of screening for accuracy, DHS’ customers may receive large amounts of information with dubious value, and may not have the capability to meaningfully digest that information.

While the current Cybersecurity Information Sharing Act recognizes the need for policies and procedures governing automatic information sharing, those policies and procedures would not effectively mitigate these issues if the requirement to share “not subject to any delay [or] modification” remains.

To ensure automated information sharing works in practice, DHS recommends requiring cyber threat information received by DHS to be provided to other federal agencies in “as close to real time as practicable” and “in accordance with applicable policies and procedures.”

Effectively, DHS explained that if it was required to share data in real time, it would be unable to scrub out unnecessary and potentially burdensome data, and suggested that the “real time” requirement be changed to “as close to real time as practicable.”

But compare DHS’s concerns with the actual language added to the description of the information-sharing portal (the new language is in italics).

(3) REQUIREMENTS CONCERNING POLICIES AND PROCEDURES.—Consistent with the guidelines required by subsection (b), the policies and procedures developed and promulgated under this subsection shall—

(A) ensure that cyber threat indicators shared with the Federal Government by any entity pursuant to section 104(c) through the real-time process described in subsection (c) of this section—

(i) are shared in an automated manner with all of the appropriate Federal entities;

(ii) are only subject to a delay, modification, or other action due to controls established for such real-time process that could impede real-time receipt by all of the appropriate Federal entities when the delay, modification, or other action is due to controls—

(I) agreed upon unanimously by all of the heads of the appropriate Federal entities;

(II) carried out before any of the appropriate Federal entities retains or uses the cyber threat indicators or defensive measures; and

(III) uniformly applied such that each of the appropriate Federal entities is subject to the same delay, modification, or other action; and

This section permits any one of the “appropriate Federal agencies” to veto such a scrub. Presumably, the language only exists in the bill because one of the “appropriate Federal agencies” has already vetoed the scrub. NSA (in the guise of “appropriate Federal agency” DOD) would be the one that would scare people, but such a veto would be equally likely to come from FBI (in the guise of “appropriate Federal agency” DOJ), and given Tom Cotton’s efforts to send this data even more quickly to FBI, that’s probably who vetoed it.

If you had any doubts that the Intelligence Community is ordering up what it wants in this bill, the language permitting it a veto over privacy protections should dispel them.

On top of NSA and FBI’s veto authority, there’s an intentional logical problem here. DHS is one of the “appropriate Federal agencies,” but DHS is also the entity that would presumably do the scrub. Yet if DHS cannot retain data before any other agency does, it’s not clear how it could perform a scrub at all.

In short, this seems designed to lead people to believe there might be a scrub (or rather, that under CISA, DHS would continue the privacy scrub it currently performs, though it is just beginning to automate it) when, for several reasons, the bill seems to rule that out. And to rule it out because one “appropriate Federal agency” (as I said, I suspect FBI) plans to veto any such plan.

So it has taken this Manager’s Amendment to explain why we need CISA: to make sure that DHS doesn’t do the privacy scrubs it is currently doing.

I’ll explain in a follow-up post why it would be so important to eliminate DHS’ current scrub on incoming data.

CISA Moves: A Summary

This afternoon, Richard Burr moved the Cybersecurity Information Sharing Act forward by introducing a manager’s amendment with limited privacy tweaks (permitting a scrub at DHS and limiting the use of CISA information to cyber crimes, though those still include preventing threats to property), while a bunch of stronger privacy-fix amendments, plus a Tom Cotton amendment and a horrible Sheldon Whitehouse one, were called up as non-germane amendments requiring 60 votes.

Other than that, Burr, Dianne Feinstein, and Ron Wyden spoke on the bill.

Burr did some significant goalpost moving. Whereas in the past he had suggested that CISA might have prevented the Office of Personnel Management hack, today he suggested CISA would have limited how much data got stolen in a series of hacks. His claim is still false (in almost all the hacks he discussed, the attack vector was already known, but knowing it did nothing to stop the ongoing hack).

Burr also likened this bill to a neighborhood watch, where everyone in the neighborhood looks out for the entire neighborhood. He neglected to mention that that neighborhood watch would also include the nosy granny type who reports every brown person in the neighborhood, and features self-defense, just like George Zimmerman’s neighborhood watch concept does. Worse, Burr suggested that those not participating in his neighborhood watch had no protection, effectively suggesting that some of the companies best at securing themselves — like Google — were not protecting customers. Burr even suggested he didn’t know anything about the companies that oppose the bill, which is funny, because Twitter opposes the bill, and Burr has a Twitter account.

Feinstein was worse. She mentioned the OPM hack and then really suggested that a series of other hacks — including both the Sony hack and the DDOS attacks on online banking sites that stole no data! — were worse than the OPM hack.

Yes, the Vice Chair of SSCI really did say that the OPM hack was less serious than a bunch of other hacks that didn’t affect the national security of this country. Which, if I were one of the 21 million people whose security clearance data had been compromised, would make me very, very furious.

DiFi also used language that made it clear she doesn’t really understand how the information sharing portal works. She said something like, “Once cyber information enters the portal it will move at machine speed to other federal agencies,” as if a conveyor belt will carry information from DHS to FBI.

Wyden mostly pointed out that this bill doesn’t protect privacy. But he did call out Burr on his goalpost moving on whether the bill would prevent (his old claim) or just limit the damage of (his new one) attacks that it wouldn’t affect at all.

Wyden did, however, object to unanimous consent because Whitehouse’s crappy amendment was being given a vote, which led Burr to complain and insist that Wyden wasn’t going to hold this up.

Finally, Burr came back on the floor, not only to badmouth companies that oppose this bill again (and insist it was voluntary so they shouldn’t care) but also to do what I thought even he wouldn’t do: suggest we need to pass CISA because a 13-year-old stoner hacked the CIA Director.

BREAKING: What emptywheel Reported Two Years Ago

The NYT today:

The National Security Agency has used its bulk domestic phone records program to search for operatives from the government of Iran and “associated terrorist organizations” — not just Al Qaeda and its allies — according to a document obtained by The New York Times.

[snip]

The inclusion of Iran and allied terrorist groups — presumably the Shiite group Hezbollah — and the confirmation of the names of other participating companies add new details to public understanding of the once-secret program. The Bush administration created the program to try to find hidden terrorist cells on domestic soil after the attacks of Sept. 11, 2001, and government officials have justified it by using Al Qaeda as an example.

emptywheel, 15 months ago:

I want to post Dianne Feinstein’s statement about what Section 215 does because, well, it seems Iran is now a terrorist. (This is around 1:55)

The Section 215 Business Records provision was created in 2001 in the PATRIOT for tangible things: hotel records, credit card statements, etcetera. Things that are not phone or email communications. The FBI uses that authority as part of its terrorism investigations. The NSA only uses Section 215 for phone call records — not for Google searches or other things. Under Section 215, NSA collects phone records pursuant to a court record. It can only look at that data after a showing that there is a reasonable, articulable [suspicion] that a specific individual is involved in terrorism, actually related to al Qaeda or Iran. At that point, the database can be searched. But that search only provides metadata, of those phone numbers. Of things that are in the phone bill. That person, um [flips paper] So the vast majority of records in the database are never accessed, and are deleted after a period of five years. To look at, or use content, a court warrant must be obtained.

Is that a fair description, or can you correct it in any way?

Keith Alexander: That is correct, Senator. [underline/italics added]

Some time after this post, Josh Gerstein reported on Keith Alexander confirming the Iran targeting.

The NYT today:

One document also reveals a new nugget that fills in a timeline about surveillance: a key date for a companion N.S.A. program that collected records about Americans’ emails and other Internet communications in bulk. The N.S.A. ended that program in 2011 and declassified its existence after the Snowden disclosures.

In 2009, the N.S.A. realized that there were problems with the Internet records program as well and turned it off. It then later obtained Judge Bates’s permission to turn it back on and expand it.

When the government declassified his ruling permitting the program to resume, the date was redacted. The report says it happened in July 2010.

emptywheel in November 2013:

I’ve seen a lot of outright errors in the reporting on the John Bates opinion authorizing the government to restart the Internet metadata program released on Monday.

Bates’ opinion was likely written in July 2010.

[snip]

It had to have been written after June 21, 2010 and probably dates to between June 21 and July 23, 2010, because page 92 footnote 78 cites Holder v. HLP (which was released on June 21), but uses a “WL” citation; by July 23 the “S. Ct.” citation was available. (h/t to Document Exploitation for this last observation).

So: it had to have been written between June 21, 2010 and October 3, 2011, but was almost certainly written sometime in the July 2010 timeframe.

The latter oversight is understandable, as this story — which has been cited in court filings — misread Claire Eagan’s discussions of earlier bulk opinions, which quoted several sentences of Bates’ earlier one (though it was not among the stories that really botched the timing of the Bates opinion).

In September, the Obama administration declassified and released a lengthy opinion by Judge Claire Eagan of the surveillance court, written a month earlier and explaining why the panel had given legal blessing to the call log program. A largely overlooked passage of her ruling suggested that the court has also issued orders for at least two other types of bulk data collection.

Specifically, Judge Eagan noted that the court had previously examined the issue of what records are relevant to an investigation for the purpose of “bulk collections,” plural. There followed more than six lines that were censored in the publicly released version of her opinion.

Since then, there have been multiple pieces of evidence confirming my earlier July 2010 deduction.

The big news in the NYT story (though not necessarily the NYT documents, which I’ll return to) is that in 2010, Verizon Wireless also received phone dragnet orders. I’ll return to what that tells us too.

But the news that Iran was targeted under the phone dragnet was confirmed publicly — and reported here — in a prepared statement from the Senate Intelligence Chair and confirmed by the Director of the National Security Agency a week after the first Snowden leak story.

Was the White House Involved in the Decision to Unapologize to Dianne Feinstein?

A must-read Jason Leopold piece on the fight between the Senate Intelligence Committee and CIA over the torture report reveals that John Brennan apologized for hacking the SSCI website — before he unapologized.

John Brennan was about to say he was sorry.

On July 28, 2014, the CIA director wrote a letter to senators Dianne Feinstein and Saxby Chambliss — the chairwoman of the Senate Intelligence Committee (SSCI) and the panel’s ranking Republican, respectively. In it, he admitted that the CIA’s penetration of the computer network used by committee staffers reviewing the agency’s torture program — a breach for which Feinstein and Chambliss had long demanded accountability — was improper and violated agreements the Intelligence Committee had made with the CIA.

[snip]

“I recently received a briefing on the [OIG’s] findings, and want to inform you that the investigation found support for your concern that CIA staff had improperly accessed the [Intelligence Committee] shared drive on the RDINet [an acronym for rendition, detention, and interrogation] when conducting a limited search for CIA privileged documents,” Brennan wrote. “In particular, the [OIG] judged that Agency officers’ access to the… shared drive was inconsistent with the common understanding reached in 2009 between the Committee and the Agency regarding access to RDINet. Consequently, I apologize for the actions of CIA officers…. I am committed to correcting the shortcomings that this report has revealed.”

But Brennan didn’t sign or send the apology letter.

Instead, four days later, he sent Feinstein and Chambliss a different letter — one without an apology or admission that the search of their computer network was improper.

Leopold includes the letter as an image in his story (and also at page 299 in the Scribd embed). The letter he did send appears at page 11 of the embed.

In addition to the dramatically different content, the later letter does not include — as the earlier one did — notice that carbon copies of the letter were sent to DNI James Clapper, White House Counsel Neil Eggleston, and CIA’s Inspector General David Buckley.

Screen Shot 2015-08-12 at 1.55.19 PM

You can see the earlier letter (see page 298) was sent by some emoticon-wielding (presumed) Assistant who explained — at 4:32 that same day — “Sending anyway, Just in case you need it soft copy for any reason. :)”

Screen Shot 2015-08-12 at 2.29.35 PM


It’s as if by that point the CIA had already decided to pursue a different option (which, if we can believe the CIA’s currently operative story to Leopold, was to apologize to Senator Feinstein in person rather than memorialize such an apology in writing).

But I wonder … given that they were going to include Eggleston on the original but saw no need to include him (and Clapper and Buckley) on the finalized letter … was the White House in the loop in the decision to unapologize?

As Leopold reminds us in his story, Brennan looped Chief of Staff Denis McDonough in before the January searches of SSCI’s network, implicating (though insulated by two degrees of separation, if we believe the CIA’s story) the White House in the decision to spy on SSCI. Was the White House included in the decision on whether to apologize to Dianne Feinstein?

GM Supports Obtaining Cybersecurity Immunity Just after Hack Vulnerability Revealed

Dianne Feinstein just gave a long speech on the Senate floor supporting the Cyber Information Sharing Act.

She listed off a series of shocking hacks that happened in the last year or so — though made no effort to show (or even claim) that CISA would have prevented any of them.

She listed some of the 56 corporations and business organizations that support the bill.

Most interestingly, she boasted that yesterday she received a letter from GM supporting the bill. We should pass CISA, Feinstein suggests, because General Motors, on August 4, 2015, decided to support the bill.

I actually think that’s reason to oppose the bill.

As I have written elsewhere — most recently this column at the DailyDot — one of my concerns about the bill is that, by sharing data under the immunity it affords, corporations might dodge liability that might otherwise serve as necessary safety and security leverage.

Immunizing corporations may make it harder for the government to push companies to improve their security. As Wyden explained, while the bill would let the government use data shared to prosecute crimes, the government couldn’t use it to demand security improvements at those companies. “The bill creates what I consider to be a double standard—really a bizarre double standard in that private information that is shared about individuals can be used for a variety of non-cyber security purposes, including law enforcement action against these individuals,” Wyden said, “but information about the companies supplying that information generally may not be used to police those companies.”

Financial information-sharing laws may illustrate why Wyden is concerned. Under that model, banks and other financial institutions are obligated to report suspicious transactions to the Treasury Department, but, as in CISA, they receive in return immunity from civil suits as well as consideration in case of sanctions, for self-reporting. “Consideration,” meaning that enforcement authorities take into account a financial institution’s cooperation with the legally mandated disclosures when considering whether to sanction them for any revealed wrongdoing. Perhaps as a result, in spite of abundant evidence that banks have facilitated crimes—such as money laundering for drug cartels and terrorists—the Department of Justice has not managed to prosecute them. When asked during her confirmation hearing why she had not prosecuted HSBC for facilitating money laundering when she presided over an investigation of the company as U.S. Attorney for the Eastern District of New York, Attorney General Loretta Lynch said there was not sufficient “admissible” evidence to indict, suggesting they had information they could not use.

In the same column, I pointed out the different approach to cybersecurity — for cars at least — of the SPY Car Act, introduced by Ed Markey and Richard Blumenthal, which affirmatively requires certain cybersecurity and privacy protections.

Increased attention on the susceptibility of networked cars—heightened by but not actually precipitated by the report of a successful remote hack of a Jeep Cherokee—led two other senators, Ed Markey and Richard Blumenthal, to adopt a different approach. They introduced the Security and Privacy in Your Car Act, which would require privacy disclosures, adequate cybersecurity defenses, and additional reporting from companies making networked cars and also require that customers be allowed to opt out of letting the companies collect data from their cars.

The SPY Car Act adopts a radically different approach to cybersecurity than CISA in that it requires basic defenses from corporations selling networked products. Whereas CISA supersedes privacy protections for consumers like the Electronic Communications Privacy Act, the SPY Car Act would enhance privacy for those using networked cars. Additionally, while CISA gives corporations immunity so long as they share information, SPY Car emphasizes corporate liability and regulatory compliance.

I’m actually not sure how you could have both CISA and SPY Act, because the former’s immunity would undercut the regulatory limits on the latter. (And I asked both Markey and Blumenthal’s offices, but they blew off repeated requests for an answer on this point.)

Which brings me back to GM’s decision — yesterday!!! — to support CISA.

The hackers who remotely hacked a car used a Jeep Cherokee. But analysis they did last year found the Cadillac Escalade to be the second most hackable car among those they reviewed (and I have reason to believe there are other GM products that are probably even more hackable).

So … hackers reveal they can remotely hack cars on July 21; Markey introduced his bill on the same day. And then on August 4, GM for the first time signs up for a bill that would give them immunity if they start sharing data with the government in the name of cybersecurity.

Now maybe I’m wrong in my suspicion that CISA’s immunity would provide corporations a way to limit their other liability for cybersecurity so long as they had handed over a bunch of data to the government, even if it incriminated them.

But we sure ought to answer that question before we go immunizing corporations whose negligence might leave us more open to attack.

Did the Former Deputy Director of CTC Misinform Congress about Torture Report Costs?

Jason Leopold had an important update on the torture report that — because he’s doing rolling updates — hasn’t gotten sufficient attention.

Leopold obtained the contracting documents of Centra, the company that drove up costs for the report by reviewing every document turned over to the Senate Intelligence Committee. But after he posted those documents, the CIA’s story about how much Centra got paid for those specific tasks changed. After seven months of public claims that the then-unnamed contractor had been paid $40 million, the CIA all of a sudden changed its mind.

CIA spokesman Ryan Trapani disputed VICE News’ “interpretation” of the Centra contract.

“A significant portion of the contract cost pertained to services completely distinct from, and wholly unrelated to, the Senate Intelligence Committee review,” Trapani said, backtracking on the agency’s statement last year that the $40 million the agency spent was due entirely to “the committee’s demands of CIA in this investigation.” “In terms of the services performed in support of the committee review, CIA dedicated substantial resources to provide the committee unprecedented access to millions of pages of documents as expeditiously as possible, consistent with the security requirements for such highly classified, sensitive documents.”

That’s troubling because it runs counter to what everyone on SSCI believed, including then Chair Dianne Feinstein, who has been rebutting claims that the committee itself spent the money ever since it became public last year.

The overwhelming majority of the $40 million cost was incurred by the CIA and was caused by the CIA’s own unprecedented demands to keep documents away from the committee. Rather than provide documents for the committee to review in its own secure Senate office—as is standard practice—the CIA insisted on establishing a separate leased facility and a “stand-alone” computer network for committee use.

Which raises the question of where the claim that the entirety of that $40 million was spent on the torture report originated — a claim that, Leopold notes in an update, came from this footnote in the Republican views on the report (and, by association, a 2012 letter from CIA’s then number 3, Sue Bromley).

Screen Shot 2015-07-29 at 5.06.08 PM

Not only was Bromley CIA’s number 3 when she wrote the letter, but in the years in question, she cycled through as Deputy Director of the Counterterrorism Center.

V. Sue Bromley, an Agency veteran of 28 years, will become our new Associate Deputy Director. Sue has served as our Chief Financial Officer since June 2009. As a former OMB director, I can attest to her exceptional skill and diligence in managing one of the most complex budgets in government.

Before that, Sue helped lead our analytic effort for two years as Deputy Director for Intelligence. She has made vital contributions to the fight against al-Qa’ida and its violent allies, both as Deputy Director of the Counterterrorism Center and as Chief of the Operations and Management Staff in the National Clandestine Service, where she helped plan, justify, and distribute a large increase in funding for counterterrorism operations after the September 11th attacks.

Now, it’s possible that the Republicans just took her letter out of context and no one on the Democratic side checked their math. There are a lot of references in the minority report (heh) that don’t make sense.

But Bromley is a money gal. She shouldn’t be making mistakes about contracts, and certainly not on the scale that appears to have happened — all in such a way as to serve the pro-torture narrative, which in turn serves to protect … the counterterrorism center.

At least according to the story the CIA is currently telling, everyone on the CIA’s oversight committee grossly misunderstood a $40 million expenditure.

Why?

Feinstein Wants to Introduce Reporting Mandate Jim Comey Says We Don’t Need

I’ll have a piece in Salon shortly about the two hearings on whether FBI should be able to mandate back doors (they call them front doors, because that fools some Senators about the security problems) in software.

One thing not in there, however, has to do with a bill the Senate Intelligence Committee is considering that would require Facebook and Twitter and other social media companies to report terrorist content to authorities. ABC News, quoting Richard Clarke (who hasn’t had an official role in government for some years but is on ABC’s payroll), reported that the social media companies were not now reporting terrorist content.

In the middle of the SSCI hearing on this topic, Dianne Feinstein asked Jim Comey whether social media companies were reporting such content. Comey said they are (he did say they’ve gotten far better of late). Feinstein asked whether there ought to be a law anyway, to mandate behavior the companies are already doing. Comey suggested it wasn’t necessary. Feinstein said maybe they should mandate it anyway, like they do for child porn.

All of which made it clear that such a law is unnecessary, even before you get into the severe problems with the law (such as defining who is a terrorist and what counts as terrorist content).

SSCI will probably pass it anyway, because that’s how they respond to threats of late: by passing legislation that won’t address them.

Note, Feinstein also got visibly and audibly and persistently pissed at Ron Wyden for accurately describing what Deputy Attorney General Sally Yates had said she wanted in an earlier hearing: for providers to have keys that the FBI could use. Feinstein seems to believe good PR will eliminate all the technical problems with a back door plan, perhaps because then she won’t be held responsible for making us less secure as a result.

Update: The measure is here, in the Intelligence Authorization.

Update: Title changed for accuracy.

Behold, BR 15-24, the Longest-Serving Phone Dragnet Order Ever

By my calculation, today marks the 91st day of the life of phone dragnet order BR 15-24, making it the longest-running dragnet order ever. Though the order offered no explanation, FISC judge James Boasberg approved a 95-day expiration for this order back on February 26 so that the dragnet order’s expiration would coincide with the PATRIOT Act’s sunset.

It probably seemed wise at the time, but it definitely exacerbates the impact of Mitch McConnell’s miscalculation last week, as it means there is no grace period after the current order expires.

The 90-day renewals appear to arise out of both the Stellar Wind practice and the FISA Pen Register practice. Under the former, the Bush Administration reviewed the dragnet every 45 days to make sure it was still necessary and give it the appearance of oversight. (The renewal dates appear on this timeline.) When FISC approved the use of the Pen Register statute to collect the Internet dragnet, it adhered to that statute’s renewal process, which requires 90-day renewals. I assume the phone dragnet adopted the same, even though Section 215 has no renewal requirement, because the phone dragnet collected even more data than the Internet dragnet did.

So already, we’re a day longer than the spirit of the law should permit, four days before Sunday’s scheduled resolution (or lack thereof) of the current impasse.
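The date arithmetic above checks out, as a quick sketch shows (assuming the February 26, 2015 approval counts as day one of the order’s life):

```python
from datetime import date, timedelta

# BR 15-24 was approved February 26, 2015, with an unusual 95-day expiration.
order_date = date(2015, 2, 26)

# A 95-day lifetime lands the expiration exactly on the PATRIOT Act sunset.
expiration = order_date + timedelta(days=95)
print(expiration)  # 2015-06-01

# Counting the approval date as day 1, the 91st day falls on:
day_91 = order_date + timedelta(days=90)
print(day_91, day_91.strftime("%A"))  # 2015-05-27 Wednesday

# ...which is indeed four days before the Sunday session.
print((day_91 + timedelta(days=4)).strftime("%A"))  # Sunday
```

So the order outlived the 90-day convention by a day, with the Sunday showdown still four days off.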

Given Charlie Savage’s account, it appears the Administration did not — as ordered by Boasberg — brief the FISC on the impact of the 2nd Circuit decision if it would change the program. Rather, they’re just hiding out, hoping they don’t need to raise this or any other issue with regards to the dragnet with the FISC.

The Foreign Intelligence Surveillance Court had given the government a deadline of last Friday to file a new application to extend the bulk phone records program for 90 days. Given the disarray in the Senate and the looming deadline, the Justice Department did not file, the official said, speaking on condition of anonymity to discuss intelligence-related matters.

[snip]

The administration is holding to its decision not to invoke the grandfather clause to keep collecting bulk phone records past next Monday, the official said. But the government has not ruled out invoking such a clause for using the business records provision — as well as the other two powers that are expiring — to gather specific records for more routine investigations.

“We will not use the grandfather clause in the Patriot Act to continue the bulk metadata collection program; it would not be tenable for us to do so,” the senior official said.

“Thus, because of the pending sunset of the current authority, we have not filed an application with the FISA court to continue collection,” the official said, referring to the Foreign Intelligence Surveillance Act court, also known as FISC.

The official added, “We will consider, in light of our national security needs and the status of our authorities, whether to make an appropriate filing with the FISC about accessing previously collected metadata.”

[snip]

The administration is hoping to avoid any need to go to the court for permission to query already-acquired bulk phone data, which would raise additional legal complications.

But one plan being floated — Dianne Feinstein’s non-compromise compromise — would simply permit the FISC to extend the current order until a year after whenever her bill might be passed into law (which couldn’t happen by Sunday night), as if nothing had ever happened.

CONTINUED APPLICABILITY.—Notwithstanding any other provision of the Foreign Intelligence Surveillance Act of 1978 (50 U.S.C. 1801 et seq.) or this Act or any amendment made by this Act, the order entered by the court established under section 103(a) of the Foreign Intelligence Surveillance Act of 1978 (50 U.S.C. 1803(a)) on February 26, 2015, in Docket No. BR 15–24, may be extended by order of that court until the effective date established in subsection (a) [that is, one year after the passage of this bill]

In other words, Feinstein proposes to take a dragnet collecting the phone records of all Americans, and extend it for an entire year, when even a Pen Register targeting an individual would need to be formally renewed.