The Internet Dragnet Was a Clusterfuck … and NSA Didn’t Care

Here’s my best description from last year of the mind-boggling fact that NSA conducted 25 spot checks between 2004 and 2009, then did a several-months-long end-to-end review of the Internet dragnet in 2009 and found it to be in pretty good shape, only to then have someone discover that every single record received under the program had violated rules set in 2004.

Exhibit A is a comprehensive end-to-end report that the NSA conducted in late summer or early fall of 2009, which focused on the work the agency did in metadata collection and analysis to try to identify people emailing terrorist suspects.

The report described a number of violations that the NSA had cleaned up since the beginning of that year — including using automatic alerts that had not been authorized and giving the FBI and CIA direct access to a database of query results. It concluded the internet dragnet was in pretty good shape. “NSA has taken significant steps designed to eliminate the possibility of any future compliance issues,” the last line of the report read, “and to ensure that mechanisms are in place to detect and respond quickly if any were to occur.”

But just weeks later, the Department of Justice informed the FISA Court, which oversees the NSA program, that the NSA had been collecting impermissible categories of data — potentially including content — for all five years of the program’s existence.

The Justice Department said the violation had been discovered by NSA’s general counsel, which since a previous violation in 2004 had been required to do two spot checks of the data quarterly to make sure NSA had complied with FISC orders. But the general counsel had found the problem only after years of not finding it. The Justice Department later told the court that “virtually every” internet dragnet record “contains some metadata that was authorized for collection and some metadata that was not authorized for collection.” In other words, in the more than 25 checks the NSA’s general counsel should have done from 2004 to 2009, it never once found this unauthorized data.

The following year, Judge John Bates, then head of FISC, emphasized that the NSA had missed the unauthorized data in its comprehensive report. He noted “the extraordinary fact that NSA’s end-to-end review overlooked unauthorized acquisitions that were documented in virtually every record of what was acquired.” Bates went on, “[I]t must be added that those responsible for conducting oversight at NSA failed to do so effectively.”

Even after these details became public in 2014 (or perhaps because the intelligence community buried such disclosures in documents with dates obscured), commentators have generally given the NSA the benefit of the doubt in its good faith to operate its dragnet(s) under the rules set by the FISA Court.

But an IG Report from 2007 (PDF 24-56) released in Charlie Savage’s latest FOIA return should disabuse commentators of that opinion.

This is a report from early 2007, almost 3 years after the Stellar Wind Internet dragnet moved under FISA authority and close to 30 months after Judge Colleen Kollar-Kotelly ordered NSA to implement more oversight measures, including those spot checks. We know that rough date because the IG Report post-dates the January 8, 2007 initiation of the FISC-spying compartment and it reflects 10 dragnet order periods of up to 90 days apiece (see page 21). So the investigation in it should date to no later than February 8, 2007, with the final report finished somewhat later. It was completed by Brian McAndrew, who served as Acting Inspector General from the time Joel Brenner left in 2006 until George Ellard started in 2007 (but who also got asked to sign at least one document he couldn’t vouch for in 2002, again as Acting IG).

The IG Report is bizarre. It gives the NSA a passing grade on what it assessed.

The management controls designed by the Agency to govern the collection, dissemination, and data security of electronic communications metadata and U.S. person information obtained under the Order are adequate and in several aspects exceed the terms of the Order.

I believe that by giving a passing grade, the IG made it less likely his results would have to get reported (for example, to the Intelligence Oversight Board, which still wasn’t getting reporting on this program, and probably also to the Intelligence Committees, which didn’t start getting most documentation on this stuff until late 2008) in any but a routine manner, if even that. But the report also admits it did not assess “the effectiveness of management controls[, which] will be addressed in a subsequent report.” (The 2011 report examined here identified previous PRTT reports, including this one, and that subsequent report doesn’t appear in any obvious form.) Then, having given the NSA a passing grade but deferring the most important part of the review, the IG notes “additional controls are needed.”

And how.

As to the issue of the spot checks, mandated by the FISA Court and intended to prevent years of ongoing violations, the IG deems such checks “largely ineffective” because management hadn’t adopted a methodology for those spot checks. They appear to have just swooped in and checked queries already approved by an analyst’s supervisor, in what they called a superaudit.

Worse still, they didn’t write anything down.

As mandated by the Order, OGC periodically conducts random spot checks of the data collected [redaction] and monitors the audit log function. OGC does not, however, document the data, scope, or results of the reviews. The purpose of the spot checks is to ensure that filters and other controls in place on the [redaction] are functioning as described by the Order and that only court authorized data is retained. [snip] Currently, an OGC attorney meets with the individuals responsible [redaction] and audit log functions, and reviews samples of the data to determine compliance with the Order. The attorney stated that she would formally document the reviews only if there were violations or other discrepancies of note. To date, OGC has found no violations or discrepancies.

So this IG review was done more than two years after Kollar-Kotelly had ordered these spot checks, during which period 18 spot checks should have been done. Yet at that point, NSA had no documentary evidence a single spot check had been done, just the say-so of the lawyer who claimed to have done them.

Keep in mind, too, that Oversight and Control were, at this point, implementing a new-and-improved spot-check process. That’s what the IG reviewed, the new-and-improved process, because (of course) there was no documentation of the past process for reviewers to review. And it’s this new-and-improved process that was inadequate to the task.

But that’s not the only problem the IG found in 2007. For example, the logs used in auditing did not accurately document what seed had been used for queries, which means you couldn’t review whether those queries really met the incredibly low bar of Reasonable Articulable Suspicion or whether they were pre-approved. Nor did they document how many hops out analysts chained, which means any given query could have sucked in a great number of Americans (which might happen by the third or fourth hop) and thrown them into the corporate store for far more intrusive analysis. While the IG didn’t point this out directly, the management response made clear log files also didn’t document whether a seed was a US person and therefore entitled to a First Amendment review. In short, NSA didn’t capture any — any!!! — of the data that would have been necessary to assess minimal compliance with FISC orders.
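To see why an unlogged hop count matters so much, here’s a back-of-the-envelope sketch of how fast a contact chain grows. The average-contacts figure is purely hypothetical (the NSA documents don’t give one), but any plausible number produces the same explosion by the third hop.

```python
# Back-of-the-envelope sketch of contact-chain growth.
# AVG_CONTACTS is an invented illustration, not a figure
# from the NSA documents.
AVG_CONTACTS = 100  # assumed unique contacts per identifier

def chain_size(hops, avg_contacts=AVG_CONTACTS):
    """Upper bound on identifiers reachable within `hops` of one seed."""
    total = 1      # the seed itself
    frontier = 1   # identifiers at the current hop
    for _ in range(hops):
        frontier *= avg_contacts
        total += frontier
    return total

for hops in range(1, 5):
    print(hops, chain_size(hops))
```

With even 100 contacts per identifier, a three-hop chain off a single seed can reach on the order of a million identifiers, which is why audit logs that never recorded hop depth told you almost nothing about how many Americans a query touched.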

NSA’s lawyers also didn’t have a solid list of everyone who had access to the databases (and therefore who needed to be trained or informed of changes to the FISC order). The Program Management Office had a list that it periodically compared to who was actually accessing the data (though as made clear later in the report, that included just the analysts). And NSA’s Office of General Counsel would also periodically review to ensure those accessing the data had the information they needed to do so legally. But “the attorney conducting the review relie[d] on memory to verify the accuracy and completeness of the list.” DOD in general is wonderfully neurotic about documenting any bit of training a given person has undergone, but with the people who had access to the Internet metadata documenting a great deal of Americans’ communication in the country, NSA chose just to work from memory.

And this non-existent manner of tracking those with database access extended to auditing as well. The IG reported that NSA also didn’t track all queries made, such as those made by “those that have the ability to query the PRTT data but are not on the PMO list or who are not analysts.” While the IG includes people who’ve been given new authorization to query the data in this discussion, it’s also talking about techs who access the data. It notes, for example, “two systems administrators, who have the ability to query PRTT data, were also omitted from the audit report logs.” The thing is, as part of the 2009 “reforms,” NSA got approval to exempt techs from audits. I’ve written a lot about this but will return to it, as there is increasing evidence that the techs have always had the ability — and continue to have the ability — to bypass limits on the program.
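The gap the IG describes is easy to state in set terms: auditing only covered people on the PMO list, so anyone else with query access (like those two systems administrators) fell outside it entirely. A minimal sketch, with all names invented:

```python
# Sketch of the audit gap the IG describes: users who can query
# the data but never appear in the audit logs. All names invented.
authorized = {"analyst_a", "analyst_b", "sysadmin_1", "sysadmin_2"}
pmo_list = {"analyst_a", "analyst_b"}           # analysts only
audit_log_users = {"analyst_a", "analyst_b"}    # what the logs captured

# Anyone with access who never shows up in the logs goes unaudited.
unaudited = authorized - audit_log_users
# unaudited == {"sysadmin_1", "sysadmin_2"}
```

A set difference this trivial is exactly the check the auditing regime never ran, which is how people with full query access stayed invisible to it.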

There are actually far more problems reported in this short report, including details proving that — as I’ve pointed out before — NSA’s training sucks.

But equally disturbing is the evidence that NSA really didn’t give a fuck about the fact they’d left a database of a significant amount of Americans’ communications metadata exposed to all sorts of control problems. The disinterest in fixing this problem dates back to 2004, when NSA first admitted to Kollar-Kotelly they were violating her orders. They did an IG report at the time (under the guidance of Joel Brenner), but it did “not make formal recommendations to management. Rather, the report summarize[d] key facts and evaluate[d] responsibility for the violation.” That’s unusual by itself: for audits to improve processes, they are supposed to provide recommendations and track whether those are implemented. Moreover, while the IG (who also claimed the clusterfuck in place in 2007 merited a passing grade) assessed that “management has taken steps to prevent recurrence of the violation,” it also noted that NSA never really fixed the monitoring and change control process identified as problems back in 2004. In other words, it found that NSA hadn’t fixed key problems IDed back in 2004.

As to this report? It did make recommendations and management even concurred with some of them, going so far as to agree to document (!!) their spot checks in the future. With others — such as the recommendation that shift supervisors should not be able to make their own RAS determinations — management didn’t concur, they just said they’d monitor those queries more closely in the future. As to the report as a whole, here’s what McAndrew had to say about management’s response to the report showing the PRTT program was a clusterfuck of vulnerabilities: “Because of extenuating circumstances, management was unable to provide complete responses to the draft report.”

So in 2007, NSA’s IG demonstrated that the oversight over a program giving NSA access to the Internet metadata of a good chunk of all Americans was laughably inadequate.

And NSA’s management didn’t even bother to give the report a full response.

2011 Internet Dragnet Audit Didn’t Find Significant Violation Reported to IOB

This will be the second of three posts on the NSA IG’s failures to correct problems with the Internet (PRTT) dragnet. In the first, I showed how quickly NSA nuked the PRTT (or at least claimed to) after John Bates ruled, a second time, that NSA could not illegally wiretap the content of Americans’ communications. Here, I’ll examine another IG Report, completed earlier in 2011 and also liberated by Charlie Savage, that appears to show the PRTT dragnet was hunky dory just weeks before it became clear again that it was not.

The report (see PDF 4-23) must date to between March 15 and May 25, 2011. It was related to a series of reports on the phone dragnet (these reports appear to have been solicited by or encouraged by Reggie Walton in the wake of the 2009 dragnet problems) that Savage liberated earlier this year. It lists all those reports on pages A-2 to A-3. But it lists the final, summary report in that series, (ST-10-0004L), as a draft, dated March 15, 2011. The copy provided to Savage is the final, dated May 25, 2011 (see PDF 203).

The reason for doing this PRTT report is curious. The report notes “we began this review in [redacted, would be some time in summer 2009] but suspended it when NSA allowed the PR/TT Order to expire.” That is, this was the report that got started, but then halted, when someone discovered that every single record the NSA had collected under the program included categories of information violating the rules set by FISC in 2004.

But then NSA started a review of the phone dragnet covering all the activity in 2010 (reflected in monthly reports in Savage’s earlier release), so the NSA decided to do a review of PRTT at the same time. But remember: the Internet dragnet was shut down until at least July 2010, when John Bates authorized its resumption, and it took some time to turn the dragnet back on. That means NSA conducted a review of a dragnet that was largely on hiatus or just resuming.

During the review period, both the phone and Internet dragnet reflect few finalized reports based on either dragnet. Indeed, it appears likely that there were no phone dragnet disseminations in August 2010 (see 155). There are probably two explanations for that. First, after Reggie Walton told NSA they had to start following the rules, the amount of intelligence NSA got from both the phone and Internet dragnet appears to have gone down. Second, we know that in 2011 NSA was training analysts to re-run queries that came up in both FISA and EO 12333 searches using EO 12333, so the results could be disseminated more broadly; it’s likely, then, that a lot of what had been disseminated as FISA-authorized reports before 2009 (which didn’t always follow FISC’s rules) started getting disseminated as EO 12333-authorized reports afterward. Still, in the case of the Internet dragnet reviewed for this report, “the dissemination did not contain PR/TT-derived USP information” so they “did not formally test dissemination objectives” (see footnote 1). None of the reports on the US Internet dragnet reviewed in some period in 2010 included US person data.

So much for collecting all of Americans’ email records to catch Americans, I guess.

All that said, both the Internet and phone dragnet reviews found that the dragnets had adequate controls to fulfill the requirements of the FISC orders, but did say (this is laid out in unredacted form more explicitly in the phone dragnet report) that the manual monitoring of dissemination would become unworkable if analysts started using the dragnet more. The phone dragnet reports also suggest they weren’t good at monitoring less formal disseminations (via email or conversation), and by the time of these summary reports, NSA was preparing to ask FISC to change the rules on reporting of non-US person dissemination. Overall in spring 2011, NSA’s IG found, the process worked according to the rules, but in part only because it was so little used.

That’s the assessment of the PRTT dragnet as of sometime between March and May 2011, less than 9 months before they’d nuke the dragnet really quickly, based mostly off a review of what NSA was doing during a period when the dragnet was largely inactive.

Which is all very interesting, because sometime before June 30, 2011 there was a PRTT violation that got reported — in a far more extensive description than the actual shut down of the dragnet in 2009 — to Intelligence Oversight Board. (see PDF 10)


There’s no mention of reporting to Congress on this, which is interesting because the PATRIOT Act was being reauthorized again during precisely this period, based off notice, dated February 2, 2011, that the compliance problems were largely solved.

So here’s what happened: After having had its IG investigation shut down in fall 2009 because NSA had never been in compliance with limits on the PRTT dragnet, NSA’s IG tried again during a period when the NSA wasn’t using it all that much. It gave NSA a clean bill of health no earlier than March 15, 2011. But by June 30, 2011, something significant enough to get reported in two full paragraphs to IOB happened.

It turns out things weren’t quite so hunky dory after all.

The NSA (Said It) Ate Its Illegal Domestic Content Homework before Having to Turn It in to John Bates

The question of whether NSA can keep its Section 215 dragnet data past November 28 has been fully briefed for at least 10 days, but Judge Michael Mosman has not yet decided whether the NSA can keep it — at least not publicly. But given what the NSA IG Report on NSA’s destruction of the Internet dragnet says (liberated by Charlie Savage and available starting on PDF 60), we should assume the NSA may be hanging onto that data anyway.

This IG Report documents NSA’s very hasty decision to shut down the Internet dragnet and destroy all the data associated with it at the end of 2011, in the wake of John Bates’ October 3, 2011 opinion finding, for the second time, that if NSA knew it had collected US person content, it would be guilty of illegal wiretapping. And even with the redactions, it’s clear the IG isn’t entirely certain NSA really destroyed all those records.

The report adds yet more evidence to support the theory that the NSA shut down the PRTT program because it recognized it amounted to illegal wiretapping. The evidence to support that claim is laid out in the timeline and working notes below.

The report tells how, in early 2011, NSA started assessing whether the Internet dragnet was worth keeping under the form John Bates had approved in July 2010, which was more comprehensive and permissive than what got shut down around October 30, 2009. NSA would have had SPCMA running in big analytical departments by then, plus FAA, so they would have been obtaining these benefits over the PRTT dragnet already. Then, on a date that remains redacted, the Signals Intelligence Division asked to end the dragnet and destroy all the data. That date has to post-date September 10, 2011 (that’s roughly when the last dragnet order was approved), because SID was advising to not renew the order, meaning it happened entirely during the last authorization period. Given the redaction length it’s likely to be October (it appears too short to be September), but could be anytime before November 10. [Update: As late as October 17, SID was still working on a training program that covered PRTT, in addition to BRFISA, so it presumably post-dates that date.] That means that decision happened at virtually the same time or after, but not long after, John Bates raised the problem of wiretapping violations under FISA Section 1809(a)(2) again on October 3, 2011, just 15 months after having warned NSA about Section 1809(a)(2) violations with the PRTT dragnet.

The report explains why SID wanted to end the dragnet, though three of four explanations are redacted. If we assume bullets would be prioritized, the reason we’ve been given — that NSA could do what it needed to do with SPCMA and FAA — is only the third most important reason. The IG puts what seems like a non sequitur in the middle of that paragraph. “In addition, notwithstanding restrictions stemming from the FISC’s recent concerns regarding upstream collection, FAA §702 has emerged as another critical source for collection of Internet communications of foreign terrorists” (which seems to further support that the decision post-dated that ruling). Indeed, this is not only a non sequitur, it’s crazy. Everyone already knew FAA was useful. Which suggests it may not be a non sequitur at all, but instead something that follows off of the redacted discussions.

Given the length of the redacted date (it is one character longer than “9 December 2011”), we can say with some confidence that Keith Alexander approved the end and destruction of the dragnet between November 10 and 30 — during the same period the government was considering appealing Bates’ ruling, close to the day — November 22 — NSA submitted a motion arguing that Section 1809(a)(2)’s wiretapping rules don’t apply to it, and the day, a week later, it told John Bates it could not segregate the pre-October 31 dragnet data from post October 31 dragnet data.

Think how busy a time this already was for the legal and tech people, given the scramble to keep upstream 702 approved! And yet, at precisely the same time, they decided they should nuke the dragnet, and nuke it immediately, before the existing dragnet order expired, creating another headache for the legal and tech people. My apologies to the people who missed Thanksgiving dinner in 2011 dealing with both these headaches at once.

Not only did NSA nuke the dragnet, but they did it quickly. As I said, it appears Alexander approved nuking it November 10 or later. By December 9, it was gone.

At least, it was gone as far as the IG can tell. As far as the 5 parts of the dragnet (which appear to be the analyst-facing side) that the technical repository people handled, that process started on December 2, with the IG reviewing the “before” state, and ended mostly on December 7, with final confirmation happening on December 9, the day NSA would otherwise have had to have new approval of the dragnet. As to the intake side, those folks started destroying the dragnet before the IG could come by and check their before status:

However, S3 had completed its purge before we had the opportunity to observe. As a result we were able to review the [data acquisition database] purge procedures only for reasonableness; we were not able to do the before and after comparisons that we did for the TD systems and databases disclosed to us.

Poof! All gone, before the IG can even come over and take a look at what they actually had.

Importantly, the IG stresses that his team doesn’t have a way of proving the dragnet isn’t hidden somewhere in NSA’s servers.

It is important to note that we lack the necessary system accesses and technical resources to search NSA’s networks to independently verify that only the disclosed repositories stored PR/TT metadata.

That’s probably why the IG repeatedly says he is confirming purging of the data from all the “disclosed” databases (@nailbomb3 observed this point last night). Perhaps he’s just being lawyerly by including that caveat. Perhaps he remembers how he discovered in 2009 that every single record the NSA had received over the five year life of the dragnet had violated Colleen Kollar-Kotelly’s orders, even in spite of 25 spot checks. Perhaps the redacted explanations for eliminating the dragnet explain the urgency, and therefore raise some concerns. Perhaps he just rightly believes that when people don’t let you check their work — as NSA did not by refusing him access to NSA’s systems generally — there’s more likelihood of hanky panky.

But when NSA tells — say — the EFF, which was already several years into a lawsuit against the NSA for illegal collection of US person content from telecom switches, and which already had a 4-year-old protection order covering the data relevant to that suit, that this data got purged in 2011?

Even NSA’s IG says he thinks it did but he can’t be sure.

But what we can be sure of is, after John Bates gave NSA a second warning that he would hold them responsible for wiretapping if they kept illegally collecting US person content, the entire Internet dragnet got nuked within 70 days — gone!!! — all before anyone would have to check in with John Bates again in connection with the December 9 reauthorization and tell him what was going on with the Internet dragnet.

Update: Added clarification language.

Update: The Q2 2011 IOB report (reporting on the period through June 30, 2011) shows a 2-paragraph long, entirely redacted violation (PDF 10), which represents a probably more substantive discussion than the systematic overcollection that shut down the system in 2009.


The Reasons to Shut Down the (Domestic) Internet Dragnet: Purpose and Dissemination Limits, Correlations, and Functionality

Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.

It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:

  • Purpose and dissemination limits
  • Correlations
  • Functionality

Purpose and dissemination limits

Both the domestic Internet and phone dragnet limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in the pre-2009-shutdown incarnation), they both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required someone to sign off that that was why information from the dragnet was being disseminated.

Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.

But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.

Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.

The primary new responsibility is the requirement:

  • to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]

Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333 collected data and applied the protections for the latter authorities to both (particularly with regards to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.

That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, they made access rules to SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.

So one thing that happened with the Internet dragnet is that it had initial limits on purpose and who could access it. Along the way, NSA cheated those open, by arguing that people in different functional areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all NSA’s functional areas. A key part of the FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think they might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, they had not done so before.

The other thing that happened is that, starting in 2008, the government started putting FBI in a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.

In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.

FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.

Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.

In this same passage, the definition of metadata is curious.

For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.

I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).

Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.

So one big improvement of SPCMA over the old domestic Internet dragnet (and of 702 to a lesser degree; an "improvement," of course, only from a dragnet-loving perspective) is that the government can use it for any foreign intelligence purpose.

At several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regards to emergency searches), but it is, ostensibly at least, limited to counterterrorism.

One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.

Correlations

Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of that Internet communication from providers, but also the kinds of things it would obtain with a subpoena (and probably far more). As I’ve shown, here are the kinds of things you’d almost certainly get from Google under 702 (because that’s what you get with a few subpoenas) that you’d have to correlate using algorithms under the old Internet dragnet.

  • a primary gmail account
  • two secondary gmail accounts
  • a second name tied to one of those gmail accounts
  • a backup email (Yahoo) address
  • a backup phone (unknown provider) account
  • Google phone number
  • Google SMS number
  • a primary login IP
  • 4 other IP logins they were tracking
  • 3 credit card accounts
  • Respectively 40, 5, and 11 Google services tied to the primary and two secondary Google accounts, much of which would be treated as separate, correlated identifiers

Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
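To make the correlation problem concrete, here is a minimal sketch (in Python, with invented identifiers; nothing here reflects NSA's actual tooling) of how a provider return that ties identifiers together lets an analyst resolve any one of them back to a single identity, using a simple union-find:

```python
class IdentityCorrelator:
    """Union-find over identifiers: any two identifiers observed
    together (e.g., in one subpoena return) merge into one identity."""

    def __init__(self):
        self.parent = {}

    def find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            # Path halving keeps lookups fast as chains grow.
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def link(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra != rb:
            self.parent[ra] = rb

    def identity(self, x):
        return self.find(x)

# Hypothetical identifiers that a single provider return ties together:
c = IdentityCorrelator()
c.link("primary@gmail.com", "secondary1@gmail.com")
c.link("primary@gmail.com", "backup@yahoo.com")
c.link("secondary1@gmail.com", "+1-555-0100")  # e.g., a Google phone number

# A separately observed identifier now resolves to the same identity:
assert c.identity("+1-555-0100") == c.identity("backup@yahoo.com")
```

The point of the sketch is only that once a provider hands over identifiers pre-linked to an account, this merging step is trivial; under the old dragnet, each link would first have to be inferred algorithmically from traffic patterns.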

This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.

Functionality

While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.

The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.
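To illustrate what "scraping content for metadata" means in the simplest case, here is a toy sketch that pulls routing and addressing fields out of a stored RFC 822 message while discarding the body. The field list and addresses are invented, and this obviously reflects none of NSA's or GCHQ's actual tooling:

```python
from email import message_from_string

RAW = """From: alice@example.com
To: bob@example.com
Date: Fri, 13 Nov 2015 21:00:00 +0000
Message-ID: <abc123@example.com>

The body is 'content' and is discarded by this sketch.
"""

# Keep only addressing/routing/signaling fields; drop the body entirely.
METADATA_FIELDS = ("From", "To", "Cc", "Date", "Message-ID")

def scrape_metadata(raw):
    msg = message_from_string(raw)
    return {f: msg[f] for f in METADATA_FIELDS if msg[f] is not None}

meta = scrape_metadata(RAW)
assert meta["From"] == "alice@example.com"
assert "discarded" not in str(meta)  # no body text captured
```

The legal wrinkle, of course, is that producing this "metadata" required possessing and processing the full content first, which is exactly why doing it as part of a domestic dragnet was so fraught after Bates' 2011 opinion.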

In addition, it’s clear that at least some of the experimental implementations on geolocation incorporated SPCMA data.

I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.

Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.

1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]

Now, aside from what this says about the dragnet database generally (because this makes it clear there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes it clear there is a way to geolocate US persons — because the entire point of SPCMA is to be able to analyze data including US persons, without even any limits on their location (meaning they could be in the US).

That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still does permit) NSA to track who is traveling with whom using location data.
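The quoted description of CHALKFUN can be sketched, very roughly, as a group-by over (location, hour-window) pairs. Everything below (the identifiers, the VLR labels, the match threshold) is invented for illustration; the real analytic is classified:

```python
from collections import defaultdict

# Each event: (phone_id, network_location, epoch_hour)
events = [
    ("selector", "VLR-7", 100), ("selector", "VLR-3", 101),
    ("phoneA",   "VLR-7", 100), ("phoneA",   "VLR-3", 101),
    ("phoneB",   "VLR-7", 100),
    ("phoneC",   "VLR-9", 100),
]

def co_travelers(events, selector, min_matches=2):
    """Phones seen in the same network location (e.g., VLR) within the
    same one-hour window as the selector, at least min_matches times."""
    seen = defaultdict(set)  # (location, hour) -> phones seen there
    for phone, loc, hour in events:
        seen[(loc, hour)].add(phone)
    counts = defaultdict(int)
    for phones in seen.values():
        if selector in phones:
            for p in phones - {selector}:
                counts[p] += 1
    return {p for p, n in counts.items() if n >= min_matches}

# phoneA co-occurs with the selector in two windows; phoneB in only one.
assert co_travelers(events, "selector") == {"phoneA"}
```

Requiring multiple co-occurrences (the `min_matches` threshold) is the obvious way to separate genuine co-travelers from people who merely passed through the same cell once, which may be what the quoted passage means by matching "a few events ... over the time period."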

Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.

Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like Supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.

All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its final form (as is also true of the phone dragnet now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.

Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, but also the inclusion of all Americans.

Author of Story Based on Leaks about Surveillance Parrots Brennan Condemning Leaks about Surveillance

Josh Rogin is among many journalists who this morning covered John Brennan’s complaints about “a number of unauthorized disclosures” and hand-wringing about our surveillance capabilities (comments that came in response to Rogin asking “what went wrong” in Paris).

But Brennan also said that there had been a significant increase in the operational security of terrorists and terrorist networks, who have used new commercially available encryption technologies and also studied leaked intelligence documents to evade detection.

“They have gone to school on what they need to do in order to keep their activities concealed from the authorities,” he said. “I do think this is a time for particularly Europe as well as the U.S. for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence services to protect the people that they are asked to serve.”

The FBI has said that Internet “dark spaces” hinder monitoring of terrorism suspects. That fuels the debate over whether the government should have access to commercial applications that facilitate secure communications.

Brennan pointed to “a number of unauthorized disclosures” over the past several years that have made tracking suspected terrorists even more difficult. He said there has been “hand wringing” over the government’s role in tracking suspects, leading to policies and legal action that make finding terrorists more challenging, an indirect reference to the domestic surveillance programs that were restricted after leaks by Edward Snowden revealed their existence.

I find it interesting that Rogin, of all people, is so certain that this is an “indirect reference to the domestic surveillance programs that were restricted after leaks by Edward Snowden revealed their existence.” It’s a nonsensical claim on its face, because no surveillance program has yet been restricted in the US, though FBI has been prevented from using NSLs and Pen Registers to bulk collect communications. The phone dragnet, however, is still going strong for another 2 weeks.

That reference — as I hope to show by end of day — probably refers to tech companies’ efforts to stop the NSA and GCHQ from hacking them anymore, as well as to European governments and the EU trying to distance themselves from the US dragnet. That’s especially probable given that Brennan emphasized international cooperation in his response.

I’m also confused by Rogin’s claim Jim Comey said Tor was thwarting FBI, given that the FBI Director said it wasn’t in September.

Even more curious is that Rogin is certain this is about Snowden and only Snowden. Snowden’s leaks would have given terrorists only a general sense of what might not be safe (and not one they tracked very closely, given the Belgian Minister of Home Affairs’ claim that they’re using Playstation 4 to communicate, even though one of Snowden’s leaks said NSA and CIA were going after targets’ use of gaming consoles to communicate at least as early as 2008).

But a different leak would have alerted terrorists that their specific communications techniques had been compromised: the leak behind this story (which was a follow-up on leaks to the NYT, McClatchy, and WaPo).

It wasn’t just any terrorist message that triggered U.S. terror alerts and embassy closures—but a conference call of more than 20 far-flung al Qaeda operatives, Eli Lake and Josh Rogin report.
The crucial intercept that prompted the U.S. government to close embassies in 22 countries was a conference call between al Qaeda’s senior leaders and representatives of several of the group’s affiliates throughout the region.

The intercept provided the U.S. intelligence community with a rare glimpse into how al Qaeda’s leader, Ayman al-Zawahiri, manages a global organization that includes affiliates in Africa, the Middle East, and southwest and southeast Asia.

Several news outlets reported Monday on an intercepted communication last week between Zawahiri and Nasser al-Wuhayshi, the leader of al Qaeda’s affiliate based in Yemen. But The Daily Beast has learned that the discussion between the two al Qaeda leaders happened in a conference call that included the leaders or representatives of the top leadership of al Qaeda and its affiliates calling in from different locations, according to three U.S. officials familiar with the intelligence. All told, said one U.S. intelligence official, more than 20 al Qaeda operatives were on the call.

[snip]

Al Qaeda leaders had assumed the conference calls, which give Zawahiri the ability to manage his organization from a remote location, were secure. But leaks about the original intercepts have likely exposed the operation that allowed the U.S. intelligence community to listen in on the al Qaeda board meetings.

That story — by Josh Rogin himself! (though again, this was a follow-up on earlier leaks) — gave Al Qaeda, though maybe not ISIS, specific notice that one of their most sensitive communication techniques was compromised.

It’s really easy for journalists who want to parrot John Brennan and don’t know what the current status of surveillance is to blame Snowden. But those who were involved in the leak exposing the Legion of Doom conference call (which, to be sure, originated in Yemen, as many leaks that blow US counterterrorism efforts there do) might want to think twice before they blame other journalism.

Surveillance Hawk Stewart Baker Confirms Dragnet Didn’t Work as Designed

The French authorities are just a day into investigating the horrid events in Paris on Friday. We’ll know, over time, who did this and how they pulled it off. For that reason, I’m of the mind to avoid any grand claims that surveillance failed to find the perpetrators (thus far, French authorities say they know one of the attackers, who is a French guy they had IDed as an extremist, but did not know of people identified by passports found at the Stade — though predictably those have now been confirmed to be fake [update: now authorities say the Syrian one is genuine, though it’s not yet clear it belonged to the attacker], so authorities may turn out to know their real identity). In any case, Glenn Greenwald takes care of that here. I think it’s possible the terrorists did manage to avoid detection via countersurveillance — though the key ways they might have done so were available and known before Edward Snowden’s leaks (as Glenn points out).

But there is one claim by a surveillance hawk that deserves a response. That’s former DHS and NSA official Stewart Baker’s claim that because of this attack we shouldn’t stop the bulk collection of US persons’ phone metadata.

[Screenshot of Stewart Baker’s tweet, November 15, 2015]

The problem with this claim is that the NSA has a far more extensive dragnet covering the Middle East and Europe than it does on Americans. It can and does bulk collect metadata overseas without the restrictions that existed for the Section 215 dragnet. In addition to the metadata of phone calls and Internet communications, it can collect GPS location, financial information, and other metadata scraped from the content of communications.

The dragnet covering these terrorists is the kind of dragnet the NSA would love to have on Americans, if Americans lost all concern for their privacy.

And that’s just what the NSA (and GCHQ) have. The French have their own dragnet. They already had permission to hold onto metadata, but after the Charlie Hebdo attacks, they expanded their ability to wiretap without court approval. So the key ingredients to a successful use of the metadata were there: the ability to collect the metadata and awareness that one of the people was someone of concern.

The terrorists may have used encryption and therefore made it more difficult for authorities to get to the content of their Internet communications (though at this point, any iPhone encryption would merely be stalling investigators).

But their metadata should still have been available. There’s no good way to hide metadata, which is why authorities find metadata dragnets so useful.

French authorities knew of at least one of these guys and therefore would have been able to track his communication metadata. And both the Five Eyes and France have metadata dragnets restricted only by technology, so they might have been able to ID the network that carried out this attack.

Stewart Baker claims that Section 215 was designed to detect a plot like this. But the metadata dragnet covering France and the Middle East is even more comprehensive than Section 215 ever was. And it didn’t detect the attack (it also didn’t detect the Mumbai plot, even though — or likely because — one of our own informants was a key player in it). So rather than be a great argument for why we need to keep a dragnet that has never once prevented an attack in the US, Baker’s quip is actually proof that the dragnets don’t work as promised.

 

Weeks after Missing Claimed Russian Bomb Plot, US and UK Take Out Jihadi John

Politico has a big piece tied to a Showtime documentary on the living CIA Directors. As should be expected of a collection of paid liars, there are a lot of myths and score settling, most notably with expanded George Tenet claims about the strength of the warnings he gave about 9/11.

But I’m most interested in this insight, which seems very apt given recent intelligence failures and successes.

What’s the CIA’s mission? Is it a spy agency? Or a secret army? “Sometimes I think we get ourselves into a frenzy—into believing that killing is the only answer to a problem,” says Tenet. “And the truth is, it’s not. That’s not what our reason for existence is.” When Petraeus became CIA director, his predecessor, Hayden took him aside. Never before, Hayden warned him, had the agency become so focused on covert military operations at the expense of intelligence gathering. “An awful lot of what we now call analysis in the American intelligence community is really targeting,” Hayden says. “Frankly, that has been at the expense of the broader, more global view. We’re safer because of it, but it has not been cost-free. Some of the things we do to keep us safe for the close fight—for instance, targeted killings—can make it more difficult to resolve the deep fight, the ideological fight. We feed the jihadi recruitment video that these Americans are heartless killers.”

This is, of course, the counterpoint to Hayden’s claim that “we kill people based on metadata.” But it says much more: it describes how we’re viewing the world in terms of targets to kill rather than people to influence or views to understand. Hayden argues that prevents us from seeing the broader view, which may include both theaters where we’re not actively killing people but also wider trends.

Which is why I’m so interested in the big festival the US and UK — David Cameron, especially (of course, he’s in the middle of an effort to get Parliament to rubber stamp the existing British dragnet) — are engaging in with the presumed drone-killing of Mohammed Emwazi, nicknamed Jihadi John by the press.

Given that ISIS has plenty of other fighters capable of executing prisoners, some of whom even speak British-accented English, this drone-killing seems to be more about show, the vanquishing of a public figure rather than a functional leader — contrary to what David Cameron says. As WaPo notes,

“If this strike was successful, and we still await confirmation of that, it will be a strike at the heart of ISIL,” Cameron said, using an acronym for the Islamic State.

Cameron alternated between speaking about Emwazi in the past and the present tenses, describing him as a “barbaric murderer” who was the Islamic State’s “lead executioner.”

“This was an act of self defense. It was the right thing to do,” he said.

[snip]

But it is not clear that Emwazi had a meaningful role in Islamic State’s leadership structure. Analysts said the impact of his possible death could be limited.

“Implications? None beyond the symbolism,” said a Twitter message from Shiraz Maher, an expert on extremism at King’s College London.

It also might be a way to permanently silence questions about the role that British targeting of Emwazi had in further radicalizing him.

And all this comes just a few weeks after ISIS affiliates in Egypt claim to have brought down a Russian plane — depending on how you count, the largest terrorist attack since 9/11. Clearly, the combined British and US dragnet did not manage to prevent the attack, but there are even indications GCHQ, at least, wasn’t the agency that first picked up chatter about it.

Information from the intelligence agency of another country, rather than Britain’s own, led the Government to conclude that a bomb probably brought down the Russian airliner that crashed in the Sinai.

It was reports from an undisclosed “third party” agency, rather than Britain’s own GCHQ, that revealed the so-called “chatter” among extremists after the disaster that killed all 224 passengers and crew – and ended with the suspension of all British flights to Sharm el-Sheikh, according to authoritative sources.

British officials are said to have asked whether the same information had also been passed to Egypt, and were told that it had.

[snip]

Sources declined to say which friendly country passed the information. The US and Israel – whose own borders have been threatened by Isis in Sinai – as well as Arab nations in the region all have an interest in monitoring activity in the area.

So while it’s all good that the Americans and Brits took out an ISIS executioner in Syria — thereby avenging the deaths of their countrymen — it’s not like this great dragnet is doing what it always promises to do: prevent attacks, or even understand them quickly.

Perhaps that’s because, while we approach ever closer to “collect[ing] it all,” we’re targeting rather than analyzing the data?

Response to Snooper’s Charter: URL Searches Are Broadly Available in the US

In an unsuccessful effort to beat ACLU in a lawsuit over the constitutionality of the Child Online Protection Act, in 2005 DOJ sent a subpoena to Google asking for “all URL’s that are available to be located to a query on your company’s search engine as of July 31, 2005” and “all queries that have been entered on your company’s search engine between June 1, 2005 and July 31, 2005.” By challenging the order, Google was able to get the request significantly reduced. But it is understood that DOJ sent the same request to Yahoo, Microsoft, and AOL, and those providers substantially complied (it’s possible they negotiated what DOJ claimed was a more reasonable production of 1 million randomly-selected URLs and one week of actual searches with Personally Identifiable Information removed, but they are presumed to have done at least that much).

That’s a demonstration of the fact that the Federal government can and has gotten massive amounts of URL data from search engine operators with only a subpoena. The government can and does get such information in criminal investigations with a subpoena as well. The government probably faces more scrutiny when using FISA to get such information, as since 2009 it has likely fallen under Section 215 and the minimization procedures finally adopted in 2013, but that would still represent access to URLs under a mere relevance standard.

Which means the primary limit on the government’s access to URL searches with a subpoena in the US is providers’ data retention policy. And that means URL searches are, in general, readily available. Neither Google nor Microsoft state in their privacy policy how long they retain this stuff — though in response to European pressure and to stave off regulation on the issue, in 2010 Google stated it would “only” retain and associate URLs with individual users for 18-24 months, and Microsoft claimed it would only associate Bing records with IPs for 6 months (though that claim is no longer available on its site). Yahoo keeps search data tracked to user for 18 months, with some law enforcement exceptions. All would keep the searches, but de-identify from individual users, thereafter.

Google now permits users to delete past searches (though again, it keeps the searches themselves).

That means for 97% of US users, URL searches will be available to law enforcement with a subpoena for at least 6 months and more often 18 months, unless opting out in Google makes such things genuinely unavailable to law enforcement requests.

On the ISP side, Comcast — which serves half of America’s broadband users — in the recent past has said it keeps IP records for 6 months (though I’m not sure if that’s still in their privacy policy). Time Warner, which has a 13% market share, doesn’t appear to say, though it has said 6 months in the past. So for the overwhelming majority of broadband subscribers in the US, that information will be available for at least 6 months and possibly far longer. That information, too, is available with a subpoena.

I raise this because one of the things people have reacted against in the British Snooper’s Charter — a scary, comprehensive new surveillance bill rolled out earlier this week and designed, for the most part, to provide a legal basis for existing practice — is its proposed mandate that all providers keep records of internet activity for a year. That is a problem. But not only does the proposal appear to be intended for more targeted use (that is, data retention requests that would override all of the above retention deadlines), it also is explicitly intended for more limited use: unlike in the US, investigators are not supposed to be able to find out details of what people were doing online. In the US, such information commonly appears in criminal (and especially terrorism) cases.

That is, in most areas (not all; location data is one area where UK practice is clearly worse) where the Snooper’s Charter seems extreme, the reality for the overwhelming majority of Americans rivals what will be mandated under the UK bill. What the UK bill may do is eliminate the safety of services like DuckDuckGo (which doesn’t keep records of your searches), as well as the value of opt-out policies to the extent they really protect a user from law enforcement.

But if people think what’s in the Snooper’s Charter is bad, then they also need to be worried about the reality in the US for most users.

I will have far more to say about the Snooper’s Charter going forward. But one reason why people seem more worried about the Snooper’s Charter than similar permissions here in the US is that we have not had a Snowden for the FBI. That is, much of what is described in the Snooper’s Charter involves domestic intelligence. And the FBI has never been asked to provide a comprehensive view of all the kinds of surveillance it uses (indeed, it has succeeded in evading legal oversight in a number of ways), and very very little of it got included in Snowden’s leaks.

For all the problems of the policies laid out in the Snooper’s Charter, at least the UK’s spooks and cops have had to reveal what they’re actually doing. It’s high time for FBI (and DEA and all the other surveillance-crazy domestic law enforcement agencies in the US) to do the same.

Updated: Corrected an error in DOJ’s “reasonable” request to Google and tweaked for clarity.

It’s Not Just the FISA Court, It’s the Game of Surveillance Whack-a-Mole

In response to this post from Chelsea Manning, the other day I did the first in what seems to have become a series of posts arguing that we should eliminate the FISA Court, but that the question is not simple. In that post, I laid out the tools the FISC has used, with varying degrees of success, in reining in Executive branch spying, especially in times of abuse.

In this post, I want to lay out how reining in surveillance isn’t just about whether the secret approval of warrants and orders would be better done by the FISC or a district court. It’s about whack-a-mole.

That’s because, right now, there are four ways the government gives itself legal cover for expansive surveillance:

  • FISC, increasingly including programs
  • EO 12333, including SPCMA
  • Magistrate warrants and orders without proper briefing
  • Administrative orders and/or voluntary cooperation

FISA Court

The government uses the FISA court to get individualized orders for surveillance in this country and, to a less clear extent, surveillance of Americans overseas. That’s the old-fashioned stuff that could be done by a district court. But it’s also one point where egregious source information — be it a foreign partner using dubious spying techniques, or, as John Brennan admitted in his confirmation hearing, torture — gets hidden. No defendant has ever been able to challenge the basis for the FISA warrant used against them, which is clearly not what Congress said it intended in passing FISA. But given that’s the case, it means a lot of prosecutions that might not pass constitutional muster, because of that egregious source information, get a virgin rebirth in the FISC.

In addition, starting in 2004, the government began using the FISA Court to coerce corporations to continue domestic collection programs they had previously conducted voluntarily. As I noted, while I think the FISC’s oversight of these programs has been mixed, the FISC has forced the government to hew closer to (though never fully within) the law.

EO 12333, including SPCMA

The executive branch considers FISA just a subset of EO 12333, the Reagan Executive Order governing the intelligence community — a carve-out of collection requiring more stringent rules. At times, the Intelligence Community has operated as if EO 12333 is the only set of rules it needs to follow — and it has even secretly rewritten the order at least once to change the rules. The government will always assert the right to conduct spying under EO 12333 if it has a technical means to bypass that carve-out. That’s what the Bush Administration claimed Stellar Wind operated under. And at precisely the time the FISC was imposing limits on the Internet dragnet, the Executive Branch was authorizing analysis of Americans’ Internet metadata collected overseas under SPCMA.

EO 12333 derived data does get used against defendants in the US, though it appears to be laundered through the FISC and/or parallel constructed, so defendants never get the opportunity to challenge this collection.

Magistrate warrants and orders

Even when the government goes to a Title III court — usually a magistrate judge — to get an order or warrant for surveillance, that surveillance often escapes real scrutiny. We’ve seen this happen with Stingrays and other location collection, as well as FBI hacking; in those cases, the government often didn’t fully brief magistrates about what they were approving, so the judges didn’t consider the constitutional implications of it. There are exceptions, however (James Orenstein, the judge letting Apple challenge the use of an All Writs Act order to force it to unlock a phone, is a notable one), and those have provided periodic checks on collection that should require more scrutiny, as well as public notice of those methods. That’s how, a decade after magistrates first started to question the collection of location data using orders, we’re finally getting circuit courts to review the issue. Significantly, these more exotic spying techniques are often repurposed foreign intelligence methods, meaning magistrates and other Title III judges are weighing in on surveillance techniques being used in parallel programs under FISA. At least in the case of Internet data, that may even result in a higher standard of scrutiny and minimization being applied to the FISA collection than to the criminal investigation collection.

Administrative orders and/or voluntary cooperation

Up until 2006, telecoms willingly turned over metadata on Americans’ calls to the government under Stellar Wind. Under Hemisphere, AT&T provides the government call record information — including results of location-based analysis, on all the calls that used its networks, not just those of AT&T customers — sometimes without an order. For months after Congress started finding a way to rein in the NSA phone dragnet with the USA Freedom Act, the DEA continued to operate its own dragnet of international calls based entirely on administrative orders. Under CISA, the government will obtain and disseminate information on cybersecurity threats that it couldn’t collect under upstream 702 collection; no judge will review that collection. Until 2009, the government was using NSLs to get all the information an ISP had on a user or website, including traffic information. AT&T still provides enhanced information, including the call records of friends and family co-subscribers and (less often than in the past) communities of interest.

These six examples make it clear that, even with Americans, even entirely within the US, the government conducts a lot of spying via administrative orders and/or voluntary cooperation. It’s not clear this surveillance had any but internal agency oversight, and what is known about these programs (the onsite collaboration that was probably one precursor to Hemisphere, the early NSL usage) makes it clear there have been significant abuses. Moreover, a number of these programs represent individual (the times when FBI used an NSL to get something the FISC had repeatedly refused to authorize under a Section 215 order) or programmatic collection (I suspect, CISA) that couldn’t be approved under the auspices of the FISC.

All of which is to say the question of what to do to bring better oversight to expansive surveillance is not limited to the shortcomings of the FISC. It also must contend with the way the government tends to move collection programs when one method proves less than optimal. Where technologically possible, it has moved spying offshore and conducted it under EO 12333. Where it could pay or otherwise bribe and legally shield providers, it moved to voluntary collection. Where it needed to use traditional courts, it often just obfuscated about what it was doing. The primary limits here are not legal, except insofar as legal niceties and the very remote possibility of transparency raise concerns among corporate partners.

We need to fix or eliminate the FISC. But we need to do so while staying ahead of the game of whack-a-mole.

The FISA Court’s Uncelebrated Good Points

I’m working on a post responding to this post from Chelsea Manning calling to abolish the FISA Court. Spoiler alert: I largely agree with her, but I think the question is not that simple.

As background to that post, I wanted to shift the focus from a common perception of the FISC — that it is a rubber stamp that approves all requests — to a better measure of the FISC: the multiple ways it has tried to rein in the Executive. I think the FISC has, at times, been better at doing so than it is often given credit for. But as I’ll show in my larger post, those efforts have had limited success.

Minimization procedures

The primary tool the FISC uses in policing the Executive is minimization procedures approved by the court. Royce Lamberth unsuccessfully tried to use minimization procedures to limit the use of FISA-collected data in prosecutions (and also in investigative tools, such as informants). Reggie Walton was far more successful at using and expanding very detailed limits on the phone — and later, the Internet — dragnet to force the government to stop treating domestically collected dragnet data under its own EO 12333 rules and start treating it under the more stringent FISC-imposed rules. He even shut down the Internet dragnet in fall (probably October 30) 2009 because it did not abide by limits imposed five years earlier by Colleen Kollar-Kotelly.

There was also a long-running discussion (involving several briefs in 2006 and 2009 and a change in FISC procedure in 2010) about what to do with Post Cut Through Dialed Digits (the digits you type in after a call or Internet session has been connected) collected under pen registers. It appears that the FISC permitted (and probably still permits) the collection of that data under FISA, even though it was not permitted under Title III pen registers, but required that the data be minimized afterwards; for a period, over-collected data was sequestered.

Perhaps the most important use of minimization procedures, however, came when Internet companies stopped complying with NSLs requiring data in 2009, forcing the government to use Section 215 orders to obtain it. By all appearances, the FISC imposed minimization procedures and reviewed compliance with them until the FBI, more than seven years after being required to, finally adopted minimization procedures for Section 215. This surely resulted in far less data on innocent people being collected and retained than under NSL collection. Note that this probably imposed a higher standard of review on this bulky collection than existed at magistrate courts, though some magistrates started trying to impose what are probably similar requirements in 2014.

Such oversight provides one place where the USA Freedom Act is a clear regression from what is (today, anyway) in place. Under current rules, when the government submits an application retroactively for an emergency search of the dragnet, the court can require the government to destroy any data that should not have been collected. Under USAF, the Attorney General will police such things under a scheme that does not envision destroying improperly collected data at all, and even invites its parallel construction.

First Amendment review

The FISC has also had some amount — perhaps a significant amount — of success in making the Executive apply a more restrictive First Amendment review than it otherwise would have. Kollar-Kotelly independently imposed a First Amendment review on the Internet dragnet in 2004. First Amendment reviews were implicated in the phone dragnet changes Walton pushed in 2009. And it appears that in the government’s first uses of the emergency provision for the phone dragnet, it may have bypassed First Amendment review — at least, that’s the most logical explanation for why the FISC explicitly added a First Amendment review to the emergency provision last year. While I can’t prove this with available data, I strongly suspect more stringent First Amendment reviews explain the drop in dragnet searches every time the FISC increased its scrutiny of selectors.

In most FISA surveillance, there is supposed to be a prohibition on targeting someone for their First Amendment protected activities. Yet given the number of times the FISC has had to police that prohibition, it seems the Executive uses a much weaker standard of First Amendment review than the FISC does. That should be a particularly big concern for National Security Letters, which ordinarily get no court review (one of the NSL challenges that has been dismissed seemed to raise First Amendment concerns).

Notice of magistrate decisions

On at least two occasions, the FISC has taken notice of and required briefing after magistrate judges found that a practice also used under FISA required a higher standard of evidence. One was the 2009 PCTDD discussion mentioned above. The other was the use of combined orders to get phone records and location data. And while the latter probably just pushed the Executive toward other ways of using FISA to obtain location data, it suggests the FISC has paid close attention to issues being debated in magistrate courts (though that may have had more to do with the integrity of then National Security Assistant Attorney General David Kris than with the FISC itself; I don’t have high confidence it is still happening). To the extent this occurs, FISA practices are more likely than those of traditional courts to adjust wholesale to new technological standards, given that other magistrates will continue to approve questionable orders and warrants long after a few individually object, and given that an individual objection isn’t always made public.

Dissemination limits

Finally, the FISC has limited Executive action by limiting the use and dissemination of certain kinds of information. During Stellar Wind, Lamberth and Kollar-Kotelly attempted to limit, or at least track, which data came from Stellar Wind, thereby limiting its use for further FISA warrants (though it’s not clear how successful that was). The known details of dragnet minimization procedures included limits on dissemination (which were routinely violated until the FISC expanded them).

More recently, John Bates twice pointed to FISA Section 1809(a)(2) to limit the government’s use of data collected outside legal guidelines. He did so first in 2010, when he limited the government’s use of illegally collected Internet metadata, and again in 2011, when he limited the government’s access to illegally collected upstream content. However, I think it likely that after both instances the NSA took its toys and went elsewhere for part of the relevant collection, in the first case to SPCMA analysis of EO 12333-collected Internet metadata, and in the second to CISA (though just for cyber applications). So long as the FISC unquestioningly accepts EO 12333 evidence to support individual warrants and programmatic certificates, the government can always move collection away from FISC review.

Moreover, with USAF, Congress partly eliminated this tool as a retroactive control on upstream collection; it authorized the use of data collected improperly if the FISC subsequently approved retention of it under new minimization procedures.

These tools have been of varying degrees of usefulness. But the FISC has tried to wield them, often in places where all but a few Title III courts made no similar effort. Indeed, there are a few collection practices where the FISC probably imposed a higher standard than Title III courts, and probably many more where FISC review reined in collection that would otherwise have had no review at all.