The Internet Dragnet Was a Clusterfuck … and NSA Didn’t Care

Here’s my best description from last year of the mind-boggling fact that NSA conducted 25 spot checks between 2004 and 2009, then did a months-long end-to-end review of the Internet dragnet in 2009 and found it to be in pretty good shape, only to have someone discover that every single record received under the program had violated rules set in 2004.

Exhibit A is a comprehensive end-to-end report that the NSA conducted in late summer or early fall of 2009, which focused on the work the agency did in metadata collection and analysis to try and identify people emailing terrorist suspects.

The report described a number of violations that the NSA had cleaned up since the beginning of that year — including using automatic alerts that had not been authorized and giving the FBI and CIA direct access to a database of query results. It concluded the internet dragnet was in pretty good shape. “NSA has taken significant steps designed to eliminate the possibility of any future compliance issues,” the last line of the report read, “and to ensure that mechanisms are in place to detect and respond quickly if any were to occur.”

But just weeks later, the Department of Justice informed the FISA Court, which oversees the NSA program, that the NSA had been collecting impermissible categories of data — potentially including content — for all five years of the program’s existence.

The Justice Department said the violation had been discovered by NSA’s general counsel, which since a previous violation in 2004 had been required to do two spot checks of the data quarterly to make sure NSA had complied with FISC orders. But the general counsel had found the problem only after years of not finding it. The Justice Department later told the court that “virtually every” internet dragnet record “contains some metadata that was authorized for collection and some metadata that was not authorized for collection.” In other words, in the more than 25 checks the NSA’s general counsel should have done from 2004 to 2009, it never once found this unauthorized data.

The following year, Judge John Bates, then head of FISC, emphasized that the NSA had missed the unauthorized data in its comprehensive report. He noted “the extraordinary fact that NSA’s end-to-end review overlooked unauthorized acquisitions that were documented in virtually every record of what was acquired.” Bates went on, “[I]t must be added that those responsible for conducting oversight at NSA failed to do so effectively.”

Even after these details became public in 2014 (or perhaps because the intelligence community buried such disclosures in documents with dates obscured), commentators have generally given the NSA the benefit of the doubt that it operated its dragnet(s) in good faith under the rules set by the FISA Court.

But an IG Report from 2007 (PDF 24-56), released in Charlie Savage’s latest FOIA return, should disabuse commentators of that notion.

This is a report from early 2007, almost 3 years after the Stellar Wind Internet dragnet moved under FISA authority and close to 30 months after Judge Colleen Kollar-Kotelly ordered NSA to implement more oversight measures, including those spot checks. We know that rough date because the IG Report post-dates the January 8, 2007 initiation of the FISC-spying compartment and it reflects 10 dragnet order periods of up to 90 days apiece (see page 21). So the investigation in it should date to no later than February 8, 2007, with the final report finished somewhat later. It was completed by Brian McAndrew, who served as Acting Inspector General from the time Joel Brenner left in 2006 until George Ellard started in 2007 (but who also got asked to sign at least one document he couldn’t vouch for in 2002, again as Acting IG).

The IG Report is bizarre. It gives the NSA a passing grade on what it assessed.

The management controls designed by the Agency to govern the collection, dissemination, and data security of electronic communications metadata and U.S. person information obtained under the Order are adequate and in several aspects exceed the terms of the Order.

I believe that by giving a passing grade, the IG made it less likely his results would have to get reported in any but a routine manner, if even that (for example, to the Intelligence Oversight Board, which still wasn’t getting reporting on this program, and probably also to the Intelligence Committees, which didn’t start getting most documentation on this stuff until late 2008). But the report also admits it did not assess “the effectiveness of management controls[, which] will be addressed in a subsequent report.” (The 2011 report examined here identified previous PRTT reports, including this one, and that subsequent report doesn’t appear in any obvious form.) Then, having given the NSA a passing grade while deferring the most important part of the review, the IG notes that “additional controls are needed.”

And how.

As to the issue of the spot checks, mandated by the FISA Court and intended to prevent years of ongoing violations, the IG deems such checks “largely ineffective” because management hadn’t adopted a methodology for those spot checks. They appear to have just swooped in and checked queries already approved by an analyst’s supervisor, in what they called a superaudit.

Worse still, they didn’t write anything down.

As mandated by the Order, OGC periodically conducts random spot checks of the data collected [redaction] and monitors the audit log function. OGC does not, however document the data, scope, or results of the reviews. The purpose of the spot checks is to ensure that filters and other controls in place on the [redaction] are functioning as described by the Order and that only court authorized data is retained. [snip] Currently, an OGC attorney meets with the individuals responsible [redaction] and audit log functions, and reviews samples of the data to determine compliance with the Order. The attorney stated that she would formally document the reviews only if there were violations or other discrepancies of note. To date, OGC has found no violations or discrepancies.

So this IG review was done more than two years after Kollar-Kotelly had ordered these spot checks, during which period 18 spot checks should have been done. Yet at that point, NSA had no documentary evidence a single spot check had been done, just the say-so of the lawyer who claimed to have done them.

Keep in mind, too, that Oversight and Control were, at this point, implementing a new-and-improved spot-check process. That new-and-improved process is what the IG reviewed, because (of course) reviewers couldn’t review the old one: there was no documentation of it. And it’s the new-and-improved process that was inadequate to the task.

But that’s not the only problem the IG found in 2007. For example, the logs used in auditing did not accurately document what seed had been used for queries, which means you couldn’t review whether those queries really met the incredibly low bar of Reasonable Articulable Suspicion or whether they were pre-approved. Nor did they document how many hops out analysts chained, which means any given query could have sucked in a great many Americans (which might happen by the third or fourth hop) and thrown them into the corporate store for far more intrusive analysis. While the IG didn’t point this out directly, the management response made clear the log files also didn’t document whether a seed was a US person and therefore entitled to a First Amendment review. In short, NSA didn’t capture any — any!!! — of the data that would have been necessary to assess minimal compliance with FISC orders.
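To see why undocumented hop counts matter, here is a purely illustrative sketch (mine, not anything from the IG report): contact chaining is essentially a breadth-first expansion from a seed identifier, and each additional hop can multiply the number of people swept into the results.

```python
from collections import deque

def contact_chain(graph, seed, max_hops):
    """Breadth-first contact chaining: return every identifier within max_hops of the seed."""
    reached = {seed: 0}          # identifier -> hop count at which it was reached
    queue = deque([seed])
    while queue:
        node = queue.popleft()
        if reached[node] == max_hops:
            continue             # don't expand beyond the permitted hop count
        for contact in graph.get(node, ()):
            if contact not in reached:
                reached[contact] = reached[node] + 1
                queue.append(contact)
    return reached

# Toy graph (invented identifiers): if every identifier had ~100 contacts, three hops
# from a single seed could reach on the order of 100**3 = 1,000,000 identifiers.
toy = {
    "seed@example.com": ["a@example.com", "b@example.com"],
    "a@example.com": ["c@example.com"],
    "c@example.com": ["d@example.com"],
}
print(contact_chain(toy, "seed@example.com", max_hops=2))
# {'seed@example.com': 0, 'a@example.com': 1, 'b@example.com': 1, 'c@example.com': 2}
```

Without logging the seed used and the number of hops chained for each query, there is simply no way to audit how wide any of these expansions actually ran.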

NSA’s lawyers also didn’t have a solid list of everyone who had access to the databases (and therefore of who needed to be trained or informed of changes to the FISC order). The Program Management Office had a list that it periodically compared to who was actually accessing the data (though as made clear later in the report, that included just the analysts). And NSA’s Office of General Counsel would also periodically review to ensure those accessing the data had the information they needed to do so legally. But “the attorney conducting the review relie[d] on memory to verify the accuracy and completeness of the list.” DOD in general is wonderfully neurotic about documenting any bit of training a given person has undergone, but with the people who had access to Internet metadata documenting a great deal of Americans’ communications, NSA chose just to work from memory.

And this non-existent manner of tracking those with database access extended to auditing as well. The IG reported that NSA also didn’t track all queries made, such as those made by “those that have the ability to query the PRTT data but are not on the PMO list or who are not analysts.” While the IG includes people who’ve been given new authorization to query the data in this discussion, it’s also talking about techs who access the data. It notes, for example, “two systems administrators, who have the ability to query PRTT data, were also omitted from the audit report logs.” The thing is, as part of the 2009 “reforms,” NSA got approval to exempt techs from audits. I’ve written a lot about this but will return to it, as there is increasing evidence that the techs have always had the ability — and continue to have the ability — to bypass limits on the program.

There are actually far more problems reported in this short report, including details proving that — as I’ve pointed out before — NSA’s training sucks.

But equally disturbing is the evidence that NSA really didn’t give a fuck about the fact they’d left a database of a significant amount of Americans’ communications metadata exposed to all sorts of control problems. The disinterest in fixing this problem dates back to 2004, when NSA first admitted to Kollar-Kotelly they were violating her orders. They did an IG report at the time (under the guidance of Joel Brenner), but it did “not make formal recommendations to management. Rather, the report summarize[d] key facts and evaluate[d] responsibility for the violation.” That’s unusual by itself: for audits to improve processes, they are supposed to provide recommendations and track whether those are implemented. Moreover, while the IG (who also claimed the clusterfuck in place in 2007 merited a passing grade) assessed that “management has taken steps to prevent recurrence of the violation,” it also noted that NSA never really fixed the monitoring and change control processes identified as problems back in 2004. In other words, key problems IDed in 2004 were still unfixed.

As to this report? It did make recommendations, and management even concurred with some of them, going so far as to agree to document (!!) their spot checks in the future. With others — such as the recommendation that shift supervisors should not be able to make their own RAS determinations — management didn’t concur; they just said they’d monitor those queries more closely in the future. As to the report as a whole, here’s what McAndrew had to say about management’s response to a report showing the PRTT program was a clusterfuck of vulnerabilities: “Because of extenuating circumstances, management was unable to provide complete responses to the draft report.”

So in 2007, NSA’s IG demonstrated that the oversight over a program giving NSA access to the Internet metadata of a good chunk of all Americans was laughably inadequate.

And NSA’s management didn’t even bother to give the report a full response.

2011 Internet Dragnet Audit Didn’t Find Significant Violation Reported to IOB

This will be the second of three posts on the NSA IG’s failures to correct problems with the Internet (PRTT) dragnet. In the first, I showed how quickly NSA nuked the PRTT (or at least claimed to) after John Bates ruled, a second time, that NSA could not illegally wiretap the content of Americans’ communications. Here, I’ll examine another IG Report, completed earlier in 2011 and also liberated by Charlie Savage, that appears to show the PRTT dragnet was hunky dory just weeks before it became clear again that it was not.

The report (see PDF 4-23) must date to between March 15 and May 25, 2011. It was related to a series of reports on the phone dragnet (these appear to have been solicited or encouraged by Reggie Walton in the wake of the 2009 dragnet problems) that Savage liberated earlier this year. It lists all those reports on pages A-2 to A-3. But it lists the final, summary report in that series (ST-10-0004L) as a draft, dated March 15, 2011. The copy provided to Savage is the final, dated May 25, 2011 (see PDF 203).

The reason for doing this PRTT report is curious. The report notes “we began this review in [redacted, would be some time in summer 2009] but suspended it when NSA allowed the PR/TT Order to expire.” That is, this was the report that got started, but then halted, when someone discovered that every single record the NSA had collected under the program included categories of information violating the rules set by FISC in 2004.

But then NSA started a review of the phone dragnet covering all the activity in 2010 (reflected in monthly reports in Savage’s earlier release), so it decided to do a review of PRTT at the same time. But remember: the Internet dragnet was shut down until at least July 2010, when John Bates authorized its resumption, and it took some time to turn the dragnet back on. That means NSA conducted a review of a dragnet that was largely on hiatus or just resuming. The review period reflects few finalized reports based on either dragnet; indeed, it appears likely that there were no phone dragnet disseminations at all in August 2010 (see 155). There are probably two explanations for that. First, after Reggie Walton told NSA it had to start following the rules, the amount of intelligence NSA got from both the phone and Internet dragnets appears to have gone down. Second, we know that in 2011 NSA was training analysts to re-run queries that came up in both FISA and EO 12333 searches using EO 12333, so the results could be disseminated more broadly; it’s likely that a lot of what had been disseminated as FISA-authorized reports before 2009 (which didn’t always follow FISC’s rules) got disseminated as EO 12333-authorized reports afterward. Still, in the case of the Internet dragnet reviewed for this report, “the dissemination did not contain PR/TT-derived USP information,” so the IG “did not formally test dissemination objectives” (see footnote 1). None of the reports on the Internet dragnet reviewed for some period in 2010 included US person data.

So much for collecting all of Americans’ email records to catch Americans, I guess.

All that said, both the Internet and phone dragnet reviews found that the dragnets had adequate controls to fulfill the requirements of the FISC orders, but did say (this is laid out in unredacted form more explicitly in the phone dragnet report) that the manual monitoring of dissemination would become unworkable if analysts started using the dragnets more. The phone dragnet reports also suggest NSA wasn’t good at monitoring less formal disseminations (via email or conversation), and by the time of these summary reports, NSA was preparing to ask FISC to change the rules on reporting of non-US person dissemination. Overall in spring 2011, NSA’s IG found, the process worked according to the rules, but in part only because it was so little used.

That’s the assessment of the PRTT dragnet as of sometime between March and May 2011, less than 9 months before NSA would hastily nuke the dragnet, based mostly on a review of what NSA was doing during a period when the dragnet was largely inactive.

Which is all very interesting, because sometime before June 30, 2011 there was a PRTT violation that got reported — in a far more extensive description than the actual shutdown of the dragnet in 2009 — to the Intelligence Oversight Board (see PDF 10).

[Screenshot: excerpt of the Q2 2011 IOB report (PDF 10)]

There’s no mention of reporting to Congress on this, which is interesting because the PATRIOT Act was being reauthorized again during precisely this period, based on a notice, dated February 2, 2011, that the compliance problems were largely solved.

So here’s what happened: After having had its investigation shut down in fall 2009 because NSA had never been in compliance with limits on the PRTT dragnet, NSA’s IG tried again during a period when NSA wasn’t using the dragnet all that much. It gave NSA a clean bill of health no earlier than March 15, 2011. But by June 30, 2011, something significant enough to get reported to the IOB in two full paragraphs had happened.

It turns out things weren’t quite so hunky dory after all.

The NSA (Said It) Ate Its Illegal Domestic Content Homework before Having to Turn It in to John Bates

The question of whether NSA can keep its Section 215 dragnet data past November 28 has been fully briefed for at least 10 days, but Judge Michael Mosman has not yet ruled — at least not publicly. But given what the NSA IG Report on NSA’s destruction of the Internet dragnet says (liberated by Charlie Savage and available starting on PDF 60), we should assume the NSA may be hanging onto that data anyway.

This IG Report documents NSA’s very hasty decision to shut down the Internet dragnet and destroy all the data associated with it at the end of 2011, in the wake of John Bates’ October 3, 2011 opinion finding, for the second time, that if NSA knew it had collected US person content, it would be guilty of illegal wiretapping. And even with the redactions, it’s clear the IG isn’t entirely certain NSA really destroyed all those records.

The report adds yet more evidence to support the theory that the NSA shut down the PRTT program because it recognized the program amounted to illegal wiretapping; that evidence is laid out in the timeline and working notes below.

The report tells how, in early 2011, NSA started assessing whether the Internet dragnet was worth keeping in the form John Bates had approved in July 2010, which was more comprehensive and permissive than what got shut down around October 30, 2009. NSA would have had SPCMA running in its big analytical departments by then, plus FAA, so it would already have been obtaining those benefits over the PRTT dragnet. Then, on a date that remains redacted, the Signals Intelligence Division asked to end the dragnet and destroy all the data. That date has to post-date September 10, 2011 (roughly when the last dragnet order was approved), because SID was advising not to renew the order, meaning the request came entirely during the last authorization period. Given the redaction length it’s likely to be October (it appears too short to be September), but could be anytime before November 10. [Update: As late as October 17, SID was still working on a training program that covered PRTT, in addition to BRFISA, so it presumably post-dates that date.] That means the decision happened at virtually the same time as, or shortly after, John Bates again raised the problem of wiretapping violations under FISA Section 1809(a)(2) on October 3, 2011, just 15 months after having warned NSA about Section 1809(a)(2) violations with the PRTT dragnet.

The report explains why SID wanted to end the dragnet, though three of four explanations are redacted. If we assume the bullets were prioritized, the reason we’ve been given — that NSA could do what it needed to do with SPCMA and FAA — is only the third most important reason. The IG puts what seems like a non sequitur in the middle of that paragraph: “In addition, notwithstanding restrictions stemming from the FISC’s recent concerns regarding upstream collection, FAA §702 has emerged as another critical source for collection of Internet communications of foreign terrorists” (which seems to further support that the decision post-dated that ruling). Indeed, this is not only a non sequitur, it’s crazy. Everyone already knew FAA was useful. Which suggests it may not be a non sequitur at all, but instead something that follows from the redacted discussions.

Given the length of the redacted date (it is one character longer than “9 December 2011”), we can say with some confidence that Keith Alexander approved the end and destruction of the dragnet between November 10 and 30 — during the same period the government was considering appealing Bates’ ruling, close to the day — November 22 — NSA submitted a motion arguing that Section 1809(a)(2)’s wiretapping rules don’t apply to it, and close to the day, a week later, when it told John Bates it could not segregate the pre-October 31 dragnet data from the post-October 31 dragnet data.

Think how busy a time this already was for the legal and tech people, given the scramble to keep upstream 702 approved! And yet, at precisely the same time, they decided they should nuke the dragnet, and nuke it immediately, before the existing dragnet order expired, creating another headache for the legal and tech people. My apologies to the people who missed Thanksgiving dinner in 2011 dealing with both these headaches at once.

Not only did NSA nuke the dragnet, but they did it quickly. As I said, it appears Alexander approved nuking it November 10 or later. By December 9, it was gone.

At least, it was gone as far as the IG can tell. For the 5 parts of the dragnet (which appear to be the analyst-facing side) that the technical repository people handled, that process started on December 2, with the IG reviewing the “before” state, and ended mostly on December 7, with final confirmation happening on December 9, the day NSA would otherwise have had to obtain new approval of the dragnet. As to the intake side, those folks started destroying the dragnet before the IG could come by and check their before status:

However, S3 had completed its purge before we had the opportunity to observe. As a result we were able to review the [data acquisition database] purge procedures only for reasonableness; we were not able to do the before and after comparisons that we did for the TD systems and databases disclosed to us.

Poof! All gone, before the IG could even come over and take a look at what they actually had.

Importantly, the IG stresses that his team doesn’t have a way of proving the dragnet isn’t hidden somewhere in NSA’s servers.

It is important to note that we lack the necessary system accesses and technical resources to search NSA’s networks to independently verify that only the disclosed repositories stored PR/TT metadata.

That’s probably why the IG repeatedly says he is confirming purging of the data from all the “disclosed” databases (@nailbomb3 observed this point last night). Perhaps he’s just being lawyerly by including that caveat. Perhaps he remembers how he discovered in 2009 that every single record the NSA had received over the five year life of the dragnet had violated Colleen Kollar-Kotelly’s orders, even in spite of 25 spot checks. Perhaps the redacted explanations for eliminating the dragnet explain the urgency, and therefore raise some concerns. Perhaps he just rightly believes that when people don’t let you check their work — as NSA did not by refusing him access to NSA’s systems generally — there’s more likelihood of hanky panky.

But when NSA tells — say — the EFF, which was already several years into a lawsuit against the NSA for illegal collection of US person content from telecom switches, and which already had a 4-year-old protection order covering the data relevant to that suit, that this data got purged in 2011?

Even NSA’s IG says he thinks it was purged, but he can’t be sure.

But what we can be sure of is that, after John Bates gave NSA a second warning that he would hold them responsible for wiretapping if they kept illegally collecting US person content, the entire Internet dragnet got nuked within 70 days — gone!!! — all before anyone would have to check in with John Bates again in connection with the December 9 reauthorization and tell him what was going on with the Internet dragnet.

Update: Added clarification language.

Update: The Q2 2011 IOB report (covering the period through June 30, 2011) shows a 2-paragraph-long, entirely redacted violation (PDF 10), which probably represents a more substantive discussion than the systematic overcollection that shut down the system in 2009.


10 Goodies USA Freedom Act Gives the Intelligence Community

Since the Paris attack has turned much of our country into a shriveling pack of cowards, Republicans have ratcheted up claims that the USA Freedom Act will make us less safe. Those claims tend to be so ignorant they assert the law — passed in June but not fully implemented until a week from Sunday — prevented the Intelligence Community from stopping the Paris attack. That would not be possible for two reasons. First, the key provision hasn’t started yet (though some of the benefits for the IC have). Second, according to reports the network that carried out the Paris attack had no ties to the US, so the dragnet couldn’t have shown anything useful.

All that said, I thought both the fear-mongering and the imminent changeover made it a good time to update (and in a few places, correct) this post, which laid out 10 things the IC gets out of USAF.

1. Inclusion of cell and (probably) some Internet “calls” in chaining system

Since early 2014, intelligence sources have been leaking that the phone dragnet misses 70% of US calls. That number is probably an exaggeration (and doesn’t account for the significantly redundant collection NSA conducts under EO 12333). But there are probably several reasons why the old dragnet had incomplete coverage. First, providers that only keep cell records with location data attached could not be obligated to turn over those records under the existing program (when AT&T started turning over cell records in 2011, it stripped location data for the NSA voluntarily, but no providers were obligated to do so). In a declaration submitted in Larry Klayman’s challenge to the phone dragnet, NSA makes it clear the ability to demand production in the form NSA wants is one big difference in the program (as is having facilities onsite, which probably mirrors the PRISM program).

[Screenshot: excerpt from the NSA declaration in Klayman v. Obama]

In addition, USA Freedom is technology neutral; unlike phone dragnet orders, it does not limit collection to telephony calls, though it does limit collection to “phone companies,” which I presume includes handset makers Apple, Microsoft, and Google. This probably means the government will fill the gap in calls that has been growing of late, probably including VOIP and iMessage.

2. Addition of emergency provision for all Section 215 applications

Before USAF passed, there was a FISC-authorized emergency provision for the phone dragnet, but not for the rest of Section 215 production. That was a problem, because the most common use of Section 215 is for more targeted (though it is unclear how targeted it really is) Internet production, and the application process for Section 215 can be slow. USAF made emergency application procedures available for all kinds of Section 215 applications.

3. Creation of parallel construction loophole under emergency provision

Not only does USAF extend emergency provision authority to all Section 215 applications, but it changes the status quo FISC created in a way that invites abuse. That’s because, even if the FISC finds an agency collected records improperly under the emergency provision, the government doesn’t have to destroy those records. The law prohibits the use of “derivative” evidence in any proceeding, but there is abundant reason to believe the government finds ways to parallel construct evidence even under other laws with such limits on “derivative” evidence, so we should expect the same to happen here. The risk that the government will do this is not illusory; in the 18 months or so since FISC created this emergency provision, the court has already had reason to explicitly remind the government that even under emergency collection, it still can’t collect on Americans solely for First Amendment protected activities.

4. Chaining on “connections” rather than “calls,” which might be used to access unavailable smart phone data

Rather than chaining on calls made, USAF chains on “connections,” with Call Detail Record defined based on “session identifier.” This is probably intended to permit the government to obtain the call records of “correlated” identities, including things like all the records from a “Friends and Family” account. And while the House Report specifically prohibited some potentially troubling uses (like having providers chain on location information), in the era of smart phones and super cookies, the language of the bill leaves open the possibility of vastly expanded “connections.”

5. Elimination of pushback from providers

USAF gives providers two things they don’t get under existing Section 215: immunity and compensation. This will make it far less likely that providers will push back against even unreasonable requests. Given the parallel construction loophole in the emergency provisions and the potentially expansive uses of connection chaining, this is particularly worrisome.

6. Expansion of data sharing

Currently, chaining data obtained under the phone dragnet is fairly closely held. Only specially trained analysts at NSA may access the data returned from phone dragnet queries, and analysts must get a named manager to certify that the data is for a counterterrorism purpose to share outside that group of trained analysts. Under this new law, all the returned data will be shared — in full, apparently — with the NSA, CIA, and FBI. And the FBI is exempted from reporting on how many back door searches it does of this data.

Thus, this data, which would ostensibly be collected for a counterterrorism purpose, will apparently be available to FBI every time it does an assessment or opens up certain kinds of intelligence investigations, even for non-counterterrorism purposes. Furthermore, because FBI’s data sharing rules are much more permissive than NSA’s, this data will be able to be shared more widely outside the federal government, including with localities. Thus, not only will the new system draw from far more data, but it will also share the data it obtains far more broadly.

7. Mooting of court challenges

As we’ve seen in both ACLU v. Clapper and Klayman v. Obama, USAF mooted court challenges to the dragnet, including ones in which courts looked likely to rule the expansive “relevant to” based collections unconstitutional. In addition, the law may moot EFF’s First Unitarian Church v. NSA challenge to the dragnet, which of all the challenges is most likely to get at some of the underlying constitutional problems with the dragnet.

8. Addition of 72-hour spying provisions

In addition to the things the IC got related to its Section 215 spying, there are three unrelated things the House added. First, the law authorized the “emergency roamer” authority the IC has been asking for since 2013. It permits the government, with Attorney General authorization, to continue spying on a legitimate non-US target for up to 72 hours if he enters the US. While in practice the IC often misses these roamers until after this window, this will save the IC a lot of paperwork and bring down its violation numbers.

9. Expansion of proliferation-related spying

USAF also expanded the definition of “foreign power” under FISA to include not just those proliferating weapons of mass destruction, but also those who “knowingly aid or abet” or “conspire” with those doing so. This will make it easier for the government to spy on more Iran-related targets (and similar such targets) in the US.

10. Lengthening of Material Support punishments

In perhaps the most gratuitous change, USAF lengthened the potential sentence for someone convicted of material support for terrorism — which, remember, may be no more than speech! — from 15 years to 20. I’m aware of no real need to do this (except, perhaps, to more easily coerce people to inform for the government). But it is clearly something someone in the IC wanted.

Let me be clear: some of these provisions (like permission to chain on Internet calls) will likely make the chaining function more useful and therefore more likely to prevent attacks, even if that will also expose more innocent people to expanded spying. Some of these provisions (like the roamer provision) are fairly reasonably written. Some (like the changes from the status quo in the emergency provision) are hard to understand as anything but clear intent to break the law, particularly given IC intransigence about fixing obvious problems with the provision as written. I’m not claiming that all of these provisions are bad for civil liberties (though a number are very bad). But all of them are (or were, for those that have already gone into force) clear expansions on the authorities and capabilities the IC used to have.

The Reasons to Shut Down the (Domestic) Internet Dragnet: Purpose and Dissemination Limits, Correlations, and Functionality

Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.

It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:

  • Purpose and dissemination limits
  • Correlations
  • Functionality

Purpose and dissemination limits

Both the domestic Internet and phone dragnets limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in their pre-2009-shutdown incarnation), both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required someone to sign off that counterterrorism was the reason information from the dragnet was being disseminated.

Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.

But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.

Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.

The primary new responsibility is the requirement:

  • to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]

Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333-collected data and applied the protections for the latter authority to both (particularly with regard to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.

That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, it made the access rules for SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.

So one thing that happened with the Internet dragnet is that it had initial limits on purpose and on who could access it. Along the way, NSA cheated those open, by arguing that people in different functional areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all of NSA’s functional areas. A key part of FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think the court might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, it had not done so before.

The other thing that happened is that, starting in 2008, the government gave FBI a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.

In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.

FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.

Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.

In this same passage, the definition of metadata is curious.

For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.

I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).

Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.

So one big improvement of SPCMA (and, to a lesser degree, 702) over the old domestic Internet dragnet (improvement, of course, from a dragnet-loving perspective) is that the government can use it for any foreign intelligence purpose.

Several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regard to emergency searches), but it is, ostensibly at least, limited to counterterrorism.

One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.

Correlations

Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of that Internet communication from providers, but also the kinds of things they would obtain with a subpoena (and probably far more). As I’ve shown, here are the kinds of things you’d almost certainly get from Google (because that’s what you get with a few subpoenas) under 702 that you’d have to correlate using algorithms under the old Internet dragnet.

  • a primary gmail account
  • two secondary gmail accounts
  • a second name tied to one of those gmail accounts
  • a backup email (Yahoo) address
  • a backup phone (unknown provider) account
  • Google phone number
  • Google SMS number
  • a primary login IP
  • 4 other IP logins they were tracking
  • 3 credit card accounts
  • Respectively 40, 5, and 11 Google services tied to the primary and two secondary Google accounts, many of which would be treated as separate, correlated identifiers

Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
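By way of illustration only (nothing here comes from NSA documents, and every identifier is invented), that correlation work is essentially a connected-components problem: each observed link between two identifiers, whether handed over by a provider or inferred from traffic, gets merged into a single presumed identity. A minimal union-find sketch of the idea:

```python
class Correlator:
    """Toy union-find that clusters identifiers (emails, phone numbers, IPs,
    cookies) observed to be tied to one another into one presumed identity."""

    def __init__(self):
        self.parent = {}

    def _find(self, x):
        self.parent.setdefault(x, x)
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]  # path halving
            x = self.parent[x]
        return x

    def link(self, a, b):
        """Record an observed tie between identifiers a and b."""
        self.parent[self._find(a)] = self._find(b)

    def same_identity(self, a, b):
        return self._find(a) == self._find(b)

# All identifiers below are invented, standing in for the kinds listed above.
c = Correlator()
c.link("primary@gmail.example", "backup@yahoo.example")  # backup email on the account
c.link("primary@gmail.example", "+1-555-0100")           # backup phone number
c.link("+1-555-0100", "203.0.113.7")                     # login IP tied to the phone
print(c.same_identity("backup@yahoo.example", "203.0.113.7"))  # True
```

The point of the list above is that a single 702 response from a provider hands the government many of those links pre-made, whereas the old Internet dragnet forced it to infer them algorithmically from metadata.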

This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.

Functionality

While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.

The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.

In addition, it’s clear that at least some of the experimental geolocation implementations incorporated SPCMA data.

I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.

Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.

1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]
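To make the quoted description concrete, here is a minimal, purely illustrative sketch of a co-traveler analytic of the general kind CHALKFUN is described as being: it buckets sightings of phones by network location and flags identifiers seen in the same place as the target within an hour. The names and data are hypothetical; this is not CHALKFUN’s actual implementation, and it omits the event-sampling optimization the document mentions.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def co_travelers(sightings, target, window=timedelta(hours=1)):
    """Return identifiers seen at the same network location as `target` within `window`.

    `sightings` is a list of (identifier, location, timestamp) tuples, where the
    location might be a VLR or cell ID.
    """
    by_location = defaultdict(list)
    for ident, loc, ts in sightings:
        by_location[loc].append((ident, ts))

    candidates = defaultdict(int)  # identifier -> number of co-located sightings
    for ident, loc, ts in sightings:
        if ident != target:
            continue
        for other, other_ts in by_location[loc]:
            if other != target and abs(other_ts - ts) <= window:
                candidates[other] += 1
    return sorted(candidates.items(), key=lambda kv: -kv[1])

# Toy data (invented): one phone repeatedly shows up near the target within the hour window.
base = datetime(2011, 11, 1, 9, 0)
data = [
    ("target-imsi", "VLR-A", base),
    ("other-imsi",  "VLR-A", base + timedelta(minutes=20)),
    ("target-imsi", "VLR-B", base + timedelta(hours=3)),
    ("other-imsi",  "VLR-B", base + timedelta(hours=3, minutes=40)),
    ("bystander",   "VLR-C", base),
]
print(co_travelers(data, "target-imsi"))  # [('other-imsi', 2)]
```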

Now, aside from what this says about the dragnet database generally (it confirms there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes clear there is a way to geolocate US persons, because the entire point of SPCMA is to be able to analyze data including US persons, without any limits on their location (meaning they could be in the US).

That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still permits) NSA to track who is traveling with whom using location data.

Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.

Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like Supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.

All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its current form (which is also true of the phone dragnet in its current form now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.

Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, but also the inclusion of all Americans.

How the Government Uses Location Data from Mobile Apps

[Screenshot: ACLU map of state-by-state standards for law enforcement access to location data]

The other day I looked at an exchange between Ron Wyden and Jim Comey that took place in January 2014, as well as the response FBI gave Wyden afterwards. I want to return to the reason I was originally interested in the exchange: it reveals that FBI, in addition to obtaining cell location data directly from a phone company or a Stingray, will sometimes get location data from a mobile app provider.

I asked Magistrate Judge Stephen Smith from Houston whether he had seen any such requests — he’s one of a group of magistrates who have pushed for more transparency on these issues. He explained he had had several hybrid pen/trap/2703(d) requests for location and other data targeting WhatsApp accounts. And he had one fugitive probation violation case where the government asked for the location data of those in contact with the fugitive’s Snapchat account, based on the logic that he might be hiding out with one of the people who had interacted with him on Snapchat. The providers would basically be asked to turn over the cell site location information they had obtained from the users’ phones along with other metadata about those interactions. To be clear, this is not location data the app provider generates; it is the location data the phone company generates, which the app accesses in the normal course of operation.

The point of getting location data like this is not to evade a particular jurisdiction’s standards on CSLI. Smith explained, “The FBI apparently considers CSLI from smart phone apps the same as CSLI from the phone companies, so the same legal authorities apply to both, the only difference being that the ‘target device’ identifier is a WhatsApp/Snapchat account number instead of a phone number.” So in jurisdictions where you can get location data with an order, that’s what it takes; in jurisdictions where you need a probable cause warrant, that’s what it will take. The map above, which ACLU makes a great effort to keep up to date here, shows how jurisdictions differ on the standards for retrospective and prospective location information, which is what (as far as we know) will dictate what it would take to get, say, CSLI data tied to WhatsApp interactions.

Rather than a way to get around legal standards, getting CSLI from the app provider rather than from the phone company that originally produces it is a way to get location data from both sides of a conversation, not just the target phone. That is, the app provides valuable context for the location data that you wouldn’t get just from the target’s cell location data.

The fact that the government is getting location data from mobile app providers — and the fact that they comply with the same standard for CSLI obtained from phones in any given jurisdiction — may help to explain a puzzle some have been pondering for the last week or so: why Facebook’s transparency report shows a big spike in wiretap warrants last year.

[T]he latest government requests report from Facebook revealed an unexpected and dramatic rise in real-time interceptions, or wiretaps. In the first six months of 2015, US law enforcement agencies sent Facebook 201 wiretap requests (referred to as “Title III” in the report) for 279 users or accounts. In all of 2014, on the other hand, Facebook only received 9 requests for 16 users or accounts.

Based on my understanding of what is required, this access of location data via WhatsApp should appear in several different categories of Facebook’s transparency report, including 2703(d), trap and trace, emergency request, and search warrant. That may include wiretap warrants, because this is, after all, prospective interception, and not just of the target, but also of the people with whom the target communicates. That may be why Facebook told Motherboard “we are not able to speculate about the types of legal process law enforcement chooses to serve,” because it really would vary from jurisdiction to jurisdiction and possibly even judge to judge.

In any case, we can be sure such requests are happening both on the criminal and the intelligence side, and perhaps most productively under PRISM (which could capture foreign to domestic communications at a much lower standard of review). Which, again, is why any legislation covering location data should cover the act of obtaining location data, whether via the phone company, a Stingray, or a mobile app provider.

Brennan Was Probably Talking about the Telegram PRISM Gap as Much as Encryption

I noted the other day that at a pre-scheduled appearance Monday, Josh Rogin cued John Brennan to explain how the Paris attack happened without warning. In my opinion, the comment has been badly misreported as an indictment solely of Edward Snowden (though it is that) and encryption. I’ve put the entire exchange below, but the key passage was this:

And as I mentioned, there are a lot of technological capabilities that are available right now that make it exceptionally difficult, both technically as well as legally, for intelligence and security services to have the insight they need to uncover it. And I do think this is a time for particularly Europe, as well as here in the United States, for us to take a look and see whether or not there have been some inadvertent or intentional gaps that have been created in the ability of intelligence and security services to protect the people that they are asked to serve. And in the past several years because of a number of unauthorized disclosures and a lot of handwringing over the government’s role in the effort to try to uncover these terrorists, there have been some policy and legal and other actions that are taken that make our ability collectively internationally to find these terrorists much more challenging. And I do hope that this is going to be a wake-up call, particularly in areas of Europe where I think there has been a misrepresentation of what the intelligence security services are doing by some quarters that are designed to undercut those capabilities.

Brennan talks about technology that makes it difficult technically and legally to uncover plots. Encryption is a technical problem — one the NSA has proven its ability to overcome — that might be called a legal one if you ignore that NSA has the ability to overcome the lack of a legal requirement to provide back doors. But I agree this passage speaks to encryption, if not other issues.

In the next sentence, though, he talks about inadvertent or intentional gaps created “particularly in Europe.” He talks about plural unauthorized disclosures — as I noted, Josh Rogin’s own disclosure that the US had broken AQAP’s online conferencing technique may have been more directly damaging than most of Snowden’s leaks —  and “handwringing.” Those have led to “policy and legal and other actions” that have made it harder to find terrorists. In the next sentence, Brennan again emphasizes that “particularly in areas of Europe,” there needs to be a “wake-up call” because “there has been a misrepresentation” of what the spooks are doing, which he suggests was deliberately “designed to undercut those capabilities.”

So in the paragraph where he speaks of these problems, he twice emphasizes that Europe in particular needs to adjust its approach.

Last I checked, Europe didn’t pass the USA Freedom Act (which would not, in any way, have restricted review of Parisian targeters). Some countries in Europe are more vigorously considering limits on encryption, but those would be ineffective, since they can’t eliminate the code that’s already out there.

What Europe has done, however, is make it harder for our PRISM providers to share data back and forth between Europe and the US (and with providers considering moving servers to Europe, that will raise new questions about the applicability of PRISM to that data). And Europe (not just Europe, but definitely including Europe) has created a market need for US tech companies to distance themselves from the government.

And in the case of Germany, politicians have been investigating how much its BND has done for NSA, and especially which German people and companies were impermissibly targeted as part of the relationship. I noted that Brennan raised similar issues just days after the BND investigation turned scandalous in March, and recent revelations have put new pressure on BND.

With that in mind, consider in particular what one of the more responsible reports on Brennan’s speech, that of Shane Harris, focused on: terrorists’ use of the Berlin-headquartered social messaging app Telegram. If terrorists were using WhatsApp (which a lot of the fearmongering focused on), the metadata, at least, would be available via Facebook. But since Telegram is not a US company, it cannot be obliged under Section 702 of FISA, and that surely creates just the kind of gap Brennan was talking about.

Since Brennan’s speech, Telegram has started deleting the special channels set up by ISIS to communicate.

I’m sure Brennan is complaining about encryption, and if he can get Congress to force domestic back doors, he will (though ISIS reportedly shies away from Apple products, so forcing Apple to give up its encrypted iMessage won’t help track down ISIS). But his speech seemed focused much more intently on ways in which, in the aftermath of the Snowden leaks, Europeans have opportunistically localized data and, in the process, made that data far less accessible to the NSA. Brennan, as I made clear in March, definitely would prefer the Europeans rely on Americans for their SIGINT (and in the process agree to some inappropriate spying in their home country), and the gap created by terrorists’ reliance on Telegram is one way to exert pressure on that point.


Surveillance Hawk Stewart Baker Confirms Dragnet Didn’t Work as Designed

The French authorities are just a day into investigating the horrid events in Paris on Friday. We’ll know, over time, who did this and how they pulled it off. For that reason, I’m of the mind to avoid any grand claims that surveillance failed to find the perpetrators. (Thus far, French authorities say they know one of the attackers, a French guy they had IDed as an extremist, but did not know of the people identified by passports found at the Stade de France. Predictably, those passports have now been confirmed to be fake [update: authorities now say the Syrian one is genuine, though it’s not yet clear it belonged to the attacker], so authorities may yet turn out to know their real identities.) In any case, Glenn Greenwald takes care of that here. I think it’s possible the terrorists did manage to avoid detection via countersurveillance, though the key ways they might have done so were available and known before Edward Snowden’s leaks (as Glenn points out).

But there is one claim by a surveillance hawk that deserves a response. That’s former DHS and NSA official Stewart Baker’s claim that because of this attack we shouldn’t stop the bulk collection of US persons’ phone metadata.

[Screenshot of Stewart Baker’s claim]

The problem with this claim is that the NSA has a far more extensive dragnet covering the Middle East and Europe than it does on Americans. It can and does bulk collect metadata overseas without the restrictions that existed for the Section 215 dragnet. In addition to the metadata of phone calls and Internet communications, it can collect GPS location, financial information, and other metadata scraped from the content of communications.

The dragnet covering these terrorists is the kind of dragnet the NSA would love to have on Americans, if Americans lost all concern for their privacy.

And that’s just what the NSA (and GCHQ) have. The French have their own dragnet. They already had permission to hold onto metadata, but after the Charlie Hebdo attacks, they expanded their ability to wiretap without court approval. So the key ingredients to a successful use of the metadata were there: the ability to collect the metadata and awareness that one of the people was someone of concern.

The terrorists may have used encryption and therefore made it more difficult for authorities to get to the content of their Internet communications (though at this point, any iPhone encryption would only be stalling investigators after the fact).

But their metadata should still have been available. There’s no good way to hide metadata, which is why authorities find metadata dragnets so useful.
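
To make that point concrete, here’s a minimal sketch (in Python, using made-up identifiers and a hypothetical record layout, not any actual provider’s schema) of why encrypting content doesn’t help here: the fields a carrier needs to deliver a message stay visible, and contact chaining works on those fields alone.

```python
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime

@dataclass
class MessageRecord:
    # Fields the carrier needs to route the message -- visible, encryption or not.
    sender: str
    recipient: str
    timestamp: datetime
    size_bytes: int
    # The only thing end-to-end encryption hides.
    ciphertext: bytes

# Hypothetical traffic between made-up identifiers.
records = [
    MessageRecord("alice", "bob",   datetime(2015, 11, 13, 20, 15), 412, b"<opaque>"),
    MessageRecord("bob",   "carol", datetime(2015, 11, 13, 20, 17), 128, b"<opaque>"),
    MessageRecord("alice", "carol", datetime(2015, 11, 13, 20, 20), 933, b"<opaque>"),
]

# Contact chaining on metadata alone: who talked to whom, and when.
contacts = defaultdict(set)
for r in records:
    contacts[r.sender].add(r.recipient)
    contacts[r.recipient].add(r.sender)

for person, network in sorted(contacts.items()):
    print(f"{person} was in contact with: {', '.join(sorted(network))}")
```

None of that reveals what was said, but it is exactly the kind of record a metadata dragnet is built to exploit, and none of it is hidden by encrypting the message body.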

French authorities knew of at least one of these guys and therefore would have been able to track his communication metadata. And both the Five Eyes and France have metadata dragnets restricted only by technology, so they might have been able to ID the network that carried out this attack.

Stewart Baker claims that Section 215 was designed to detect a plot like this. But the metadata dragnet covering France and the Middle East is even more comprehensive than Section 215 ever was. And it didn’t detect the attack (it also didn’t detect the Mumbai plot, even though, or likely because, one of our own informants was a key player in it). So rather than being a great argument for why we need to keep a dragnet that has never once prevented an attack in the US, Baker’s quip is actually proof that the dragnets don’t work as promised.

 

It’s Harder for FBI to Get Location Data from Phone Companies Under FISA than Other Ways

I was looking for something else on Ron Wyden’s website yesterday and noticed this exchange between Wyden and Jim Comey from January 29, 2014 (see my transcription below). At first it seemed to be another of Wyden’s persistent questions about how the government collects location data, which we generally assume to be via telephone providers or Stingrays, but then I realized he was asking something somewhat different. After asking about Cell Site Location Information from phone companies, Wyden asked whether the FBI uses the same standard (an order, presumably a Pen Register) when collecting location data from a smart phone app.

Oh yeah! The government can collect location information via apps (and thereby from Google, WhatsApp, and other providers) as well.

Here’s the FBI’s response, which hasn’t been published before.

The response is interesting for several reasons, some of which may explain why the government hasn’t been getting all the information from cell phones that it wanted under the Section 215 phone dragnet.

First, when the FBI is getting prospective CSLI, it gets a full FISA order, based on a showing of probable cause (it can get historical data using just an order). The response to Wyden notes that while some jurisdictions permit obtaining location data with just an order, because others require warrants, “the FBI elects to seek prospective CSLI pursuant to a full content FISA order, thus matching the higher standard imposed in some U.S. districts.”

Some of this was discussed in 2006, in response to some magistrates’ rulings that you needed more than an order to get location data, though there are obviously more recent precedents that are stricter about needing a warrant.

This means it is actually harder right now to get prospective CSLI under FISA than it is under Title III in some states. (The letter also notes that sometimes the FBI “will use criminal legal authorities in national security investigations,” which probably means the FBI does so in those states with a lower standard.)

The FBI’s answer about smart phone apps was far squirrelier. It did say that when obtaining information from the phone itself, it gets a full-content FISA order, absent any exception to the Fourth Amendment (such as the border exception, which is one of many reasons the FBI loves to search phones at the border and therefore hates Apple’s encryption). Note that this March 6, 2014 response came before the June 24, 2014 Riley v. California decision that required a warrant to search a cell phone, which means FISA was on a higher standard there, too, until SCOTUS caught up.

But as to getting information from smartphone apps itself, here’s what FBI answered.

Which legal authority we would use is very much dependent upon the type of information we are seeking and how we intend to obtain that information. Questions considered include whether or not the information sought would target an individual in an area in which that person has a reasonable expectation of privacy, what type of data we intend to obtain (GPS or other similarly precise location information), and how we intend to obtain the data (via a request for records from the service provider or from the mobile device itself).

In other words, after having thought about how to answer Wyden for five weeks rather than the one week they had promised, they didn’t entirely answer the question, which was what it would take for the FBI to get information from apps, rather than from cell phone providers, though I think that may be the same standard as for CSLI from a cell phone company.

But this seems to say that, in the FISA context, it may well be easier, and require a lower standard of evidence, for the FBI to get location data from a Stingray.

This explains why Wyden’s location bill (which he was pushing just the other day, after the Supreme Court refused to take Quartavious Davis’ appeal) talks about location collection generally, rather than collection using (for example) a Stingray.


Wyden: I’d like to ask you about the government’s authority to track individuals using things like cell site location information and smart phone applications. Last fall the NSA Director testified that “we–the NSA–identify a number we can give that to the FBI. When they get their probable cause then they can get the locational information they need.”

I’ve been asking the NSA to publicly clarify these remarks but it hasn’t happened yet. So, is the FBI required to have probable cause in order to acquire Americans’ cell site location information for intelligence purposes?

Comey: I don’t believe so Senator. We — in almost all circumstances — we have to obtain a court order but the showing is “a reasonable basis to believe it’s relevant to the investigation.”

Wyden: So, you don’t have to show probable cause. You have cited another standard. Is that standard different if the government is collecting the location information from a smart phone app rather than a cell phone tower?

Comey: I don’t think I know, I probably ought to ask someone who’s a little smarter what the standard is that governs those. I don’t know the answer sitting here.

Wyden: My time is up. Can I have an answer to that within a week?

Comey: You sure can.

The Proliferation-as-Terrorism Rule

Last week, the Chairman of the House Homeland Security Committee tried to get Assistant Secretary of State Anne Patterson to list Iran’s Revolutionary Guard Corps as a terrorist organization.

Rep. Michael McCaul (R., Texas) pressed Anne Patterson, assistant secretary of state in the bureau of near eastern affairs, during a hearing last week on Iran’s rogue activities.

Since the nuclear deal, “Iran has taken several provocative actions, including ballistic missile tests, the jailing of Americans on frivolous charges, and support for terrorist activities via the IRGC, the Iranian Revolutionary Guard Corps,” McCaul said.

The corps has been linked to terrorist operations across the Middle East and beyond, including arming terror proxy groups fighting against the United States and Israel.

“I sent a letter to the president of the United States requesting that the IRGC be placed on the Foreign Terrorist Organization list because they are the terror arm of Iran,” McCaul said. “This would not lift the sanctions. It would keep the sanctions in place on the very terrorist activities that Iran wants to take the $100 billion and ship them toward these activities. What is your response to whether or not designating the IRGC as an FTO [foreign terrorist organization], whether that is a good decision?”

Patterson sidestepped the question, but said that the State Department does not think the group can legally be categorized as a terrorist organization.

“I can’t answer that question, Mr. McCaul,” Patterson said. “I’ll have to get back to you. I would not think they would meet the legal criteria, but I don’t really know.”

Now, I’m not actually interested in getting the IRGC listed as a terrorist organization, particularly not for arming militias, because I think that would be a very bad precedent for the world’s biggest arms proliferator. Moreover, I’m sure Patterson sees this effort as another attempt to squelch efforts for peace with Iran.

But I am interested in her squirming given that for some years — we don’t know how many, but there was a new group approved in June 2007 and another approved in July 2009, so probably at least 6 years — the NSA has targeted Iran using the counterterrorism phone dragnet. So the government has convinced a FISC judge that IRGC (or Iran more generally) is a terrorist group. But now the State Department is telling us they’re not.

Up until USA F-ReDux passed this year, when Congress extended the proliferation-related definition of a foreign power under FISA to include those aiding or conspiring with those actually doing the proliferation, the government seems to have always pushed who could be spied on well beyond the definitions in the law (there appears to have been a non-NSA certificate for it under the Protect America Act, for example). That extends to the phone dragnet, and does so in such a way that probably includes a lot of American businesses.

And, Patterson’s dodges notwithstanding, the government hasn’t been above calling Iran a terrorist organization to do it.