UNITEDRAKE and Hacking under FISA Orders

As I noted yesterday, along with the encrypted files you have to pay for, on September 6, Shadow Brokers released the manual for an NSA tool called UNITEDRAKE.

As Bruce Schneier points out, the tool has shown up in released documents on multiple occasions — in the catalog of TAO tools leaked by a second source (not Snowden) and released by Jacob Appelbaum, and in three other Snowden documents (one, two, three) talking about how the US hacks other computers, all of which first appeared in Der Spiegel’s reporting (one, two, three). [Update: See ElectroSpaces comments about this Spiegel reporting and its source.]

The copy, as released, is a mess — it appears to have been altered in an open source graphics program and then re-saved as a PDF. Along with the classification marks, the margins and the address of the company behind it appear to have been altered.

The NSA is surely doing a comparison with the real manual (presumably as it existed at the time it may have been stolen) in an effort to understand how and why it got manipulated.

I suspect Shadow Brokers released it as a message to those pursuing him as much as to entice more Warez sales, for the reasons I lay out below.

The tool permits NSA hackers to track and control implants, doing things like prioritizing collection, controlling when an implant calls back and how much data is collected at a given time, and destroying an implant and the associated UNITEDRAKE code (PDF 47 and following includes descriptions of these functions).

It includes doing things like impersonating the user of an implanted computer.

Depending on how dated this manual is, it may demonstrate that Shadow Brokers knows what ports the NSA will generally use to hack a target, and what code might be associated with an implant.

It also makes clear, at a time when the US is targeting Russia’s use of botnets, that the NSA carries out its own sophisticated bot-facilitated collection.

Finally of particular interest to me, the manual shows that UNITEDRAKE can be used to hack targets of FISA orders.

To use it to target people under a FISA order, the NSA hacker would have to enter both the FISA order number and the date the FISA order expires. After that point, UNITEDRAKE will simply stop collecting off that implant.

Note, I believe that — at least in this deployment — these FISA orders would be strictly for use overseas. One of the previous references to UNITEDRAKE describes doing a USSID-18 check on location.

SEPI analysts validate the target’s identity and location (USSID-18 check), then provide a deployment list to Olympus operators to load a more sophisticated Trojan implant (currently OLYMPUS, future UNITEDRAKE).

That suggests this would be exclusively EO 12333 collection — or collection under FISA 704/705(b) orders.

But the way in which UNITEDRAKE is used with FISA is problematic. Note that it doesn’t include a start date. So the NSA could collect data from before the period when the court permitted the government to spy on the target. If an American were targeted only under Title I (permitting collection of data in motion, and therefore only prospective data), they’d automatically qualify for 705(b) targeting with Attorney General approval if they traveled overseas. Using UNITEDRAKE on — say — the laptop they brought with them would allow the NSA to exfiltrate historic data, effectively collecting on a person from a time when they weren’t targeted under FISA. I believe this kind of temporal problem explains a lot of the recent problems NSA has had complying with 704/705(b) collection.
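The temporal gap is easy to see in sketch form. This is a minimal illustration, not anything from the manual itself; the function names and the bounded variant are my own, showing what an expiry-only check does and does not prevent:

```python
from datetime import date

def collection_allowed(order_expiry: date, today: date) -> bool:
    # Expiry-only gate, as the manual describes: collection simply
    # stops after the order expires, but nothing excludes data that
    # predates the order taking effect.
    return today <= order_expiry

def collection_allowed_bounded(order_start: date, order_expiry: date,
                               data_timestamp: date) -> bool:
    # Hypothetical bounded gate: also excludes historic data created
    # before the order's start date. The manual describes no such check.
    return order_start <= data_timestamp <= order_expiry
```

Under the first gate, a file written years before the order is fair game so long as the order hasn’t yet expired; only something like the second gate would prevent retrospective collection.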

In any case, Shadow Brokers may or may not have UNITEDRAKE among the files he is selling. But what he has done by publishing this manual is tell the world a lot of details about how NSA uses implants to collect intelligence.

And very significantly for anyone who might be targeted by NSA hacking tools under FISA (including, presumably, him), he has also made it clear that with the click of a button, the NSA can pretend to be the person operating the computer. This should create real problems for using data hacked by NSA in criminal prosecutions.

Except, of course, especially given the provenance problems with this document, no defendant will ever be able to use it to challenge such hacking.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

FBI Imagines Using Assessments to Recruit US Engineers for Insight into Spying in Semiconductor Industry

For something else, I’m reviewing the section of the FBI Domestic Investigations and Operations Guide on assessments made available in unredacted form to the Intercept. Of particular interest are the scenarios the DIOG uses to explain whether an Agent would or could use an assessment to collect information without opening a preliminary investigation. One way the FBI uses assessments is to identify potential informants. As one of the scenarios for when it might do so, it uses the example of trying to find out about a particular country’s (“country X’s”) targeting of engineers and high tech workers involved in the production of semiconductor chips. For an engineer who travels frequently to country X, the FBI might either target him or try to recruit him (see page 117).

This is important for two reasons. First, the FBI is permitted to search its own databases to conduct this assessment. That would include information collected via Section 702. So when people talk about the risks of back door searches, it could mean a completely innocent engineer getting targeted for recruitment as an informant.

The other reason this is important is because it is precisely what appears to have happened with Professor Xiaoxing Xi, who was falsely accused of sharing semiconductor technology with China. After Xi and his attorney Peter Zeidenberg explained to the FBI that they had badly misunderstood the technology they were looking at, the case against Xi was dismissed.

In fact, Xi claims in a lawsuit against the government that the emails on which the case was built were improperly searched using Section 702 or EO 12333.

On information and belief, both before and after obtaining the FISA orders, defendant Haugen and/or Doe(s) caused the interception of Professor Xi’s communications, including his emails, text messages, and/or phone calls, without obtaining a warrant from any court. In conducting this surveillance, the defendants may have relied on the purported authority of Section 702 of FISA or Executive Order 12333. Although neither Section 702 nor Executive Order 12333 permits the government to “target” Americans directly, the government nonetheless relies on these authorities to obtain without a warrant the communications of Americans who are in contact with individuals abroad, as Professor Xi was with his family and in the course of his scientific and academic work.

On information and belief, defendant Haugen and/or defendant Does searched law enforcement databases for communications of Professor Xi that the government had intercepted without a warrant, including his private communications intercepted under Section 702 of FISA and Executive Order 12333, and examined, retained, and/or used such communications.

[snip]

The actions of defendants Haugen and/or Doe(s) in searching law enforcement databases for, examining, retaining, and using Professor Xi’s communications, including his emails, text messages, and/or phone calls, that were obtained without a warrant, and without notice to Professor Xi, violated Professor Xi’s clearly established constitutional rights against unlawful search and seizure and his right to privacy under the Fourth Amendment.

Given how closely this scenario matches his own case, I’d say the chances his emails were first identified via a back door search are quite high. Note, too, that Temple University, where he works, has its email provided by Google, meaning these emails might be available via PRISM.

Of additional interest, the one description of sensitive potential Confidential Human Sources that is redacted in the officially released DIOG but revealed in the Intercept copy is academic personnel (see page 112).

So they will recruit professors like Professor Xi as informants — they would just require special approval to do so.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

The 702 Compliance Reporting

This will be a very weedy post on two quarterly reports on 702 compliance released to ACLU under FOIA: March 2014 and March 2015; the March reports both cover the December 1 through February 28 period. ACLU obtained them not by FOIAing quarterly compliance reporting directly. Rather, ACLU asked for all the documents referred to in this Summary of Notable Section 702 Requirements, which it had received earlier. But the released copies are entirely useless in elucidating the Notable Requirements. The 2015 report, for example, was provided in part to explain how NSA assesses whether a selector will provide foreign intelligence information, but the section of the report that deals with it (item 28 on page 46) has been withheld entirely (see the break between PDF 8 and 9). In addition, there must be at least one more citation to it that is redacted in the Notable Requirements document. The reference(s) to the 2014 report are entirely redacted.

There are a few places such redacted references to the two reports might be:

  • There’s a missing citation in Pre- and Post-Tasking Due Diligence (the redaction at the bottom of page 2).
  • There may be a citation missing in the continued assessment section at the bottom of page 4.
  • There’s definitely one missing in the Obligation to Review section (page 5).
  • There’s likely to be one in the long redacted passage on page 6 pertaining to resolving post-tasking problems as quickly as possible.
  • And the sole footnote in the Summary (see page 11) has a reference, likely the one on FBI techniques to analyze Section 702 information that the government identified as being withheld in its entirety.

So the Compliance reports don’t help us — at all — to understand the requirements the government places on itself with respect to 702.

But they do show us, in more granular detail than shows up in the Semiannual reports (this one includes the March 2014 period and this one the March 2015 period), the kinds of things that show up in the compliance reviews. The compliance reporting in both is generally organized into the same sections (see page 29):

  • Tasking Issues
  • Detasking Issues
  • Notification Delays
  • Documentation Issues
  • Overcollection
  • Minimization
  • Other

And — as the Semiannual Report makes clear — we’re just seeing a fraction of the granular descriptions in the quarterly reports, because we’re not seeing the tasking, detasking, notification, or documentation issues. That means the unredacted content in the released reports represents less than 20% of the total number of compliance incidents for these two quarters.

Though we may be able to use the reports in conjunction to identify how many selectors, on average, are tasked at any given time. If the 25 minimization issues cited in the March 2015 report are representative (meaning there’d be 50 for the entire six month period), then there’d be roughly 338 incidents across all topics for the six month period (it’s not entirely clear how they deal with overlap). Given a compliance rate of 0.35% of average facilities tasked, this means roughly 96,571 facilities tasked at any given time, though that may be low given the vastly different lead times on these reports (meaning in the interim year, the government might identify many more compliance issues that get reported primarily in the Semiannual report). There were 94,368 targets across the whole of FY 2015 (which covers this entire period because the Fiscal Year begins in October). What that suggests is that for some targets, more than one facility will be tasked at any given time, but unless there’s a lot of turnover in a given year (meaning that most targets are only tasked for some weeks or months), not that many.
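The back-of-envelope math can be reproduced in a few lines. The 338-incident figure and the 0.35% rate are taken from the reporting discussed above; the representativeness of the 25 minimization issues is an assumption, not an established fact:

```python
# Reproducing the estimate of facilities tasked at any given time.
# Assumptions: the 25 minimization issues in the March 2015 quarter
# are representative, and the Semiannual report's 0.35% incident rate
# per average facility tasked holds.
minimization_per_quarter = 25
minimization_per_half_year = minimization_per_quarter * 2   # 50
incidents_all_topics = 338       # rough total across all categories
compliance_rate = 0.0035         # 0.35% of average facilities tasked
facilities_tasked = incidents_all_topics / compliance_rate
print(round(facilities_tasked))  # 96571
```

Comparing that figure to the 94,368 targets reported for FY 2015 is what yields the conclusion that most targets have only one facility tasked at a time.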

Which leaves us with what the reports do show us: the other (largely dissemination) and minimization (largely overly broad queries and US person queries) compliance errors, errors which I’ve roughly tallied in this document.

Dissemination

Between the two quarterly reports, there are 13 incidences of what I’m lumping under improper dissemination (the report treats database dissemination differently from disseminating unmasked USP identities). Most of these are fairly nondescript, true errors. In three cases, analysts at other agencies alerted the NSA that it had not masked a US person identity.

The exceptions are 2015-19 and -20, which are almost entirely redacted but pretty clearly deal with NSA sharing raw data with FBI and/or CIA improperly.

I find the second one — which includes no unredacted discussion of emergency detasking or other mitigation — to be the more alarming of the two. But in general, the possibility that NSA might mistakenly send FBI (especially) the wrong data is troubling because once things get to FBI they get far less direct scrutiny (both in terms of compliance reviews and in terms of auditing) than NSA gets. Sending the collection on an entire selector over to another agency is far more intrusive than sending over one unmasked name (though it’s not clear this raw data belonged to a US person). Plus, once things get to FBI they can start having repercussions.

Overbroad Queries

The overbroad queries are interesting not so much because they affect US persons directly (though they do in perhaps two cases), but for what they say about the querying process. Here’s what the 2015 Semiannual Report says about overbroad queries, which it acknowledges is a problem even while attributing the problem to errors in constructing Boolean queries.

(U) NSA’s minimization procedures require queries of Section 702-acquired data to be designed in a manner “reasonably likely to return foreign intelligence information.” Approximately 29% of the minimization errors in this reporting period involved non-compliance with this rule regarding queries (54% in the last reporting period).56 As with prior Joint Assessments, this is the cause of most compliance incidents involving NSA’s minimization procedures. These types of errors are typically traceable to a typographical or comparable error in the construction for the query. For example, an overbroad query can be caused when an analyst mistakenly inserts an “or” instead of an “and” in constructing a Boolean query, and thereby potentially received overbroad results as a result of the query. No incidents of an analyst purposely running a query for nonforeign intelligence reasons against Section 702-acquired data were identified during the reporting period, nor did any of the overbroad queries identified involve the use of a United States person identifier as a query term.

That generally accords with the most common description of the compliance errors: an analyst constructs a query poorly, recognizes the problem as soon as she gets the results (presumably far more returned records than expected), someone (the reports as often as not don’t tell us who) deletes them, and it gets reported. There are a few incidents where analysts run multiple such queries before discovering the problem — that seems like more of a concern, as fat-fingering a Boolean connector shouldn’t explain it. I’m interested in the errors (2015-7, -8, and -9) where the redaction seems to suggest either some other kind of query or some embarrassment about disclosing that top secret method, Boolean search; it’s possible this pertains to XKS searches, which can also involve scripts. One of these overbroad queries was done by a linguist (which, given the Reality Winner case, is interesting). There are also discrepancies about whether the analyst herself discovered the problem or an auditor did, the latter of which happened at least five times (two incidents don’t describe who discovered them). Finally, there are interesting differences in the descriptions of the coaching that happens after an issue. Sometimes none is described. Most often, the report describes the analyst getting a talking to. But in a number of cases, “personnel,” which might be plural, get coaching. I’m interested in when more than one person would get such coaching.
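The fat-fingered-connector failure mode is easy to illustrate. In this toy sketch (hypothetical records and field names, not any real NSA system), a single OR where an AND was intended triples the result set:

```python
# Toy records standing in for query results; entirely hypothetical.
records = [
    {"selector": "target@example.com", "country": "X"},
    {"selector": "other@example.com", "country": "X"},
    {"selector": "target@example.com", "country": "Y"},
    {"selector": "unrelated@example.net", "country": "Z"},
]

# The query the analyst meant to run: both conditions must hold.
intended = [r for r in records
            if r["selector"] == "target@example.com" and r["country"] == "X"]

# The typo'd query: either condition suffices, so it sweeps in
# records matching only one condition.
overbroad = [r for r in records
             if r["selector"] == "target@example.com" or r["country"] == "X"]

print(len(intended), len(overbroad))  # 1 3
```

Against a real corpus the blow-up would be instantly visible in the result count, which matches the reports’ description of analysts catching the error as soon as results come back.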

Finally, consider what it means that most of these violations seem to involve multiple authorities, including 702. That’s not at all surprising: you’d want to track a target across all the collection you had on the person. But that also includes upstream 702, which may be part of the reason upstream became such a problem.

US Person Queries

Finally, there are the queries using US person identifiers that, for some reason, were improper under the guidelines first approved in 2011. As I’ve noted, these have been a consistent problem since at least 2013. The Semiannual Report acknowledges this, or at least the problems with searching upstream 702 data, which was prohibited in the 2011 guidelines.

(U) Additionally, as noted in prior Joint assessments, the joint oversight team believes NSA should assess modifications to systems used to query raw Section 702-acquired data to require analysts to identify when they believe they are using a United States person identifier as a query term. Such an improvement, even if it cannot be adopted universally in all NSA systems, could help prevent compliance instances with respect to the use of United States person query terms.59 NSA plans to test and implement this recommendation during calendar year 2016. The new internal compliance control mechanism being developed for NSA data repositories containing unevaluated and unminimized Section 702 information will require analysts to document whether the query being executed against the database includes a known United States person identifier. Once the query is executed, the details concerning the query will be passed to NSA’s auditing system of record for post-query review and potential metrics compilation. As part of the testing, NSA will evaluate the accuracy of reporting this number in future Joint Assessments.60

As you review the violations discovered in 2014 and 2015, remember that (as noted in the 2017 702 reauthorization) these results came in a period when NSA was just discovering far more pervasive problems with US person searches. As it is, in each quarter here, there were 10 or 11 inappropriate US person searches. In 2014, a number of those (2, 5, 8, 17) were searches of 702 data using identifiers associated with US persons already targeted under Title I, 704, or 705(b). Just one (5) of the 2015 violations involved a person approved for individual targeting, and that appears to be one of the earlier violations in the quarter (note it must have occurred in December 2014). That’s interesting, because this undated guideline on USP queries of 702 collection says any US person approved for individualized targeting or RAS (under the old phone dragnet) could be backdoor searched. It seems likely, then, that they changed the policy in 2015 (which is particularly alarming, given that they did so just as NSA was moving toward discovering how bad their upstream searches were). In other words, they seem to have made legal one of the practices that was coming up as a violation.

These violation descriptions are also interesting for the (often redacted) specificity about the kind of selector used, sometimes described as email or telephony (which could include messaging), and in other cases as “facilities” (which might include cookies or IPs). That’s an indication of the range of identifiers under which you can search 702 data, which is in turn (because 702 searches are all supposed to derive from PRISM collection) a testament to the kinds of things that get turned over in PRISM returns.

Of the violations described, just one obviously pertains to a search on an identifier for which the authorization had expired. That’s interesting, because searches on expired warrants appeared far more frequently in past reports. Significantly, the IG Report reviewing 704/705(b) compliance, which reviewed queries for two months that overlapped with the 2015 report at issue (January and February 2015; the compliance report included December 2014 whereas the IG Report included March 2015), did find persistent problems with expired authorizations, but in EO 12333 data (suggesting FISA queries might have fixed earlier such problems). But the discussion of these problems in Rosemary Collyer’s 702 reauthorization opinion shows that for one tool, 85% of 704/705(b) queries conducted from November 2015 through April 2016 — well after the later quarter covered here — were non-compliant. “Many of these non-compliant queries involved use of the same identifiers over different date ranges.” NSA was unable to segregate and destroy the improper queries. That’s perhaps unsurprising, because as late as April 2017, the NSA was still having difficulties identifying all the queries run against 702 data.

And later 702 reporting revealed that some of the 704/705(b) queries of 702 data did not get included in auditing systems. In spite of that, a good number of these violations were discovered not by analysts (as often happened with improper queries) but by auditors, suggesting the violations may have had an impact on US persons.

All that said, there’s not all that much there there, aside from the sheer number (which the Semiannual report seems to think is just NSA’s serial refusal to fix the problem of default search settings). These two snapshots of the 702 upstream query problem, capturing 702 collection in the period immediately before it started to blow up, are also an indication of how much ODNI/DOJ’s oversight of NSA (which is far more rigorous than the oversight the same agencies give CIA and especially FBI) was missing.

[Photo: National Security Agency via Wikimedia]

If a Tech Amicus Falls in the Woods but Rosemary Collyer Ignores It, Would It Matter?

Six senators (Ron Wyden, Pat Leahy, Al Franken, Martin Heinrich, Richard Blumenthal, and Mike Lee) have just written presiding FISA Court judge Rosemary Collyer, urging her to add a tech amicus — or even better, a full time technical staffer — to the FISA Court.

The letter makes no mention of Collyer’s recent consideration of the 702 reauthorization certificates, nor even of any specific questions the tech amicus might consider.

That’s unfortunate. In my opinion, the letter entirely dodges the real underlying issue, at least as it pertains to Collyer, which is her unwillingness to adequately challenge or review Executive branch assertions.

In her opinion reauthorizing Section 702, Collyer apparently never once considered appointing an amicus, even a legal one (who, under the USA Freedom structure, could have suggested bringing in a technical expert). She refused to do so in a reconsideration process that — because of persistent problems arising from technical issues — stretched over seven months.

I argued then that Collyer broke the law by violating the USA Freedom Act’s requirement that the FISC at least consider appointing an amicus on matters raising novel or significant issues and, if choosing not to do so, explain that decision.

In any case, this opinion makes clear that what should have happened, years ago, is a careful discussion of how packet sniffing works, and where a packet collected by a backbone provider stops being metadata and starts being content, and all the kinds of data NSA might want to and does collect via domestic packet sniffing. (They collect far more under EO 12333.) As mentioned, some of that discussion may have taken place in advance of the 2004 and 2010 opinions approving upstream collection of Internet metadata (though, again, I’m now convinced NSA was always lying about what it would take to process that data). But there’s no evidence the discussion has ever happened when discussing the collection of upstream content. As a result, judges are still using made up terms like MCTs, rather than adopting terms that have real technical meaning.
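To make the missing definition concrete: in a raw IPv4/TCP packet, one simplistic rule treats everything up through the TCP header as routing metadata and the payload as content. This sketch (my illustration, not anything from the opinions) locates that boundary; the post’s point is precisely that real traffic blurs it, since things like URLs sit in the payload but function like addressing information:

```python
def payload_offset(packet: bytes) -> int:
    # IPv4 IHL (low nibble of byte 0) gives the IP header length in
    # 32-bit words; the TCP data offset (high nibble of byte 12 of the
    # TCP header) gives the TCP header length the same way.
    ihl = (packet[0] & 0x0F) * 4
    tcp_len = (packet[ihl + 12] >> 4) * 4
    return ihl + tcp_len

# A synthetic packet: 20-byte IP header, 20-byte TCP header, then payload.
ip_hdr = bytes([0x45]) + bytes(19)               # version 4, IHL = 5 words
tcp_hdr = bytes(12) + bytes([0x50]) + bytes(7)   # data offset = 5 words
pkt = ip_hdr + tcp_hdr + b"GET /index.html"
boundary = payload_offset(pkt)
print(boundary, pkt[boundary:])                  # 40 b'GET /index.html'
```

Everything before byte 40 here is header; but the `GET /index.html` after it, while technically payload, tells you exactly what page the user requested, which is why a header/payload split alone can’t settle the metadata-versus-content question.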

For that reason, it’s particularly troubling Collyer didn’t use — didn’t even consider using, according to the available documentation — an amicus. As Collyer herself notes, upstream surveillance “has represented more than its share of the challenges in implementing Section 702” (and, I’d add, Internet metadata collection).

At a minimum, when NSA was pitching fixes to this, she should have stopped and said, “this sounds like a significant decision,” and brought in amicus Amy Jeffress or Marc Zwillinger to help her think through whether this solution really fixes the problem. Even better, she should have brought in a technical expert who, at a minimum, could have explained to her that SCTs pose as big a problem as MCTs; Steve Bellovin — one of the authors of this paper that explores the content versus metadata issue in depth — was already cleared to serve as the Privacy and Civil Liberties Oversight Board’s technical expert, so presumably could easily have been brought in to consult here.

That didn’t happen. And while the decision whether or not to appoint an amicus is at the court’s discretion, Collyer is obligated to explain why she didn’t choose to appoint one for anything that presents a significant interpretation of the law.

A court established under subsection (a) or (b), consistent with the requirement of subsection (c) and any other statutory requirement that the court act expeditiously or within a stated time–

(A) shall appoint an individual who has been designated under paragraph (1) to serve as amicus curiae to assist such court in the consideration of any application for an order or review that, in the opinion of the court, presents a novel or significant interpretation of the law, unless the court issues a finding that such appointment is not appropriate;

For what it’s worth, my guess is that Collyer didn’t want to extend the 2015 certificates (as it was, she didn’t extend them as long as NSA had asked in January), so figured there wasn’t time. There are other aspects of this opinion that make it seem like she just gave up at the end. But that still doesn’t excuse her from explaining why she didn’t appoint one.

Instead, she wrote a shitty opinion that doesn’t appear to fully understand the issue and that defers, once again, the issue of what counts as content in a packet.

Without even considering an amicus, Collyer for the first time affirmatively approved the back door searches of content she knows will include entirely domestic communications, effectively affirmatively permitting the NSA to conduct warrantless searches of entirely domestic communications, and with those searches to use FISA for domestic surveillance. In approving those back door searches, Collyer did not conduct her own Fourth Amendment review of the practice.

Moreover, she adopted a claimed fix to a persistent problem — the collection of domestic communications via packet sniffing — without showing any inkling of testing whether the fix accomplished what it needed to. Significantly, in spite of 13 years of problems with packet sniffing collection under FISA, the court still has no public definition about where in a packet metadata ends and content begins, making her “abouts” fix — a fix that prohibits content sniffing without defining content — problematic at best.

I absolutely agree with these senators that the FISC should have its own technical experts.

But in Collyer’s case, the problem is larger than that. Collyer simply blew off USA Freedom Act’s obligation to consider an amicus entirely. Had she appointed Marc Zwillinger, I’m confident he would have raised concerns about the definition of content (as he did when he served as amicus on a PRTT application), whether or not he persuaded her to bring in a technical expert to further lay out the problems.

Collyer never availed herself of the expertise of Zwillinger or any other independent entity, though. And she did so in defiance of the intent of Congress, that she at least explain why she felt she didn’t need such outside expertise.

And she did so in an opinion that made it all too clear she really, really needed that help.

In my opinion, Collyer badly screwed up this year’s reauthorization certificates, kicking the problems created by upstream collection down the road, to remain a persistent FISA problem for years to come. But she did so by blowing off the clear requirement of law, not because she didn’t have technical expertise to rely on (though the technical expertise is probably necessary to finally resolve the issues raised by packet sniffing).

Yet no one but me — not even privacy advocates testifying before Congress — wants to call her out for that.

Congress already told the FISA court they “shall” ask for help if they need it. Collyer demonstrably needed that help but refused to consider using it. That’s the real problem here.

I agree with these senators that FISC badly needs its own technical experts. But a technical amicus will do no good if, as Collyer did, a FISC judge fails to consult her amici.

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

Did NSA Start Using Section 702 to Collect from VPNs in 2014?

I’ve finally finished reading the set of 702 documents I Con the Record dumped a few weeks back. I did two posts on the dump and a related document Charlie Savage liberated. Both pertain, generally, to whether a 702 “selector” gets defined in a way that permits US person data to be sucked up as well. The first post reveals that, in 2010, the government tried to define a specific target under 702 (both AQAP and WikiLeaks might make sense given the timing) as including US persons. John Bates asked for legal justification for that, and the government withdrew its request.

The second reveals that, in 2011, as Bates was working through the mess of upstream surveillance, he asked whether the definition of “active user,” as it applies for a multiple communication transaction, referred to the individual user. The question is important because if a facility is defined to be used by a group — say, Al Qaeda or WikiLeaks — it’s possible a user of that facility might be an unknown US person user, the communications of which would only be segregated under the new minimization procedures if the individual user’s communication were reviewed (not that it mattered in the end; NSA doesn’t appear to have implemented the segregation regime in any meaningful fashion). Bates never got a public answer to that question, which is one of a number of reasons why Rosemary Collyer’s April 26 702 opinion may not solve the problem of upstream collection, especially not with back door searches permitted.

As it happens, some of the most important documents released in the dump may pertain to a closely related issue: whether the government can collect on selectors it knows may be used by US persons, only to weed out the US persons after the fact.

In 2014, a provider challenged orders (individual “Directives” listing account identifiers NSA wanted to collect) that it said would amount to conducting surveillance “on the servers of a U.S.-based provider” in which “the communications of U.S. persons will be collected as part of such surveillance.” The provider was prohibited from reading the opinions that set the precedent permitting this kind of collection. Unsurprisingly, the provider lost its challenge, so we should assume that some 702 collection collects US person communications, using the post-tasking process rather than pre-targeting intelligence to protect American privacy.

The documents

The documents that lay out the failed challenge are:

2014, redacted date: ACLU Document 420: The government response to the provider’s filing supporting its demand that FISC mandate compliance.

2014, redacted date: EFF Document 13: The provider(s) challenging the Directives asked for access to two opinions the government relied on in its argument. Rosemary Collyer refused to provide them, though they have since been released.

2014, redacted date: EFF Document 6 (ACLU 510): Unsurprisingly, Collyer also rejected the challenge to the individual Directives, finding that post-tasking analysis could adequately protect Americans.

The two opinions the providers requested, but were refused, are:

September 4, 2008 opinion: This opinion, by Mary McLaughlin, was the first approval of FAA certifications after passage of the law. It lays out many of the initial standards that would be used with FAA (which changed slightly from PAA). As part of that, McLaughlin adopted standards regarding what kinds of US person collection would be subject to the minimization procedures.

August 26, 2014 opinion: This opinion, by Thomas Hogan, approved the certificates under which the providers had received Directives (which means the challenge took place between August and the end of 2014). But the government also probably relied on this opinion for a change Hogan had just approved, permitting NSA to remain tasked on a selector even if US persons also used the selector.

The argument also relies on the October 3, 2011 John Bates FAA opinion and the August 22, 2008 FISCR opinion denying Yahoo’s challenge to the Protect America Act. The latter was released in a second, less redacted form on September 11, 2014, which means the challenge likely post-dated that release.

The government’s response

The government’s response consists of a filing by Stuart Evans (who has become DOJ’s go-to 702 hawk) as well as a declaration submitted by someone in NSA who had already reviewed some of the taskings done under the 2014 certificates (which again suggests this challenge must date to September at the earliest). There appear to be four sections to Evans’ response. Of those sections, the only one left substantially unredacted — as well as the bulk of the SIGINT declaration — pertains to the Targeting Procedures. So while targeting isn’t the only thing the provider challenged (another appears to be certification of foreign intelligence value), it appears to be the primary thing.

Much of what is unredacted reviews the public details of NSA’s targeting procedure. Analysts have to use the totality of the circumstances to figure out whether someone is a non-US person located overseas who is likely to have foreign intelligence value, relying on things like other SIGINT, HUMINT, and (though the opinion redacts this) geolocation information and/or filters to weed out known US IPs. After a facility has been tasked, the analyst is required to do post-tasking analysis, both to make sure that the selector is the one intended and to make sure that no new information identifies the selector as being used by a US person or shows that the target has “roamed” into the US. Post-tasking analysis also ensures that the selector really is providing foreign intelligence information (though in practice, per PCLOB and other sources, this is not closely reviewed).

Of particular importance, Evans dismisses concerns about what happens when a selector gets incorrectly tasked as a foreigner. “That such a determination may later prove to be incorrect because of changes in circumstances or information of which the government was unaware does not render unreasonable either the initial targeting determination or the procedures used to reach it.”

Evans also dismisses the concern that minimization procedures don’t protect the providers’ customers (presumably because they provide four ways US person content may be retained with DIRNSA approval). He relies on the 2008 opinion, which states in part:

The government argues that, by its terms, Section 1806(i) applies only to a communication that is “unintentionally acquired,” not to a communication that is intentionally acquired under a mistaken belief about the location or non-U.S. person status of the target or the location of the parties to the communication. See Government’s filing of August 28, 2008. The Court finds this analysis of Section 1806(i) persuasive, and on this basis concludes that Section 1806(i) does not require the destruction of the types of communications that are addressed by the special retention provisions.

Evans then quotes McLaughlin judging that minimization procedures “constitute a safeguard against improper use of information about U.S. persons that is inadvertently or incidentally acquired.” In other words, he cites an opinion that permits the government to treat stuff that is initially targeted, even if it is later discovered to be an American’s communication, differently than it does other US person information as proof the minimization procedures are adequate.

The missing 2014 opinion references

As noted above, the provider challenging these Directives asked for both the 2008 opinion (cited liberally throughout the unredacted discussion in the government’s reply) and the 2014 one, which barely appears at all beyond the initial citation. Given that Collyer reviewed substantial language from both opinions in denying the provider’s request to obtain them, the discussion must go beyond simply noting that the 2014 opinion governs the Directives in question. There must be something in the 2014 opinion, probably regarding the targeting procedures, that gets cited in the vast swaths of redactions.

That’s especially true given that the first page of Evans’ response claims the Directives address “a critical, ongoing foreign intelligence gap.” So it makes sense that the government would get some new practice approved in that year’s certification process, then serve Directives ostensibly authorized by the new certificate, only to have a provider challenge a new type of request and/or a new kind of provider challenge its first Directives.

One thing stands out in the 2014 opinion that might indicate the closing of a foreign intelligence gap.

Prior to 2014, the NSA could say an entity — say, Al Qaeda — used a facility, meaning they’d suck up any people that used that facility (think how useful it would be to declare a chat room a facility, for example). But (again, prior to 2014) as soon as a US person started “using” that facility — the word “use” here is squishy, as someone merely talking to the target would not count as “using” it, but as incidental collection — then NSA would have to detask.

The 2014 certifications for the first time changed that.

The first revision to the NSA Targeting Procedures concerns who will be regarded as a “target” of acquisition or a “user” of a tasked facility for purposes of those procedures. As a general rule, and without exception under the NSA targeting procedures now in effect, any user of a tasked facility is regarded as a person targeted for acquisition. This approach has sometimes resulted in NSA’s becoming obligated to detask a selector when it learns that [redacted]

The relevant revision would permit continued acquisition for such a facility.

[snip]

For purposes of electronic surveillance conducted under 50 U.S.C. §§ 1804-1805, the “target” of the surveillance “is the individual or entity … about whom or from whom information is sought.” In re Sealed Case, 310 F.3d 717, 740 (FISA Ct. Rev. 2002) (quoting H.R. Rep. 95-1283, at 73 (1978)). As the FISC has previously observed, “[t]here is no reason to think that a different meaning should apply” under Section 702. September 4, 2008 Memorandum Opinion at 18 n.16. It is evident that the Section 702 collection on a particular facility does not seek information from or about [redacted].

In other words, for the first time in 2014, the FISC bought off on letting the NSA target “facilities” that were used by a target as well as possibly innocent Americans, based on the assumption that the NSA would weed out the Americans in the post-tasking process, and anyway, Hogan figured, the NSA was unlikely to read that US person data because that’s not what they were interested in anyway.

Mind you, in his opinion approving the practice, Hogan included a bunch of mostly redacted language pretending to narrow the application of this language.

This amended provision might be read literally to apply where [redacted]

But those circumstances fall outside the accepted rationale for this amendment. The provision should be understood to apply only where [redacted]

But Hogan appears to be policing this limiting language by relying on the “rationale” of the approval, not any legal distinction.

The description of this change to tasking also appears in a 3.5 page discussion as the first item in the tasking discussion in the government’s 2014 application, which Collyer would attach to her opinion.

Collyer’s opinion

Collyer’s opinion includes more of the provider’s arguments than the government’s Reply did. It describes the Directives as involving “surveillance conducted on the servers of a U.S.-based provider” in which “the communications of U.S. persons will be collected as part of such surveillance.” (29) It says [in Collyer’s words] that the provider “believes that the government will unreasonably intrude on the privacy interests of United States persons and persons in the United States [redacted] because the government will regularly acquire, store, and use their private communications and related information without a foreign intelligence or law enforcement justification.” (32-3) It notes that the provider argued there would be “a heightened risk of error” in tasking its customers. (12) The provider argued something about the targeting and minimization procedures “render[ed] the directives invalid as applied to its service.” (16) The provider also raised concerns that because the NSA “minimization procedures [] do not require the government to immediately delete such information[, they] do not adequately protect United States person[s].” (26)

All of which suggests the provider believed that significant US person data would be collected off their servers without any requirement the US person data get deleted right away. And something about this provider’s customers put them at heightened risk of such collection, beyond (for example) regular upstream surveillance, which was already public by the time of this challenge.

Collyer, too, says a few interesting things about the proposed surveillance. For example, she refers to a selector as an “electronic communications account” as distinct from an email — a rare public admission from the FISC that 702 targets things beyond just emails. And she treats these Directives as an “expansion of 702 acquisitions” to some new provider or technology. Finally, Collyer explains that “the 2014 Directives are identical, except for each directive referencing the particular certification under which the directive is issued.” This means that the provider received more than one Directive, and they fall under more than one certificate, which means that the collection is being used for more than one kind of use (counterterrorism, counterproliferation, and foreign government plus cyber). So the provider is used by some combination of terrorists, proliferators, spies, or hackers.

Ultimately, though, Collyer rejected the challenge, finding the targeting and minimization procedures to be adequate protection of the US person data collected via this new approach.

Now, it is not certain that all this relied on the new targeting procedure. Little in Collyer’s language reflects even passing familiarity with that new provision. Indeed, at one point she described the risk to US persons as being that “the government may mistakenly task the wrong account,” which suggests a more individualized impact.

Except that, after almost five entirely redacted pages discussing the provider’s claim that the targeting procedures are insufficient, Collyer argues that such issues don’t arise that frequently, and even if they do, they’d be dealt with in post-targeting analysis.

The Court is not convinced that [redacted] under any of the above-described circumstances occurs frequently, or even on a regular basis. Assuming arguendo that such scenarios will nonetheless occur with regard to selectors tasked under the 2014 Directives, the targeting procedures address each of the scenarios by requiring NSA to conduct post-targeting analysis [redacted]

Similarly, Collyer dismissed the likelihood that Americans’ data would be tasked that often.

[O]ne would not expect a large number of communications acquired under such circumstances to involve United States person [citation to a redacted footnote omitted]. Moreover, a substantial proportion of the United States person communications acquired under such circumstances are likely to be of foreign intelligence value.

As she did in her recent shitty opinion, Collyer appears to have made these determinations without requiring NSA to provide real numbers on past frequency or likely future frequency.

However often such collection had happened in the past (which she didn’t ask the NSA to explain) or would happen as this new provider started responding to Directives, this language does sound like it might implicate the new case of a selector that might be used both by legitimate foreign intelligence targets and by innocent Americans.

Does the government use 702 collection to obtain VPN traffic?

As I noted, it seems likely, though not certain, that the new collection exploited the new permission to keep tasking a selector even if US persons were using it, in addition to the actual foreigners targeted. I’m still trying to puzzle this through, but I’m wondering if the provider was a VPN provider, being asked to hand over data as it passed through the VPN server. (I think the application approved in 2014 would implicate Tor traffic as well, but I can’t see how a Tor provider would challenge the Directives, unless it was Nick Merrill again; in any case, there’d be no discussion of an “account” with Tor in the way Collyer uses it).

What does this mean for upstream surveillance?

In any case, whether or not my guesstimates are correct, the description of the 2014 change and the discussion about the challenge would seem to raise very important questions given Collyer’s recent decision to expand the searching of upstream collection. While collection from a provider’s server is not upstream, it would seem to raise the same problems: the collection of a great deal of associated US person data that could later be brought up in a search. There’s no hint in any of the public opinions that such problems were considered.

When NSA Talks about Unintended Consequences, You Need to Ask a Follow-Up Question

In yesterday’s hearing on Section 702 reauthorization, Dianne Feinstein asked DOJ, FBI, and NSA whether they opposed a statutory prohibition on “about” searches.

DOJ’s Stuart Evans falsely claimed that the FISC has found “about” collection to be legal; that’s not true given the assumption — which has proven out in practice — that NSA would do back door searches on the resulting domestic communications. Indeed, both judges who considered whether collecting and searching MCTs including domestic communications was constitutional, John Bates and Rosemary Collyer, called it a Fourth Amendment problem.

But I’m more interested in NSA Deputy General Counsel for Operations Paul Morris’ answer.

Morris: NSA opposes a statutory change at this point because that would box us in and possibly have unintended consequences.

Feinstein: Are you saying you would oppose this?

Morris: Oppose, right, we don’t think it would be a good idea at this time.

Feinstein: Huh. Thank you. That answers my question.

When the NSA complains preemptively about being “boxed in” to prevent a practice the FISC has found constitutionally problematic, it ought to elicit a follow-up question. Why doesn’t the NSA want to be prohibited from an activity that is constitutionally suspect?

More importantly, especially given that “abouts” collection is currently not defined in a way that has any technical meaning, Feinstein should have followed up to ask about what “unintended consequences” Morris worried about. Morris’ comment leads me to believe my suspicion — that the NSA continues to do things that have the same effect as “abouts” collection, even if they don’t reach into the “content” of emails that are only a subset of the kinds of things that get collected using upstream collection — is correct. It seems likely that Morris wants to protect collection that would violate any meaningful technical description of “abouts.”

Which suggests the heralded “end” to “abouts” collection is no such thing; it’s just the termination of one kind of collection that sniffs into content layers of packets.


Links to all posts on yesterday’s 702 hearing:

NSA talks about unintended consequences … no one asks what they might be

NSA argues waiting 4 years before dealing with systematic violations is not a lack of candor

FBI’s can only obtain raw feeds on selectors “relevant to” a full investigation

Everyone claims an FBI violation authorized by MOU isn’t willful

Even amicus fans neglect to mention Rosemary Collyer violated USAF in not considering one

 

Confirmed: The FISA Court Is Less of a Rubber Stamp than Article III Courts

Although Rosemary Collyer’s recent 702 opinion has made me rethink my position, I’ve long argued that the FISA Court gets a bad rap when it is called a rubber stamp.

But today, for the first time, we can test that claim. Today is the first time we have had US Courts reports covering an entire year for both the FISC and for Article III courts — as close as we can get to comparing apples to apples.

The FISC report showed that the court denied in full 8 of 1,485 individual US-based applications, a rate of .5%, along with partially denying or modifying a significant number of others.

The Article III report showed that, out of 3,170 requests, state and federal courts denied just 2.

A total of 3,168 wiretaps were reported as authorized in 2016, compared with 4,148 the previous year. Of those, 1,551 were authorized by federal judges, compared with 1,403 in 2015. A total of 1,617 wiretaps were authorized by state judges, compared with 2,745 in 2015. Two wiretap applications were reported as denied in 2016.

That’s a denial rate of .06%.
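The two rates are simple ratios of denials to total applications. A quick sanity check, using only the figures quoted from the two reports above (the Article III total is the 3,168 authorized wiretaps plus the 2 denials):

```python
# Sanity-check the denial rates quoted from the 2016 court reports.
# All figures come from the reports as cited in this post.

fisc_denied, fisc_total = 8, 1485      # FISC: individual US-based applications
art3_denied = 2
art3_total = 3168 + 2                  # Article III: authorized + denied requests

fisc_rate = fisc_denied / fisc_total * 100
art3_rate = art3_denied / art3_total * 100

print(f"FISC denial rate:        {fisc_rate:.2f}%")   # ~0.54%, i.e. the ~.5% cited
print(f"Article III denial rate: {art3_rate:.2f}%")   # ~0.06%
```

So the FISC's full-denial rate is roughly nine times that of the Article III courts, before even counting the FISC's partial denials and modifications.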

And remember, just 336 or so of the FISA orders target Americans, whereas the majority of the Article III warrants would target Americans.

None of that diminishes the potential privacy implications of either kind of warrant. Indeed, the relative ease with which Article III courts grant warrants may invite — as the differential standards for location data already have — FBI to use criminal courts when a FISC order would be too hard to obtain.

But if people are worried about rubber stamp courts, they probably need to focus more closely on the magistrate courts in their backyard.

Update: Swapped Article for Title because I was being an idiot. Thanks to JT for nagging.

Update: We get complaints from one of everyone’s favorite magistrates, Stephen Smith.

Please remind your devoted readers that federal magistrate judges do not issue wiretaps. That fun task is reserved for the federal article III judges with lifetime appointments. We do issue all the other electronic surveillance orders and warrants, but unfortunately no stats are kept by anyone on our grants/denials/modifications. DOJ does keep track of pen/traps obtained, but of course the judge’s role on those is purely clerical–we don’t review the evidence, but merely check to see that the application is signed by the AUSA and in proper form. Some of us are working on the MJ warrant reporting issue, which is a pet peeve of mine. But I do not think it fair to tar all federal magistrate judges with the rubber stamp label, especially not based on the wiretap numbers with which we have nothing to do.

Corrected accordingly, and my apologies to the magistrates I’ve maligned.

 

[Photo: National Security Agency, Ft. Meade, MD via Wikimedia]

NSA’s Unsatisfying Response to Rosemary Collyer’s “Lack of Candor” Accusations

In yesterday’s 702 hearing, Chuck Grassley asked NSA and FBI to explain why Rosemary Collyer (who I believe is the worst presiding FISA judge of the modern era) accused them of a lack of candor.

FBI’s Carl Ghattas dodged one such accusation, but basically admitted what I laid out here with regards to the other — that FBI really wasn’t set up to fulfill Thomas Hogan’s 2015 order to report on any queries that return criminal information. Ghattas promised FBI would fix that; I’m skeptical the current structure of FBI audits will facilitate that happening, but I’m happy to be proven wrong.

I want to look more closely at how Paul Morris, NSA’s Deputy General Counsel for Operations, explained the 10-month delay in informing the FISC about the NSA’s prohibited searches of upstream content.

We had initially identified that we had made some errors of US person queries against our upstream collection. So since 2011, our minimization procedures had prohibited outright any US person queries running against upstream 702 collection, largely because of abouts communications. We had reported the initial query errors — I believe it was in 2015 when we made the initial report, but our Office of Inspector General as well as our compliance group had separate reviews ongoing to try to determine the scope and scale of the problem. So during the course of filing the renewal for the 702 certifications that were pending, the court held a hearing in early October 2016 when it asked about various compliance matters to include the improper queries and we reported on the status of those investigations as we knew them to be at that time. On about two, I think, two or three weeks later, the Office of Inspector General completed its follow-up review of the US person query and discovered that the scope of the problem was larger than we’d originally reported. Soon as we identified that the problem was larger than we thought it was, we notified the Justice Department and the ODNI, in turn the court was notified and the court held a hearing on October 26 to go into further detail about the problem and it ultimately led to a couple of extensions of the certifications and ultimately our decision to terminate abouts collection in order to remedy the compliance problem. So my sense is that the institutional lack of candor that the court was referring to was really frustration that when we had the hearing on October 6 [sic] we did not know the full scope and scale of the problem until later which was reported roughly, again, October 24, which led to a hearing on October 26, which was the day before the court was supposed to rule on extending the certification.

As a reminder, this problem actually extends back to at least 2013. As I’ll eventually show, NSA obtained back door search authority in 2011 after a series of unauthorized back door searches, meaning the court was just approving something that was already being done, just as this year’s opinion approved searches that had been going on in uncontrolled fashion.

Furthermore, while NSA surely informed the FISC of some of these problems along the way (otherwise I wouldn’t have known about them when I called them out last August), it did not deal with the ongoing problems in its application, which would have flagged an ongoing compliance problem of the magnitude shown even by the 2016 IG Report.

Morris’ claim that NSA’s IG reached some kind of conclusive finding between the first hearing on October 4 and the notice of the further problems on October 24 is dubious, given that the NSA said that follow-up study was still ongoing in a January 3 filing.

In anticipation of the January 31 deadline, the government updated the Court on these querying issues in the January 3, 2017 Notice. That Notice indicated that the IG’s follow-on study (covering the first quarter of 2016) was still ongoing.

As Collyer noted, at that point the NSA was still identifying all the systems implicated, notably finding queries that elude NSA’s query audit system.

It also appeared that NSA had not yet fully assessed the scope of the problem: the IG and OCO reviews “did not include systems through which queries are conducted of upstream data but that do not interface with NSA’s query audit system.” Id. at 3 n.6. Although NSD and ODNI undertook to work with NSA to identify other tools and systems in which NSA analysts were able to query upstream data, id., and the government proposed training and technical measures, it was clear to the Court that the issue was not yet fully scoped out.

Also at this point, NSA was “disclosing” the root cause of the problem as the same one identified back in 2013 and 2014, when NSA dismissed the possibility of a technical fix to the opt-out problem.

The January 3, 2017 Notice stated that “human error was the primary factor” in these incidents, but also suggested that system design issues contributed. For example, some systems that are used to query multiple datasets simultaneously required analysts to “opt-out” of querying Section 702 upstream Internet data rather than requiring an affirmative “opt-in,” which, in the Court’s view, would have been more conducive to compliance. See January 3, 2017 Notice at 5-6.

Ultimately, this chronology — and Morris’ unsatisfactory explanation for it — ought to raise real questions about what the bar is for the NSA declaring systems to be totally out of control, requiring immediate corrective action. I believe the NSA had reached that point on upstream searches at least by 2015. But it kept doing prohibited back door searches (which Collyer, because she’s the worst presiding FISC judge in recent memory, retroactively blessed) on abouts collection for another two years before the front end of abouts collection was shut down.

So perhaps the problem isn’t a lack of candor? Perhaps the problem is NSA can continue spying on entirely domestic communications for two years after identifying the problem before any fix is put in place?

The FBI’s Standards for Ingesting Raw 702 Data

In most Section 702 hearings, there is no FBI witness, which means NSA witnesses can make claims about back door searches that are completely irrelevant to the biggest concern — FBI’s far more frequent back door searches.

Today was different. Carl Ghattas, FBI’s Executive Assistant Director for National Security, testified. And aside from totally dodging a Chuck Grassley question about why, according to Rosemary Collyer, FBI waited 11 months before informing the FISA Court about one violation, he was a very informative witness.

Take, for example, a detail he provided in his written testimony (after 34:50) about what FBI obtains in raw form (this may be public in the DIOG that the Intercept leaked, but I’m not otherwise aware of this detail). The FBI can only get raw data for selectors “relevant to” full investigations, not preliminary investigations or assessments.

It’s important to remember FBI receives a small fraction of the total collection that NSA receives under this program. In fact, the FBI only receives a small percentage of NSA’s downstream collection and none of NSA’s upstream collection. The reason for this is that the FBI can only request and receive Section 702 collection if the selector — that is, an email address or social media handle, for example — is relevant to a pending full investigation. The FBI cannot receive Section 702 collection during either a preliminary inquiry or an assessment. As a result, although the FBI conducts significantly more US person queries than NSA, those queries are running against a small fraction of the total 702 collection that is acquired by the US government. In other words, when the FBI runs a US person identifier through our database, that query is run against only FBI’s 702 collection that’s obtained during FBI full investigations and not the total collection maintained by NSA.

This does limit things, though as the FBI likes to say, it has thousands of investigations going at any time, the most emphasized of which (terrorism and counterintelligence) would likely implicate 702 data. Moreover, it raises questions about the foreign intelligence designations made, especially (prior to this year) regarding the data FBI shared in raw form with NCTC. And of course, we all know that the phrase “relevant to” has ceased to have any real limiting meaning.

Also, the FBI may only obtain this information at the Full Investigation level, but it can query it at the assessment level. And today’s hearing, like all others, failed to discuss that the FBI uses those queries, in part, to find informants, some of whom may be guilty of nothing beyond doing something that FBI can use to coerce their cooperation.

So a full investigation (which may include an enterprise investigation targeted generally at, for example, ISIS or Russian spies) sucks in all relevant tasked selectors (Ghattas did not describe how the FBI nominates selectors), which can then be queried at the assessment level for the US person being queried.

The Willful FBI 702 Violation No One Admitted

In today’s 702 hearing, both Senators and (most) witnesses repeated over and over that while there had been compliance problems, there had been no willful violations.

I think that as of Rosemary Collyer’s recent opinion, that can no longer be said to be true. Among the violations she laid out, she described an “improper disclosure of raw information” to a contractor in a way that violated minimization procedures (starting on page 83).

Apparently, the FBI (possibly in a fusion center or JTTF situation) had provided access to raw data to an entity “largely staffed by private contractors” to obtain analytical assistance. The contractors’ access to raw data “went well beyond what was necessary to respond to the FBI’s requests.” Collyer considered their access under the provision of FBI SMPs that permit sharing of information for technical assistance, but she noted, “their access was not limited to raw information for which the FBI sought assistance and access continued even after they had completed work in response to an FBI request.”

FBI also appears to have delivered data to a non-Federal agency (it appears to be some kind of tech contractor) where employees were not under the direct supervision of FBI employees.

With one of these violations (it appears, though is not certain, to be the second one), the decision to give improper access to contractors “was the result of deliberate decisionmaking” supported by an interagency memorandum of understanding. As Collyer notes, “such a memorandum of understanding could not override the restrictions of Section 702 minimization procedures.”

The Intelligence Committees started requiring copies of all interagency IC-related MOUs last year; this may be one reason why. Nevertheless, that doesn’t change the history: FBI at an institutional level made a decision to provide (apparently small amounts of) data to people outside of the minimization procedures.

I don’t think witnesses and Senators can claim they know of no willful violations anymore.