FBI’s Surveillance Arbitrage, First Amendment Edition

While I was cycling around Provence without a care in the world last week, DOJ’s Inspector General released an IG Report mandated by the USA Freedom Act. It reports on the use of Section 215 from 2012 to 2014 (which means NSA and FBI have successfully avoided any review of their 215 orders from 2010 and 2011, not to mention any review of CIA’s use of the provision). The key takeaway is that the application process to get Section 215 orders is very time consuming — over 100 days on average. Which is probably why Republican Senators have been trying to permit FBI to obtain Electronic Communication Transactional Records with just a National Security Letter since the report was released to Congress in June.

The report also noted a sharp drop-off in the use of 215 orders in recent years, which I’ve been tracking here.

Those two factors are useful background for some other details in the report, however. First, DOJ and FBI interviewees offered many explanations for the decline in Section 215 use. One is Edward Snowden, but two more credible ones are the use of other authorities, Section 702 or grand jury subpoenas, to get the same information.

NSD and FBI personnel attributed the subsequent decline between 2013 and 2015 to several factors, including the stigma attached to the use of Section 215 authority following the Snowden revelations, increased use of Section 702 of the FISA Amendments Act, providers’ resistance to business records orders, agents’ frustrations with the lack of timeliness and level of oversight in the business records process, and agents’ increasing use of criminal legal process instead of FISA authority in counterterrorism and cyber investigations.

The key point is that for most uses, there are other ways to get the same information. There is a limit to that, though. Apparently, grand jury subpoenas are only possible for counterterrorism and cybersecurity investigations, not counterintelligence ones.

When asked about this disparity, agents told us that business records orders frequently are the only option available in counterintelligence investigations given the nature and classification of the information involved. By contrast, agents handling counterterrorism and cyber investigations can in some instances open a parallel criminal investigation and use the grand jury process to obtain the same information more quickly and with less oversight than a business records order.

That’s why I’m so interested in a discussion of the applications that got filed — in counterterrorism cases — but either not submitted or withdrawn from the FISC in this period.

[Screenshot: redacted IG Report discussion of applications not submitted to or withdrawn from the FISC]

Remember, the way the government and FISC avoid rejected applications is by not submitting or withdrawing things that it is clear the FISC won’t approve. What this redacted section effectively says is that at least “several” requests based on a target’s statements about jihad were withdrawn, apparently in the wake of a February 2013 order from John Bates on what constitutes targeting for First Amendment reasons.

We’ve seen a heavily redacted version of that opinion. As I laid out here, it’s a classic John Bates opinion: it hems and haws about Executive Branch behavior, but then approves the behavior in question (at least in this case, Bates didn’t approve an expansion of the questionable behavior, as he did in 2010 with the Internet dragnet).

Effectively, Bates appears to have objected to the use of a target’s language (perhaps, support for jihad without endorsement of specific threats) in obtaining a Section 215 order, but then pointed to other people’s behavior in finding that the order didn’t stem exclusively from First Amendment protected activities.

And the IG Report says that, apparently in the wake of that wishy-washy opinion, DOJ decided to withdraw several applications based on stated support for jihad.

Remember, in 2006, the FBI withdrew two attempts at a 215 order because of FISC’s First Amendment concerns only to get the same information with NSLs. (See page 68ff) Congress made a particularly big stink about it, because the FBI was acting on its own in spite of FISC’s disapproval.

This feels similar. That is, given that FBI was already moving its Section 215 orders to grand jury subpoenas because they’re easier to get and undergo less oversight, it sure seems likely these requests reappeared as such. Unlike the earlier IG report that confirmed FBI arbitraged surveillance authorities to get around First Amendment protections, this report appears not to have pursued the issue (as I understand it, the declassification of this report was handled exclusively through redactions).

They did, however, ask why DOJ doesn’t track withdrawn applications, given that doing so would help dispel the appearance that the FISC is a rubber stamp. DOJ’s answer was rather unpersuasive.

The FISA Court did not deny any business records applications between 2012 and 2014. When asked why applications withdrawn after submission of a read copy to the FISA Court were not reported to Congress, potentially creating the inadvertent impression that the FISA Court is a “rubber stamp,” NSD supervisors told us that the Department includes only business records applications formally submitted to the FISA Court and denied or withdrawn, not those filed in “read copy” and subsequently withdrawn. The NSD supervisors acknowledged that excluding applications withdrawn after the FISA Court indicates that it will not sign an order might lead to misunderstandings about the FISA Court’s willingness to question applications, but the supervisors noted that NSD and the FISA Court have talked about the “read” process publicly to address concerns about this. In comments provided to the OIG after reviewing a draft of this report, NSD stated that it is currently considering whether to revise the methodology for counting withdrawn applications.

My guess is they want to avoid any records of withdrawn applications for those times when they do use a grand jury subpoena to obtain stuff that FISC made known it wouldn’t approve. That detail might have to be disclosed to defendants, after all. Here, there’s less paperwork.

It all seems to support a theory that the FBI continues to arbitrage surveillance authorities (as they, by their own admission, do with location tracking). With location tracking, there’s nothing patently illegal about that. But with First Amendment protections, that sure seems dubious.

Does a Fifth of Yahoo’s Value Derive from (Perceived) Security and Privacy?

The NYPost is reporting that Verizon is trying to get a billion dollar discount off its $4.8 billion purchase price for Yahoo.

“In the last day we’ve heard that [AOL head, who is in charge of these negotiations] Tim [Armstrong] is getting cold feet. He’s pretty upset about the lack of disclosure and he’s saying can we get out of this or can we reduce the price?” said a source familiar with Verizon’s thinking.

That might just be tough talk to get Yahoo to roll back the price. Verizon had been planning to couple Yahoo with its AOL unit to give it enough scale to be a third force to compete with Google and Facebook for digital ad dollars.

The discount is being pushed because it feels Yahoo’s value has been diminished, sources said.

AOL/Yahoo will reach about 1 billion consumers if the deal closes in the first quarter, with a stated goal to reach 2 billion by 2020. AOL boss Tim Armstrong flew to the West Coast in the past few days to meet with Yahoo executives to hammer out a case for a price reduction, a source said.

At one level, this is just business. Verizon has the opportunity to save some money, and it is exploring that opportunity.

But the underlying argument is an interesting one, as it floats a potential value — over a fifth of the original purchase price — tied to Yahoo’s ability to offer its users privacy.

As I understand it, the basis for any discount would be an interesting debate, too. The NYP story implies this is a reaction to both Yahoo’s admission that upwards of 500 million Yahoo users got hacked in 2014 and the more recent admission that last year Yahoo fulfilled a FISA order to scan all its incoming email without legal challenge.

Yahoo has claimed that it only recently learned about the 2014 hack of its users — it told Verizon within days of discovering the hack. If that’s true, it’s not necessarily something Yahoo could have told Verizon before the purchase. (Indeed, Verizon should have considered Yahoo’s security posture when buying it.) But there are apparently real questions about how forthcoming Yahoo has been about the extent of the hack. The number of people affected might be in the billions.

Yahoo can’t claim to have been ignorant about its willingness to respond to exotic FISA requests without legal challenge, however.

Verizon bought Yahoo at a time when Yahoo’s aggressive challenge to PRISM back in 2007 was public knowledge. Given that Verizon had been limiting — or at least making a show of limiting — what it would agree to do under USA Freedom Act (Verizon got too little credit, in my opinion, for being the prime necessary driver behind the reform), that earlier legal challenge would have aligned with what Verizon itself was doing: limiting its voluntary cooperation with US government spying requests. But now we learn Yahoo had repurposed its own spam and kiddie porn filter to help the government spy, without complaint, and without even telling its own security team.

I’ll let the mergers and acquisitions lawyers fight over whether Verizon has a claim about the purchase price here. Obviously, the $1 billion is just the opening offer.

But there is a real basis for the claim, at least in terms of value. Verizon bought Yahoo to be able to bump its user base up high enough to be able to compete with Google and Facebook. The perception, particularly in Europe, that Yahoo has neither adequately valued user security nor pushed back against exotic US government demands (especially in the wake of the Snowden revelations) will make it a lot harder to maintain, much less expand, the user base that is the entire purpose for the purchase.

So we’re about to learn how much of an international Internet Service Provider’s value is currently tied to its ability to offer security to its users.

The Yahoo Scan: On Facilities and FISA

There are now two competing explanations for what Yahoo was asked by the government to do last year.

Individual FISA order or 702 directive?

NYT (including Charlie Savage, who FOIAed all the FISC opinions and then wrote a book about them) explains Yahoo got an individual FISA order to search for a “signature” that the FBI had convinced the FISA Court was associated with a state-sponsored terrorist group.

A system intended to scan emails for child pornography and spam helped Yahoo satisfy a secret court order requiring it to search for messages containing a computer “signature” tied to the communications of a state-sponsored terrorist organization, several people familiar with the matter said on Wednesday.

Two government officials who spoke on the condition of anonymity said the Justice Department obtained an individualized order from a judge of the Foreign Intelligence Surveillance Court last year. Yahoo was barred from disclosing the matter.

To comply, Yahoo customized an existing scanning system for all incoming email traffic, which also looks for malware, according to one of the officials and to a third person familiar with Yahoo’s response, who also spoke on the condition of anonymity.

With some modifications, the system stored and made available to the Federal Bureau of Investigation a copy of any messages it found that contained the digital signature.

Reuters — in a story emphasizing the upcoming debate about reauthorization — says that the order was a Section 702 order.

The collection in question was specifically authorized by a warrant issued by the secret Foreign Intelligence Surveillance Court, said the two government sources, who requested anonymity to speak freely.

Yahoo’s request came under the Foreign Intelligence Surveillance Act, the sources said. The two sources said the request was issued under a provision of the law known as Section 702, which will expire on Dec. 31, 2017, unless lawmakers act to renew it.

The FISA Court warrant related specifically to Yahoo, but it is possible similar such orders have been issued to other telecom and internet companies, the sources said.

Yet it also reports that both Intelligence Committees are investigating more about this request (which tells you something about Reuters’ potential sources and how much the spooks’ overseers actually know about this).

The intelligence committees of both houses of Congress, which are given oversight of U.S. spy agencies, are now investigating the exact nature of the Yahoo order, sources said.

For what it’s worth, at least until 2012, I think NSA and FBI might have been able to request this scan under 702; there are a bunch of court decisions on this point that we haven’t seen, including one associated with what got reported as an upstream violation in 2012. But particularly given Reuters’ discussion of a “warrant” — which is more often used with traditional FISA — I suspect NYT is correct on this.

“Hard” and “soft,” and “upstream,” “about,” and “PRISM” are confusing the debate

The source of the confusion seems to stem from two separate sets of vocabulary that are unhelpful in understanding how FISA works.

The first set has to do with “hard” and “soft” selectors, language used in XKeyscore, which basically conducts boolean searches of buffered Internet traffic. Hard selectors are name, email, or phone identifiers associated with a specific person. Soft selectors are characteristics that can range from geographic location to specific code — so a search might ask for users of the encryption tool Mujahadeen Secrets in Syria, for example, which will return a bunch of people whose identities may not be known but whose activities warrant interest. Soft selectors can include searches on what counts as “content,” but they also search on what counts as metadata.

I think the hard/soft distinction is misleading because — as far as I know — FISA has always operated on single selectors, not boolean searches. NSA isn’t asking providers — whether they’re phone companies or Internet providers — to go find people who are in interesting places and use interesting crypto (though AT&T may be an exception to this rule). Rather, they’re asking for communications obtained by searching on specific selectors.
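To illustrate the distinction, here is a toy sketch (all records, field names, and selectors invented for illustration) contrasting an XKeyscore-style boolean query over “soft” attributes with FISA-style tasking on a single known selector:

```python
# Toy sketch (all records and field names invented) contrasting query styles.
traffic = [
    {"email": "user1@example.com", "country": "SY", "tool": "mujahadeen_secrets"},
    {"email": "user2@example.com", "country": "FR", "tool": "pgp"},
]

# XKeyscore-style boolean query over "soft" selectors: returns unknown users
# matching a profile (a location plus a tool), no identity known in advance.
soft_hits = [t for t in traffic
             if t["country"] == "SY" and t["tool"] == "mujahadeen_secrets"]

# FISA-style tasking: match traffic against a single known selector.
tasked = "user1@example.com"
hard_hits = [t for t in traffic if t["email"] == tasked]
```

In this toy case both queries return the same record, but the first works from a profile while the second works from an identifier already known to investigators.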

To be sure, for each target, there will be a range of selectors, often a huge number of them. Even for one person, as I have noted, NSA and FBI probably know of at least a hundred selectors. One Google subpoena response I examined, for example, included 15 “hard” identifiers for just one person (and multiply that by any major Internet service a person used). For a targeted organization like “Russian GRU hackers,” the NSA will probably have still more. But — again, as far as we know — FISA providers are asked to return data based off known selectors. And as I’ll show below, they’ve been asked to return data off selectors that would count as both hard and soft under XKeyscore.

The other set of confusing vocabulary comes from public debates about FISA (including PCLOB’s report on Section 702). Some debates have made a distinction between “upstream” and “PRISM.” Upstream is when NSA gives the telecoms a selector to collect information from scans conducted at switches, but it fundamentally refers to how something is collected, not who does it (and it’s possible there are backbone providers we haven’t thought of who also participate). PRISM is when NSA/FBI give Internet providers selectors to return activity on; it’s a description of from whom the information is collected. But even there, a PRISM provider will provide far more than just the email associated with a given selector.

Sometimes “upstream” collection is referred to as “about” collection. That’s misleading. “About” collection — that is, communications that contain a selector in what counts as content areas of the communication — is a subset of upstream collection. But what is really happening is that when the telecoms sniff packets to find a given selector, they need to sniff both the header and content to get all the communications they’re after, which is what PCLOB is saying here.

With regard to the NSA’s acquisition of “about” communications, the Board concludes that the practice is largely an inevitable byproduct of the government’s efforts to comprehensively acquire communications that are sent to or from its targets. Because of the manner in which the NSA conducts upstream collection, and the limits of its current technology, the NSA cannot completely eliminate “about” communications from its collection without also eliminating a significant portion of the “to/from” communications that it seeks. The Board includes a recommendation to better assess “about” collection and a recommendation to ensure that upstream collection as a whole does not unnecessarily collect domestic communications.

One hazard of using “about” to refer to “upstream” collection is it leads people to forget that the NSA needs to use upstream collection to comprehensively collect non-PRISM Internet traffic, even when working just from “hard” selectors like email addresses. Some of this collection (as the PCLOB passage above makes clear) is just looking for any emails involving a target, not emails talking “about” that target. But at least according to PCLOB, because of the way this collection is done, even if NSA is only searching for a hard selector email, it will get “about” traffic.
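The to/from versus “about” distinction can be sketched with a toy filter (the message format and all addresses are invented): a scanner that matches a selector anywhere in a message necessarily sweeps in messages that merely mention it.

```python
# Toy illustration (invented message format) of why filtering traffic for a
# "hard" selector sweeps in "about" communications: the scanner matches the
# selector wherever it appears, header or body.
selector = "target@example.com"

packets = [
    {"header": "To: target@example.com", "body": "meeting at noon"},
    {"header": "To: friend@example.com", "body": "saw a post from target@example.com"},
    {"header": "To: other@example.com",  "body": "unrelated chatter"},
]

collected = [p for p in packets
             if selector in p["header"] or selector in p["body"]]
# The first message is to/from the selector; the second merely mentions it
# ("about" collection); the third is excluded.
```

Eliminating the second kind of match without losing the first is exactly the technical limit PCLOB describes.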

As you can see, however, this language is already going to be insufficient to discuss the Yahoo request, which is effectively an “upstream” search on a PRISM provider’s content (though I’m not clear whether it happens at the packet level or not). We also don’t yet know whether the signature involved counts as content, but the filters Yahoo adapted for the process clearly scan the content.

Public discussions have hidden how 702 includes non-email selectors

But the bigger problem with this discussion is that people are confused about what FISA permits the government to search on.

One huge shortcoming of the PCLOB report — one I pointed out at the time — is that it pretended that Section 702 was not used for cybersecurity. That’s unfortunate because cybersecurity is the area where Section 702 most obviously includes non-email selectors, what would be called “soft” selectors in XKeyscore. When I first confirmed that NSA was using 702 for cybersecurity back when I briefly worked at the Intercept, it was based off the search on a cyber “signature,” not an email. The target was a (state-sanctioned) hacker, but the search was not for the hacker’s email, but for his tools.

Here’s how PCLOB briefly alluded to this activity.

Although we cannot discuss the details in an unclassified public report, the moniker “about” collection describes a number of distinct scenarios, which the government has in the past characterized as different “categories” of “about” collection. These categories are not predetermined limits that confine what the government acquires; rather, they are merely ways of describing the different forms of communications that are neither to nor from a tasked selector but nevertheless are collected because they contain the selector somewhere within them.

The Semiannual reports are one place where the government has officially admitted that it searches on more than just email addresses.

Section 702 authorizes the targeting of non-United States persons reasonably believed to be located outside the United States. This targeting is effectuated by tasking communication facilities (also referred to as “selectors”), including but not limited to telephone numbers and electronic communications accounts, to Section 702 electronic communication service providers. [my emphasis]

As I said, the Snowden documents confirm that NSA has searched on malware signatures. Given the obvious application and the non-denials I have gotten from various quarters, I would bet a great deal of money that NSA has also searched on some signature associated with AQAP’s Inspire magazine, effectively allowing it to track anyone who downloads (or decrypts) the magazine.

In a series of tweets yesterday, Snowden confirmed that the scope is even more broad.

In practical terms, this means anything you can convince FISC to stamp. At NSA, I saw live examples of the following:

The usual suspects (emails, IPs, usernames, etc), but also cryptographic hashes that identify known files (MD5/SHA1), sub-strings from base-64 encoded email attachments (derived from things like embedded corporate logos), and any uncommon artifacts arising from a target’s tooling, for example if their app transmits a UUID (like a registration code or serial).

The possibilities here are basically limitless, and we can’t infer the specific nature of the string without more info.

The point is, “upstream” collection — whether done at a telecom switch or a tech server — can (and will, so long as FISC will authorize it) search on any string that will return the communications of interest, with “communications” extending to include “cyberattacks conducted by disembodied code.”
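The signature types Snowden lists (file hashes, base64 substrings of attachments, tool artifacts) can be sketched as simple content matchers. Everything here, including the signatures, the regex, and the helper function, is invented for illustration:

```python
import base64
import hashlib
import re

# Toy matchers for the signature types Snowden describes. The signatures
# themselves are invented placeholders, not real selectors.
KNOWN_FILE_MD5 = hashlib.md5(b"inspire_issue.pdf contents").hexdigest()
LOGO_B64_FRAGMENT = base64.b64encode(b"CorpLogoBytes").decode()[:12]
TOOL_UUID_PATTERN = re.compile(r"reg-[0-9a-f]{8}-[0-9a-f]{4}")

def matching_signatures(attachment: bytes, body: str) -> list:
    """Return which signature types fire on a given message."""
    hits = []
    # Hash identifying a known file (e.g., a specific document)
    if hashlib.md5(attachment).hexdigest() == KNOWN_FILE_MD5:
        hits.append("file_hash")
    # Substring of the base64-encoded attachment (e.g., an embedded logo)
    if LOGO_B64_FRAGMENT in base64.b64encode(attachment).decode():
        hits.append("b64_substring")
    # Artifact of a target's tooling (e.g., a registration-code format)
    if TOOL_UUID_PATTERN.search(body):
        hits.append("tool_artifact")
    return hits
```

Any such string that the FISC will approve can serve as a “facility” in the expanded sense discussed below.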

To understand FISA collection, then, it is best to think in terms of selectors or facilities that will return a desired target. Here’s some language from a Semiannual report that explains the distinction between target and facility (and why the classified numbers in the report are undoubtedly much larger than the unclassified 92,000 “target” number we’re given to explain the scope of FISA collection).

The provided number of facilities on average subject to acquisition during the reporting period remains classified and is different from the unclassified estimated number of targets affected by Section 702 released on June 26, 2014, by ODNI in its 2013 Transparency Report: Statistical Transparency Report Regarding Use of National Security Authorities (hereafter the 2013 Transparency Report). The classified number provided in the table above estimates the number of facilities subject to Section 702 acquisition, whereas the unclassified number provided in the 2013 Transparency Report estimates the number of targets affected by Section 702 (89,138). As noted in the 2013 Transparency Report, the “number of 702 ‘targets’ reflects an estimate of the number of known users of particular facilities (sometimes referred to as selectors) subject to intelligence collection under those Certifications.” Furthermore, the classified number of facilities in the table above accounts for the number of facilities subject to Section 702 acquisition during the current six month reporting period (e.g., June 1, 2013 – November 30, 2013), whereas the 2013 Transparency Report estimates the number of targets affected by Section 702 during the calendar year 2013.

As explained above, for any given target, there may be a slew of selectors or facilities that NSA can collect on (though they probably only collect on a limited selection of all the selectors they know; they use the other selectors to make sure they can find all the online activity of someone). The government tracks this internally by counting how many average selectors or facilities are targeted in a given day. These numbers will get more interesting, by the way, once they incorporate USA Freedom Act compliance, which (in my opinion) effectively requires providers to provide all known selectors, that is, to even further expand the universe of known selectors.

A history of the word “facility”

But to understand the background to the Yahoo thing, it is absolutely necessary to understand how the word “facility” has evolved within FISC (and we only have access to some of this). As far as we know, the meaning of the word started to change in 2004 when Coleen Kollar-Kotelly approved the installation of “Pen Registers” (really, packet sniffers) at switches to accomplish with the Internet dragnet what Stellar Wind had been doing (that is, the collection of Internet metadata in bulk), based on the logic that al Qaeda was using those facilities to communicate. Her ruling changed the definition of facility from meaning an individual user (a phone number or email address) to many users including the target. When Kollar-Kotelly first approved it, she required the government to tell her which specific switches they were going to target — that is, which switches were likely to carry traffic from target countries like Yemen and Afghanistan. But when John Bates reauthorized the Internet dragnet in 2010, he let the government decide on a rolling basis which facilities it would collect metadata from.

Thus, starting in 2004 and expanded in 2010, “facility” — the things targeted under FISA — no longer were required to tie to an individual user or even a location exclusively used by targeted users.

When Kollar-Kotelly authorized the Internet dragnet, she distinguished what she was approving, which did not require probable cause, from content surveillance, where probable cause was required. That is, she tried to imagine that the differing standards of surveillance would prevent her order from being expanded to the collection of content. But in 2007, when FISC was looking for a way to authorize Stellar Wind collection — which was the collection on accounts identified through metadata analysis — Roger Vinson, piggybacking Kollar-Kotelly’s decision on top of the Roving Wiretap provision, did just that. That’s where “upstream” content collection got approved. From this point forward, the probable cause tied to a wiretap target was freed from a known identity, and instead could be tied to probable cause that the facility itself was used by a target.

There are several steps between how we got from there to the Yahoo order that we don’t have full visibility on (which is why PCLOB should have insisted on having that discussion publicly). There’s nothing in the public record that shows John Bates knew NSA was searching on non-email or Internet messaging strings by the time he wrote his 2011 opinion deeming any collection of a communication with a given selector in it to be intentional collection. But he — or FISC institutionally — would have learned that fact within the next year, when NSA and FBI tried to obtain a cyber certificate. (That may be what the 2012 upstream violation pertained to; see this post and this post for some of what Congress may have learned in 2012.) Nor is there anything in the 2012 Congressional debate that shows Congress was told about that fact.

One thing is clear from NSA’s internal cyber certificate discussions: by 2011, NSA was already relying on this broader sense of “facility” to refer to a signature of any kind that could be associated with a targeted user.

The point, however, is that sometime in the wake of the 2011 John Bates opinion on upstream, FISC must have learned more about how NSA was really using the term. It’s not clear how much of Congress has been told.

The leap from that — scanning on telephone switches for a given target’s known “facility” — to the Yahoo scan is not that far. In his 2010 opinion reauthorizing the Internet dragnet, Bates watered down the distinction between content and metadata by stripping protection for content-as-metadata that is also used for routing purposes. There may be some legal language authorizing the progression from packets to actual emails (though there’s nothing that is unredacted in any Bates opinion that leads me to believe he fully understood the distinction). In any case, FISCR has already been blowing up the distinction between content and metadata, so it’s not clear that the Yahoo request was that far out of the norm for what FISC has approved.

Which is not to say that the Yahoo scan would withstand scrutiny in a real court unaware of the FISC precedents (including the ones we haven’t yet seen). It’s just to say we started down this path 12 years ago, and the concept of “facilities” has evolved such that a search for a non-email signature counts as acceptable to the FISC.

If a facility is not a user, then how do you determine foreignness?

[Update: I realize this discussion is, given the increasing certainty that the Yahoo scan was done under an individual FISA order, irrelevant for the Yahoo case, because FBI has been cleared to collect on signatures in the US. But the issue is still an important one when discussing “facilities” that have been divorced from a geographically located user.]

There’s one final thing we don’t have visibility on.

When Kollar-Kotelly started down this path, she focused on facilities that were foreign-facing. That is, there was a high likelihood messages transiting those switches were one-side foreign, and therefore targetable, certainly for a PRTT. But as I noted, that foreign-facing distinction got badly watered down in 2010. And Yahoo’s entire universe of emails would not be particularly foreign focused (though a lot of foreigners use Yahoo).

The question is, if NSA or FBI is targeting a facility that is not tied to a given user, but is instead tied to an organization that is located overseas, how does the government determine foreignness on a signature? NSA’s General Counsel would permit analysts to collect on but not target metadata of, say, bots in the US based on the assumption that the ultimate source of the bot was overseas. If the signature that FBI searches on derives from overseas — as in the case where Inspire magazine is produced overseas — does that by itself deem a communication involving that signature to be “located” overseas, and therefore targetable?

I suspect that may be why NYT’s sources emphasized that the target of the Yahoo search was a state-sponsored terrorist organization, rather than just a terrorist organization, because by definition that state would be overseas. But I also suspect that a lot of the recent troubles at NSA pertaining to “roving” selectors stems from the ambiguity that arises when you start targeting selectors that are not by definition geographically bounded.

The way the government targets facilities is constitutionally problematic in any case. But this question of foreignness seems to present both statutory and constitutional problems.

Since September 20, 2012, FBI Has Been Permitted to Share FISA-Derived Hacking Information with Internet Service Providers

As I noted, yesterday Reuters reported that in 2015, Yahoo had been asked to scan its incoming email for certain strings. Since that time, Yahoo has issued a non-denial denial saying the story is “misleading” (but not wrong) because the “mail scanning described in the article does not exist on our systems.”

As I suggested yesterday, I think this most likely pertains to a cybersecurity scan of some sort, in part because FISC precedents would seem to prohibit most other uses of this. I’ve addressed a lot of issues pertaining to the use of Section 702 for cybersecurity purposes here; note that FISC might approve something more exotic under a traditional warrant, especially if Yahoo were asked to scan for some closely related signatures.

If you haven’t already, you should read my piece on why I think CISA provided the government with capabilities it couldn’t get from a 702 cyber certificate, which may explain why the emphasis on present tense from Yahoo is of particular interest. I think it quite likely tech companies conduct scans using signatures from the government now, voluntarily, under CISA. It’s in their best interest to ID if their users get hacked, after all.

But in the meantime, I wanted to point out this language in the 2015 FBI minimization procedures which, according to this Thomas Hogan opinion (see footnote 19), has been in FBI minimization procedures in some form since September 20, 2012, during a period when FBI badly wanted a 702 cyber certificate.

The FBI may disseminate FISA-acquired information that … is evidence of a crime and that it reasonably believes may assist in the mitigation or prevention of computer intrusions or attacks to private entities or individuals that have been or are at risk of being victimized by such intrusions or attacks, or to private entities or individuals (such as Internet security companies and Internet Service Providers) capable of providing assistance in mitigating or preventing such intrusions or attacks. Wherever reasonably practicable, such dissemination should not include United States person identifying information unless the FBI reasonably believes it is necessary to enable the recipient to assist in the mitigation or prevention of computer intrusions or attacks. [my emphasis]

This is not surprising language: it simply permits the FBI (but not, according to my read of the minimization procedures, NSA) to share cyber signatures discovered using FISA with private sector companies, either to help them protect themselves or because private entities (specifically including ISPs) might provide assistance in mitigating attacks.

To be sure, the language falls far short of permitting FBI to demand that PRISM providers like Yahoo use the signatures to scan their own networks.

But it’s worth noting that Thomas Hogan approved a version of this language (extending permitted sharing even to physical infrastructure and kiddie porn) in 2014. He remained presiding FISA judge in 2015, and as such would probably have reviewed any exotic or new programmatic requests. So it would not be surprising if Hogan were to approve a traditional FISA order permitting FBI (just as one possible example) to ask for evidence on a foreign-used cyber signature. Sharing a signature with Yahoo — which was already permitted under minimization procedures — and asking for any results of a scan using it would not be a big stretch.

There’s one more detail worth remembering: the last time Yahoo challenged a PRISM order, in 2007, there was significant mission creep in the demands the government made of Yahoo. In August 2007, when Yahoo was initially discussing compliance (but before it got its first orders in November 2007), the requests were fairly predictable: by my guess, just email content. But by the time Yahoo started discussing actual compliance in early 2008, the requests had expanded, apparently to include all of Yahoo’s services (communication services, information services, storage services), probably even including information internal to Yahoo on its users. Ultimately, by 2008, Yahoo was being asked to provide nine different things on users. Given Yahoo’s unique visibility into the details of this mission creep, their lawyers may have reason to believe that a request for packet sniffing or something similar would not be far beyond what FISCR approved back in 2008.

The Yahoo Scans Closely Followed Obama’s Cybersecurity Emergency Declaration

Reuters has a huge scoop revealing that, in spring of 2015, Yahoo was asked and agreed to perform scans for certain selectors on all the incoming email to its users.

The company complied with a classified U.S. government directive, scanning hundreds of millions of Yahoo Mail accounts at the behest of the National Security Agency or FBI, said two former employees and a third person apprised of the events.

[snip]

It is not known what information intelligence officials were looking for, only that they wanted Yahoo to search for a set of characters. That could mean a phrase in an email or an attachment, said the sources, who did not want to be identified.
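Mechanically, what the sources describe — searching all incoming mail for a fixed set of character strings — amounts to multi-pattern matching over raw message bytes. A minimal sketch of that idea, with every signature and function name invented for illustration (nothing here reflects Yahoo’s actual systems):

```python
# Hypothetical sketch of signature-based scanning of incoming mail.
# The signatures, message handling, and quarantine step are all invented.

SIGNATURES = [
    b"example-selector-1",   # e.g., a phrase tied to a target
    b"\xde\xad\xbe\xef",     # e.g., a byte sequence from a malware sample
]

def scan_message(raw_message: bytes) -> list[bytes]:
    """Return every signature found anywhere in the raw message,
    including headers, body, and (encoded) attachments."""
    return [sig for sig in SIGNATURES if sig in raw_message]

def process_incoming(raw_message: bytes) -> None:
    hits = scan_message(raw_message)
    if hits:
        # In the scenario Reuters describes, matching messages would
        # presumably be set aside for delivery to the government.
        quarantine_for_review(raw_message, hits)

def quarantine_for_review(raw_message: bytes, hits: list[bytes]) -> None:
    print(f"match on {len(hits)} signature(s)")
```

The point of the sketch is how indiscriminate such a scan is: every user’s mail passes through `scan_message`, whether or not they have any connection to a target.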

The timing of this is particularly interesting. We know that it happened sometime in the weeks leading up to May 2015, because after Alex Stamos’ security team found the code enabling the scan, he quit and moved to Facebook.

According to the two former employees, Yahoo Chief Executive Marissa Mayer’s decision to obey the directive roiled some senior executives and led to the June 2015 departure of Chief Information Security Officer Alex Stamos, who now holds the top security job at Facebook Inc.

[snip]

The sources said the program was discovered by Yahoo’s security team in May 2015, within weeks of its installation. The security team initially thought hackers had broken in.

When Stamos found out that Mayer had authorized the program, he resigned as chief information security officer and told his subordinates that he had been left out of a decision that hurt users’ security, the sources said. Due to a programming flaw, he told them hackers could have accessed the stored emails.

That would date the directive to around April 1, 2015, when Obama issued an Executive Order declaring cyberattacks launched by persons located outside the US a national emergency.

I, BARACK OBAMA, President of the United States of America, find that the increasing prevalence and severity of malicious cyber-enabled activities originating from, or directed by persons located, in whole or in substantial part, outside the United States constitute an unusual and extraordinary threat to the national security, foreign policy, and economy of the United States. I hereby declare a national emergency to deal with this threat.

On paper, this shouldn’t create any authority to expand surveillance. Except that we know FISC did permit President Bush to expand surveillance — by eliminating the wall between intelligence and criminal investigations — after he issued his September 14, 2001 9/11 emergency declaration, before Congress authorized that expansion. And we know that Jack Goldsmith focused on that same emergency declaration in his May 2004 OLC opinion reauthorizing Stellar Wind.

Indeed, just days after Obama issued that April 2015 EO, I wrote this:

Ranking House Intelligence Member Adam Schiff’s comment that Obama’s EO is “a necessary part of responding to the proliferation of dangerous and economically devastating cyber attacks facing the United States,” but that it will be “coupled with cyber legislation moving forward in both houses of Congress” only adds to my alarm (particularly given Schiff’s parallel interest in giving Obama soft cover for his ISIL AUMF while having Congress still involved).  It sets up the same structure we saw with Stellar Wind, where the President declares an Emergency and only a month or so later gets sanction for and legislative authorization for actions taken in the name of that emergency.

And we know FISC has been amenable to that formula in the past.

We don’t know that the President has just rolled out a massive new surveillance program in the name of a cybersecurity Emergency (rooted in a hack of a serially negligent subsidiary of a foreign company, Sony Pictures, and a server JP Morgan Chase forgot to update).

We just know the Executive has broadly expanded surveillance, in secret, in the past and has never repudiated its authority to do so in the future based on the invocation of an Emergency (I think it likely that pre FISA Amendments Act authorization for the electronic surveillance of weapons proliferators, even including a likely proliferator certification under Protect America Act, similarly relied on Emergency Proclamations tied to all such sanctions).

I’m worried about the Cyber Intelligence Sharing Act, the Senate version of the bill that Schiff is championing. But I’m just as worried about surveillance done by the executive prior to and not bound by such laws.

Because it has happened in the past.

I have reason to believe the use of emergency declarations to authorize surveillance extends beyond the few data points I lay out in this post. Which is why I find it very interesting that the Yahoo request lines up so neatly with Obama’s cyber declaration.

I’m also mindful of Ron Wyden’s repeated concerns about the 2003 John Yoo common commercial services opinion that may be tied to Stellar Wind but that, Wyden has always made clear, has some application for cybersecurity. DOJ has already confirmed that some agencies have relied on that opinion.

In other words, this request may not just be outrageous because it means Yahoo is scanning all of its customers’ incoming emails. It may also have been authorized by some means other than FISA.

“In the First Half of 2016” Signal Received an (Overbroad) Subpoena

This morning, the ACLU released a set of information associated with a subpoena served on Open Whisper Systems, the maker of Signal, for information associated with two phone numbers. As ACLU explained, OWS originally received the subpoena with a broad gag order. OWS was only able to turn over the account creation and last connection date for one of the phone numbers; the other had no Signal account associated with it.

[Screenshot: the redacted account data from OWS’s response]

But OWS also got ACLU to challenge the gag associated with the subpoena, which led to the release of today’s information. All the specific data associated with the request is redacted (as reflected above), though ACLU was able to say the request was served on OWS in the first half of the year.

There are two interesting details of this. First, as OWS/ACLU noted in their response to the government, the government asked for far more information than they can obtain with a subpoena, including:

  • subscriber name
  • subscriber address
  • subscriber telephone numbers
  • subscriber email addresses
  • subscriber method of payment
  • subscriber IP registration
  • IP history logs and addresses
  • subscriber account history
  • subscriber toll records
  • upstream and downstream providers
  • any associated accounts acquired through cookie data
  • any other contact information from inception to the present

As OWS/ACLU noted,

OWS notes that not all of those types of information can be appropriately requested with a subpoena. Under ECPA, the government can use a subpoena to compel disclosure of information from an electronic communications service provider only if that information falls within the categories listed at 18 U.S.C. § 2703(c)(2). For other types of information, the government must obtain a court order or search warrant. OWS objects to use of the grand-jury subpoena to request information beyond what is authorized in Section 2703(c)(2).

I’ve got an email in with ACLU, but I believe ECPA would not permit the government to obtain the IP, cookie, and upstream/downstream information. Effectively, the government tried to do here what they have done with NSLs, obtain information beyond the subscriber and toll record information permitted by statute.

ACLU says this is “the only one ever received by OWS,” presumably meaning it is the only subpoena the company has obtained, but it notes the government has other ways of gagging compliance, including with NSLs (it doesn’t mention Section 215 orders, but that would be included as well).

I do wonder whether in the latter case — with a request for daily compliance under Section 215 — Signal might be able to turn over more information, given that they would know prospectively the government was seeking the information. That’s particularly worth asking given that the District that issued this subpoena — Eastern District of Virginia — is the one that specializes in hacking and other spying cases (and is managing the prosecution of Edward Snowden, who happens to use Signal), which means they’d have the ability to use NSLs or individualized 215 orders for many of their cases.

Update: Here’s a Chris Soghoian post from 2013 that deals with some, but not all, of the scope issues pertaining to text messaging.

Hillary Claims to Support Targeted Spying But Advisor Matt Olsen Was Champion of Bulk Spying

Spencer Ackerman has a story on what Hillary Clinton meant when she said she supports an “intelligence surge” to defeat terrorism. Amid a lot of vague language hinting at spying expansions (including at fusion centers and back doors), her staffers told Ackerman she supported the approach used in USA Freedom Act.

Domestically, the “principles” of Clinton’s intelligence surge, according to senior campaign advisers, indicate a preference for targeted spying over bulk data collection, expanding local law enforcement’s access to intelligence and enlisting tech companies to aid in thwarting extremism.

The campaign speaks of “balancing acts” between civil liberties and security, a departure from both liberal and conservative arguments that tend to diminish conflict between the two priorities. Asked to illustrate what Clinton means by “appropriate safeguards” that need to apply to intelligence collection in the US, the campaign holds out a 2015 reform that split the civil liberties community as a model for any new constraints on intelligence authorities.

The USA Freedom Act, a compromise that constrained but did not entirely end bulk phone records collection, “strikes the right balance”, said [former NSC and State Department staffer and current senior foreign policy advisor Laura] Rosenberger. “So those kinds of principles and protections offer something of a guideline for where any new proposals she put forth would be likely to fall.”

It then goes on to list a bunch of advisors who have been contributing advice on the “intelligence surge.”

The campaign did not identify the architects of the intelligence surge, but it pointed to prominent counter-terrorism advisers who have been contributing ideas.

They include former acting CIA director Michael Morell – who has come under recent criticism for his attacks on the Senate torture report – ex-National Counterterrorism Center director Matt Olsen; Clinton’s state department counter-terrorism chief Dan Benjamin; former National Security Council legal adviser Mary DeRosa; ex-acting Homeland Security secretary Rand Beers; Mike Vickers, a retired CIA operative who became Pentagon undersecretary for intelligence; and Jeremy Bash, Leon Panetta’s chief of staff at the CIA and Pentagon.

It appalls me that Hillary is getting advice from Mike Morell, who has clearly engaged in stupid propaganda both for her and the CIA (though he also participated in the President’s Review Group that advocated far more reform than Obama has adopted). I take more comfort knowing Mary DeRosa is in the mix.

But I do wonder how you can take advice from Matt Olsen — who was instrumental in a lot of our current spying programs — and claim to adopt a balanced approach.

Olsen was the DOJ lawyer who oversaw the Yahoo challenge to PRISM in 2007 and 2008. He did two things of note. First, he withheld information from the FISC until forced to turn it over, not even offering up details about how the government had completely restructured PRISM during the course of Yahoo’s challenge, and underplaying details of how US person metadata is used to select foreign targets. He’s also the guy who threatened Yahoo with $250,000 a day fines for appealing the FISC decision.

Olsen was a key player in filings on the NSA violations in early 2009, presiding over what I believe to be grossly misleading claims about the intent and knowledge NSA had about the phone and Internet dragnets. Basically, working closely with Keith Alexander, he hid the fact that NSA had willfully treated FISA-collected data under the more lenient protection regime of EO 12333.

Charlie Savage provided two more details about Olsen’s fondness for bulk spying in Power Wars. As head of NCTC, Olsen was unsurprisingly the guy in charge of arranging, in 2012, for the NCTC to have access to any federal database it claimed might have terrorist information in it (thereby deeming all of us terrorists). Savage describes how, in response to his own reporting that NCTC was considering doing so — at a time when the plan was to have a further discussion about the privacy implications of the move — ODNI pushed through the change without that additional privacy consideration. That strikes me as the same kind of disdain for due process as Olsen exhibited during the Yahoo challenge.

Finally, Savage described how, when Obama was considering reforms to the phone dragnet in 2014, Olsen opposed having the FISC approve query terms before querying the database as legally unnecessary. It’s hard to imagine how Olsen would really be in favor of USAF type reforms, which codify that change.

In short, among Hillary’s named advisors, the one with the most direct past involvement in such decisions (and also the one likely to be appointed to a position of authority in the future) has advocated for more bulk spying, not less.

If Snowden Doesn’t Know Privacy Protections of 702, That’s a Problem with NSA Training

The House Intelligence Committee just released a report — ostensibly done to insist President Obama not pardon Snowden — that is instead surely designed as a rebuttal to the Snowden movie coming out in general release tomorrow. Why HPSCI sees it as their job to refute Hollywood I don’t know, especially since they didn’t make the same effort when Zero Dark Thirty came out, which suggests they are serving as handmaidens of the Intelligence Community, not an oversight committee.

There will be lots of debates about the validity of the report. In some ways, HPSCI admits they’re being as inflammatory as possible, as when they note that the IC only did a damage assessment of what they think Snowden took, whereas DOD did a damage assessment of every single thing he touched. HPSCI’s claims are all based on the latter.

There are things that HPSCI apparently doesn’t realize makes them and the IC look bad — not Snowden — such as the claim that he never obtained a high school equivalent degree; apparently people can just fake basic credentials and the CIA and NSA are incapable of identifying that. The report even admits a previously unknown contact between Snowden and CIA’s IG, regarding the training of IT specialists. BREAKING: Snowden did try to report something through an official channel!

It concerns me the “Intelligence Committee” can’t distinguish between details that help and hurt their case.

Meanwhile, Snowden has a bunch of rebuttals here, which extends the game of he-said/they-said, but doesn’t help clarity much.

On one issue, however, I’m particularly concerned: with the HPSCI claim that Snowden may not understand the privacy impact of the programs he leaked because he failed Section 702 training:

It is also not clear Snowden understood the numerous privacy protections that govern the activities of the IC. He failed basic annual training for NSA employees on Section 702 of the Foreign Intelligence Surveillance Act (FISA) and complained the training was rigged to be overly difficult. This training included explanations of the privacy protections related to the PRISM program that Snowden would later disclose.

There are several implications about this allegation. First, the passage suggests that Snowden never passed 702 training. But he did. The Chief of the SIGINT Compliance Division said this in an email written on the low side (and as such, probably written with knowledge it would be released publicly). “He said he had failed it multiple times (I’d have to check with ADET on that). He did pass the course at some point.” Even in the middle of a big to-do over this training, the NSA knew one thing for certain: Snowden did pass the test (even if they weren’t sure whether he had really failed it).

The passage also suggests the training program was really basic. But a Lieutenant Colonel who clearly worked with a lot of 702 analysts at some point had this to say about it: “It is not a gentleman’s course; *I* failed it once, the first time I had to renew.”

The passage also suggests that the training was worthwhile. Except that, days before the conflict, NSA’s IG reissued an IG Report that revealed problems with this and related training — including that NSA still had outdated materials pertaining to the Protect America Act posted online as the “current” standard operating procedures.

There’s evidence the NSA’s training materials and courses at the time had significant errors. A revised Inspector General report on Section 702 of FISA, reissued just days before Snowden returned to Maryland for training on the program in 2013, found that the Standard Operating Procedures (SOPs) posted on the NSA’s internal website, purportedly telling analysts how to operate under the FISA Amendments Act passed in 2008, actually referenced a temporary law passed a year earlier, the Protect America Act.

“It is unclear whether some of the guidance is current,” the report stated, “because it refers only to the PAA,” a law that had expired years before. A key difference between the two laws pertains to whether the NSA can wiretap an American overseas under EO 12333 with approval from the attorney general rather than a judge in a FISA Court. If the SOPs remained on the website when Snowden was training, it would present a clear case in which NSA guidance permitted actions under EO 12333 that were no longer permitted under the law that had been passed in 2008.

Similarly, a key FISA Amendments Act training course (not the one described in the face-to-face exchange, but another one that would become mandatory for analysts) didn’t explain “the reasonable belief standard,” which refers to how certain an analyst must be that their target was not an American or a foreigner in the US — a key theme of Snowden’s disclosures. While some work on both these problems had clearly been completed between the time of the report’s initial release and its reissue just days before Snowden showed up in Maryland, both these findings remained open and had been assigned revised target completion dates in the reissued report, suggesting the IG had not yet confirmed they had been fixed.

Perhaps most troubling, to me, is that HPSCI repeats as true a story that should not be treated as such by anyone — because the story has a number of problems, and the person who told it almost certainly didn’t write it down for a full year after it happened, and then, only in response to Snowden’s claims about the interaction. I don’t know whether she was telling the truth or Snowden (or, most likely, both were shading the truth), but given the circumstances of the evidence, neither one should be assumed to be credible. But this report treats it, perhaps unaware of the many problems and inconsistencies with the story, as credible.

Ultimately, though, if Snowden didn’t fully appreciate the privacy protections of PRISM, you can’t attribute that to the training program, because he took and passed it.

Remarkably, this dodgy claim is the only evidence HPSCI has to claim that Snowden didn’t understand the privacy implications of what he was looking at. I’m fully willing to admit that reporting (that is, second-hand from Snowden) has made errors. But if NSA’s overseers can’t assess Snowden’s public comments about the programs they allegedly oversee, then they’re not doing their job.

Unless their job extends only to running PR for the agencies they are supposed to oversee.

The Government Uses FISCR Fast Track to Put Down Judges’ Rebellion, Expand Content Collection

Since it was first proposed, I’ve been warning (not once but twice!) about the FISCR Fast Track, a part of the USA Freedom Act that would permit the government to immediately ask the FISA Court of Review to review a FISC decision. The idea was sold as a way to get a more senior court to review dodgy FISC decisions. But as I noted, it was also an easy way for the government to use the secretive FISC system to get a circuit level decision that might preempt traditional court decisions they didn’t like (I feared they might use FISCR to invalidate the Second Circuit decision finding the phone dragnet to be unlawful, for example).

Sure enough, that’s how it got used in its first incarnation — not just to confirm that the FISC can operate by different rules than criminal courts, but also to put down a judges’ rebellion.

As I noted back in 2014, the FISC has long permitted the government to collect Post Cut Through Dialed Digits using FISA pen registers, though it requires the government to minimize anything counted as content after collection. PCTDD are the numbers you dial after connecting a phone call — perhaps to get a particular extension, enter a password, or transfer money. The FBI is not supposed to do this at the criminal level, but can do so under FISA provided it doesn’t use the “content” (like the banking numbers) afterwards. FISC reviewed that issue in 2006 and 2009 (after magistrates in the criminal context deemed PCTDD to be content that was impermissible).
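The distinction can be pictured as everything dialed after the call connects ("cut-through") versus the routing digits dialed before it. A toy sketch of that split and the required minimization step — the cut-through marker and digit strings here are invented, not any real signaling format:

```python
# Toy illustration of post-cut-through dialed digits (PCTDD).
# "C" is a hypothetical marker for the cut-through point where the call
# connects; digits before it are routing information, digits after it are
# potentially content (extensions, passwords, account numbers).

def split_dialed_digits(digits: str, cut_marker: str = "C"):
    """Split a dialed-digit string into (routing digits, PCTDD)."""
    pre, _, post = digits.partition(cut_marker)
    return pre, post

def minimize(digits: str) -> str:
    """Keep the routing digits; mask the PCTDD, roughly as the FISA
    rules require the government to do after collection."""
    pre, post = split_dialed_digits(digits)
    return pre + ("X" * len(post))

# A call to a bank, then an account number entered after connecting:
print(minimize("18005551234C987654"))  # prints 18005551234XXXXXX
```

Note that even in this toy version the content is collected first and masked afterward — which is exactly the structure of the FISC’s minimization approach that the magistrates in the criminal context rejected.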

At last year’s semiannual FISC judges’ conference, some judges raised concerns about the FISC practice, deciding they needed further briefing on it. So when approving a standing Pen Register, the FISC told the government it needed further briefing on the issue.

[Screenshot: FISC order directing further briefing on the PCTDD issue]

The government didn’t address the issue for three months, until just as it was submitting its next application. At that point, there was not enough time to brief the issue at the FISC level, which gave then-presiding judge Thomas Hogan the opportunity to approve the PRTT renewal and kick the PCTDD issue to the FISCR, with an amicus.

[Screenshot: Hogan’s order certifying the PCTDD question to the FISCR]

This minimized the adversarial input, but put the question where it could carry the weight of a circuit court.

Importantly, when Hogan kicked the issue upstairs, he did not specify that this legal issue applies only to phone PRTTs.

[Screenshot: the certified question, not limited to phone PRTTs]

At the FISCR, Mark Zwillinger got appointed as an amicus. He saw the same problem I did: the treatment of phone PCTDD is bad but, if properly minimized, not horrible; it becomes horrible once you extend it to the Internet.

[Screenshot: Zwillinger’s amicus argument on extending PCTDD treatment to the Internet]

The FISCR didn’t much care. They found the collection of content using a PRTT, then promising not to use it except to protect national security (and a few other exceptions to the rule that the government has to ask FISC permission to use this stuff) was cool.

[Screenshot: the FISCR’s holding approving the collection]

Along the way, the FISCR laid out several other precedents that will have really dangerous implications. One is that content to a provider may not be content.

[Screenshot: FISCR language suggesting content sent to a provider may not be content]

This is probably the issue that made the bulk PRTT dragnet illegal in the first place (and created problems when the government resumed it in 2010). Now, the problem of collecting content in packets is eliminated!

Along with this, the FISCR extended the definition of “incidental” to apply to a higher standard of evidence.

[Screenshot: FISCR language extending the definition of “incidental” collection]

Thus, it becomes permissible to collect, under a standard that doesn’t require probable cause, material that does require it, so long as the material is “minimized” — which doesn’t always mean it isn’t used.

Finally, FISCR certified the redefinition of “minimization” that FISC has long adopted (and which is crucial in some other programs). Collecting content, but then not using it (except for exceptions that are far too broad), is all good.

[Screenshot: FISCR language certifying FISC’s longstanding redefinition of “minimization”]

In other words, FISCR approved not only the narrow application — using calling card data but not bank data and passwords (except to protect national security) — but also a bunch of other things that the government is going to turn around and use to resume certain programs that were long ago found problematic.

I don’t even hate to say this anymore. I told privacy people this (including someone involved in this issue personally). I was told I was being unduly worried. This is, frankly, even worse than I expected (and of course it has been released publicly so the FBI can start chipping away at criminal protections too).

Yet another time my concerns have been not only borne out, but proven to be insufficiently cynical.

Until at Least 2014, NSA Was Having Troubles Preventing Back Door Searches of Upstream Searches

Since NSA’s practice of conducting back door searches — searches of already-collected data based on the targeting of foreigners — became widely known, the spooks have offered a few assurances about why we don’t have to worry about these back door searches. For example, the US person identifiers have to be pre-approved and the NSA won’t conduct back door searches of upstream data, which sometimes includes entirely domestic communications.

According to the Semiannual Reports on Section 702 released some weeks ago, those assurances are fairly hollow, or at least were during the 2013 to 2014 timeframe.

The March 2014 report, which covers the period from December 1, 2012 through May 31, 2013, revealed that the semiannual review process could not directly monitor back door searches on US person identifiers because that information is not kept in a centralized place.

It should be noted both that NSA’s efforts to review queries are not limited to Section 702 authorities and that, at this time, content queries are not specifically identified as containing United States person identifiers. As such, and as the Government previously represented to Congress, NSD and ODNI cannot at this time directly monitor content queries using United States person identifiers because these records are not kept in a centrally located repository. While the changes described above in NSA’s super audit process have not changed this status, NSA is exploring whether future queries using United States person identifiers could be identified and centralized. In the meantime, and in accordance with NSA’s minimization procedures, NSD and ODNI review NSA’s approval of any United States person identifiers used to query unminimized Section 702-acquired communications.

This appears to indicate that internal overseers could not audit the actual queries completed, but instead only reviewed the identifiers used to query data to make sure they were approved. Which, in turn, means the NSA’s targeting of foreigners and dissemination of reports on them got monitored more closely than NSA’s spying on Americans.

The following report — completed in October 2014 and covering the period June 1, 2013 through November 30, 2013 — reports a predictable consequence of the inability to monitor the actual queries conducted as back door searches: prohibited back door searches on upstream data.

(TS//SI//NF) The joint oversight team, however, is concerned about the increase in incidents involving improper queries using United States person identifiers, including incidents involving NSA’s querying of Section 702-acquired data in upstream data using United States Person identifiers. Specifically, although section 3(b)(5) of NSA’s Section 702 minimization procedures permits the scanning of media using United States person identifiers, this same section prohibits using United States person identifiers to query Internet communications acquired through NSA’s upstream collection techniques. NSA [redacted] incidents of non-compliance with this subsection of its minimization procedures, many of which involved analysts inadvertently searching upstream collection. For example, [redacted], the NSA analyst conducted approved querying with United States persons identifiers ([redacted]), but inadvertently forgot to exclude Section 702-acquired upstream data from his query.

While the actual number is redacted, it is high enough for the report to refer to “many” improper searches of upstream content.

That explicit violation of the rules set by Bates in 2011 was part of a larger trend of back door search violations, including analysts not obtaining approval to query Americans’ identifiers.

(TS//SI//NF) In addition, section 3(b)(5) of NSA’s Section 702 minimization procedures requires that queries using United States person identifiers must first be approved in accordance with NSA internal procedures. In this reporting period, [redacted] NSA was in non-compliance with this requirement, either because a prior authorization was not obtained or the authorization to query had expired. For example, in NSA Incidents [redacted] NSA analysts performed queries using United States person identifiers that had not been approved as query terms. These queries occurred for a variety of reasons, including because analysts continued queries on terms that they suspected (but had not confirmed) were used by United States persons, forgot to exclude Section 702 data from queries [redacted], or did not realize that [redacted] constitute a United States person identifier even if the analyst was seeking information on a non-United States person.

Among other things, the third redaction in this passage appears to suggest that analysts conduct back door searches on data generally, presumably including both EO 12333 and 702 obtained data, but have to affirmatively exclude Section 702 data to stay within the rules laid out in the minimization procedures.

Consider the timing of this: the reporting of “many” back door search and other US person query violations occurred in the first post-Snowden period. While the fact that NSA did back door searches was knowable from the 2012 SSCI report on Section 702 renewal, it did not become general knowledge among members of Congress and the general public until Snowden leaked more explicit confirmation of it. And all of a sudden, as soon as people started complaining about back door searches and Congress considered regulating them, NSA’s overseers discovered that NSA wasn’t following an explicit prohibition on searching upstream data. One of several risks of back door searching upstream data is that it may amount to searching data collected domestically, or even entirely domestic communications.

And while the details get even more redacted, it appears the problem did not go away in the following period, the December 1, 2013 through May 31, 2014 reviews, reported in June 2015. After a very long redaction on targeting, the report recommends NSA require analysts to state whether they believe they’re querying on a US person.

Additionally, but separately, the joint oversight team believes NSA should assess modifications to systems used to query raw Section 702-acquired data to require analysts to identify when they believe they are using a United States person identifier as a query term. Such an improvement, even if it cannot be adopted universally in all NSA systems, could help prevent instances of otherwise approved United States person query terms being used to query upstream Internet transactions, which is prohibited by the NSA minimization procedures.64

The footnote that modifies that discussion is entirely redacted.

The June 2015 report was the most recent one released, so it is unclear whether requiring analysts to flag when they believe they are querying on a US person solved the improper back door searches of upstream data. But at least as of the most recently released report, the two most troubling aspects of Section 702 surveillance — the upstream searching on Internet streams and warrantless back door searches on US person identifiers — were contributing to “many” violations of NSA’s rules.