
The Heroic IRS Agent Story Should Raise More Questions about Silk Road Investigation

“In these technical investigations, people think they are too good to do the stupid old-school stuff. But I’m like, ‘Well, that stuff still works.’ ”

The NYT got this and many other direct quotes from IRS agent Gary Alford for a complimentary profile of him that ran on Christmas day. According to the story, Alford IDed Ross Ulbricht as a possible suspect for the Dread Pirate Roberts — the operator of the Dark Web site Silk Road — in early June 2013, but it took until September for Alford to get the prosecutor and DEA and FBI Agents working the case to listen to him. The profile claims Alford’s tip was “crucial,” though a typo suggests NYT editors couldn’t decide whether it was the crucial tip or just crucial.

In his case, though, the information he had was the crucial [sic] to solving one of the most vexing criminal cases of the last few years.

On its face, the story (and Alford’s quote) suggests the FBI is so entranced with its hacking ability that it has neglected very, very basic investigative approaches like Google searches. Indeed, if the story is true, it serves as proof that encryption and anonymity don’t thwart FBI investigations as much as Jim Comey would like us to believe when he argues the Bureau needs to back door all our communications.

But I don’t think the story tells the complete truth about the Silk Road investigation. I say that, first of all, because of the timing of Alford’s efforts to get others to further investigate Ulbricht. As noted, the story describes Alford IDing Ulbricht as a potential suspect in early June 2013, after which he put Ulbricht’s name in a DEA database of potential suspects, which presumably should have alerted anyone else on the team that US citizen Ross Ulbricht was a potential suspect in the investigation.

Mr. Alford’s preferred tool was Google. He used the advanced search option to look for material posted within specific date ranges. That brought him, during the last weekend of May 2013, to a chat room posting made just before Silk Road had gone online, in early 2011, by someone with the screen name “altoid.”

“Has anyone seen Silk Road yet?” altoid asked. “It’s kind of like an anonymous Amazon.com.”

The early date of the posting suggested that altoid might have inside knowledge about Silk Road.

During the first weekend of June 2013, Mr. Alford went through everything altoid had written, the online equivalent of sifting through trash cans near the scene of a crime. Mr. Alford eventually turned up a message that altoid had apparently deleted — but that had been preserved in the response of another user.

In that post, altoid asked for some programming help and gave his email address: [email protected]. Doing a Google search for Ross Ulbricht, Mr. Alford found a young man from Texas who, just like Dread Pirate Roberts, admired the free-market economist Ludwig von Mises and the libertarian politician Ron Paul — the first of many striking parallels Mr. Alford discovered that weekend.

When Mr. Alford took his findings to his supervisors and failed to generate any interest, he initially assumed that other agents had already found Mr. Ulbricht and ruled him out.

But he continued accumulating evidence, which emboldened Mr. Alford to put Mr. Ulbricht’s name on the D.E.A. database of potential suspects, next to the aliases altoid and Dread Pirate Roberts.

At the same time, though, Mr. Alford realized that he was not being told by the prosecutors about other significant developments in the case — a reminder, to Mr. Alford, of the lower status that the I.R.S. had in the eyes of other agencies. And when Mr. Alford tried to get more resources to track down Mr. Ulbricht, he wasn’t able to get the surveillance and the subpoenas he wanted.

Alford went to the FBI and DOJ with Ulbricht’s ID in June 2013, but FBI and DOJ refused to issue even subpoenas, much less surveil Ulbricht.

But over the subsequent months, Alford continued to investigate. In “early September” he had a colleague do another search on Ulbricht, which revealed he had been interviewed by Homeland Security in July 2013 for obtaining fake IDs.

In early September, he asked a colleague to run another background check on Mr. Ulbricht, in case he had missed something.

The colleague typed in the name and immediately looked up from her computer: “Hey, there is a case on this guy from July.”

Agents with Homeland Security had seized a package with nine fake IDs at the Canadian border, addressed to Mr. Ulbricht’s apartment in San Francisco. When the agents visited the apartment in mid-July, Mr. Ulbricht answered the door, and the agents identified him as the face on the IDs, without having any idea of his potential links to Silk Road.

When Alford told prosecutor Serrin Turner of the connection (again, this is September 2013), the AUSA finally did his own search in yet another database, the story claims, only to discover Ulbricht lived in the immediate vicinity of where Dread Pirate Roberts was accessing Silk Road. And that led the Feds to bust Ulbricht.

I find the story — the claim that without Alford’s Google searches, FBI did not and would not have IDed Ulbricht — suspect for two reasons.

First, early June is the date that FBI Agent Christopher Tarbell’s declaration showed (but did not claim) FBI first hacked Silk Road. That early June date was itself suspect because Tarbell’s declaration really showed data from as early as February 2013 (which is, incidentally, when Alford was first assigned to the team). In other words, while it still seems likely FBI was always lying about when it hacked into Silk Road, the coincidence between when Alford says he went to DOJ and the FBI with Ulbricht’s ID and when the evidence they were willing to share with the defense claimed to have first gotten a lead on Silk Road is of interest. All the more so given that the FBI claimed it could legally hack the server because it did not yet know the server was run by an American, and so it treated the Iceland-based server as a foreigner for surveillance purposes.

One thing that means is that DOJ may not have wanted to file paperwork to surveil Ulbricht because admitting they had probable cause to suspect an American was running Silk Road would make their hack illegal (and/or would have required FBI to start treating Ulbricht as the primary target of the investigation; it seems FBI may have been trying to do something else with this investigation). By delaying the time when DOJ took notice of the fact that Silk Road was run by an American, they could continue to squat on Silk Road without explaining to a judge what they were doing there.

The other reason I find this so interesting is because several of the actions to which corrupt DEA agent Carl Force pled guilty — selling fake IDs and providing inside information — took place between June and September 2013, during the precise period when everyone was ignoring Alford’s evidence and the fact that he had entered Ulbricht’s name as a possible alias for the Dread Pirate Roberts into a DEA database. Of particular note, Force’s guilty plea only admitted to selling the fake IDs for 400 bitcoin, and provided comparatively few details about that action, but the original complaint against Force explained he had sold the IDs for 800 bitcoin but refunded Ulbricht 400 bitcoin because “the deal for the fraudulent identification documents allegedly fell through” [emphasis mine].

Were those fake IDs that Force sold Ulbricht the ones seized by Homeland Security and investigated in July 2013? Did the complaint say the deal “allegedly” fell through because it didn’t so much fall through as get thwarted? Did something — perhaps actions by Force — prevent other team members from tying that seizure to Ulbricht? Or did everyone know about it, but pretend not to, until Alford made them pay attention (perhaps with a communications trail that other Feds couldn’t suppress)? Was the ID sale part of the investigation, meant to ID Ulbricht’s identity and location, but Force covered it up?

In other words, given the record of Force’s actions, it seems more likely that at least some people on the investigative team already knew what Alford found in a Google search, but for both investigative (the illegal hack that FBI might have wanted to extend for other investigative reasons) and criminal (the money Force was making) reasons, no one wanted to admit that fact.

Now, I’m not questioning the truth of what Alford told the NYT. But even his story (which is corroborated by people “briefed on the investigation,” but only one person who actually attended any of the meetings for it; most of those people are silent about Alford’s claims) suggests there may be other explanations why no one acted on his tip, particularly given the fact that he appears to have been unable to do database searches himself and that they refused to do further investigation into Ulbricht. (I also wonder whether Alford’s role explains why the government had the IRS in San Francisco investigate Force and corrupt Secret Service Agent Shaun Bridges, rather than New York, where agents would have known these details.)

Indeed, I actually think this complimentary profile might have been a way for Alford to expose further cover-ups in the Silk Road investigation without seeming to do so for any but self-interested reasons. Bridges was sentenced on December 7. Ulbricht was originally supposed to have submitted his opening appellate brief — focusing on Fourth Amendment issues that may be implicated by these details — on December 11, but on December 2, the court extended that deadline until January 12.

I don’t know whether Ulbricht’s defense learned these details. I’m admittedly not familiar enough with the public record to know, though given the emphasis on Tarbell’s declaration as the explanation for how they discovered Ulbricht and the NYT’s assertion that Alford’s role and the delay were “largely left out of the documents and proceedings that led to Mr. Ulbricht’s conviction and life sentence this year,” I don’t think it is public. But if they didn’t, then the fact that the investigative team went out of their way to avoid confirming Ulbricht’s readily accessible identity until at least three and probably seven months after they started hacking Silk Road, even while key team members were stealing money from the investigation, might provide important new details about the government’s actions.

And if Alford gets delayed credit for doing simple Google searches as a result, all the better!

If a Close US Ally Backdoored Juniper, Would NSA Tell Congress?

You may have heard that Juniper Networks announced what amounts to a backdoor in its virtual private networks products. Here’s Kim Zetter’s accessible intro of what security researchers have learned so far. And here’s some technical background from Matthew Green.

As Zetter summarizes, the short story is that someone used weaknesses encouraged by NSA to backdoor the security product protecting a lot of American businesses.

They did this by exploiting weaknesses the NSA allegedly placed in a government-approved encryption algorithm known as Dual_EC, a pseudo-random number generator that Juniper uses to encrypt traffic passing through the VPN in its NetScreen firewalls. But in addition to these inherent weaknesses, the attackers also relied on a mistake Juniper apparently made in configuring the VPN encryption scheme in its NetScreen devices, according to Weinmann and other cryptographers who examined the issue. This made it possible for the culprits to pull off their attack.
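For background, here is the standard, simplified description of why Dual_EC is exploitable, drawn from the public discussions of the weakness dating to 2007; this is a sketch of the generic flaw, not the specific Juniper exploit:

    Dual_EC uses two public elliptic-curve points, $P$ and $Q$. From internal state $s_i$ it computes
    $$ s_{i+1} = x(s_i P), \qquad r_i = x(s_i Q), $$
    and outputs all but the top 16 bits of $r_i$. If whoever generated the constants also knows a secret $d$ such that $P = dQ$, then from a single output block they can brute-force the few missing bits to recover the point $A = s_i Q$ and compute $dA = s_i (dQ) = s_i P$, whose x-coordinate is the next internal state $s_{i+1}$. Once the internal state is known, every subsequent “random” output, including material used to key VPN sessions, is predictable.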

As Green describes, the key events probably happened at least as early as 2007 and 2012 (contrary to the presumption of surveillance hawk Stewart Baker looking to scapegoat those calling for more security). Which means this can’t be a response to the Snowden document strongly suggesting the NSA had pushed those weaknesses in Dual_EC.

I find that particularly interesting, because it suggests whoever did this either used public discussions about the weakness of Dual_EC, dating to 2007, to identify and exploit this weakness, or figured out what (it is presumed) the NSA was up to. That suggests two likely culprits for what has been assumed to be a state actor behind this: Israel (because it knows so much about NSA from having partnered on things like StuxNet) or Russia (which was getting records on the FiveEyes’ SIGINT activities from its Canadian spy, Jeffrey Delisle).  The UK would be another obvious guess, except an Intercept article describing how NSA helped UK backdoor Juniper suggests they used another method.

Which leads me back to an interesting change I noted between CISA — the bill passed by the Senate back in October — and OmniCISA — the version passed last week as part of the omnibus funding bill. OmniCISA still required the Intelligence Community to provide a report on the most dangerous hacking threats, especially state actors, to the Intelligence Committees. But it eliminated a report for the Foreign Relations Committees on the same topic. I joked at the time that that was probably to protect Israel, because no one wants to admit that Israel spies and has greater ability to do so by hacking than other nation-states, especially because it surely learns our methods by partnering with us to hack Iran.

Whoever hacked Juniper, the whole incident offers a remarkable lesson in the dangers of backdoors. Even as FBI demands a backdoor into Apple’s products, it is investigating who used a prior US-sponsored backdoor to do their own spying.

NSA Propagandist John Schindler Suggests Boston Marathon Terrorist Attack Not “Major Jihadist Attack”

NSA propagandist John Schindler has used the San Bernardino attack as an opportunity to blame Edward Snowden for the spy world’s diminished effectiveness, again.

Perhaps the most interesting detail in his column is his claim that 80% of thwarted attacks come from an NSA SIGINT hit.

Something like eighty percent of disrupted terrorism cases in the United States begin with a SIGINT “hit” by NSA.

That’s mighty curious, given that defendants in these cases aren’t getting notice of such SIGINT hits, as required by law, as ACLU’s Patrick Toomey reminded just last week. Indeed, the claim is wholly inconsistent with the claims FBI made when it tried to claim the dragnet was effective after the Snowden leaks, and inconsistent with PCLOB’s findings that the FBI generally finds such intelligence on its own. Whatever. I’m sure the discrepancy is one Schindler will be able to explain to defense attorneys when they subpoena him to explain the claim.

Then there’s Schindler’s entirely illogical claim that the phone dragnet, which was shut down just days before the attack, might have helped to prevent it.

The recent Congressionally-mandated halt on NSA holding phone call information, so-called metadata, has harmed counterterrorism, though to what extent remains unclear. FBI Director James Comey has stated, “We don’t know yet” whether the curtailing of NSA’s metadata program, which went into effect just days before the San Bernardino attack, would have made a difference. Anti-intelligence activists have predictably said it’s irrelevant, while some on the Right have made opposite claims. The latter have overstated their case but are closer to the truth.

As Mike Lee patiently got Jim Comey to admit last week, if the Section 215 phone dragnet (as opposed to the EO 12333 phone dragnet, which remains in place) was going to prevent this attack, it would have.

Schindler then made an error that obscures one of the many ways the new phone dragnet will be better suited to counterterrorism. Echoing a right wing complaint that the government doesn’t currently review social media accounts as part of the visa process, he claimed “Tashfeen Malik’s social media writings [supporting jihad] could have been easily found.” Yet at least according to ABC, it would not have been so easy. “Officials said that because Malik used a pseudonym in her online messages, it is not clear that her support for terror groups would have become known even if the U.S. conducted a full review of her online traffic.” [See update.] Indeed, authorities found the Facebook post where Malik claimed allegiance to ISIS by correlating her known email with her then unknown alias on Facebook. NSA’s new phone program, because it asks providers for “connections” as well as “contacts,” is far more likely to identify multiple identities that get linked by providers than the old program (though it is less likely to correlate burner identities via bulk analysis).

Really, though, whether or not the dragnet could have prevented San Bernardino, which, as far as is evident, was carried out with no international coordination, is sort of a meaningless measure of NSA’s spying. To suggest you’re going to get useful SIGINT about a couple who, after all, lived together and therefore didn’t need to use electronic communications devices to plot is silliness. A number of recent terrorist attacks have been planned by family members, including one cell of the Paris attack and the Charlie Hebdo attack, and you’re far less likely to get SIGINT from people who live together.

Which brings me to the most amazing part of Schindler’s piece. He argues that Americans have developed a sense of security in recent years (he of course ignores right wing terrorism and other gun violence) because “the NSA-FBI combination had a near-perfect track record of cutting short major jihadist attacks on Americans at home since late 2001.” Here’s how he makes that claim.

Making matters worse, most Americans felt reasonably safe from the threat of domestic jihadism in recent years, despite repeated warnings about the rise of the Islamic State and terrible attacks like the recent mass-casualty atrocity in Paris. Although the November 2009 Fort Hood massacre, perpetrated by Army Major Nidal Hasan, killed thirteen, it happened within the confines of a military base and did not involve the general public.

Two months before that, authorities rolled up a major jihadist cell in the New York City area that was plotting complex attacks that would have rivalled the 2005 London 7/7 atrocity in scope and lethality. That plot was backed by Al-Qa’ida Central in Pakistan and might have changed the debate on terrorism in the United States, but it was happily halted before execution – “left of boom” as counterterrorism professionals put it.

Jumping from the 2009 attacks (and skipping the 2009 Undiebomb and 2010 Faisal Shahzad attempts) to the Paris attack allows him to suggest any failure to find recent plots derives from Snowden’s leaks, which first started in June 2013.

However, the effectiveness of the NSA-FBI counterterrorism team has begun to erode in the last couple years, thanks in no small part to the work of such journalists-cum-activists. Since June 2013, when the former NSA IT contactor [sic] Edward Snowden defected to Moscow, leaking the biggest trove of classified material in all intelligence history, American SIGINT has been subjected to unprecedented criticism and scrutiny.

There is, of course, one enormous thing missing from Schindler’s narrative of NSA perfection: the Boston Marathon attack, committed months before the first Snowden disclosures became public. Indeed, even though the NSA was bizarrely not included in a post-Marathon Inspector General review of how the brothers got missed, it turns out NSA did have intelligence on them (Tamerlan Tsarnaev was in international contact with known extremists and also downloaded AQAP’s Inspire magazine repeatedly). Only, that intelligence got missed, even with the multiple warnings from FSB about Tamerlan.

Perhaps Schindler thinks that Snowden retroactively caused the NSA to overlook the intelligence on Tamerlan Tsarnaev? Perhaps Schindler doesn’t consider an attack that killed 3 and injured 260 people a “major jihadist attack”?

It’s very confusing, because I thought the Boston attack was a major terrorist attack, but I guess right wing propagandists trying to score points out of tragedy can ignore such things if it will spoil their tale of perfection.

Update: LAT reports that Malik’s Facebook posts were also private, on top of being written under a pseudonym. Oh, and also in Urdu, a language the NSA has too few translators in. The NSA (but definitely not the State Department) does have the ability to 1) correlate IDs to identify pseudonyms, 2) require providers to turn over private messages (they could use PRISM), and 3) translate Urdu to English. But this would be very resource intensive, and as soon as State made it a visa requirement, anyone trying to avoid detection could probably thwart the correlation process.

Jim Comey Makes Bogus Claims about Privacy Impact of Electronic Communications Transaction Record Requests

[Chart: 215 tracker]

On November 30, Nicholas Merrill was permitted to unseal the NSL he received back in 2004 for the first time. That request asked for:

the names, addresses, lengths of service and electronic communication transaction records [ECTR], to include existing transaction/activity logs and all e-mail header information (not to include message content and/or subject fields) for [the target]

The unsealing of the NSL confirmed what has been public since 2010: that the FBI used to (and may still) demand ECTRs from Internet companies using NSLs.

On December 1, House Judiciary Committee held a hearing on a bill reforming ECPA that has over 300 co-sponsors in the House; on September 9, Senate Judiciary Committee had its own hearing, though some witnesses and members at it generally supported expanded access to stored records, as opposed to the new restrictions embraced by HJC.

Since then, a number of people are arguing FBI should be able to access ECTRs again, as they did in 2004, with no oversight. One of two changes to the version of Senator Tom Cotton’s surveillance bill introduced on December 2 over the version introduced on November 17 was the addition of ECTRs to NSLs (the other was making FAA permanent).

And yesterday, Chuck Grassley (who of course could shape any ECPA reform that went through SJC) invited Jim Comey to ask for ECTR authority to be added to NSLs.

Grassley: Are there any other tools that would help the FBI identify and monitor terrorists online? More specifically, can you explain what Electronic Communications Transactions Record [sic], or ECTR, I think that’s referred to, as acronym, are and how Congress accidentally limited the FBI’s ability to obtain them, with a, obtain them with a drafting error. Would fixing this problem be helpful for your counterterrorism investigations?

Comey: It’d be enormously helpful. There is essentially a typo in the law that was passed a number of years ago that requires us to get records, ordinary transaction records, that we can get in most contexts with a non-court order, because it doesn’t involve content of any kind, to go to the FISA Court to get a court order to get these records. Nobody intended that. Nobody that I’ve heard thinks that that’s necessary. It would save us a tremendous amount of work hours if we could fix that, without any compromise to anyone’s civil liberties or civil rights, everybody who has stared at this has said, “that’s actually a mistake, we should fix that.”

That’s actually an unmitigated load of bullshit on Comey’s part, and he should be ashamed to make these claims.

As a reminder, the “typo” at issue is not in fact a typo, but a 2008 interpretation from DOJ’s Office of Legal Counsel, which judged that FBI could only get what the law said it could get with NSLs. After that happened — a DOJ IG Report laid out in detail last year — a number (but not all) tech companies started refusing to comply with NSLs requesting ECTRs, starting in 2009.

The decision of these [redacted] Internet companies to discontinue producing electronic communication transactional records in response to NSLs followed public release of a legal opinion issued by the Department’s Office of Legal Counsel (OLC) regarding the application of ECPA Section 2709 to various types of information. The FBI General Counsel sought guidance from the OLC on, among other things, whether the four types of information listed in subsection (b) of Section 2709 — the subscriber’s name, address, length of service, and local and long distance toll billing records — are exhaustive or merely illustrative of the information that the FBI may request in an NSL. In a November 2008 opinion, the OLC concluded that the records identified in Section 2709(b) constitute the exclusive list of records that may be obtained through an ECPA NSL.

Although the OLC opinion did not focus on electronic communication transaction records specifically, according to the FBI, [redacted] took a legal position based on the opinion that if the records identified in Section 2709(b) constitute the exclusive list of records that may be obtained through an ECPA NSL, then the FBI does not have the authority to compel the production of electronic communication transactional records because that term does not appear in subsection (b).

Even before that, in 2007, FBI had developed a new definition of what it could get using NSLs. Then, in 2010, the Administration proposed adding ECTRs to NSLs. Contrary to Comey’s claim, plenty of people objected to such an addition, as this 2010 Julian Sanchez column, which he could re-release today verbatim, makes clear.

They’re calling it a tweak — a “technical clarification” — but make no mistake: The Obama administration and the FBI’s demand that Congress approve a huge expansion of their authority to obtain the sensitive Internet records of American citizens without a judge’s approval is a brazen attack on civil liberties.

[snip]

Congress would be wise to specify in greater detail just what are the online equivalents of “toll billing records.” But a blanket power to demand “transactional information” without a court order would plainly expose a vast range of far more detailed and sensitive information than those old toll records ever provided.

Consider that the definition of “electronic communications service providers” doesn’t just include ISPs and phone companies like Verizon or Comcast. It covers a huge range of online services, from search engines and Webmail hosts like Google, to social-networking and dating sites like Facebook and Match.com to news and activism sites like RedState and Daily Kos to online vendors like Amazon and Ebay, and possibly even cafes like Starbucks that provide WiFi access to customers. And “transactional records” potentially covers a far broader range of data than logs of e-mail addresses or websites visited, arguably extending to highly granular records of the data packets sent and received by individual users.

As the Electronic Frontier Foundation has argued, such broad authority would not only raise enormous privacy concerns but have profound implications for First Amendment speech and association interests. Consider, for instance, the implications of a request for logs revealing every visitor to a political site such as Indymedia. The constitutionally protected right to anonymous speech would be gutted for all but the most technically savvy users if chat-forum participants and blog authors could be identified at the discretion of the FBI, without the involvement of a judge.

That legislative effort didn’t go anywhere, so instead (the IG report explained)  FBI started to use Section 215 orders to obtain that data. That constituted a majority of 215 orders in 2010 and 2011 (and probably has since, creating the spike in numbers since that year, as noted in the table above).

Supervisors in the Operations Section of NSD, which submits Section 215 applications to the FISA Court, told us that the majority of Section 215 applications submitted to the FISA Court [redacted] in 2010 and [redacted] in 2011 — concerned requests for electronic communication transaction records.

The NSD supervisors told us that at first they intended the [3.5 lines redacted] They told us that when a legislative change no longer appeared imminent and [3 lines redacted] and by taking steps to better streamline the application process.

But the other reason Comey’s claim that getting this from NSLs would not pose “any compromise to anyone’s civil liberties or civil rights” is bullshit is that the migration of ECTR requests to Section 215 orders also appears to have led the FISA Court to finally force FBI to do what the 2006 reauthorization of the PATRIOT Act required it to do: minimize the data it obtains under 215 orders to protect Americans’ privacy.

By all appearances, the rubber-stamp FISC believed these ECTR requests represented a very significant compromise to people’s civil liberties and civil rights and so finally forced FBI to follow the law requiring them to minimize the data.

Which is probably what this apparently redoubled effort to let FBI obtain the online lives of Americans (remember, this must be US persons, otherwise the FBI could use PRISM to obtain the data) using secret requests that get no oversight really is: an attempt to bypass whatever minimization procedures — and the oversight that comes with them — the FISC imposed.

And remember: with the passage of USA Freedom Act, the FBI doesn’t have to wait to get these records (though they are probably prospective, just like the old phone dragnet was); they can obtain an emergency order and then fill out the paperwork after the fact.

For some reason — either the disclosure in Merrill’s suit that FBI believed they could do this (which has been public since 2010 or earlier), or the reality that ECPA will finally get reformed — the Intelligence Community is asserting the bogus claims they tried to make in 2010 again. Yet there’s even more evidence than there was then that FBI wants to conduct intrusive spying without real oversight.

Dianne Feinstein’s Encrypted Playstation Nightmare

I’ve complained about Dianne Feinstein’s inconsistency on cybersecurity, specifically as it relates to Sony, before. The week before the attack on Paris, cybersecurity was the biggest threat, according to her. And Sony was one of the top targets, both of criminal identity theft and — if you believe the Administration — nation-states like North Korea. If you believe that, you believe that Sony should have the ability to use encryption to protect its business and users. But, in the wake of Paris and Belgian Interior Minister Jan Jambon’s claim that terrorists are using Playstations, Feinstein changed her tune, arguing serial hacking target Sony should not be able to encrypt its systems to protect users.

Her concerns took a bizarre new twist in an FBI oversight hearing today. Now, she’s concerned that if a predator decides to target her grandkids while they’re playing on a Playstation, that will be encrypted.

I have concern about a Playstation which my grandchildren might use and a predator getting on the other end, talking to them, and it’s all encrypted.

Someone needs to explain to DiFi that her grandkids are probably at greater risk from predators hacking Sony to get personal information about them and then using that to abduct them, or whatever.

Sony’s the perfect example of how security hawks like Feinstein need to choose: either her grandkids face risks because Sony doesn’t encrypt its systems, or they do because it does.

The former risk is likely the much greater risk.

The Reasons to Shut Down the (Domestic) Internet Dragnet: Purpose and Dissemination Limits, Correlations, and Functionality

Charlie Savage has a story that confirms (he linked some of my earlier reporting) something I’ve long argued: NSA was willing to shut down the Internet dragnet in 2011 because it could do what it wanted using other authorities. In it, Savage points to an NSA IG Report on its purge of the PRTT data that he obtained via FOIA. The document includes four reasons the government shut the program down, just one of which was declassified (I’ll explain what is probably one of the still-classified reasons in a later post). It states that SPCMA and Section 702 can fulfill the requirements that the Internet dragnet was designed to meet. The government had made (and I had noted) a similar statement in a different FOIA for PRTT materials in 2014, though this passage makes it even more clear that SPCMA — DOD’s self-authorization to conduct analysis including US persons on data collected overseas — is what made the switch possible.

It’s actually clear there are several reasons why the current plan is better for the government than the previous dragnet, in ways that are instructive for the phone dragnet, both retrospectively for the USA F-ReDux debate and prospectively as hawks like Tom Cotton and Jeb Bush and Richard Burr try to resuscitate an expanded phone dragnet. Those are:

  • Purpose and dissemination limits
  • Correlations
  • Functionality

Purpose and dissemination limits

Both the domestic Internet and phone dragnets limited their use to counterterrorism. While I believe the Internet dragnet limits were not as stringent as the phone ones (at least in its pre-2009-shutdown incarnation), both required that the information only be disseminated for a counterterrorism purpose. The phone dragnet, at least, required someone to sign off that that was why information from the dragnet was being disseminated.

Admittedly, when the FISC approved the use of the phone dragnet to target Iran, it was effectively authorizing its use for a counterproliferation purpose. But the government’s stated admissions — which are almost certainly not true — in the Shantia Hassanshahi case suggest the government would still pretend it was not using the phone dragnet for counterproliferation purposes. The government now claims it busted Iranian-American Hassanshahi for proliferating with Iran using a DEA database rather than the NSA one that technically would have permitted the search but not the dissemination, and yesterday Judge Rudolph Contreras ruled that was all kosher.

But as I noted in this SPCMA piece, the only requirement for accessing EO 12333 data to track Americans is a foreign intelligence purpose.

Additionally, in what would have been true from the start but was made clear in the roll-out, NSA could use this contact chaining for any foreign intelligence purpose. Unlike the PATRIOT-authorized dragnets, it wasn’t limited to al Qaeda and Iranian targets. NSA required only a valid foreign intelligence justification for using this data for analysis.

The primary new responsibility is the requirement:

  • to enter a foreign intelligence (FI) justification for making a query or starting a chain,[emphasis original]

Now, I don’t know whether or not NSA rolled out this program because of problems with the phone and Internet dragnets. But one source of the phone dragnet problems, at least, is that NSA integrated the PATRIOT-collected data with the EO 12333 collected data and applied the protections for the latter authorities to both (particularly with regards to dissemination). NSA basically just dumped the PATRIOT-authorized data in with EO 12333 data and treated it as such. Rolling out SPCMA would allow NSA to use US person data in a dragnet that met the less-restrictive minimization procedures.

That means the government can do chaining under SPCMA for terrorism, counterproliferation, Chinese spying, cyber, or counter-narcotic purposes, among others. I would bet quite a lot of money that when the government “shut down” the DEA dragnet in 2013, they made access rules to SPCMA chaining still more liberal, which is great for the DEA because SPCMA did far more than the DEA dragnet anyway.
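For readers who haven’t followed the dragnet debates closely, the “chaining” at issue here is hop-limited contact chaining: start from a seed identifier, pull everyone it was in contact with, then everyone they were in contact with, out to a set number of hops. Here is a minimal sketch of the idea, using invented data; it is my own toy illustration, not anything drawn from the documents:

    from collections import defaultdict

    # Hypothetical contact records: pairs of identifiers seen communicating.
    contacts = [
        ("seed", "alice"), ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ]

    graph = defaultdict(set)
    for a, b in contacts:
        graph[a].add(b)
        graph[b].add(a)  # treat contact as undirected for chaining purposes

    def chain(seed, hops):
        """Return every identifier reachable from the seed within `hops` hops."""
        frontier, seen = {seed}, {seed}
        for _ in range(hops):
            frontier = {n for f in frontier for n in graph[f]} - seen
            seen |= frontier
        return seen - {seed}

    print(chain("seed", 2))  # {'alice', 'bob'}: two hops out from the seed

The policy fights described in this section are about who can serve as a seed, for what purposes, and what happens to the results once the chain is built, not about the mechanics of the chaining itself.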

So one thing that happened with the Internet dragnet is that it had initial limits on purpose and who could access it. Along the way, NSA cheated those open, by arguing that people in different function areas (like drug trafficking and hacking) might need to help out on counterterrorism. By the end, though, NSA surely realized it loved this dragnet approach and wanted to apply it to all NSA’s functional areas. A key part of the FISC’s decision that such dragnets were appropriate is the special need posed by counterterrorism; while I think they might well buy off on drug trafficking and counterproliferation and hacking and Chinese spying as other special needs, they had not done so before.

The other thing that happened is that, starting in 2008, the government started putting FBI in a more central role in this process, meaning FBI’s promiscuous sharing rules would apply to anything FBI touched first. That came with two benefits. First, the FBI can do back door searches on 702 data (NSA’s ability to do so is much more limited), and it does so even at the assessment level. This basically puts data collected under the guise of foreign intelligence at the fingertips of FBI Agents even when they’re just searching for informants or doing other pre-investigative things.

In addition, the minimization procedures permit the FBI (and CIA) to copy entire metadata databases.

FBI can “transfer some or all such metadata to other FBI electronic and data storage systems,” which seems to broaden access to it still further.

Users authorized to access FBI electronic and data storage systems that contain “metadata” may query such systems to find, extract, and analyze “metadata” pertaining to communications. The FBI may also use such metadata to analyze communications and may upload or transfer some or all such metadata to other FBI electronic and data storage systems for authorized foreign intelligence or law enforcement purposes.

In this same passage, the definition of metadata is curious.

For purposes of these procedures, “metadata” is dialing, routing, addressing, or signaling information associated with a communication, but does not include information concerning the substance, purport, or meaning of the communication.

I assume this uses the very broad definition John Bates rubber stamped in 2010, which included some kinds of content. Furthermore, the SMPs elsewhere tell us they’re pulling photographs (and, presumably, videos and the like). All those will also have metadata which, so long as it is not the meaning of a communication, presumably could be tracked as well (and I’m very curious whether FBI treats location data as metadata as well).

Whereas under the old Internet dragnet the data had to stay at NSA, this basically lets FBI copy entire swaths of metadata and integrate it into their existing databases. And, as noted, the definition of metadata may well be broader than even the broadened categories approved by John Bates in 2010 when he restarted the dragnet.

So one big improvement of SPCMA (and, to a lesser degree, 702) over the old domestic Internet dragnet — improvement from a dragnet-loving perspective, of course — is that the government can use it for any foreign intelligence purpose.

At several times during the USA F-ReDux debate, surveillance hawks tried to use the “reform” to expand the acceptable uses of the dragnet. I believe controls on the new system will be looser (especially with regards to emergency searches), but it is, ostensibly at least, limited to counterterrorism.

One way USA F-ReDux will be far more liberal, however, is in dissemination. It’s quite clear that the data returned from queries will go (at least) to FBI, as well as NSA, which means FBI will serve as a means to disseminate it promiscuously from there.

Correlations

Another thing replacing the Internet dragnet with 702 access does is provide another way to correlate multiple identities, which is critically important when you’re trying to map networks and track all the communication happening within one. Under 702, the government can obtain not just Internet “call records” and the content of that Internet communication from providers, but also the kinds of thing they would obtain with a subpoena (and probably far more). As I’ve shown, here are the kinds of things you’d almost certainly get from Google (because that’s what you get with a few subpoenas) under 702 that you’d have to correlate using algorithms under the old Internet dragnet.

  • a primary gmail account
  • two secondary gmail accounts
  • a second name tied to one of those gmail accounts
  • a backup email (Yahoo) address
  • a backup phone (unknown provider) account
  • Google phone number
  • Google SMS number
  • a primary login IP
  • 4 other IP logins they were tracking
  • 3 credit card accounts
  • Respectively 40, 5, and 11 Google services tied to the primary and two secondary Google accounts, much of which would be treated as separate, correlated identifiers

Every single one of these data points provides a potentially new identity that the government can track on, whereas the old dragnet might only provide an email and IP address associated with one communication. The NSA has a great deal of ability to correlate those individual identifiers, but — as I suspect the Paris attack probably shows — that process can be thwarted somewhat by very good operational security (and by using providers, like Telegram, that won’t be as accessible to NSA collection).
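To make concrete what correlating those identifiers involves, here is a toy sketch: identifiers that appear together in the same provider record get merged into one presumed identity, so a single return like the Google list above collapses a dozen selectors into one target. Every identifier below is invented, and real correlation obviously operates at far larger scale on far messier data; this illustrates the concept, not anyone’s actual tool:

    from collections import defaultdict

    # Hypothetical provider returns: each record is a set of identifiers the
    # provider ties to one account (emails, phone numbers, login IPs).
    records = [
        {"[email protected]", "[email protected]", "+1-555-0101"},
        {"[email protected]", "203.0.113.7"},
        {"[email protected]", "198.51.100.9"},  # an unrelated person
    ]

    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path compression
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    # Merge every identifier that co-occurs in a record into one cluster.
    for record in records:
        ids = list(record)
        for other in ids[1:]:
            union(ids[0], other)

    clusters = defaultdict(set)
    for identifier in parent:
        clusters[find(identifier)].add(identifier)

    for cluster in clusters.values():
        print(cluster)
    # One cluster holds the primary account, secondary account, phone number,
    # and login IP; the unrelated identifiers stay in their own cluster.

Under the old Internet dragnet an analyst would have had to infer those links algorithmically from traffic; under 702, the provider hands them over already joined.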

This is an area where the new phone dragnet will be significantly better than the existing phone dragnet, which returns IMSI, IMEI, phone number, and a few other identifiers. But under the new system, providers will be asked to identify “connected” identities, which has some limits, but will nonetheless pull some of the same kind of data that would come back in a subpoena.

Functionality

While replacing the domestic Internet dragnet with SPCMA provides additional data with which to do correlations, much of that might fall under the category of additional functionality. There are two obvious things that distinguish the old Internet dragnet from what NSA can do under SPCMA, though really the possibilities are endless.

The first of those is content scraping. As the Intercept recently described in a piece on the breathtaking extent of metadata collection, the NSA (and GCHQ) will scrape content for metadata, in addition to collecting metadata directly in transit. This will get you to different kinds of connection data. And particularly in the wake of John Bates’ October 3, 2011 opinion on upstream collection, doing so as part of a domestic dragnet would be prohibitive.

In addition, it’s clear that at least some of the experimental implementations on geolocation incorporated SPCMA data.

I’m particularly interested that one of NSA’s pilot co-traveler programs, CHALKFUN, works with SPCMA.

Chalkfun’s Co-Travel analytic computes the date, time, and network location of a mobile phone over a given time period, and then looks for other mobile phones that were seen in the same network locations around a one hour time window. When a selector was seen at the same location (e.g., VLR) during the time window, the algorithm will reduce processing time by choosing a few events to match over the time period. Chalkfun is SPCMA enabled1.

1 (S//SI//REL) SPCMA enables the analytic to chain “from,” “through,” or “to” communications metadata fields without regard to the nationality or location of the communicants, and users may view those same communications metadata fields in an unmasked form. [my emphasis]

Now, aside from what this says about the dragnet database generally (because this makes it clear there is location data in the EO 12333 data available under SPCMA, though that was already clear), it makes it clear there is a way to geolocate US persons — because the entire point of SPCMA is to be able to analyze data including US persons, without even any limits on their location (meaning they could be in the US).

That means, in addition to tracking who emails and talks with whom, SPCMA has permitted (and probably still permits) NSA to track who is traveling with whom using location data.
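To make the co-traveler idea concrete, here is a stripped-down sketch of the kind of analytic the CHALKFUN description implies: bucket a target’s sightings by time window and network location, then count which other phones show up in the same buckets. This is my own toy reconstruction from the quoted description, with invented data, not the actual analytic:

    from collections import Counter

    # Hypothetical sighting records: (identifier, hour_bucket, network_location).
    # Real records would be device identifiers against VLRs or cell IDs, at scale.
    sightings = [
        ("target", 1, "cell_A"), ("target", 2, "cell_B"), ("target", 3, "cell_C"),
        ("phone_1", 1, "cell_A"), ("phone_1", 2, "cell_B"), ("phone_1", 3, "cell_C"),
        ("phone_2", 1, "cell_A"), ("phone_2", 3, "cell_Z"),
    ]

    def co_travelers(target, records):
        """Count how often other identifiers appear in the same place and hour as the target."""
        target_slots = {(hour, loc) for ident, hour, loc in records if ident == target}
        return Counter(
            ident for ident, hour, loc in records
            if ident != target and (hour, loc) in target_slots
        ).most_common()

    print(co_travelers("target", sightings))  # [('phone_1', 3), ('phone_2', 1)]

A phone that keeps landing in the same cell, in the same hour window, as the target gets flagged as a likely co-traveler, which is exactly the “who is traveling with whom” capability described above.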

Finally, one thing we know SPCMA allows is tracking on cookies. I’m of mixed opinion on whether the domestic Internet dragnet ever permitted this, but tracking cookies is not only nice for understanding someone’s browsing history, it’s probably critical for tracking who is hanging out in Internet forums, which is obviously key (or at least used to be) to tracking aspiring terrorists.

Most of these things shouldn’t be available via the new phone dragnet — indeed, the House explicitly prohibited not just the return of location data, but the use of it by providers to do analysis to find new identifiers (though that is something AT&T does now under Hemisphere). But I would suspect NSA either already plans or will decide to use things like Supercookies in the years ahead, and that’s clearly something Verizon, at least, does keep in the course of doing business.

All of which is to say it’s not just that the domestic Internet dragnet wasn’t all that useful in its current form (which is also true of the phone dragnet in its current form now), it’s also that the alternatives provided far more than the domestic Internet dragnet did.

Jim Comey recently said he expects to get more information under the new dragnet — and the apparent addition of another provider already suggests that the government will get more kinds of data (including all cell calls) from more kinds of providers (including VOIP). But there are also probably some functionalities that will work far better under the new system. When the hawks say they want a return of the dragnet, they actually want both things: mandates on providers to obtain richer data, but also the inclusion of all Americans.

How the Government Uses Location Data from Mobile Apps

[Screen shot: map of location-data standards by jurisdiction]

The other day I looked at an exchange between Ron Wyden and Jim Comey that took place in January 2014, as well as the response FBI gave Wyden afterwards. I want to return to the reason I was originally interested in the exchange: because it reveals that FBI, in addition to obtaining cell location data directly from a phone company or a Stingray, will sometimes get location data from a mobile app provider.

I asked Magistrate Judge Stephen Smith from Houston whether he had seen any such requests — he’s one of a group of magistrates who have pushed for more transparency on these issues. He explained he had had several hybrid pen/trap/2703(d) requests for location and other data targeting WhatsApp accounts. And he had one fugitive probation violation case where the government asked for the location data of those in contact with the fugitive’s Snapchat account, based on the logic that he might be hiding out with one of the people who had interacted with him on Snapchat. The providers would basically be asked to turn over the cell site location information they had obtained from the users’ phones along with other metadata about those interactions. To be clear, this is not location data the app provider generates, it would be the location data the phone company generates, which the app accesses in the normal course of operation.

The point of getting location data like this is not to evade standards for a particular jurisdiction on CSLI. Smith explained, “The FBI apparently considers CSLI from smart phone apps the same as CSLI from the phone companies, so the same legal authorities apply to both, the only difference being that the ‘target device’ identifier is a WhatsApp/Snapchat account number instead of a phone number.” So in jurisdictions where you can get location data with an order, that’s what it takes, in jurisdictions where you need a probable cause warrant, that’s what it will take. The map above, which ACLU makes a great effort to keep up to date here, shows how jurisdictions differ on the standards for retrospective and prospective location information, which is what (as far as we know) will dictate what it would take to get, say, CSLI data tied to WhatsApp interactions.

Rather than serving as a way to get around legal standards, the reason to get CSLI from the app provider rather than the phone company that originally produces it is to get location data from both sides of a conversation, rather than just the target phone. That is, the app provides valuable context to the location data that you wouldn’t get just from the target’s cell location data.

The fact that the government is getting location data from mobile app providers — and the fact that they comply with the same standard for CSLI obtained from phones in any given jurisdiction — may help to explain a puzzle some have been pondering for the last week or so: why Facebook’s transparency report shows a big spike in wiretap warrants last year.

[T]he latest government requests report from Facebook revealed an unexpected and dramatic rise in real-time interceptions, or wiretaps. In the first six months of 2015, US law enforcement agencies sent Facebook 201 wiretap requests (referred to as “Title III” in the report) for 279 users or accounts. In all of 2014, on the other hand, Facebook only received 9 requests for 16 users or accounts.

Based on my understanding of what is required, this access of location data via WhatsApp should appear in several different categories of Facebook’s transparency report, including 2703(d), trap and trace, emergency request, and search warrant. That may include wiretap warrants, because this is, after all, prospective interception, and not just of the target, but also of the people with whom the target communicates. That may be why Facebook told Motherboard “we are not able to speculate about the types of legal process law enforcement chooses to serve,” because it really would vary from jurisdiction to jurisdiction and possibly even judge to judge.

In any case, we can be sure such requests are happening both on the criminal and the intelligence side, and perhaps most productively under PRISM (which could capture foreign to domestic communications at a much lower standard of review). Which, again, is why any legislation covering location data should cover the act of obtaining location data, whether via the phone company, a Stingray, or a mobile app provider.

It’s Harder for FBI to Get Location Data from Phone Companies Under FISA than Other Ways

I was looking for something else on Ron Wyden’s website yesterday and noticed this exchange between Wyden and Jim Comey from January 29, 2014 (see my transcription below). At first it seemed to be another of Wyden’s persistent questions about how the government collects location data — which we generally assume to be via telephone provider or Stingray — but then I realized he was asking something somewhat different. After asking about Cell Site Location Information from phone companies, Wyden then asked whether the FBI uses the same standard (an order, presumably a Pen Register) when collecting location from a smart phone app.

Oh yeah! The government can collect location information via apps (and thereby from Google, WhatsApp, and other providers) as well.

Here’s the FBI’s response, which hasn’t been published before.

The response is interesting for several reasons, some of which may explain why the government hasn’t been getting all the information from cell phones that it wanted under the Section 215 phone dragnet.

First, when the FBI is getting prospective CSLI, it gets a full FISA order, based on a showing of probable cause (it can get historical data using just an order). The response to Wyden notes that while some jurisdictions permit obtaining location data with just an order, because others require warrants, “the FBI elects to seek prospective CSLI pursuant to a full content FISA order, thus matching the higher standard imposed in some U.S. districts.”

Some of this FISA practice was discussed in 2006, in response to some magistrates’ rulings that you needed more than an order to get location data, though there are obviously more recent precedents that are stricter about needing a warrant.

This means it is actually harder right now to get prospective CSLI under FISA than it is under Title III in some states. (The letter also notes sometimes the FBI “will use criminal legal authorities in national security investigations,” which probably means FBI will do so in those states with a lower standard).

The FBI’s answer about smart phone apps was far squirrelier. It did say that when obtaining information from the phone itself, it gets a full-content FISA order, absent any exception to the Fourth Amendment (such as the border exception, which is one of many reasons FBI loves to search phones at the border and therefore hates Apple’s encryption); note this March 6, 2014 response was before the June 24, 2014 Riley v. CA decision that required a warrant to search a cell phone, which means FISA was on a higher standard there, too, until SCOTUS caught up.

But as to getting information from smartphone apps itself, here’s what FBI answered.

Which legal authority we would use is very much dependent upon the type of information we are seeking and how we intend to obtain that information. Questions considered include whether or not the information sought would target an individual in an area in which that person has a reasonable expectation of privacy, what type of data we intend to obtain (GPS or other similarly precise location information), and how we intend to obtain the data (via a request for records from the service provider or from the mobile device itself).

In other words, after having thought about how to answer Wyden for five weeks rather than the one they had promised, they didn’t entirely answer the question, which was what it would take for the FBI to get information from apps, rather than cell phone providers, though I think that may be the same standard as for CSLI from a cell phone company.

But this seems to say that, in the FISA context, it may well be easier — and a lower standard of evidence — for the FBI to get location data from a Stingray.

This explains why Wyden’s location bill — which he was pushing just the other day, after the Supreme Court refused to take Quartavious Davis’ appeal — talks about location collection generally, rather than using (for example) a Stingray.


Wyden: I’d like to ask you about the government’s authority to track individuals using things like cell site location information and smart phone applications. Last fall the NSA Director testified that “we–the NSA–identify a number we can give that to the FBI. When they get their probable cause then they can get the locational information they need.”

I’ve been asking the NSA to publicly clarify these remarks but it hasn’t happened yet. So, is the FBI required to have probable cause in order to acquire Americans’ cell site location information for intelligence purposes?

Comey: I don’t believe so Senator. We — in almost all circumstances — we have to obtain a court order but the showing is “a reasonable basis to believe it’s relevant to the investigation.”

Wyden: So, you don’t have to show probable cause. You have cited another standard. Is that standard different if the government is collecting the location information from a smart phone app rather than a cell phone tower?

Comey: I don’t think I know, I probably ought to ask someone who’s a little smarter what the standard is that governs those. I don’t know the answer sitting here.

Wyden: My time is up. Can I have an answer to that within a week?

Comey: You sure can.

Comey Sending Out His Allies as Ferguson Effect Truthers

When Chuck Rosenberg, the Acting DEA Administrator, echoed Jim Comey’s suggestion that increased surveillance of cops had led to a chilling effect leading them to stop doing their jobs …

Chuck Rosenberg, head of the U.S. Drug Enforcement Agency, said Wednesday that he agrees with FBI Director James Comey that police officers are reluctant to aggressively enforce laws in the post-Ferguson era of capturing police activity on smartphones and YouTube.

“I think there’s something to it,” Rosenberg said during a press briefing on drug statistics at DEA headquarters in Arlington. “I think he’s spot on. I’ve heard the same thing.”

… I noted that Rosenberg is also Comey’s former Chief of Staff, from when Comey was Deputy Attorney General in the Bush Administration.

Which is why I find it interesting that the White House has suggested President Obama raised the issue with Comey in a meeting this week.

Asked whether Mr. Obama would call in the two men to discuss the issue privately, Mr. Earnest noted that Mr. Comey met with the president last week, and he strongly hinted that the president chided his F.B.I. director on the subject.

“The president is certainly counting on Director Comey to play a role in the ongoing debate about criminal justice reform,” Mr. Earnest said, suggesting that Mr. Obama expected Mr. Comey to uphold the president’s view on the matter.

While he was Comey’s CoS, remember, Comey made sure he was in the loop on torture discussions he otherwise wouldn’t have been included in, as Comey made an effort to limit some of what got approved in the May 2005 torture memos. That was partly to make sure the torturers didn’t use his absence to push through the memo, but also partly (it seems clear now) to lay out his own record of events.

Given the timing (and the distinct possibility Rosenberg endorsed Comey’s Ferguson Effect views after Comey got chewed out by the President), this feels like a concerted bureaucratic stand. Of course, having these two allies atop aggressive law enforcement agencies, with Comey just 2 years into a 10-year term, stubbornly repeating police claims is a pretty powerful bureaucratic stand for cops who want to avoid oversight.

Jim Comey Describes the Dangerous Chilling Effect of Surveillance (But Only for Cops)

For at least the second time, Jim Comey has presented himself as a Ferguson Effect believer, someone who accepts data that has been cherry picked to suggest a related rise in violent crime in cities across the country (I believe that in Ferguson itself, violent crime dropped last month, but whatever).

I have spoken of 2014 in this speech because something has changed in 2015. Far more people are being killed in America’s cities this year than in many years. And let’s be clear: far more people of color are being killed in America’s cities this year.

And it’s not the cops doing the killing.

We are right to focus on violent encounters between law enforcement and civilians. Those incidents can teach all of us to be better.

But something much bigger is happening.

Most of America’s 50 largest cities have seen an increase in homicides and shootings this year, and many of them have seen a huge increase. These are cities with little in common except being American cities—places like Chicago, Tampa, Minneapolis, Sacramento, Orlando, Cleveland, and Dallas.

In Washington, D.C., we’ve seen an increase in homicides of more than 20 percent in neighborhoods across the city. Baltimore, a city of 600,000 souls, is averaging more than one homicide a day—a rate higher than that of New York City, which has 13 times the people. Milwaukee’s murder rate has nearly doubled over the past year.

Yesterday, Comey flew to Chicago and repeated something its embattled Mayor recently floated (even while Bill Bratton, who is a lot more experienced at policing than Rahm Emanuel, has publicly disputed it): that cops are not doing their job because people have started taking videos of police interactions.

I’ve also heard another explanation, in conversations all over the country. Nobody says it on the record, nobody says it in public, but police and elected officials are quietly saying it to themselves. And they’re saying it to me, and I’m going to say it to you. And it is the one explanation that does explain the calendar and the map and that makes the most sense to me.

Maybe something in policing has changed.

In today’s YouTube world, are officers reluctant to get out of their cars and do the work that controls violent crime? Are officers answering 911 calls but avoiding the informal contact that keeps bad guys from standing around, especially with guns?

I spoke to officers privately in one big city precinct who described being surrounded by young people with mobile phone cameras held high, taunting them the moment they get out of their cars. They told me, “We feel like we’re under siege and we don’t feel much like getting out of our cars.”

I’ve been told about a senior police leader who urged his force to remember that their political leadership has no tolerance for a viral video.

So the suggestion, the question that has been asked of me, is whether these kinds of things are changing police behavior all over the country.

And the answer is, I don’t know. I don’t know whether this explains it entirely, but I do have a strong sense that some part of the explanation is a chill wind blowing through American law enforcement over the last year. And that wind is surely changing behavior.

Let’s, for the moment, assume Comey’s anecdote-driven impression, both of the Ferguson Effect and of the role of cameras, is correct (to his credit, in this speech he called for more data; he would do well to heed his own call on that front). Let’s assume that all these cops (and mayors, given that Comey decided to make this claim in Rahm’s own city) are correct, and cops have stopped doing the job we’re all paying them to do because they’re under rather imperfect but nevertheless increased surveillance.

We’ll take you at your word, Director Comey.

If Comey’s right, what he’s describing is the chilling effect of surveillance, the way in which people change their behavior because they know they will be seen by a camera. That Comey is making such a claim is all the more striking given that the surveillance cops are undergoing is targeted surveillance, not the kind of dragnet surveillance (such as the use of planes to surveil the Baltimore and Ferguson protests, which he acknowledged this week) his agency and the NSA subject Americans to.

Sorry, sir! Judge after judge has ruled such claims to be speculative and therefore invalid in a court of law, most recently when T.S. Ellis threw out the ACLU’s latest challenge to the dragnet yesterday!

I actually do think there’s something to the chilling effect of surveillance (though, again, what’s happening to cops is targeted, not dragnet). But if Comey has a problem with that, he can’t have it both ways. He needs to consider the way in which the surveillance of young Muslim and African-American men leads them to do things they might not otherwise do, and the way in which it makes targets of surveillance feel under siege. He needs to consider how the surveillance his Agents undertake actually makes it less likely people will engage in the things they’re supposed to do: enjoy free speech, mount a robust criminal defense unrestricted by spying on lawyers, enjoy privacy.

Comey adheres to a lot of theories, including the Ferguson Effect.

But as of yesterday, he is also on the record as claiming that surveillance has a chilling effect. Maybe he should consider the implications of what he is saying for the surveillance his own agency has us under? If the targeted surveillance of cops is a problem, isn’t the far less targeted surveillance he authorizes a bigger problem?