In One of His First Major Legislative Acts, Paul Ryan Trying to Deputize Comcast to Narc You Out to the Feds

As the Hill reports, Speaker Paul Ryan is preparing to add a worsened version of the Cybersecurity Information Sharing Act to the omnibus budget bill, bypassing the jurisdictional interests of Homeland Security Chair Mike McCaul in order to push through the most privacy-invasive version of the bill.

But several people tracking the negotiations believe McCaul is under significant pressure from House Speaker Paul Ryan (R-Wis.) and other congressional leaders to not oppose the compromise text.

They said lawmakers are aiming to vote on the final cyber bill as part of an omnibus budget deal that is expected before the end of the year.

As I laid out in October, it appears CISA — even in the form that got voted out of the Senate — would serve as a domestic “upstream” spying authority, providing the government a way to spy domestically without a warrant.

CISA permits the telecoms to do, for cybersecurity purposes, the kinds of scans they currently do for foreign intelligence purposes, in ways that (unlike the upstream 702 usage we know about) would not be required to have a foreign nexus. CISA permits the people currently scanning the backbone to continue to do so, only now what they find can be turned over to and used by the government without consideration of whether the signature has a foreign tie or not. Unlike FISA, CISA permits the government to collect entirely domestic data.

We recently got an idea of how this might work. Comcast is basically hacking its own users to find out if they’re downloading copyrighted material.

[Comcast] has been accused of tapping into unencrypted browser sessions and displaying warnings that accuse the user of infringing copyrighted material — such as sharing movies or downloading from a file-sharing site.

That could put users at risk, says the developer who discovered it.

Jarred Sumner, a San Francisco, Calif.-based developer who published the alert banner’s code on his GitHub page, told ZDNet in an email that this could cause major privacy problems.

Sumner explained that Comcast injects the code into a user’s browser as they are browsing the web, performing a so-called “man-in-the-middle” attack. (Comcast has been known to alert users when they have surpassed their data caps.) This means Comcast intercepts the traffic between a user’s computer and their servers, instead of installing software on the user’s computer.

[snip]

“This probably means that Comcast is using [deep packet inspection] on subscriber’s internet and/or proxying subscriber internet when they want to send messages to subscribers,” he said. “That would let Comcast modify unencrypted traffic in both directions.”

In other words, Comcast is already doing the same kind of deep packet inspection of its users’ unencrypted activity as the telecoms use in upstream collection for the NSA. Under CISA, they’d be permitted — and Comcast sure seems willing — to do such searches for the Feds.
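Only content sent in the clear can be rewritten (or inspected) in transit, which is what makes this kind of injection possible in the first place. Here’s a rough, hypothetical sketch of one way a user might check for in-path tampering, by comparing the same page fetched over plain HTTP and over HTTPS; example.com is just a stand-in, and since dynamic pages differ between fetches anyway, a mismatch is a hint, not proof:

```python
import hashlib
import urllib.request

# Rough heuristic, not proof: fetch the same page over plain HTTP and over HTTPS
# and compare. Injection of the kind described above can only happen on the
# unencrypted path; a mismatch on a static page hints that something in the
# middle rewrote it. (Dynamic pages differ between fetches anyway.)
PATH = "example.com/"    # a stand-in; use a static page you control

def body_hash(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return hashlib.sha256(resp.read()).hexdigest()

plain = body_hash("http://" + PATH)
secure = body_hash("https://" + PATH)
print("HTTP :", plain)
print("HTTPS:", secure)
print("possible in-path modification" if plain != secure else "no difference detected")
```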

Some methods of downloading copyrighted content might already be considered a cyberthreat indicator that Comcast could report directly to the Federal government (and possibly, under this latest version, directly to the FBI). And there are reports that the new version will adopt an expanded list of crimes, to include the Computer Fraud and Abuse Act.

In other words, it’s really easy to see how under this version of CISA, the government would ask Comcast to hack you to find out if you’re doing one of the long list of things considered hacking — a CFAA violation — by the Feds.

How’s that for Paul Ryan’s idea of conservatism, putting the government right inside your Internet router as one of his first major legislative acts?

Internet of Things: Now, with ‘Breachable’ Kids Connect and ‘Hackable’ Barbie

[graphic: Hello Barbie via Mattel’s website]

The Internet of Things (IoT) already includes refrigerators, televisions, slow cookers, automobiles, you name it. Most of these items have already experienced security problems, whether personal information leaks, or manipulative hacking.

Now the IoT includes toys — and wow, what a surprise! They’re riddled with privacy and security problems, too.

Like VTech’s privacy breach, which exposed data on more than 6 million children and parents, including facial photos and chat logs, through its Kids Connect technology. The company’s privacy policy (last archived copy) indicated communications would be encrypted, but the encryption proved whisper thin.

Or Mattel’s Hello Barbie, its Wi-Fi-enabled communications at risk of hacking and unauthorized surveillance. The flaws include the doll’s ability to connect to any Wi-Fi network named “Barbie” — it was absolutely brain-dead easy to spoof such a network and begin snooping on anything the doll could “hear.”

It’s amazing these manufacturers ever thought these toys were appropriate for the marketplace, given their target audience. In VTech’s case, it appears to be nearly all ages (its Android app on Google Play is unrated), and in the case of Mattel’s Hello Barbie, it’s primarily girls ages 6-15.

These devices are especially iffy since they tippy-toe along the edge of the Children’s Online Privacy Protection Act of 1998 (a.k.a. COPPA, 15 U.S.C. 6501–6505).

Parents share much of the blame, too. Most have no clue how federal law covers children’s internet use under COPPA, or what the Children’s Internet Protection Act (a.k.a. CIPA, 47 CFR 54.520) requires. Nor do the parents who buy these devices appear to grasp this basic fact: any network-mediated or Wi-Fi toy, apart from the obvious cellphone/tablet/PC, is at implicit risk of leaking personal data or being hacked. How are devices that risk exposing children’s data, including their activities and location, age-appropriate toys?

This piece at Computerworld has a few helpful suggestions. In my opinion, the IoT doesn’t belong in your kids’ toybox until your kids are old enough to understand and manage personal digital information security and to use the internet safely.

Frankly, many parents aren’t ready for safe internet use.

Dianne Feinstein Inadvertently Calls to Expose America’s Critical Infrastructure to Hackers

For days now, surveillance hawks have been complaining that terrorists probably used encryption in their attack on Paris last Friday. That, in spite of the news that authorities used a phone one of the attackers threw in a trash can to identify a hideout in St. Denis (this phone might in fact have been encrypted and brute-force decrypted, but given the absence of such a claim and the quick turnaround, most people have assumed both it and the pre-attack chats on it were not encrypted).

I suspect we’ll learn attackers did use encryption (and a great deal of operational security that has nothing to do with encryption) at some point in planning their attack — though the entire network appears to have been visible through metadata and other intelligence. Thus far, however, there’s only one way we know of that the terrorists used encryption leading up to the attack: when one of them paid for things like a hotel online, the processing of his credit card (which was in his own name) presumably took place over HTTPS (hat tip to William Ockham for first making that observation). So if we’re going to blindly demand we prohibit the encryption the attackers used, we’re going to commit ourselves to far, far more hacking of online financial transactions.
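For what it’s worth, that encryption is nothing exotic: it is the standard TLS session underneath any HTTPS payment page. A minimal sketch of what that protection looks like at the socket level, with example.com standing in for a payment processor:

```python
import socket
import ssl

# example.com stands in for a payment processor; the point is that an ordinary
# online checkout rides on a verified TLS session like this one.
host = "example.com"
context = ssl.create_default_context()      # verifies the server's certificate chain
with socket.create_connection((host, 443), timeout=10) as raw:
    with context.wrap_socket(raw, server_hostname=host) as tls:
        print("negotiated:", tls.version(), tls.cipher()[0])
        # A card number sent inside this channel is unreadable to anyone on the wire.
```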

I’m more interested in the concerns about terrorists’ claimed use of the PlayStation 4. Three days before the attack, Belgium’s Interior Minister said all countries were having problems with PlayStation 4s, which led to a frenzy mistakenly claiming the Paris terrorists had used the console (there’s far more reason to believe they used Telegram).

One of those alternatives was highlighted on Nov. 11, when Belgium’s federal home affairs minister, Jan Jambon, said that a PlayStation 4 (PS4) console could be used by ISIS to communicate with their operatives abroad.

“PlayStation 4 is even more difficult to keep track of than WhatsApp,” said Jambon, referencing to the secure messaging platform.

Earlier this year, Reuters reported that a 14-year-old boy from Austria was sentenced to a two-year jail term after he downloaded instructions on bomb-building onto his Playstation games console, and was in contact with ISIS.

It remains unclear, however, how ISIS would have used PS4s, though options range from the relatively direct methods of sending messages to players or voice-chatting, to more elaborate methods cooked up by those who play games regularly. Players, for instance, can use their weapons during a game to send a spray of bullets onto a wall, spelling out whole sentences to each other.

This has DiFi complaining that Playstation is encrypted.

Even Playstation is encrypted. It’s very hard to get the data you need because it’s encrypted

Thus far, it’s not actually clear most communications on Playstation are encrypted (though players may be able to pass encrypted objects about); most people I’ve asked think the communications are not encrypted, though Sony isn’t telling. What is likely is that there’s not an easy way to collect metadata tracking the communications within games, which would make it hard to collect on, whether or not some parts of the communications data are encrypted.

But at least one kind of data on Playstations — probably two — is encrypted: Credit cards and (probably) user data. That’s because 4 years ago, Playstation got badly hacked.

“The entire credit card table was encrypted and we have no evidence that credit card data was taken,” said Sony.

This is the slimmest amount of good news for PlayStation Network users, but it alone raises very serious concerns, since Sony has yet to provide any details on what sort of encryption has been used to protect that credit card information.

As a result, PlayStation Network users have absolutely no idea how safe their credit card information may be.

But the bad news keeps rolling in:

“The personal data table, which is a separate data set, was not encrypted,” Sony notes, “but was, of course, behind a very sophisticated security system that was breached in a malicious attack.”

A very sophisticated security system that ultimately failed, making it useless.

Why Sony failed to encrypt user account data is a question that security experts have already begun to ask. Along with politicians both in the United States and abroad.

Chances are Sony’s not going to have an answer that’s going to please anyone.

That was one in a series of really embarrassing hacks, and I assume Sony has locked things down more since. Three years after that Playstation hack, of course, Sony’s movie studio would be declared critical infrastructure after it also got hacked.

Here’s the thing: Sony is the kind of serially negligent company that needs to embrace good security if the US is going to keep itself secure. We should be saying, “Encrypt away, Sony! Please keep yourself safe, because hackers love to hack you and they’ve had spectacular success doing so! Jolly good!”

But we can’t, at the same time, be complaining that Sony offers some level of encryption as if that makes the company a material supporter of terrorism. Sony is a perfect example of how you can’t have it both ways, secure against hackers but not against wiretappers.
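To make concrete what Sony did and didn’t do in 2011: encrypting the credit card table but not the personal data table is a choice about which columns get encrypted at rest. A minimal, hypothetical sketch of that split, using the third-party cryptography package and field names of my own invention:

```python
# Hypothetical sketch of column-level encryption at rest, roughly the split Sony
# described in 2011: the card number is stored encrypted, the "personal data"
# fields are not. Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()     # in real life this lives in an HSM or key-management service
box = Fernet(key)

record = {
    "user": "jdoe",
    "email": "jdoe@example.com",                       # stored in the clear
    "card_number": box.encrypt(b"4111111111111111"),   # stored encrypted
}

# Someone who steals the database sees the email immediately,
# but the card number only if they also steal the key.
print(record["email"])
print(record["card_number"][:24], b"...")
print(box.decrypt(record["card_number"]))
```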

Amid the uproar about terrorists maybe using encryption, the ways they may actually have used it — to secure online financial transactions and game player data — should be a warning against condemning encryption broadly.

Because next week, when hackers attack us, we’ll be wishing our companies had better encryption to keep us safe.

DOJ Still Gets a Failing Grade on Strong Authentication

In DOJ’s Inspector General’s annual report on challenges facing the department, Michael Horowitz revealed how well DOJ is complying with the Office of Management and Budget’s directive in the wake of the OPM hack that agencies improve their own cybersecurity, including by adopting strong authentication for both privileged and unprivileged users.

DOJ’s still getting a failing grade on that front — just 64% of users are in compliance with the requirement that they use strong authentication.

Following OMB’s directive, the White House reported that federal civilian agencies increased their use of strong authentication (such as smartcards) for privileged and unprivileged users from 42 percent to 72 percent. The Justice Department, however, had among the worst overall compliance records for the percentage of employees using smartcards during the third quarter of FY 2015 – though it has since made significant improvements, increasing to 64 percent of privileged and unprivileged users in compliance by the fourth quarter. Given both the very sensitive nature of the information that it controls, and its role at the forefront of the effort to combat cyber threats, the Department must continue to make progress to be a leader in these critical areas.

Ho hum. These are only the databases protecting FBI’s investigations into mobs, terrorists, and hackers. No reason to keep those safe.

In any case, it may be too late, as the Crackas with Attitude already broke into the portal for some of those databases.

Ah well, we’ll just dump more information into those databases under CISA and see if that prevents hackers.
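For anyone wondering what “strong authentication” means here: the OMB directive is about PIV smartcards, which can’t be reduced to a few lines of code, so treat the following only as a stand-in. A time-based one-time password (RFC 6238, standard library only, hypothetical secret) illustrates the same basic idea, proof of a possessed secret rather than just a memorized password:

```python
import base64
import hashlib
import hmac
import struct
import time

# Stand-in illustration only: the OMB mandate is about PIV smartcards, but a
# time-based one-time password (RFC 6238) shows the same basic idea -- proof of
# a possessed secret rather than just a memorized password.
def totp(secret_b32, at=None, digits=6, step=30):
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int((time.time() if at is None else at) // step))
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % (10 ** digits)
    return str(code).zfill(digits)

secret = "JBSWY3DPEHPK3PXP"             # hypothetical enrolled secret (base32)
submitted = totp(secret)                # what the user's token or phone app displays
print("verifies:", hmac.compare_digest(submitted, totp(secret)))
```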

Government (and Its Expensive Contractors) Really Need to Secure Their Data Collections

Given two recent high profile hacks, the government needs to either do a better job of securing its data collection and sharing process, or presume people will get hurt because of it.

After the hackers Crackas With Attitude hacked John Brennan, they went on to hack FBI’s Deputy Director Mark Giuliano as well as a law enforcement portal run by the FBI. The hack of the latter hasn’t gotten as much attention — thus far, WikiLeaks has not claimed to have the data, but upon closer examination of the data obtained, it appears it might provide clues and contact information about people working undercover for the FBI.

Then, the hackers showed Wired’s Kim Zetter what the portal they had accessed included. Here’s a partial list:

Enterprise File Transfer Service—a web interface to securely share and transmit files.

Cyber Shield Alliance—an FBI Cybersecurity partnership initiative “developed by Law Enforcement for Law Enforcement to proactively defend and counter cyber threats against LE networks and critical technologies,” the portal reads. “The FBI stewards an array of cybersecurity resources and intelligence, much of which is now accessible to LEA’s through the Cyber Shield Alliance.”

IC3—“a vehicle to receive, develop, and refer criminal complaints regarding the rapidly expanding arena of cyber crime.”

Intelink—a “secure portal for integrated intelligence dissemination and collaboration efforts”

National Gang Intelligence Center—a “multi-agency effort that integrates gang information from local, state, and federal law enforcement entities to serve as a centralized intelligence resource for gang information and analytical support.”

RISSNET—which provides “timely access to a variety of law enforcement sensitive, officer safety, and public safety resources”

Malware Investigator—an automated tool that “analyzes suspected malware samples and quickly returns technical information about the samples to its users so they can understand the samples’ functionality.”

eGuardian—a “system that allows Law Enforcement, Law Enforcement support and force protection personnel the ability to report, track and share threats, events and suspicious activities with a potential nexus to terrorism, cyber or other criminal activity.”

While the hackers haven’t said whether they’ve gotten into these information sharing sites, they clearly got as far as the portal to the tools that let investigators share information on large networked investigations, targeting things like gangs, other organized crime, terrorists, and hackers. If hackers were to access those information sharing networks, they might be able not only to monitor investigations into such networked crime groups, but also (using credentials they already hacked) to make false entries. And all that’s before CISA vastly expands this info sharing.

Meanwhile, the Intercept reported receiving 2.5 years of recorded phone calls — amounting to 70 million recorded calls — from one of the nation’s largest jail phone providers, Securus. Its report focuses on proving that Securus is not defeat-listing calls to attorneys, meaning it has breached attorney-client privilege. As Scott Greenfield notes, that’s horrible but not at all surprising.
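Defeat-listing, for what it’s worth, is conceptually trivial, which makes the failure harder to excuse. A minimal, hypothetical sketch of suppressing recording for calls to registered attorney numbers (the numbers are made up):

```python
# Hypothetical sketch: suppress recording for calls to numbers registered as privileged.
PRIVILEGED_NUMBERS = {"+15551230000", "+15551230001"}   # made-up attorney numbers

def should_record(called_number):
    """Return False for calls that must never be recorded (attorney-client)."""
    return called_number not in PRIVILEGED_NUMBERS

for number in ("+15559876543", "+15551230000"):
    print(number, "-> record" if should_record(number) else "-> do NOT record (privileged)")
```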

But on top of that, the Intercept’s source reportedly obtained these recorded calls by hacking Securus. While we don’t have details of how that happened, that does mean all those calls were accessible to be stolen. If Intercept’s civil liberties-motivated hacker can obtain the calls, so can a hacker employed by organized crime.

The Intercept notes that even calls to prosecutors were online (which might include discussions from informants). But it would seem just calls to friends and associates would prove of interest to certain criminal organizations, especially if they could pinpoint the calls (which is, after all, the point). As Greenfield notes, defendants don’t usually listen to their lawyers’ warnings — or those of the signs by the phones saying all calls will be recorded — and so they say stupid stuff to everyone.

So we tell our clients that they cannot talk about anything on the phone. We tell our clients, “all calls are recorded, including this one.”  So don’t say anything on the phone that you don’t want your prosecutor to hear.

Some listen to our advice. Most don’t. They just can’t stop themselves from talking.  And if it’s not about talking to us, it’s about talking to their spouses, their friends, their co-conspirators. And they say the most remarkable things, in the sense of “remarkable” meaning “really damaging.”  Lawyers only know the stupid stuff they say to us. We learn the stupid stuff they say to others at trial. Fun times.

Again, such calls might be of acute interest to rival gangs (for example) or co-conspirators who have figured out someone has flipped.

It’s bad enough the government left OPM’s databases insecure, and with it sensitive data on 21 million clearance holders.

But it looks like key law enforcement data collections are not much more secure.

Defining Stingray Emergencies … or Not

A couple of weeks ago, ACLU NoCal released more documents on the use of Stingray. While much of the attention focused on the admission that innocent people get sucked up in Stingray usage, I was at least as interested in the definition of an emergency during which a Stingray could be used with retroactive authorization:
[screenshot: the retroactive-authorization emergency definition from the ACLU documents]

I was interested both in the invocation of organized crime (which would implicate drug dealing) and in the suggestion that the government would get a Stingray to pursue a hacker under the CFAA. Equally curiously, the definition here leaves out part of the definition of “protected computer” under CFAA, the part covering computers used in interstate communication.

(2) the term “protected computer” means a computer—
(A) exclusively for the use of a financial institution or the United States Government, or, in the case of a computer not exclusively for such use, used by or for a financial institution or the United States Government and the conduct constituting the offense affects that use by or for the financial institution or the Government; or
(B) which is used in or affecting interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States;

Does the existing definition of an emergency describe how DOJ has most often used Stingrays to pursue CFAA violations (which, of course, as far as we know, have never been noticed to defendants)?

Now compare the definition Jason Chaffetz used in his Stingray Privacy Act, a worthwhile bill limiting the use of Stingrays, though this emergency section is the one about which I and others have the most concerns. Chaffetz doesn’t have anything that explicitly invokes the CFAA definition, and he collapses the “threat to national security” and, potentially, the CFAA prong into “conspiratorial activities threatening the national security interest.”

(A) such governmental entity reasonably determines an emergency exists that—

(i) involves—

(I) immediate danger of death or serious physical injury to any person;

(II) conspiratorial activities threatening the national security interest; or

(III) conspiratorial activities characteristic of organized crime;

Presumably, requiring conspiratorial activities threatening the national security interest might raise the bar for — but would still permit — the use of Stingrays against low-level terrorism wannabes. Likewise, while it would likely permit the use of Stingrays against hackers (who are generally treated as counterintelligence threats among NatSec investigators), it might require some conspiracy between hackers.

All that said, there’s a whole lot of flux in what even someone who is often decent on civil liberties like Chaffetz considers a national security threat.

And, of course, in the FISA context, the notion of what might be regarded as an immediate danger of physical injury continues to grow.

These definitions are both far too broad, and far too vague.

It’s Not Just the FISA Court, It’s the Game of Surveillance Whack-a-Mole

In response to this post from Chelsea Manning, the other day I did the first in what seems to have become a series of posts arguing that we should eliminate the FISA Court, but that the question is not simple. In that post, I laid out the tools the FISC has used, with varying degrees of success, in reining in Executive branch spying, especially in times of abuse.

In this post, I want to lay out how reining in surveillance isn’t just about whether the secret approval of warrants and orders would be better done by the FISC or a district court. It’s about whack-a-mole.

That’s because, right now, there are four ways the government gives itself legal cover for expansive surveillance:

  • FISC, increasingly including programs
  • EO 12333, including SPCMA
  • Magistrate warrants and orders without proper briefing
  • Administrative orders and/or voluntary cooperation

FISA Court

The government uses the FISA court to get individualized orders for surveillance in this country and, to a less clear extent, surveillance of Americans overseas. That’s the old-fashioned stuff that could be done by a district court. But it’s also one point where egregious source information — be it a foreign partner using dubious spying techniques, or, as John Brennan admitted in his confirmation hearing, torture — gets hidden. No defendant has ever been able to challenge the basis for the FISA warrant used against them, which is clearly not what Congress said it intended in passing FISA. But given that’s the case, it means a lot of prosecutions that might not pass constitutional muster, because of that egregious source information, get a virgin rebirth in the FISC.

In addition, starting in 2004, the government used the FISA Court to coerce corporations to continue domestic collection programs they had previously run voluntarily. As I noted, while I think the FISC’s oversight of these programs has been mixed, the FISC has forced the government to hew closer to (though not fully within) the law.

EO 12333, including SPCMA

The executive branch considers FISA just a subset of EO 12333, the Reagan Executive Order governing the intelligence community — a carve-out of collection requiring more stringent rules. At times, the Intelligence Community has operated as if EO 12333 is the only set of rules it needs to follow — and it has even secretly rewritten the order at least once to change the rules. The government will always assert the right to conduct spying under EO 12333 if it has a technical means to bypass that carve-out. That’s what the Bush Administration claimed Stellar Wind operated under. And at precisely the time the FISC was imposing limits on the Internet dragnet, the Executive Branch was authorizing analysis of Americans’ Internet metadata collected overseas under SPCMA.

EO 12333 derived data does get used against defendants in the US, though it appears to be laundered through the FISC and/or parallel constructed, so defendants never get the opportunity to challenge this collection.

Magistrate warrants and orders

Even when the government goes to a Title III court — usually a magistrate judge — to get an order or warrant for surveillance, that surveillance often escapes real scrutiny. We’ve seen this happen with Stingrays and other location collection, as well as FBI hacking; in those cases, the government often didn’t fully brief magistrates about what they were approving, so the judges didn’t consider the constitutional implications of it. There are exceptions, however (James Orenstein, the judge letting Apple challenge the use of the All Writs Act to force it to unlock a phone, is a notable one), and that has provided periodic checks on collection that should require more scrutiny, as well as public notice of those methods. That’s how, a decade after magistrates first started to question the collection of location data using orders, we’re finally getting circuit courts to review the issue. Significantly, these more exotic spying techniques are often repurposed foreign intelligence methods, meaning you’ll have magistrates and other Title III judges weighing in on surveillance techniques being used in parallel programs under FISA. At least in the case of Internet data, that may even result in a higher standard of scrutiny and minimization being applied to the FISA collection than to the criminal investigation collection.

Administrative orders and/or voluntary cooperation

Up until 2006, telecoms willingly turned over metadata on Americans’ calls to the government under Stellar Wind. Under Hemisphere, AT&T provides the government call record information — including the results of location-based analysis, on all the calls that used its networks, not just those of AT&T customers — sometimes without an order. For months after Congress had started to find a way to rein in the NSA phone dragnet with the USA Freedom Act, the DEA continued to operate its own dragnet of international calls that ran entirely on administrative orders. Under CISA, the government will obtain and disseminate information on cybersecurity threats that it wouldn’t be able to under upstream 702 collection; no judge will review that collection. Until 2009, the government was using NSLs to get all the information an ISP had on a user or website, including traffic information. AT&T still provides enhanced information, including the call records of friends-and-family co-subscribers and (less often than in the past) communities of interest.

These six examples make it clear that, even with Americans, even entirely within the US, the government conducts a lot of spying via administrative orders and/or voluntary cooperation. It’s not clear this surveillance has had anything but internal agency oversight, and what is known about these programs (the onsite collaboration that was probably one precursor to Hemisphere, the early NSL usage) makes it clear there have been significant abuses. Moreover, a number of these programs represent individual collection (the times when FBI used an NSL to get something the FISC had repeatedly refused to authorize under a Section 215 order) or programmatic collection (I suspect, CISA) that couldn’t be approved under the auspices of the FISC.

All of which is to say the question of what to do to bring better oversight over expansive surveillance is not limited to the shortcomings of the FISC. It also must contend with the way the government tends to move collection programs when one method proves less than optimal. Where technologically possible, it has moved spying offshore and conducted it under EO 12333. Where it could pay or otherwise bribe and legally shield providers, it moved to voluntary collection. Where it needed to use traditional courts, it often just obfuscated about what it was doing. The primary limits here are not legal, except insofar as legal niceties and the very remote possibility of transparency raise corporate partner concerns.

We need to fix or eliminate the FISC. But we need to do so while staying ahead of the game of whack-a-mole.

Could Corporations Include CISA Non-Participation in Transparency Reports? Would It Even Mean Anything?

I confess I don’t know the answer to this question, but I’m going to pose it anyway. Could companies report non-participation in CISA — or whatever the voluntary cyber information sharing program that will soon roll out is eventually called — in their transparency reports?

I ask in part because there’s great uncertainty about whether tech companies support or oppose the measure. The Business Software Alliance suggested they supported a data sharing bill, until Fight for the Future made a stink, at which point at least some of them backed off (while a number of other BSA members, like Adobe, IBM, and Siemens, will surely embrace the bill). A number of companies have opposed CISA, either directly (like Apple) or via the Computer and Communications Industry Association. But even Google, which is a CCIA member, still wants a way to share information, even as it expresses concerns about CISA’s current form. Plus, there is some indication that some of the companies claiming to oppose CISA — most notably, Facebook — are secretly lobbying in favor of it.

In the wake of CISA passing, activists are wondering if companies would agree not to participate (because participation is, as Richard Burr reminded us over and over, voluntary, even if the key voluntary participants will also be bidding on a $50 billion contract as CISA rolls out). But I’m not sure what that would even mean.

So, first, would companies legally be permitted to claim in their transparency reports that they did not voluntarily participate in CISA? There are a lot of measures that prohibit the involuntary release of information about companies’ voluntary participation in CISA. But nothing in the bill seems to prohibit the voluntary release of information about companies’ voluntary non-participation.

But even if a company made such a claim — or claimed that they only share cyber indicators with legal process — would it even be meaningful? Consider: Most of the companies that might make such a claim get hacked. Even Apple, the company that has taken the lead on pushing back against the government, has faced a series of attacks and/or vulnerabilities of late, both in its code and its app store. Any disclosures it made, whether to the Federal government or to its app vendors, would be covered by CISA unless Apple deliberately disclosed that information outside the terms of CISA — for example, by deliberately leaving personally identifiable information in any code it shared, which it’s not about to do. Apple will enjoy the protections in CISA whether it asked for them or not. I can think of just two ways to avoid triggering the protections of CISA: either to only report such vulnerabilities as a crime report to FBI (which, because it bypassed the DHS, would not get full protection, and which would be inappropriate for most kinds of vulnerability disclosures), or to disclose everything publicly. And that’s assuming there aren’t more specific disclosures — such as attempts to attack specific iCloud accounts — that would legitimately be intelligence reports. Google tells users if it thinks state actors are trying to compromise their accounts; is this appropriate to share with the government without process?

Moreover, most of the companies that would voluntarily not participate already have people with clearance who can and do receive classified intelligence from the government. Plus, these companies can’t choose not to let their own traffic that transits the communications backbone be scanned by the backbone owners.

In other words, I’m not sure how a company can claim not to participate in CISA once it goes into effect unless it doesn’t share any information. And most of the big tech companies are already sharing this information among themselves; they want to continue to do that sharing, and that sharing would get CISA protections.

The problem is, there are a number of kinds of information sharing that will get the protections of CISA, all of which would count as “participating in it.” Anything Apple shared with the government or other companies would get CISA protection. But that’s far different from taking a signature the government shares and scanning all backbone traffic for instances of it, which is what Verizon and AT&T will almost certainly be doing under CISA. That is, there are activities that shouldn’t require legal process, and activities that currently do but will not under CISA. And to get a meaningful sense of whether someone is “participating” in CISA by performing activities that otherwise would require legal process, you’d need a whole lot of details about what they were doing, details that not even criminal defendants will ever get. You’d even need to distinguish activities companies would do of their own accord (Apple’s own scans of its systems for known vulnerabilities) from things that came pursuant to information received from the federal government (a scan on a vulnerability Apple learned about from the government).

We’re never going to get that kind of information from a transparency report, except insofar as companies detail the kinds of things they require legal process for in spite of CISA protection for doing them without legal process. That would not be the same thing as non-participation in CISA — because, again, most of the companies that have raised objections already share information at least with industry partners. But that’s about all we’d get short of really detailed descriptions of any scrubbing that goes on during such information sharing.
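For a sense of what the backbone providers’ end of “participation” would look like, the scanning itself is conceptually simple: take the signatures the government shares and match them against traffic. A hypothetical sketch follows (the indicators are made up, and real deployments run in dedicated hardware at line rate, not in Python):

```python
import re

# Made-up indicators of the kind the government might share under CISA.
INDICATORS = [rb"evil-c2\.example\.com", rb"dropper\.exe\?id="]
PATTERNS = [re.compile(sig) for sig in INDICATORS]

def matches_indicator(payload: bytes) -> bool:
    """Return True if any shared signature appears in the payload."""
    return any(p.search(payload) for p in PATTERNS)

print(matches_indicator(b"GET http://evil-c2.example.com/beacon HTTP/1.1"))  # True
print(matches_indicator(b"GET http://example.com/index.html HTTP/1.1"))      # False
```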

CISA Overwhelmingly Passes, 74-21

Update: Thought I’d put a list of Senators people should thank for voting against CISA.

GOP: Crapo, Daines, Heller, Lee, Risch, and Sullivan. (Paul voted against cloture but did not vote today.)

Dems: Baldwin, Booker, Brown, Cardin, Coons, Franken, Leahy, Markey, Menendez, Merkley, Sanders, Tester, Udall, Warren, Wyden


Just now, the Senate voted to pass the Cybersecurity Information Sharing Act by a vote of 74 to 21. While 7 more people voted against the bill than had voted against cloture last week (Update: the new votes were Cardin and Tester, Crapo, Daines, Heller, Lee, Risch, and Sullivan, with Paul not voting), this is still a resounding vote for a bill that will authorize domestic spying with no court review in this country.

The amendment voting process was interesting in its own right. Most appallingly, just after Patrick Leahy cast his 15,000th vote on another amendment — which led to a break to talk about what a wonderful person he is, as well as a speech from him about how the Senate is the conscience of the country — Leahy’s colleagues voted 59 to 37 against his amendment that would have stopped the creation of a new FOIA exemption for CISA. So right after honoring Leahy, his colleagues kicked one of his key issues, FOIA, in the ass.

More telling, though, were the votes on the Wyden and Heller amendments, the first two that came up today.

Wyden’s amendment would have required more stringent scrubbing of personal data before sharing it with the federal government. The amendment failed by a vote of 55-41 — still a big margin, but enough to sustain a filibuster. Particularly given that Harry Reid switched votes at the last minute, I believe that vote was designed to show enough support for a better bill to strengthen the hand of those pushing for that in conference (the House bills are better on this point). The amendment had the support of a number of Republicans — Crapo, Daines, Gardner, Heller, Lee, Murkowski, and Sullivan — some of whom would vote against passage. Most of the Democrats who voted against Wyden’s amendment — Carper, Feinstein, Heitkamp, Kaine, King, Manchin, McCaskill, Mikulski, Nelson, Warner, Whitehouse — consistently voted against any amendment that would improve the bill (and Whitehouse even voted for Tom Cotton’s bad amendment).

The vote on Heller’s amendment looked almost nothing like Wyden’s. Sure, the amendment would have changed just two words in the bill, requiring the government to have a higher standard for information it shared internally. But it got a very different crowd supporting it, with a range of authoritarian Republicans like Barrasso, Cassidy, Enzi, Ernst, and Hoeven voting in favor. That made the vote on the amendment much closer. So Reid, along with at least 7 other Democrats who voted for Wyden’s amendment, including Brown, Klobuchar, Murphy, Schatz, Schumer, Shaheen, and Stabenow, voted against Heller’s weaker amendment. While some of these Democrats — Klobuchar, Schumer, and probably Shaheen and Stabenow — are affirmatively pro-unconstitutional spying anyway, the swing, especially from Sherrod Brown, who voted against the bill as a whole, makes it clear that these are opportunistic votes to achieve an outcome. Heller’s amendment fell just short, 47-49, and would have passed had some of those Dems voted in favor (the GOP Presidential candidates were not present, but that probably would have been at best a wash and possibly a one-vote net against, since Cruz voted for cloture last week). Ultimately, I think Reid and these other Dems are moving to try to deliver something closer to what the White House wants, which is still unconstitutional domestic spying.

Richard Burr seemed certain that this will go to conference, which means people like him, DiFi, and Tom Carper will try to make this worse as people from the House point out that there are far more people who oppose this kind of unfettered spying in the House. We shall see.

For now, however, the Senate has embraced a truly awful bill.

Update, all amendment roll calls

Wyden: 41-55-4

Heller: 47-49-4

Leahy: 37-59-4

Franken: 35-60-5

Coons: 41-54-5

Cotton amendment: 22-73-5

Final passage: 74-21-5

Richard Burr Wants to Prevent Congress from Learning if CISA Is a Domestic Spying Bill

As I noted in my argument that CISA is designed to do what NSA and FBI wanted an upstream cybersecurity certificate to do, but couldn’t get FISA to approve, there’s almost no independent oversight of the new scheme. There are just IG reports — mostly assessing the efficacy of the information sharing and the protection of classified information shared with the private sector — and a PCLOB review. As I noted, history shows that even when both are well-intentioned and diligent, that doesn’t ensure they can demand fixes to abuses.

So I’m interested in what Richard Burr and Dianne Feinstein did with Jon Tester’s attempt to improve the oversight mandated in the bill.

The bill mandates three different kinds of biennial reports on the program: detailed IG Reports from all agencies to Congress, which will be unclassified with a classified appendix, a less detailed PCLOB report that will be unclassified with a classified appendix, and a less detailed unclassified IG summary of the first two. Note, this scheme already means that House members will have to go out of their way and ask nicely to get the classified appendices, because those are routinely shared only with the Intelligence Committee.

Tester had proposed adding a series of transparency measures to the first, more detailed IG Reports to obtain more information about the program. Last week, Burr and DiFi rolled some transparency procedures loosely resembling Tester’s into the Manager’s amendment — adding transparency to the base bill, but ensuring Tester’s stronger measures could not get a vote. I’ve placed the three versions of transparency provisions below, with italicized annotations, to show the original language, Tester’s proposed changes, and what Burr and DiFi adopted instead.

Comparing them reveals Burr and DiFi’s priorities — and what they want to hide about the implementation of the bill, even from Congress.

Prevent Congress from learning how often CISA data is used for law enforcement

Tester proposed a measure that would require reporting on how often CISA data gets used for law enforcement. There were two important aspects to his proposal: it required reporting not just on how often CISA data was used to prosecute someone, but also how often it was used to investigate them. That would require FBI to track lead sourcing in a way they currently refuse to. It would also create a record of investigative source that — in the unlikely event that a defendant actually got a judge to support demands for discovery on such things — would make it very difficult to use parallel construction to hide CISA-sourced data.

In addition, Tester would have required some granularity to the reporting, splitting out fraud, espionage, and trade secrets from terrorism (see clauses VII and VIII). Effectively, this would have required FBI to report how often it uses data obtained pursuant to an anti-hacking law to prosecute crimes that involve the Internet that aren’t hacking; it would have required some measure of how much this is really about bypassing Title III warrant requirements.

Burr and DiFi replaced that with a count of how many prosecutions derived from CISA data. Not only does this not distinguish between hacking crimes (what this bill is supposed to be about) and crimes that use the Internet (what it is probably about), but it also would invite FBI to simply disappear this number, from both Congress and defendants, by using parallel construction to hide the CISA source of this data.

Prevent Congress from learning how often CISA sharing falls short of the current NSA minimization standard

Tester also asked for reporting (see clause V) on how often personal information or information identifying a specific person was shared when it was not “necessary to describe or mitigate a cybersecurity threat or security vulnerability.” The “necessary to describe or mitigate” is quite close to the standard NSA currently has to meet before it can share US person identities (the NSA can share that data if it’s necessary to understand the intelligence; though Tester’s amendment would apply to all people, not just US persons).

But Tester’s standard is different than the standard of sharing adopted by CISA. CISA only requires agencies to strip personal data if it is “not directly related to a cybersecurity threat.” Of course, any data collected in connection with a cybersecurity threat — even victim data, including the data a hacker was trying to steal — is “related to” that threat.
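To illustrate how much turns on those two phrasings, here is a toy sketch of the same indicator record scrubbed under each standard; the record and field names are hypothetical:

```python
# Hypothetical indicator record: a threat signature with victim data swept up alongside it.
indicator = {
    "malware_hash": "9f86d081884c7d65...",          # describes the threat
    "attacker_ip": "203.0.113.7",                   # describes the threat
    "victim_email": "alice@example.com",            # personal data, not needed to describe it
    "stolen_file_excerpt": "SSN 000-00-0000",       # the data the hacker was trying to steal
}

def scrub_necessary_to_describe(record):
    # Tester's (and roughly NSA's) standard: keep a field only if it is
    # necessary to describe or mitigate the threat.
    needed = {"malware_hash", "attacker_ip"}
    return {k: v for k, v in record.items() if k in needed}

def scrub_not_directly_related(record):
    # CISA's standard: remove a field only if it is NOT "directly related" to the threat.
    # Victim data swept up alongside an attack is arguably "related," so it stays.
    unrelated = set()   # in practice, almost nothing lands here
    return {k: v for k, v in record.items() if k not in unrelated}

print(scrub_necessary_to_describe(indicator))   # just the hash and the IP
print(scrub_not_directly_related(indicator))    # everything, victim data included
```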

Burr and DiFi changed Tester’s amendment by first adopting a form of a Wyden amendment requiring notice to people whose data got shared in ways not permitted by the bill (which implicitly adopts that “related to” standard), and then requiring reporting on how many people got notices, which will only go out if the government affirmatively learns that data that wasn’t related got shared anyway. Those notices are almost never going to happen. So the number will be close to zero, instead of the probably tens of thousands, at least, that would have shown up under Tester’s measure.

So in adopting this change, Burr and DiFi are hiding the fact that under CISA, US person data will get shared far more promiscuously than it would under the current NSA regime.

Prevent Congress from learning how well the privacy strips — at both private sector and government — are working

Tester also would have required the government to report how much personal data got stripped by DHS (see clause IV). This would have measured how often private companies were handing over data with personal information that probably should have been stripped. Combined with Tester’s proposed measure of how often data gets shared that’s not necessary to understanding the indicator, it would have shown, at each stage of the data sharing, how much personal data was getting shared.

Burr and DiFi stripped that entirely.

Prevent Congress from learning how often “defensive measures” cause damage

Tester would also have required reporting on how often defensive measures (the bill’s euphemism for countermeasures) cause known harm (see clause VI). This would have alerted Congress if one of the foreseeable harms from this bill — that “defensive measures” will cause damage to the Internet infrastructure or other companies — had taken place.

Burr and DiFi stripped that really critical measure.

Prevent Congress from learning whether companies are bypassing the preferred sharing method

Finally, Tester would have required reporting on how many indicators came in through DHS (clause I), how many came in through civilian agencies like FBI (clause II), and how many came in through military agencies, aka NSA (clause III). That would have provided a measure of how much data was getting shared in ways that might bypass what few privacy and oversight mechanisms this bill has.

Burr and DiFi replaced that with a measure solely of how many indicators get shared through DHS, which effectively sanctions alternative sharing.

That Burr and DiFi watered down Tester’s measures so much makes two things clear. First, they don’t want to count some of the things that will be most important to count to see whether corporations and agencies are abusing this bill. They don’t want to count measures that will reveal if this bill does harm.

Most importantly, though, they want to keep this information from Congress. This information would almost certainly not show up to us in unclassified form; it would just be shared with some members of Congress (and on the House side, just be shared with the Intelligence Committee unless someone asks nicely for it).

But Richard Burr and Dianne Feinstein want to ensure that Congress doesn’t get that information. Which would suggest they know the information would reveal things Congress might not approve of.
