What’s a Little (or a Lot) Cooperation Among Spies?

A key point in the ProPublica/NYT piece on AT&T’s close cooperation with the NSA (and, though not stated explicitly, other agencies) on spying is that AT&T was the telecom that helped NSA spy on the UN.

It provided technical assistance in carrying out a secret court order permitting the wiretapping of all Internet communications at the United Nations headquarters, a customer of AT&T.

If you read the underlying document, it actually shows that NSA had a traditional FISA order requiring the cooperation (remember, “agents of foreign powers,” as diplomats are, are among the legal wiretap targets under FISA, no matter what we might think about NSA spying on UN in our own country) — meaning whatever telecom serviced the UN legally had to turn over the data. And a big part of AT&T’s cooperation, in addition to technically improving data quality, involved filtering the data to help NSA avoid overload.

BLARNEY began intermittent enablement  of DNI traffic for TOPI assessment and feedback. This feedback is being used by the BLARNEY target development team to support an ongoing filtering and throttling of data volumes. While BLARNEY is authorized full-take access under the NSA FISA, collected data volumes would flood PINWALE allocations within hours without a robust filtering mechanism.

In other words, AT&T helped NSA, ironically, by helping it limit what data it took in. Arguably, that’s an analytical role (who builds the algorithms in the filter?), but it’s one that limits how much actually gets turned over to the government.
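To make that filtering-and-throttling role concrete, here is a minimal sketch of the concept. Everything in it (the record format, the selector names, the volume cap) is my own invention for illustration; it is not based on any actual NSA or AT&T code.

```python
# Hypothetical illustration: selector-based filtering with volume throttling
# over a "full-take" feed, so downstream storage isn't flooded within hours.

def filter_and_throttle(records, tasked_selectors, daily_cap_bytes):
    """Yield only records matching a tasked selector, stopping once a
    daily volume cap is reached. Untasked traffic is dropped at the filter,
    before it ever reaches the government's collection system."""
    volume = 0
    for rec in records:
        if rec["selector"] not in tasked_selectors:
            continue  # drop untasked traffic
        if volume + len(rec["payload"]) > daily_cap_bytes:
            break  # throttle: stop before exceeding the allocation
        volume += len(rec["payload"])
        yield rec
```

The point of the sketch is the asymmetry the post describes: whoever writes the filter decides what the government never sees, which is an analytical role even though its effect is to limit the handover.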

That doesn’t mean the cooperation was any less valued, nor does it mean it didn’t go beyond what AT&T was legally obliged to do under the FISA order. But it’s not evidence AT&T would wiretap a non-legal (private corporation) target as a favor for NSA. That evidence may exist, somewhere, but it’s not in this story, except insofar as it mentions Stellar Wind, where AT&T was doing such things.

To be fair, AT&T’s UN cooperation is actually emphasized in this story because it was a key data point in the worthwhile ProPublica piece explaining how they proved Fairview was AT&T.

In April 2012, an internal NSA newsletter boasted about a successful operation in which NSA spied on the United Nations headquarters in New York City with the help of its Fairview and Blarney programs. Blarney is a program that undertakes surveillance that is authorized by the Foreign Intelligence Surveillance Court.

FAIRVIEW and BLARNEY engineers collaborated to enable the delivery of 700Mbps of paired packet switched traffic (DNI) traffic from access to an OC192 ring serving the United Nations mission in New York … FAIRVIEW engineers and the partner worked to provide the correct mapping, and BLARNEY worked with the partner to correct data quality issues so the data could be handed off to BLARNEY engineers to enable processing of the DNI traffic.

We found historical records showing that AT&T was paid $1 million a year to operate the U.N.’s fiber optic provider in 2011 and 2012. A spokesman for the U.N. secretary general confirmed that the organization “has a current contract with AT&T” to operate the fiber optic network at the U.N. headquarters in New York.

That is, the UN story is important largely because there are public records proving that AT&T was the provider in question, not because it’s the most egregious example of AT&T’s solicitous relationship with the nation’s spies.

That story, which also showed how ProPublica determined that Fairview was AT&T and that Stormbrew included Verizon, featured the slide above, bragging that the Comprehensive National Cybersecurity Initiative 100% subsidized Verizon’s Breckenridge site at a new cable landing carrying traffic from China.

It’s not entirely clear what that means — it might just refer to the SCIF, power supply, and servers needed to run the TURMOIL (that is, passive filtering) deployments the NSA wanted to use to track international traffic with China. But as ProPublica lays out, the NSA was involved the entire time Verizon was planning this cable landing. Another document on CNCI shows that in FY2010 — while spending significantly less than on AT&T’s Fairview — NSA was dumping over $100M into Stormbrew, and five times as much money into “cyber” as into FISA (in spite of the fact that they admit they’re really doing all this cybering to catch attacks on the US, meaning it has to ostensibly be conducted under FISA, even if FISC had not yet and may never have approved a cyber certificate for upstream 702). And those numbers date to the year after the Breckenridge project came on line, at a time when Verizon was backing off an earlier, closer relationship with the Feds.

How much did Verizon really get for that cable landing, what did they provide in exchange, and given that this was purpose-built to focus on Chinese hacking 6 years ago, why is China still eating our lunch via hacking? And if taxpayers are already subsidizing Verizon 100% for capital investments, why are we still paying our cell phone bills?

Particularly given the clear focus on cyber at this cable landing, I recall the emphasis on Department of Commerce when discussing the government’s partnership with industry in PPD-20, covering authorizations for various cyber activities, including offensive cyberwar (note the warning I gave for how Americans would start to care about this Snowden disclosure once our rivals, like China, retaliate). That is, the government has Commerce use carrots and sticks to get cooperation from corporations, especially on cybersecurity.

None of this changes the fact that AT&T has long been all too happy to spy on its customers for the government. It just points to how little we know about these relationships, and how much quid pro quo there really is. We know from PRISM discussions that the providers could negotiate how they accomplished an order (as AT&T likely could with the order to wiretap the UN), and that’s one measure of “cooperation.” But there’s a whole lot else to this kind of cooperation.

Update: Credo released a statement in response to the story.

“As a telecom that can be compelled to participate in unconstitutional surveillance, we know how important it is to fight for our customers’ privacy and only hand over information related to private communications when required by law,” said CREDO Mobile Vice President Becky Bond. “It’s beyond disturbing though sadly not surprising what’s being reported about a secret government relationship with AT&T that NSA documents describe as ‘highly collaborative’ and a ‘partnership, not a contractual relationship.’

“CREDO Mobile supports full repeal of the illegal surveillance state as the only way to protect Americans from illegal government spying,” Bond continued, “and we challenge AT&T to demonstrate concern for its customers’ constitutional rights by joining us in public support of repealing both the Patriot Act and FISA Amendments Act.”

On the Apple Back Door Rumors … Remember Lavabit

During the July 1 Senate Judiciary Committee hearing on back doors, Deputy Attorney General Sally Yates claimed that the government doesn’t want to hold back door keys to encrypted communications itself. Rather, it wants corporations to retain the ability to access those communications, so that the government can obtain them when it has legal process to do so. (After 1:43.)

We’re not going to ask the companies for any keys to the data. Instead, what we’re going to ask is that the companies have an ability to access it and then with lawful process we be able to get the information. That’s very different from what some other countries — other repressive regimes — from the way that they’re trying to get access to the information.

The claim was bizarre enough, especially as she went on to talk about other countries not having the same lawful process we have (as if that makes a difference to software code).

More importantly, that’s not true.

Remember what happened with Lavabit, when the FBI was in search of what is presumed to be Edward Snowden’s email. Lavabit owner Ladar Levison had a discussion with FBI about whether it was technically feasible to put a pen register on the targeted account. After which the FBI got a court order to do it. Levison tried to get the government to let him write a script that would provide them access to just the targeted account or, barring that, provide for some kind of audit to ensure the government wasn’t obtaining other customer data.
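A pen register covers routing information rather than content, so the script Levison offered to write would, in spirit, have looked something like this sketch. The address, message format, and function are hypothetical illustrations of the concept, not Levison’s actual code.

```python
# Hypothetical sketch: a pen-register-style script that logs only the
# metadata (sender, recipient, timestamp) of one targeted account,
# rather than exposing all customer traffic.

TARGET = "target@lavabit.example"  # illustrative address only

def pen_register(messages, target=TARGET):
    """Return metadata-only records for messages to or from the target."""
    log = []
    for msg in messages:
        if target in (msg["from"], msg["to"]):
            log.append({
                "from": msg["from"],
                "to": msg["to"],
                "timestamp": msg["timestamp"],
                # Deliberately no subject and no body: a pen register
                # covers routing information, not content.
            })
    return log
```

The design choice at issue in the case is visible here: a provider-run script can be scoped to one account, while handing over the SSL keys scopes the government’s access to everything.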

The unsealed documents describe a meeting on June 28th between the F.B.I. and Levison at Levison’s home in Dallas. There, according to the documents, Levison told the F.B.I. that he would not comply with the pen-register order and wanted to speak to an attorney. As the U.S. Attorney for the Eastern District of Virginia, Neil MacBride, described it, “It was unclear whether Mr. Levison would not comply with the order because it was technically not feasible or difficult, or because it was not consistent with his business practice in providing secure, encrypted e-mail service for his customers.” The meeting must have gone poorly for the F.B.I. because McBride filed a motion to compel Lavabit to comply with the pen-register and trap-and-trace order that very same day.

Magistrate Judge Theresa Carroll Buchanan granted the motion, inserting in her own handwriting that Lavabit was subject to “the possibility of criminal contempt of Court” if it failed to comply. When Levison didn’t comply, the government issued a summons, “United States of America v. Ladar Levison,” ordering him to explain himself on July 16th. The newly unsealed documents reveal tense talks between Levison and the F.B.I. in July. Levison wanted additional assurances that any device installed in the Lavabit system would capture only narrowly targeted data, and no more. He refused to provide real-time access to Lavabit data; he refused to go to court unless the government paid for his travel; and he refused to work with the F.B.I.’s technology unless the government paid him for “developmental time and equipment.” He instead offered to write an intercept code for the account’s metadata—for thirty-five hundred dollars. He asked Judge Hilton whether there could be “some sort of external audit” to make sure that the government did not take additional data. (The government plan did not include any oversight to which Levison would have access, he said.)

Most important, he refused to turn over the S.S.L. encryption keys that scrambled the messages of Lavabit’s customers, and which prevent third parties from reading them even if they obtain the messages.

The discussions disintegrated because the FBI refused to let Levison do what Yates now says they want providers to do: ensure that providers can hand over data tailored to meet a specific request. That’s when Levison tried to give the FBI his key in what the government claimed (even though it has done the same in FOIAs and/or criminal discovery) was type too small to read.

On August 1st, Lavabit’s counsel, Jesse Binnall, reiterated Levison’s proposal that the government engage Levison to extract the information from the account himself rather than force him to turn over the S.S.L. keys.

THE COURT: You want to do it in a way that the government has to trust you—
BINNALL: Yes, Your Honor.
THE COURT: —to come up with the right data.
BINNALL: That’s correct, Your Honor.
THE COURT: And you won’t trust the government. So why would the government trust you?
Ultimately, the court ordered Levison to turn over the encryption key within twenty-four hours. Had the government taken Levison up on his offer, he may have provided it with Snowden’s data. Instead, by demanding the keys that unlocked all of Lavabit, the government provoked Levison to make a last stand. According to the U.S. Attorney MacBride’s motion for sanctions,
At approximately 1:30 p.m. CDT on August 2, 2013, Mr. Levison gave the F.B.I. a printout of what he represented to be the encryption keys needed to operate the pen register. This printout, in what appears to be four-point type, consists of eleven pages of largely illegible characters. To make use of these keys, the F.B.I. would have to manually input all two thousand five hundred and sixty characters, and one incorrect keystroke in this laborious process would render the F.B.I. collection system incapable of collecting decrypted data.
The U.S. Attorneys’ office called Lavabit’s lawyer, who responded that Levison “thinks” he could have an electronic version of the keys produced by August 5th.

Levison came away from the debacle believing that the FBI didn’t understand what it was asking for when they asked for his keys.

One result of this newfound expertise, however, is that Levison believes there is a knowledge gap between the Department of Justice and law-enforcement agencies; the former did not grasp the implications of what the F.B.I. was asking for when it demanded his S.S.L. keys.

I raise all this because of the rumor — which Bruce Schneier inserted into his excerpt of this Nicholas Weaver post — that FBI is already fighting with Apple before the FISC over a back door.

There’s a persistent rumor going around that Apple is in the secret FISA Court, fighting a government order to make its platform more surveillance-friendly — and they’re losing. This might explain Apple CEO Tim Cook’s somewhat sudden vehemence about privacy. I have not found any confirmation of the rumor.

Weaver’s post describes how, because of the need to allow users to access their iMessage account from multiple devices (think desktop, laptop, iPad, and phone), Apple technically could give FBI a key.

In iMessage, each device has its own key, but it’s important that the sent messages also show up on all of Alice’s devices. The process of Alice requesting her own keys also acts as a way for Alice’s phone to discover that there are new devices associated with Alice, effectively enabling Alice to check that her keys are correct and nobody has compromised her iCloud account to surreptitiously add another device.

But there remains a critical flaw: there is no user interface for Alice to discover (and therefore independently confirm) Bob’s keys.  Without this feature, there is no way for Alice to detect that an Apple keyserver gave her a different set of keys for Bob.  Without such an interface, iMessage is “backdoor enabled” by design: the keyserver itself provides the backdoor.

So to tap Alice, it is straightforward to modify the keyserver to present an additional FBI key for Alice to everyone but Alice.  Now the FBI (but not Apple) can decrypt all iMessages sent to Alice in the future.
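Weaver’s scenario is easy to model. In this toy sketch (the class, method, and key names are invented for illustration; this is not Apple’s actual architecture), the keyserver serves the covert key to every requester except the target herself, so her own device-list check still looks clean:

```python
# Toy model of a keyserver backdoor: without a user-facing way to verify
# a correspondent's keys, the keyserver can silently add an extra key.

class KeyServer:
    def __init__(self):
        self.keys = {}        # user -> list of legitimate device keys
        self.extra_keys = {}  # user -> covertly added key

    def register(self, user, device_key):
        self.keys.setdefault(user, []).append(device_key)

    def add_covert_key(self, user, key):
        self.extra_keys[user] = key

    def lookup(self, target, requester):
        keys = list(self.keys.get(target, []))
        # Serve the covert key to everyone EXCEPT the target, so the
        # target's own "what devices do I have?" query reveals nothing.
        if target in self.extra_keys and requester != target:
            keys.append(self.extra_keys[target])
        return keys
```

Anyone encrypting to Alice would then unknowingly encrypt a copy to the extra key, which is exactly why Weaver calls the keyserver itself the backdoor.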

Admittedly, as heroic as Levison’s decision to shut down Lavabit rather than renege on a promise he made to his customers was, Apple has a lot more to lose here strictly because of the scale involved. And in spite of the heated rhetoric, FBI likely still trusts Apple more than it trusted Levison.

Still, it’s worth noting that Yates’ claim that FBI doesn’t want keys to communications isn’t true — or at least wasn’t true before her tenure as DAG. Because a provider, Levison, insisted on providing his customers what he had promised, the FBI grew so distrustful of him that it did demand a key.

This Surveillance Bill Brought to You by the US Chamber of Commerce — To Stave Off Something More Effective

The Chamber of Commerce has a blog post pitching CISA today.

It’s mostly full of lies (see OTI’s @Robyn_Greene‘s timeline for an explication of those lies).

But given Sheldon Whitehouse’s admission the other day that the Chamber exercises pre-clearance vetoes over this bill, I’d like to consider what the Chamber gets out of CISA. It claims the bill “would help businesses achieve timely and actionable situational awareness to improve theirs and the nation’s detection, mitigation, and response capabilities against cyber threats.” At least according to the Chamber, this is about keeping businesses safe. Perhaps it pitches the bill in those terms because of its audience, other businesses. But given the gross asymmetry of the bill — where actual humans can be policed based on data turned over, but corporate persons cannot be — I’m not so sure.

And therein lies the key.

Particularly given increasing calls for effective cybersecurity legislation — something with actual teeth — at least for cars and critical infrastructure, this bill should be considered a corporatist effort to stave off more effective measures that would have a greater impact on cybersecurity.

That is borne out by the Chamber’s recent 5 reasons to support CISA post. It emphasizes two things that have nothing to do with efficacy: the voluntary nature of it, and the immunity, secrecy, and anti-trust provisions in the bill.

That is, the Chamber, which increasingly seems to be the biggest cheerleader for this bill, isn’t aiming at anything more than “situational awareness” to combat the scourge of hacking. But it wants that — the entire point of this bill — within a context that provides corporations with maximal flexibility while giving them protection they have to do nothing to earn.

CISA is about immunizing corporations to spy on their customers. That’s neither necessary nor the most effective means to combat hacking. Which ought to raise serious questions about the Chamber’s commitment to keeping America safe.


Cy Vance Calls It In Dumbly on Smart Phones

There are two things Cy Vance (writing with Paris’ Chief Prosecutor, the City of London Police Commissioner, and Javier Zaragoza, chief prosecutor of Spain’s High Court) doesn’t mention in his op-ed calling for back doors in Apple and Google phones.

iPhone theft and bankster crime.

The former is a huge problem in NYC, with 8,465 iPhone thefts in 2013, which made up 18% of the grand larcenies in the city. The number came down 25% last year (and the crime started switching to Samsung products), largely due to Apple’s implementation of a Kill Switch, but that still leaves over 6,000 thefts a year — as compared to the 74 iPhones Vance says NYPD wasn’t able to access (he’s silent about how many investigations, besides the 3 he describes, those locked phones actually thwarted; he ignores default cloud storage completely in his op-ed). The numbers will come down still further now that Apple has made the Kill Switch (like encryption) a default setting on the iPhone 6. But there are still a lot of thefts, which can not only result in a phone being wiped and resold, but also in an identity being stolen. Default encryption will protect against both kinds of crime. In other words, Vance just ignores how encryption can help to prevent a crime that has been rampant in NYC in recent years.

Bankster crime is an even bigger problem in NYC, with a number of the world’s most sophisticated Transnational Crime Organizations, doing trillions of dollars of damage, headquartered in the city. These TCOs are even rolling out their very own encrypted communication system, which Elizabeth Warren fears may eliminate the last means of holding them accountable for their crimes. But Vance — one of the prosecutors who should be cracking down on this crime — not only doesn’t mention their special encrypted communication system, he doesn’t mention their crimes at all.

There are other silences and blind spots in Vance’s op-ed, too. The example he starts with — a murder in Evanston, not in any of the signatories’ jurisdictions — describes two phones that couldn’t be accessed. He remains silent about other evidence available by other means, such as via the cloud. He also assumes the evidence will be in the smart phone, which may not be the case. Moreover, it’s notable that Vance focuses on a black murder victim, because racial disparities in policing, not encryption, are often a better explanation for why murders of black men remain unsolved 2 months later. Given NYPD’s own crummy record at investigating and solving the murders of black and Latino victims, you’d think Vance might worry more about having NYPD reassign its detectives accordingly than about stripping the privacy of hundreds of thousands.

Then Vance goes on to describe how much smart phone data they’re still getting.

In France, smartphone data was vital to the swift investigation of the Charlie Hebdo terrorist attacks in January, and the deadly attack on a gas facility at Saint-Quentin-Fallavier, near Lyon, in June. And on a daily basis, our agencies rely on evidence lawfully retrieved from smartphones to fight sex crimes, child abuse, cybercrime, robberies or homicides.

Again, Vance is silent about whether this data is coming off the phone itself, or off the cloud. But it is better proof that investigators are still getting the data (perhaps via the cloud storage he doesn’t want to talk about?), not that they’re being thwarted.

Like Jim Comey, Vance claims to want to have a discussion weighing the “marginal benefits of full-disk encryption and the need for local law enforcement to solve and prosecute crimes.” But his op-ed is so dishonest, so riven with obvious holes, it raises real questions about both his honesty and basic logic.

Gang of Transnational Crime Organizations Roll Out Own Encrypted Communication System

When Michael Chertoff made the case against back doors, he noted that if the government moved to require back doors, it would leave just the bad guys with encrypted communications.

The second thing is that the really bad people are going to find apps and tools that are going to allow them to encrypt everything without a back door. These apps are multiplying all the time. The idea that you’re going to be able to stop this, particularly given the global environment, I think is a pipe dream. So what would wind up happening is people who are legitimate actors will be taking somewhat less secure communications and the bad guys will still not be able to be decrypted.

I doubt he had the Transnational Crime Organizations on Wall Street in mind when he talked about the bad guys “still not be able to be decrypted.”

But HSBC, JP Morgan Chase, Citi, Deutsche Bank, Goldman Sachs, and the other big banks supporting Symphony Communications — a secure cloud-based communications system about to roll out — are surely among the world’s most hard-core recidivists, and their crime does untold amounts of damage to ordinary people around the globe.

Which is why I’m so amused that Elizabeth Warren has made a stink about the imminent rollout of Symphony and whether it will affect banks’ ability to evade what scant law enforcement might be thrown their way.

I have [] concerns about how the biggest banks use of this new communications tool will impact compliance and enforcement by the Department of Justice [Warren sent versions of the letter to 6 regulators] at the federal level.

My concerns are exacerbated by Symphony’s publicly available descriptions of the new communications system, which appear to put companies on notice — with a wink and a nod — that they can use Symphony to reduce compliance and enforcement concerns. Symphony claims that “[i]n the past, communication tools designed for the financial services sector were limited in reach and effectiveness by strict regulatory compliance … We’re changing the communications paradigm.” The company’s website boasts that it has special tools to “prevent government spying,” and “there are no backdoors,” and that “Symphony has designed a specific set of procedures to guarantee that data deletion is permanent.”

Warren is right to be concerned. These are serial conspiracists on a global scale, and (as Warren notes elsewhere) they’ve only been caught — to the extent that hand slaps count as being caught — when DOJ found the chat rooms in which they’ve colluded.

That said, the banks, too, have real reason to be concerned. The stated reason they give for pushing this project is Bloomberg’s spying on them — when they were using Bloomberg chat — for reporting purposes, which was revealed two years ago. The reference to government spying goes beyond US adversaries, though I’m sure both real adversaries, like China, and competitors, like the EU, are keeping watch on the banks to the extent they can. But the US has spied on the banks, too. As the spy agency did with Google, the NSA spied on SWIFT even though it also had a legal means to get that data. I wouldn’t be surprised if the rise in bank sanctions violations in recent years stemmed from spying that is completely necessary if you’re going to impose sanctions, but that would compromise the banks, too. Remember, too, that the Treasury Department has, at least as of recently, never complied with EO 12333’s requirement for minimization procedures to protect US persons, which would include banks.

And there have even been cases of hacker-insider trader schemes of late.

So banks are right to want secure communications. And while these banks are proven crooks — and should be every bit as worrying to Jim Comey as ISIS’s crappier encryption program, if Comey believes in hunting crime — the banks should be reined in via other means, not by making them insecure.

If we’re going to pretend — and it is no more than make-believe — that the banks operate with integrity, then they need to have secure communications. But without that make-believe, a lot of the important fictions America tells itself about capitalism start to fall apart.

Which is my way of saying that the 6 regulators need to think through how they can continue to regulate recidivist crooks who have their own secure messaging system, but that the recidivist crooks probably need a secure messaging system (though having their own might be a stretch).

If Jim Comey is going to bitch and moan about criminals potentially exploiting access to encrypted communications, then he should start his conversation with the banks, not Apple. If he remains silent about this gang and their secure communications, then he needs to concede, once and for all, that actual humans need to have access to the same privilege of secure communications.

On this topic, see also District Sentinel’s piece.


Several Supporters of CISA Admit Its Inadequacy

In recent days, there have been reports that the same (presumed Chinese) hackers who stole vast amounts of data from the Office of Personnel Management have also hacked at least United Airlines and American. (Presuming the Chinese attribution is correct — and I believe it — I would be surprised if Chinese hackers hadn’t also tried to hack Delta, given that it has a huge footprint in Asia, including China; if that’s right and Delta managed to withstand the attack, we should find out how and why.)

Those hacks — and the presumption that the Chinese are stealing the data to flesh out their already detailed map of the activities of US intelligence personnel — have led a bunch of Cybersecurity Information Sharing Act supporters (Susan Collins and Barb Mikulski have already voted for it, and Bill Nelson almost surely will, because he loves surveillance) to admit its inadequacy.

In recent months, hackers have infiltrated the U.S. air traffic control system, forced airlines to ground planes and potentially stolen detailed travel records on millions of people.

Yet the industry lacks strict requirements to report these cyber incidents, or even adhere to specific cybersecurity standards.

“There should be a requirement for immediate reporting to the federal government,” Sen. Susan Collins (R-Maine), who chairs the Appropriations subcommittee that oversees the Federal Aviation Administration (FAA), told The Hill.

“We need to address that,” agreed Sen. Bill Nelson (D-Fla.), the top Democrat on the Senate Commerce Committee.

[snip]

“We need a two-way exchange of information so that when a threat is identified by the private sector, it’s shared with the government, and vice versa,” Collins added. “That’s the only way that we have any hope of stopping further breaches.”

[snip]

That’s why, Nelson said, the airline industry needs mandatory, immediate reporting requirements.

“All the more reason for a cybersecurity bill,” he said.

But for years, Congress has been unsuccessful in its efforts.

Sen. Barbara Mikulski (D-Md.), the Senate Appropriations Committee’s top Democrat, tried three years ago to move a cyber bill that would have included rigid breach reporting requirements for critical infrastructure sectors, including aviation.

“We were blocked,” she told The Hill recently. “So it’s time for not looking at an individual bill, but one that’s overall for critical infrastructure.”

So now we have some Senators calling for heightened cybersecurity standards for cars, and different, hawkish Senators calling for heightened cybersecurity sharing (though they don’t mention security standards) for airlines. Bank regulators are already demanding higher standards from them.

And someday soon someone will start talking about mandating response time for operating system fixes, given the problems with Android updates.

Maybe the recognition that one industry after another requires not immunity, but an approach to cybersecurity that actually requires some minimal actions from the companies in question, ought to lead Congress to halt before passing CISA and granting corporations immunity, and to think more seriously about what a serious approach to our cyber problems might look like.

That said, note that the hawks in this story are still adopting what is probably an approach of limited use here. Indeed, the story is notable in that it cites a cyber contractor, JAS Global Advisors’ Jeff Schmidt, actually raising questions about whether mandated info-sharing (with the government, not the public) would be all that effective.

If OPM has finally demonstrated the real impact of cyberattacks, then maybe it’s time to have a real discussion of what might help to keep this country safe — because simply immunizing corporations is not going to do it.

Tesla Patches Faster than Chrysler … and than Android [UPDATED]

Wired’s hack-of-the-day story reports that researchers hacked a Tesla (unlike the Chrysler hack, it required access to the vehicle once, though the Tesla also has a browser vulnerability that might not require direct access).

Two researchers have found that they could plug their laptop into a network cable behind a Model S’ driver’s-side dashboard, start the car with a software command, and drive it. They could also plant a remote-access Trojan on the Model S’ network while they had physical access, then later remotely cut its engine while someone else was driving.

The story notes how much more proactive Tesla was in patching this problem than Chrysler was.

The researchers found six vulnerabilities in the Tesla car and worked with the company for several weeks to develop fixes for some of them. Tesla distributed a patch to every Model S on the road on Wednesday. Unlike Fiat Chrysler, which recently had to issue a recall for 1.4 million cars and mail updates to users on a USB stick to fix vulnerabilities found in its cars, Tesla has the ability to quickly and remotely deliver software updates to its vehicles. Car owners only have to click “yes” when they see a prompt asking if they want to install the upgrade.

In my understanding, Tesla was able to do this both because it responded right away to implement the fix, and because it had the technical ability to distribute the update in a way that was usable for end users. Chrysler deserves criticism on the first count (though at least according to Chrysler, it did start to work on a fix right away; it just didn’t implement it), but the second is a problem that will take some effort to fix.

Which is one reason I think a better comparison with Tesla’s quick fix is Google’s delayed fix for the Stagefright vulnerability. As the researcher who found it explained, Google addressed the vulnerability internally immediately, just like Tesla did.

Google has moved quickly to reassure Android users following the announcement of a number of serious vulnerabilities.

The Google Stagefright Media Playback Engine Multiple Remote Code Execution Vulnerabilities allow an attacker to send a media file over a MMS message targeting the device’s media playback engine, Stagefright, which is responsible for processing several popular media formats.

Attackers can steal data from infected phones, as well as hijacking the microphone and camera.

Android is currently the most popular mobile operating system in the world — meaning that hundreds of millions of people with a smartphone running Android 2.2 or newer could be at risk.

Joshua Drake, mobile security expert with Zimperium, reports

A fully weaponized successful attack could even delete the message before you see it. You will only see the notification…Unlike spear-phishing, where the victim needs to open a PDF file or a link sent by the attacker, this vulnerability can be triggered while you sleep. Before you wake up, the attacker will remove any signs of the device being compromised and you will continue your day as usual – with a trojaned phone.

Zimperium say that “Google acted promptly and applied the patches to internal code branches within 48 hours, but unfortunately that’s only the beginning of what will be a very lengthy process of update deployment.”

But with Android the updates need to go through manufacturers, which creates a delay — especially given fairly crummy updating regimes by a number of top manufacturers.

The experience with this particular vulnerability may finally be pushing Android-based manufacturers to fix their update process.

It’s been 10 days since Zimperium’s Joshua Drake revealed a new Android vulnerability called Stagefright — and Android is just starting to recover. The bug allows an attacker to remotely execute code through a phony multimedia text message, in many cases without the user even seeing the message itself. Google has had months to write a patch and already had one ready when the bug was announced, but as expected, getting the patch through manufacturers and carriers was complicated and difficult.

But then, something unexpected happened: the much-maligned Android update system started to work. Samsung, HTC, LG, Sony and Android One have already announced pending patches for the bug, along with a device-specific patch for the Alcatel Idol 3. In Samsung’s case, the shift has kicked off an aggressive new security policy that will deploy patches month by month, an example that’s expected to inspire other manufacturers to follow suit. Google has announced a similar program for its own Nexus phones. Stagefright seems to have scared manufacturers and carriers into action, and as it turns out, this fragmented ecosystem still has lots of ways to protect itself.

I make this comparison for two reasons. One, if Google — whose ecosystem has the hypothetical ability to send out remote patches, even if it has long neglected that ability — still doesn’t have this fixed, it’s unsurprising that Chrysler doesn’t yet.

But some of the additional challenges Chrysler faces — and Tesla largely avoids — stem from the fragmented industry. Chrysler’s own timeline of its vulnerability describes a “third party” discovering the vulnerability (not the hackers), and a “supplier” fixing it.

In January 2014, through a penetration test conducted by a third party, FCA US LLC (“FCA US”) identified a potential security vulnerability pertaining to certain vehicles equipped with RA3 or RA4 radios.

A communications port was unintentionally left in an open condition allowing it to listen to and accept commands from unauthenticated sources. Additionally, the radio firewall rules were widely open by default which allowed external devices to communicate with the radio. To date, no instances related to this vulnerability have been reported or observed, except in a research setting.

The supplier began to work on security improvements immediately after the penetration testing results were known in January 2014.

But it’s completely unclear whether that “third party” is the “supplier” in question. Which means it’s unclear whether this was found in the supplier’s normal testing process or in something else.

One reason cars are particularly difficult to test is that so many different suppliers provide parts that don’t get tested (or even adequately specced) in an integrated fashion.
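The flaw Chrysler describes — a port left open to unauthenticated commands — is exactly the kind of exposure integrated testing should catch. A generic sketch of such a check follows; the host and port are placeholders for whatever service a test harness would probe, not the actual vehicle component.

```python
import socket

def port_accepts_connections(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    A service that answers here, and then takes commands without
    authentication, is the kind of exposure described in Chrysler's
    disclosure.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

A check this simple only finds the open door, of course; whether the service behind it demands authentication is the harder, supplier-by-supplier question that fragmented integration makes easy to miss.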

Then, if you need to fix something you can’t send out over a satellite or Internet network, you’re dealing with the — in many cases — archaic relationships car makers have with dealers, not to mention the limitations of dealer staff and equipment to make the fix.

I don’t mean to excuse the automotive industry — they’re going to have to fix these problems (and the same problems lie behind fixing some of the defects tied to code that doesn’t stem from hacks, too, such as Toyota’s sudden acceleration problem).

It’s worth noting, however, how simplified supply and delivery chains make fixing a problem a lot easier for Tesla than it is for a number of other entities, both in and outside of the tech industry.

UPDATE — 4:30 PM EDT —

Hey, it’s Rayne here, adding my countervailing two cents (bitcoins?) after Marcy and I exchanged a few emails on this topic. I have a slightly different take on the situation since I’ve done competitive intelligence work in software, including open source models like Android.

Comparing Fiat Chrysler’s and Google’s Android risks, the size and scale of the exposures are a hell of a lot different. There are far more Android devices exposed than Chrysler car models at risk — more than 1 billion Android devices shipped annually around the globe as of Q4 2014.

Hell, daily activations of Android devices in 2013 were 1.2 million devices per day — roughly the same number as all the exposed Chrysler vehicles on the road, subject to recall.

Google should have a much greater sense of urgency here due to the size of the problem.

Yet the chances of a malware attack on an Android device actually causing immediate mortal threat to one or more persons are very low compared to the severity of the Chrysler hack. Could a hacker tinker with household appliances attached via Android? It’s possible — but any outcome now is very different from a hacker taking over and shutting down a vehicle operating at high speed in heavy traffic, versus shutting off a Philips remote-controlled Hue lamp or a Google Nest thermostat operating in the Internet of Things. The disparity in annoyance versus potential lethality may explain why Google hasn’t acted as fast as Tesla — but it doesn’t explain at all why Chrysler didn’t handle announcing their vulnerability differently. Why did they wait nearly a year to discuss it in public?

GM Supports Obtaining Cybersecurity Immunity Just after Hack Vulnerability Revealed

Dianne Feinstein just gave a long speech on the Senate floor supporting the Cybersecurity Information Sharing Act.

She ticked off a list of shocking hacks that happened in the last year or so — though made no effort (or even claim) to show that CISA would have prevented any of them.

She listed some of the 56 corporations and business organizations that support the bill.

Most interestingly, she boasted that yesterday she received a letter from GM supporting the bill. We should pass CISA, Feinstein suggests, because General Motors, on August 4, 2015, decided to support the bill.

I actually think that’s reason to oppose the bill.

As I have written elsewhere — most recently this column at the DailyDot — one of my concerns about the bill is the possibility that by sharing data under the immunity afforded by the bill, corporations might dodge liability where it otherwise might serve as necessary safety and security leverage.

Immunizing corporations may make it harder for the government to push companies to improve their security. As Wyden explained, while the bill would let the government use data shared to prosecute crimes, the government couldn’t use it to demand security improvements at those companies. “The bill creates what I consider to be a double standard—really a bizarre double standard in that private information that is shared about individuals can be used for a variety of non-cyber security purposes, including law enforcement action against these individuals,” Wyden said, “but information about the companies supplying that information generally may not be used to police those companies.”

Financial information-sharing laws may illustrate why Wyden is concerned. Under that model, banks and other financial institutions are obligated to report suspicious transactions to the Treasury Department, but, as in CISA, they receive in return immunity from civil suits as well as consideration in case of sanctions, for self-reporting. “Consideration,” meaning that enforcement authorities take into account a financial institution’s cooperation with the legally mandated disclosures when considering whether to sanction them for any revealed wrongdoing. Perhaps as a result, in spite of abundant evidence that banks have facilitated crimes—such as money laundering for drug cartels and terrorists—the Department of Justice has not managed to prosecute them. When asked during her confirmation hearing why she had not prosecuted HSBC for facilitating money laundering when she presided over an investigation of the company as U.S. Attorney for the Eastern District of New York, Attorney General Loretta Lynch said there was not sufficient “admissible” evidence to indict, suggesting they had information they could not use.

In the same column, I pointed out the different approach to cybersecurity — for cars at least — of the SPY Car Act, introduced by Ed Markey and Richard Blumenthal, which affirmatively requires certain cybersecurity and privacy protections.

Increased attention on the susceptibility of networked cars—heightened by but not actually precipitated by the report of a successful remote hack of a Jeep Cherokee—led two other senators, Ed Markey and Richard Blumenthal, to adopt a different approach. They introduced the Security and Privacy in Your Car Act, which would require privacy disclosures, adequate cybersecurity defenses, and additional reporting from companies making networked cars and also require that customers be allowed to opt out of letting the companies collect data from their cars.

The SPY Car Act adopts a radically different approach to cybersecurity than CISA in that it requires basic defenses from corporations selling networked products. Whereas CISA supersedes privacy protections for consumers like the Electronic Communications Privacy Act, the SPY Car Act would enhance privacy for those using networked cars. Additionally, while CISA gives corporations immunity so long as they share information, SPY Car emphasizes corporate liability and regulatory compliance.

I’m actually not sure how you could have both CISA and the SPY Car Act, because the former’s immunity would undercut the regulatory limits of the latter. (And I asked both Markey’s and Blumenthal’s offices, but they blew off repeated requests for an answer on this point.)

Which brings me back to GM’s decision — yesterday!!! — to support CISA.

The hackers who remotely hacked a car used a Jeep Cherokee. But analysis they did last year found the Cadillac Escalade to be the second most hackable car among those they reviewed (and I have reason to believe there are other GM products that are probably even more hackable).

So … hackers reveal they can remotely hack cars on July 21; Markey introduced his bill on the same day. And then on August 4, GM for the first time signs up for a bill that would give them immunity if they start sharing data with the government in the name of cybersecurity.

Now maybe I’m wrong in my suspicion that CISA’s immunity would provide corporations a way to limit their other liability for cybersecurity so long as they had handed over a bunch of data to the government, even if it incriminated them.

But we sure ought to answer that question before we go immunizing corporations whose negligence might leave us more open to attack.

The US Chamber of Commerce Is Pre-Clearing What It Is Willing to Do for Our National Security on CISA

Sheldon Whitehouse just attempted (after 1:44) to rebut an epic rant from John McCain (at 1:14) in which the Arizona Senator suggested anyone who wanted to amend the flawed Cybersecurity Information Sharing Act wasn’t serious about national security.

Whitehouse defended his two amendments first by pointing out that McCain likes and respects the national security credentials of both his co-sponsors (Lindsey Graham and Roy Blunt).

Then Whitehouse said,  “I believe both of the bills [sic] have now been cleared by the US Chamber of Commerce, so they don’t have a business community objection.”

Perhaps John McCain would be better served turning himself purple (really! watch his rant!) attacking the very notion that the Chamber of Commerce gets pre-veto power over a bill that (according to John McCain) is utterly vital for national security.

Even better, maybe John McCain could turn himself purple suggesting that the Chamber needs to step up to the plate and accept real responsibility for making this country’s networks safer, rather than just using our cybersecurity problems as an opportunity to demand immunity for yet more business conduct.

If this thing is vital for national security — this particular bill is not, but McCain turned himself awfully purple — then the Chamber should just suck it up and meet the requirements to protect the country decided on by the elected representatives of this country.

Yet instead, the Chamber apparently gets to pre-clear a bill designed to spy on the Chamber’s customers.

Consider CISA a Six-Month Distraction from Shoring Up Government Security

Most outlets that commented on DHS’ response to Al Franken’s questions about CISA focused on their concerns about privacy.

The authorization to share cyber threat indicators and defensive measures with “any other entity or the Federal Government,” “notwithstanding any other provision of law” could sweep away important privacy protections, particularly the provisions in the Stored Communications Act limiting the disclosure of the content of electronic communications to the government by certain providers. (This concern is heightened by the expansive definitions of cyber threat indicators and defensive measures in the bill. Unlike the President’s proposal, the Senate bill includes “any other attribute of a cybersecurity threat” within its definition of cyber threat indicator and authorizes entities to employ defensive measures.)

[snip]

To require sharing in “real time” and “not subject to any delay [or] modification” raises concerns relating to operational analysis and privacy.

First, it is important for the NCCIC to be able to apply a privacy scrub to incoming data, to ensure that personally identifiable information unrelated to a cyber threat has not been included. If DHS distributes information that is not scrubbed for privacy concerns, DHS would fail to mitigate and in fact would contribute to the compromise of personally identifiable information by spreading it further. While DHS aims to conduct a privacy scrub quickly so that data can be shared in close to real time, the language as currently written would complicate efforts to do so. DHS needs to apply business rules, workflows and data labeling (potentially masking data depending on the receiver) to avoid this problem.
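DHS’s “business rules, workflows and data labeling (potentially masking data depending on the receiver)” amounts to receiver-dependent redaction: the same indicator gets different fields masked depending on which agency receives it. A toy sketch of that idea, under entirely assumed field names and rules (nothing here reflects the NCCIC’s actual schema):

```python
# Fields assumed, for illustration, to carry personally identifiable
# information unrelated to the threat itself.
PII_FIELDS = {"sender_email", "account_name", "source_ip"}

# Hypothetical per-receiver rules: which PII fields get masked.
RECEIVER_RULES = {
    "dhs_nccic": set(),            # scrubbing agency sees everything
    "law_enforcement": {"sender_email", "account_name"},
    "other_agency": PII_FIELDS,    # all PII masked
}

def scrub(indicator: dict, receiver: str) -> dict:
    """Mask PII fields in a cyber threat indicator based on the receiver."""
    masked = RECEIVER_RULES.get(receiver, PII_FIELDS)  # unknown receiver: mask all
    return {
        key: ("[REDACTED]" if key in masked else value)
        for key, value in indicator.items()
    }
```

Even this toy version illustrates DHS’s timing point: the scrub has to sit between ingestion and distribution, which is exactly what a statutory “real time, not subject to any delay or modification” mandate would complicate.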

None of those outlets noted that DOJ’s Inspector General cited privacy concerns among the reasons why private sector partners are reluctant to share data with FBI.

So the limited privacy protections in CISA are actually a real problem with it — one that changes in a manager’s amendment (the most significant being a limit on uses of that data to cyber crimes, rather than the broad range of felonies currently in the bill) don’t entirely address.

But I think this part of DHS’ response is far more important to the immediate debate.

Finally the 90-day timeline for DHS’s deployment of a process and capability to receive cyber threat indicators is too ambitious, in light of the need to fully evaluate the requirements pertaining to that capability once legislation passes and build and deploy the technology. At a minimum, the timeframe should be doubled to 180 days.

DHS says the bill is overly optimistic about how quickly a new cybersharing infrastructure can be put in place. I’m sympathetic with their complaint, too. After all, if it takes NSA 6 months to set up an info-sharing infrastructure for the new phone dragnet created by USA Freedom Act, why do we think DHS can do the reverse in half the time?

Especially when you consider DHS’ concerns about the complexity added because CISA permits private sector entities to share with any of a number of government agencies.

Equally important, if cyber threat indicators are distributed amongst multiple agencies rather than initially provided through one entity, the complexity–for both government and businesses–and inefficiency of any information sharing program will markedly increase; developing a single, comprehensive picture of the range of cyber threats faced daily will become more difficult. This will limit the ability of DHS to connect the dots and proactively recognize emerging risks and help private and public organizations implement effective mitigations to reduce the likelihood of damaging incidents.

DHS recommends limiting the provision in the Cybersecurity Information Sharing Act regarding authorization to share information, notwithstanding any other provision of law, to sharing through the DHS capability housed in the NCCIC.

Admittedly, some of this might be attributed to bureaucratic turf wars — albeit turf wars that those who’d prefer DHS do a privacy scrub before FBI or NSA get the data ought to support. But DHS is also making a point about building complexity into a data sharing portal that recreates one that already exists that has less complexity (as well as some anonymizing and minimization that might be lost under the new system). That complexity is going to make the whole thing less secure, just as we’re coming to grips with how insecure government networks are. It’s not clear, at all, why a new portal needs to be created, one that is more complex and involves agencies like the Department of Energy — which is cybersprinting backwards on its own security — at the front end of that complexity, one that lacks some safeguards that are in the DHS’ current portal.

More importantly, that complexity, that recreation of something that already exists — that’s going to take six months of DHS’s time, when it should instead be focusing on shoring up government security in the wake of the OPM hack.

Until such time as Congress wants to give the agencies unlimited resources to focus on cyberdefense, it will face limited resources and with those limited resources some real choices about what should be the top priority. And while DHS didn’t say it, it sure seems to me that CISA would require reinventing some wheels, and making them more complex along the way, at a time when DHS (and everyone in government focused on cybersecurity) have better things to be doing.

Congress is already cranky that the Administration took two months to finish its “cybersprint” — more of a middle distance run — in the wake of the OPM hack. Why are they demanding DHS spend 6 more months recreating wheels before fixing core vulnerabilities?