Why Apple Should Pay Particular Attention to Wired’s New Car Hacking Story

This morning, Wired reports that the hackers who two years ago hacked an Escape and a Prius via physical access have hacked a Jeep Cherokee via remote (mobile phone) access. They accessed the vehicle’s Electronic Control Unit and from that were able to get to ECUs controlling the transmission and brakes, as well as a number of less critical items. The hackers are releasing a report [correction: this is Markey’s report], page 86 of which explains why cars have gotten so much more vulnerable (generally, a combination of being accessible via external communication networks, having more internal networks, and having far more ECUs that might have a vulnerability). It includes a list of the most and least hackable cars among the 14 they reviewed.
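The report's structural point — ever more ECUs hanging off shared internal networks that are now reachable from outside — can be sketched with a toy model. This is an illustration only: it is plain Python, not a real vehicle interface, and the bus model, node classes, and the 0x220 arbitration ID are all invented for the example. The property it demonstrates is the real one, though: classic CAN-style frames carry no sender authentication, so a compromised node's frames are indistinguishable from a legitimate controller's.

```python
from dataclasses import dataclass

# Toy model of a CAN-style bus: frames carry an arbitration ID and a payload,
# but no field identifying (or authenticating) the sender. Every node sees
# every frame and acts on the IDs it recognizes.

@dataclass(frozen=True)
class Frame:
    arbitration_id: int
    data: bytes

class Bus:
    def __init__(self):
        self.nodes = []

    def attach(self, node):
        self.nodes.append(node)

    def send(self, frame):
        # Broadcast: no access control, no sender authentication.
        for node in self.nodes:
            node.receive(frame)

class BrakeECU:
    BRAKE_CMD = 0x220  # hypothetical ID, invented for this example

    def __init__(self):
        self.engaged = False

    def receive(self, frame):
        # Trusts any frame with the right ID, whoever sent it.
        if frame.arbitration_id == self.BRAKE_CMD:
            self.engaged = (frame.data[0] == 1)

class CompromisedHeadUnit:
    """An infotainment node taken over remotely; on this bus its frames
    look exactly like the legitimate brake controller's."""
    def __init__(self, bus):
        self.bus = bus

    def receive(self, frame):
        pass  # ignores traffic; only injects

    def spoof_brake_release(self):
        self.bus.send(Frame(BrakeECU.BRAKE_CMD, bytes([0])))

bus = Bus()
brakes = BrakeECU()
attacker = CompromisedHeadUnit(bus)
bus.attach(brakes)
bus.attach(attacker)

bus.send(Frame(BrakeECU.BRAKE_CMD, bytes([1])))  # driver engages brakes
attacker.spoof_brake_release()                   # attacker releases them
print(brakes.engaged)  # False: the spoofed frame was accepted
```

Real vehicle networks add complications (priority arbitration, gateways, higher-level protocols), but this trust model is why a foothold on any one networked component can cascade into control of safety-critical ones.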

[Screenshot: the report’s list of most and least hackable cars]

Today Ed Markey and Richard Blumenthal are releasing a bill meant to address some of these security vulnerabilities in cars.

Meanwhile — in a remarkably poorly timed announcement — Apple announced yesterday that it had hired Fiat Chrysler’s former quality guy, the guy who would have overseen development of both the hackable Jeep Cherokee and the safer Dodge Viper.

Doug Betts, who led global quality at Fiat Chrysler Automobiles NV until last year, is now working for the Cupertino, Calif.-based electronics giant but declined to comment on the position when reached Monday. Mr. Betts’ LinkedIn profile says he joined Apple in July and describes his title as “Operations-Apple Inc.” with a location in the San Francisco Bay Area but no further specifics.

[snip]

Along with Mr. Betts, whose expertise points to a desire to know how to build a car, Apple recently recruited one of the leading autonomous-vehicle researchers in Europe and is building a team to work on those systems.

[snip]

In 2009, when Fiat SpA took over Chrysler, CEO Sergio Marchionne tapped Mr. Betts to lead the company’s quality turnaround, giving him far-reaching authority over the company’s brands and even the final say on key production launches.

Mr. Betts abruptly left Fiat Chrysler last year to pursue other interests. The move came less than a day after the car maker’s brands ranked poorly in an influential reliability study.

Note, the poor quality ratings that preceded Betts’ departure from Fiat Chrysler pertained especially to infotainment systems, which points to electronics vulnerabilities generally.

As they get into the auto business, Apple and Google will have the luxury that struggling combustion engine companies don’t have — that they’re not limited by tight margins as they try to introduce bells and whistles to compete on the marketplace. But they’d do well to get this quality and security issue right from the start, because the kind of errors tech companies can tolerate — largely because they can remotely fix bugs and because an iPhone that prioritized design over engineering can’t kill you — will produce much bigger problems in cars (though remote patching will be easier in electric cars).

So let’s hope Apple’s new employee takes this hacking report seriously.

Sheldon Whitehouse’s Hot and Cold Corporate Cybersecurity Liability

Ben Wittes has a summary of last Wednesday’s “Going Dark” hearings. He engages in a really amusing straw man — comparing a hypothetically perfectly secure Internet with ungoverned Somalia.

Consider the conceptual question first. Would it be a good idea to have a world-wide communications infrastructure that is, as Bruce Schneier has aptly put it, secure from all attackers? That is, if we could snap our fingers and make all device-to-device communications perfectly secure against interception from the Chinese, from hackers, from the FSB but also from the FBI even wielding lawful process, would that be desirable? Or, in the alternative, do we want to create an internet as secure as possible from everyone except government investigators exercising their legal authorities with the understanding that other countries may do the same?

Conceptually speaking, I am with Comey on this question—and the matter does not seem to me an especially close call. The belief in principle in creating a giant world-wide network on which surveillance is technically impossible is really an argument for the creation of the world’s largest ungoverned space. I understand why techno-anarchists find this idea so appealing. I can’t imagine for a moment, however, why anyone else would.

Consider the comparable argument in physical space: the creation of a city in which authorities are entirely dependent on citizen reporting of bad conduct but have no direct visibility onto what happens on the streets and no ability to conduct search warrants (even with court orders) or to patrol parks or street corners. Would you want to live in that city? The idea that ungoverned spaces really suck is not controversial when you’re talking about Yemen or Somalia. I see nothing more attractive about the creation of a worldwide architecture in which it is technically impossible to intercept and read ISIS communications with followers or to follow child predators into chatrooms where they go after kids.

This gets the issue precisely backwards, attributing all possible security and governance to policing alone, and none to prevention, and as a result envisioning chaos in a world that would, in fact, have less chaos, or at least different kinds of it. Wittes simply dismisses the benefits of a perfectly secure Internet (which is what all the pro-backdoor witnesses at the hearings did too, ignoring, for example, the effect that encrypting phones would have on a really terrible iPhone theft problem). But Wittes’ straw man isn’t central to his argument, just a tell about his biases.

Wittes, like Comey, also suggests the technologists are wrong when they say back doors will be bad.

There is some reason, in my view, to suspect that the picture may not be quite as stark as the computer scientists make it seem. After all, the big tech companies increase the complexity of their software products all the time, and they generally regard the increased attack surface of the software they create as a result as a mitigatable problem. Similarly, there are lots of high-value intelligence targets that we have to secure and would have big security implications if we could not do so successfully. And when it really counts, that task is not hopeless. Google and Apple and Facebook are not without tools in the cybersecurity department.

Wittes appears unaware that the US has failed miserably at securing its high value intelligence targets, so it’s not a great counterexample.

But I’m primarily interested in Wittes’ fondness for an idea floated by Sheldon Whitehouse: that the government force providers to weigh the risks of security more carefully by ensuring they bear liability if the cops can’t access communications.

Another, perhaps softer, possibility is to rely on the possibility of civil liability to incentivize companies to focus on these issues. At the Senate Judiciary Committee hearing this past week, the always interesting Senator Sheldon Whitehouse posed a question to Deputy Attorney General Sally Yates about which I’ve been thinking as well: “A girl goes missing. A neighbor reports that they saw her being taken into a van out in front of the house. The police are called. They come to the home. The parents are frantic. The girl’s phone is still at home.” The phone, however, is encrypted:

WHITEHOUSE: It strikes me that one of the balances that we have in these circumstances where a company may wish to privatize value by saying, “Gosh, we’re secure now. We got a really good product. You’re going to love it.” That’s to their benefit. But for the family of the girl that disappeared in the van, that’s a pretty big cost. And when we see corporations privatizing value and socializing cost so that other people have to bear the cost, one of the ways that we get back to that and try to put some balance into it, is through the civil courts, through a liability system.

If you’re a polluter and you’re dumping poisonous waste into the water rather than treating it properly, somebody downstream can bring an action and can get damages for the harm that they sustain, can get an order telling you to knock it off. I’d be interested in whether or not the Department of Justice has done any analysis as to what role the civil-liability system might be playing now to support these companies in drawing the correct balance, or if they’ve immunized themselves from the cost entirely and are enjoying the benefits. I think in terms of our determination as to what, if anything, we should do, knowing where the Department of Justice believes the civil liability system leaves us might be a helpful piece of information. So I don’t know if you’ve undertaken that, but if you have, I’d appreciate it if you’d share that with us, and if you’d consider doing it, I think that might be helpful to us.

YATES: We would be glad to look at that. It’s not something that we have done any kind of detailed analysis. We’ve been working hard on trying to figure out what the solution on the front end might be so that we’re not in a situation where there could potentially be corporate liability or the inability to be able to access the device.

WHITEHOUSE: But in terms of just looking at this situation, does it not appear that it looks like a situation where value is being privatized and costs are being socialized onto the rest of us?

YATES: That’s certainly one way to look at it. And perhaps the companies have done greater analysis on that than we have. But it’s certainly something we can look at.

I’m not sure what that lawsuit looks like under current law. I, like the Justice Department, have not done the analysis, and I would be very interested in hearing from anyone who has. Whitehouse, however, seems to me to be onto something here. Might a victim of an ISIS attack domestically committed by someone who communicated and plotted using communications architecture specifically designed to be immune, and specifically marketed as immune, from law enforcement surveillance have a claim against the provider who offered that service even after the director of the FBI began specifically warning that ISIS was using such infrastructure to plan attacks? To the extent such companies have no liability in such circumstances, is that the distribution of risk that we as a society want? And might the possibility of civil liability, either under current law or under some hypothetical change to current law, incentivize the development of secure systems that are nonetheless subject to surveillance under limited circumstances?

Why don’t we make the corporations liable, these two security hawks ask!!!

This, at a time when the cybersecurity solution on the table (CISA and other cybersecurity bills) gives corporations overly broad immunity from liability.

Think about that.

While Wittes hasn’t said whether he supports the immunity bills on the table, Paul Rosenzweig and other Lawfare writers are loudly in favor of expansive immunity. And Sheldon Whitehouse, whose idea this is, has been talking about building in immunity for corporations in cybersecurity plans since 2010.

I get there is a need for limited protection for corporations that help the Federal government spy (especially if they’re required to help), which is what immunity is always about. I also get that every time we award it, it keeps getting bigger, and years later we discover that immunity covers fairly audacious spying far beyond the ostensible intent of the bill. Though CISA doesn’t even hide that this data will be used for purposes far beyond cybersecurity.

Far, far more importantly, however, one of the problems with the cyber bills on the table is that by awarding this immunity, they’re creating a risk calculation for corporations to be sloppy. Sure, there will still be reputational damage every time a corporation exposes its customers’ data to hackers. But we’ve seen in the financial sector — where at least bank regulators require certain levels of hygiene and reporting — that bank immunity tied to these reporting requirements appears to have made it impossible to prosecute egregious bank crime.

The banks have learned (and they will be key participants in CISA) that they can obtain impunity by sharing promiscuously (or even not so promiscuously) with the government.

And unlike those bank reporting laws, CISA doesn’t require hygiene. It doesn’t require that corporations deploy basic defenses before obtaining their immunity for information sharing.

If liability is such a great idea, then why aren’t these men pushing the use of liability as a tool to improve our cyberdefenses, rather than (on Whitehouse’s part, at least) calling for the opposite?

Indeed, if this is about appropriately balancing risk, there is no way you can use liability to get corporations to weigh the value of back doors for law enforcement without at the same time ensuring all corporations also bear full liability for any insecurity in their systems; otherwise, corporations won’t be weighing the two sides.

Using liability as a tool might be a clever idea. But using it only for law enforcement back doors does nothing to identify the appropriate balance.

Three Congressional Responses to the OPM Hack

After acknowledging that more than 20 million people were affected by the hack of the Office of Personnel Management, OPM head Katherine Archuleta “resigned” today.

In announcing that Office of Management and Budget Deputy Director for Management Beth Cobert would serve as acting Director, Josh Earnest played up her experience at McKinsey Consulting. So we may see the same kind of management claptrap as OPM PR in the coming days that we got from CIA’s reorganization when McKinsey took that project on. Over 20 minutes into his press conference, Earnest also revealed there was a 90-day review of the security implications of the hack being led by OMB.

Happily, in spite of the easy way Archuleta’s firing has served as a proxy for real solutions to the government’s insecurity, at least some in Congress are pushing other “solutions.” Given Congress’ own failure to fund better IT purchasing, to consider agency weaknesses during confirmations, and to demand accountability from the intelligence community going back at least to the WikiLeaks leaks, these are worth examining.

Perhaps most predictably, Susan Collins called for passage of cybersecurity legislation.

It is time for Congress to pass a cybersecurity law that will strengthen our defenses and improve critical communication and cooperation between the private sector and government. We must do more to combat these dangerous threats in both government and the private sector.

Of course, nothing in CISA (or any other cybersecurity legislation being debated by Congress) would have done a damn thing to prevent the OPM hack. In other words, Collins’ response is just an example of Congress doing the wrong thing in response to a real need.

Giving corporations immunity is not the answer to most problems facing this country. And those who embrace it as a real solution should be held accountable for the next government hack.

Freshman Nebraska Senator Ben Sasse — both before and after Archuleta’s resignation — has appropriately laid out the implications of this hack (rebutting a comparison repeated by Earnest in his press conference, that this hack compares at all with the Target hack).

OPM’s announcement today gives the impression that these breaches are just like some of the losses by Target or Home Depot that we’ve seen in the news. The analogy is nonsense. This is quite different—this is much scarier than identity theft or ruined credit scores. Government and industry need to understand this and be ready. That’s not going to happen as long as Washington keeps treating this like just another routine PR crisis.

But one of his proposed responses is to turn this example of intelligence collection targeting legitimate targets into an act of war.

Some in the defense and intelligence communities think the attacks on OPM constitute an act of war. The rules of engagement in cyber warfare are still being written. And with them, we need to send a clear message: these types of intrusions will not be tolerated. We must ensure our attackers suffer the full consequences of their actions.

Starting now, government needs to stop the bleeding—every sensitive database in every government agency must be immediately secured or pulled offline. But playing defense is a losing game. Naming and shaming until the news cycle shifts is not enough.

Our government must completely reevaluate its cyber doctrine. We have to deter attacks from ever happening in the first place while also building resiliency.

We’re collecting the same kind of information as China — with methods that are both more efficient (because we have the luxury of being able to take it off the Internet) and less so (because we are not, as far as we know, targeting China’s own records of its spooks). If this is an act of war, then we gave reason for war well before China got into OPM’s servers.

Meanwhile, veterans Ted Lieu and Steve Russell (who, because they’ve had clearance, probably have been affected) are pushing reforms that will affect the kind of bureaucracy we should have to perform what is a core counterintelligence function.

Congressman Russell’s statement:

“It is bad enough that the dereliction displayed by OPM led to 25 million Americans’ records being compromised, but to continue to deflect responsibility and accountability is sad. In her testimony a few weeks ago, OPM Director Katherine Archuleta said that they did not encrypt their files for fear they could be decrypted. This is no excuse for a cyber-breach, and is akin to gross negligence. We have spent over a half a trillion dollars in information technology, and are effectively throwing it all away when we do not protect our assets. OPM has proven they are not up to the task of safeguarding our information, a responsibility that allows for no error. I look forward to working with Congressman Lieu on accountability and reform of this grave problem.”

Congressman Lieu’s statement:

“The failure by the Office of Personnel Management to prevent hackers from stealing security clearance forms containing the most private information of 25 million Americans significantly imperils our national security. Tragically, this cyber breach was likely preventable. The Inspector General identified multiple vulnerabilities in OPM’s security clearance system–year after year–that OPM failed to address. Even now, OPM still does not prioritize cybersecurity. The IG testified just yesterday that OPM ‘has not historically, and still does not, prioritize IT security.’ The IG further testified that there is a ‘high risk’ of failure on a going forward basis at OPM. The security clearance system was previously housed at the Department of Defense. In hindsight, it was a mistake to move the security clearance system to OPM in 2004. We need to correct that mistake. Congressman Steve Russell and I are working on bipartisan legislation to move the security clearance database out of OPM into another agency that has a better grasp of cyber threats. Steve and I have previously submitted SF-86 security clearance forms. We personally understand the national security crisis this cyber breach has caused. Every American affected by the OPM security clearance breach deserves and demands a new way forward in protecting their most private information and advancing the vital security interests of the United States.”

A number of people online have suggested that seeing Archuleta get ousted (whether she was forced or recognized she had lost Obama’s support) will lead other agency heads to take cybersecurity more seriously. I’m skeptical. In part, because some of the other key agencies — starting with DHS — have far too much work to do before the inevitable happens and they’re hacked. But in part because the other agencies involved have long had impunity in the face of gross cyberintelligence inadequacies. No one at DOD or State got held responsible for Chelsea Manning’s leaks (even though they came 2 years after DOD had prohibited removable media on DOD computers), nor did anyone at DOD get held responsible for Edward Snowden’s leaks (which happened 5 years after the ban on removable media). Neither the President nor Congress has done anything but extend deadlines for these agencies to address CI vulnerabilities.

Perhaps this 90-day review of the NatSec implications of the hack is doing real work (though I worry it’ll produce McKinsey slop). But this hack should be treated with the same kind of seriousness as the 9/11 attack, with the consequent attention on real cybersecurity fixes, not the “do something” effort to give corporations immunity.

“Technical Difficulties”: United Airlines Grounded, NYSE Halted, What’s Next?

[graphic: WSJ.com’s July 8th error message]

This is a working post for discussion of today’s outages. United Airlines grounded its flights for roughly two hours this morning; the FAA’s advisory indicated an automation-related issue, and subsequent communications from United said it was a “network connectivity” problem.

UAL also briefly grounded flights on June 2nd, due to “automation issues.”

Now the New York Stock Exchange has halted all trading shortly before noon, cancelling all open orders, due to “technical difficulties.”

There are reports that CNBC and WSJ websites are down, but they could simply be swamped by traffic.

Who, or what, is next?

UPDATE — 12:55 pm EDT —

Looks like CNBC may only have had a brief burp due to high traffic, as there are no further complaints about service interruption. WSJ’s website has been slowly working its way back to normal service; the media outlet posted an abbreviated version for 15-20 minutes until its technical problems had been resolved. No indication yet that anything apart from high traffic volume may have spiked the site.

UPDATE — 1:35 pm EDT —

You know what cracks me up, in a ha-ha-ouch kind of way? FBI Director Jim Comey puling about the need for back doors into technology in front of Congress today, while a major airline and the most important stock market in the world demonstrate exactly how ugly it could get if hackers with malicious intent used the back doors he demands for evil rather than good. The “technical difficulties” both UAL and NYSE experienced today could be duplicated by hackers using back doors.

The U.S. Government is an aircraft carrier, very slow to turn even when under fire. Hackers are speedboats. Asking for back doors across all technology while facing myriad fleet-footed nemeses is like chasing 38-foot Cigarette Top Gun speedboats with a carrier. Unless the carrier can see the Cigarettes coming from a distance and train its guns on them, the Cigarettes will fly up its backside. The U.S. Government has already proven it can’t see very far ahead, stuck in a defensive posture while using its offense in ways that only ensure more attacks.

UPDATE — 2:20 pm EDT —

Fortune reports the NYSE halt was due to a “failed systems upgrade.”

Right. Upgrade. Let’s roll out an upgrade in the middle of the week, in the middle of the month, when both China’s stock market and Europe’s banksters are freaking out. Let’s not manage traders’ expectations in advance of the day’s trading, either.

Somebody needs to retake a course in Change Management 101 — or there’s some additional explaining required.

Reuters assures us, too:

The U.S. Department of Homeland Security said there were “no signs” that the problems at NYSE and United Airlines stemmed from “malicious activity,” CNN reported.

Good to know, huh? Can’t believe they went to CNN for that.

UPDATE — 3:30 pm EDT —

The buzz since 2:00-ish pm is that Anonymous *might* be to blame for the NYSE “glitch.” The Hill, Salon, and a few other outlets reported about a cryptic tweet from @YourAnonNews late last evening:

[screenshot: cryptic @YourAnonNews tweet]

But another Anonymous affiliate laughed it off, saying:

[screenshot: reply tweet from the Anonymous affiliate, 2:37 pm, July 8, 2015]

Timing is incredible, though; the NYSE, WSJ, and UAL outages all happened concurrent with a Congressional hearing at which FBI Director Jim Comey discussed the need for back doors into everything. What an incredible series of coincidences today.

UPDATE — 3:55 pm EDT —

Best take by far on today’s NYSE “technical difficulties”, gonzo reporting with a feminine touch from Molly Crabapple:

I was met by fires in the streets, the screams of the dying tourists and the shouts of former traders offering sacrifices to their new gods

UPDATE — 5:00 pm EDT —

NYSE re-opened again around 3:00 pm EDT, with trading a bit jittery. Financial news outlets speculated the market’s close at 17,515.42, down 261.49 (-1.47%), was due to concerns over China’s tanked stock market and Greece’s EU debt woes. The Shanghai market had closed the previous day at 3,507.19, down 219.93 (-5.90%).

Feeling iffy over the Shanghai index, Hong Kong’s Hang Seng Index closed at 23,516.56, down 1,458.75 (-5.84%); Japan’s Nikkei 225 closed at 19,737.64, down 638.95 (-3.14%).

But these Asian markets weren’t affected by the NYSE’s technical difficulties today. Wonder how they will open on July 9th, their local time — flat or down? I wouldn’t put my money on an uptick, but I’m not a financial adviser, either.

I imagine the bars and pubs around Wall Street saw greater-than-average action. I might put money on that.

The Dangers of Crying Wolf

In the wake of yet another in a string of 40 terrorist panics that came to naught, two terrorism experts have posts commenting on crying wolf. Ali Soufan’s consulting firm treats the over-response to the Fourth of July warnings as justifiable, though it notes the general sense of unease serves ISIL’s purpose.

While calls for the public to remain vigilant are common sense, they need not become an incessant drumbeat, as fears of lone wolf and known wolf attackers can too easily give way to cries of wolf that are taxing and counterproductive.

[snip]

That neither false alarm was terrorism related did little to blunt the worry that both could have been; indeed both were assumed to have been terrorism by a public told to expect the worst but not told why. The spectacle of massive law enforcement responses, which make sense given the history of ill-advised moderation and hesitation during active shooter situations, plays into the propaganda playbook of the Islamic State. Unspecific warnings to be on the lookout for an attack further add to the false but easily repeated sense that the national security situation is out of control. The nation is actually relatively safe, thanks to a decade of intense efforts by law enforcement and intelligence agencies. No one feels safe, however, given the attacks, tweets, and taunts of a terrorist group long active in Iraq and now in Syria.

This unease stems in part from the way the Islamic State has changed the landscape of terrorism, moving away from spectacular attacks that topple a society’s skyscrapers to banal but brutal attacks that destroy a society’s sense of security. A sound misheard as a gunshot at the premier military hospital in the United States can be assumed to be the start of a Tunisia-style terror attack precisely because such an attack is so easy to pull off. Shooting tourists on a beach in Tunisia or in an office in Paris means no one feels safe, even if no one is actually threatened beforehand.

[snip]

The group will gladly accept people crying wolf in its name as much as it accepts lone wolves acting in its name. A persistent level of perceived threat allows this approach to succeed where it should fail.

Peter Bergen weighs the costs of repeated panics more critically.

Since there was virtually no downside for U.S. national security officials to issue terrorism alerts, the American public has been regularly warned that some kind of serious terrorist attack is in the offing.

Crying wolf, however, does have repercussions. There are significant costs to these terror alerts, both economic and social.

This weekend, local governments and businesses spent significant sums putting temporary security upgrades in place. Some Americans made alternative vacation plans. In the past, many flights have been canceled and commerce impeded.

More fundamentally, the issuing of alerts undermines the essential purpose of counterterrorism — to prevent terrorist attacks, yes, but also to guarantee American citizens’ right to live outside the realm of fear that terrorists want to impose on us. Inflated, ineffectual warnings do not serve the purpose of effective counterterrorism; they contradict it.

We seem to have inverted President Franklin D. Roosevelt’s famous admonition “The only thing we have to fear is fear itself” so that our motto today is closer to “We will continually live in a state of self-imposed fear.”

When this happens, we are doing the job of terrorists for them.

I’d add two things.

First, don’t forget that sustained panics have helped the security state demand new authorities in the past, as when, in 2004, an election-year threat the CIA had early on discounted nevertheless served as the excuse to restart torture and the dragnet. Jim Comey was a part of that (though Comey seems to have served more as a willful dupe to the CIA and Cheney types than as the instigator). So it should stand as a warning, especially when Comey is using the ISIL threat to demand encryption back doors.

But this discussion also needs some perspective. After all, as the national security state was panicking over loud noises, there was a slew of gun violence in Chicago.

After a relatively quiet start to the Fourth of July weekend in Chicago, a burst of gun violence overnight left three dead and 27 people wounded in just eight hours, including a 7-year-old boy killed after returning from a celebration.

“It’s crazy,” said Vedia Hailey, the grandmother of the boy, Amari Brown. “Who would shoot a 7-year-old in the chest? Who would do that to a baby? When is it going to stop?”

From 9:20 p.m Saturday until 4:45 a.m. Sunday, 30 people were shot across Chicago, three of them fatally, including Amari.

Even when casualties from senseless gun violence rival those of any terror attack in the US since 9/11, CNN doesn’t run it 24/7, nor do people seem all that concerned about the destruction of the sense of security on Chicago’s South Side.

Moreover, the costs go far beyond those Bergen lays out.

After all, if national security remains defined as counterterrorism (or maybe gets expanded to include hackers), we will ignore two bigger threats to our country and the globe: climate change and bankster havoc.

Every time we spend a holiday weekend hiding from manufactured fears, we will lose focus on bigger threats.

Over the weekend we celebrated the brave audacity of a bunch of men who dared to take risks to demand their autonomy (while denying it to their non-white and female chattel). Our country has since allowed itself to be dominated by fears.

Jim Comey May Not Be a Maniac, But He Has a Poor Understanding of Evidence

Apparently, Jim Comey wasn’t happy with his stenographer, Ben Wittes. After having Ben write up Comey’s concerns on encryption last week, Comey has written his own explanation of his concerns about encryption at Ben’s blog.

Here are the 3 key paragraphs.

2. There are many benefits to this. Universal strong encryption will protect all of us—our innovation, our private thoughts, and so many other things of value—from thieves of all kinds. We will all have lock-boxes in our lives that only we can open and in which we can store all that is valuable to us. There are lots of good things about this.

3. There are many costs to this. Public safety in the United States has relied for a couple centuries on the ability of the government, with predication, to obtain permission from a court to access the “papers and effects” and communications of Americans. The Fourth Amendment reflects a trade-off inherent in ordered liberty: To protect the public, the government sometimes needs to be able to see an individual’s stuff, but only under appropriate circumstances and with appropriate oversight.

4. These two things are in tension in many contexts. When the government’s ability—with appropriate predication and court oversight—to see an individual’s stuff goes away, it will affect public safety. That tension is vividly illustrated by the current ISIL threat, which involves ISIL operators in Syria recruiting and tasking dozens of troubled Americans to kill people, a process that increasingly takes place through mobile messaging apps that are end-to-end encrypted, communications that may not be intercepted, despite judicial orders under the Fourth Amendment. But the tension could as well be illustrated in criminal investigations all over the country. There is simply no doubt that bad people can communicate with impunity in a world of universal strong encryption.

Comey admits encryption lets people lock stuff away from criminals (and supports innovation), and admits “there are lots of good things about this.” He then introduces “costs,” without enumerating them. In a paragraph purportedly explaining how the “good things” and “costs” are in tension, he raises the ISIL threat as well as — as an afterthought — “criminal investigations all over the country.”

Without providing any evidence about that tension.

As I have noted, the recent wiretap report raises real questions, at least about the “criminal investigations all over the country,” which in fact are not being thwarted. On that ledger, at least, there is no question: the “good things” (AKA, benefits) are huge, especially with the million or so iPhones that get stolen every year, and the “costs” are negligible, just a few wiretaps law enforcement can’t break.

I conceded we can’t make the same conclusions about FISA orders — or the FBI generally — because Comey’s agency’s record keeping is so bad (which is consistent with all the rest of its record-keeping). It may well be that we’re not able to access ISIL communications with US recruits because of encryption, but simply invoking the existence of ISIL using end-to-end encrypted mobile messaging apps is not evidence (especially because so much evidence indicates that sloppy end-user behavior makes it possible for FBI to crack this).

Especially given the FBI’s 0-for-40 record on claims about terrorists since 9/11.

It may be that the FBI is facing increasing problems tracking ISIL. It may even be — though I’m skeptical — that those problems would outweigh the value of making stealing iPhones less useful.

But even as he calls for a real debate, Comey offers not one bit of real evidence, beyond the FBI’s own crappy reporting in the official reports, to suggest this is anything more than FBI fearmongering.

To Talk of Many Things: Of Vandals, and Cuts, and Cables, and Pings

“The time has come,” the Walrus said,
“To talk of many things:
Of shoes — and ships — and sealing-wax —
Of cabbages — and kings —
And why the sea is boiling hot —
And whether pigs have wings.”

(Excerpt, Lewis Carroll’s The Walrus and the Carpenter)

Here’s an open information security topic worth examining more closely: the recent vandalization of yet another fiber optic cable on the west coast.

A total of eleven cuts have been made since last July on fiber optic cables in the greater San Francisco/Oakland area. The most recent cut occurred on June 30th. The FBI had already asked the public for help with information about the first ten cuts, made in these general locations at the time and date indicated here:

1) July 6, 2014, 9:44 p.m. near 7th St. and Grayson St. in Berkeley
2) July 6, 2014, 11:39 p.m. near Niles Canyon Blvd. and Mission Blvd. in Fremont
3) July 7, 2014, 12:24 a.m. near Jones Road and Iron Horse Trail in Walnut Creek
4) July 7, 2014, 12:51 a.m. near Niles Canyon Blvd. and Alameda Creek in Fremont
5) July 7, 2014, 2:13 a.m. near Stockton Ave. and University Ave. in San Jose
__________
6) February 24, 2015, 11:30 p.m. near Niles Canyon Blvd. and Mission Blvd. in Fremont
7) February 24, 2015 11:30 p.m. near Niles Canyon Blvd. and Alameda Creek in Fremont
__________
8) June 8, 2015, 11:00 p.m. near Danville Blvd. and Rudgear Road in Alamo
9) June 8, 2015, 11:40 p.m. near Overacker Ave and Mowry Ave in Fremont
__________
10) June 9, 2015, 1:38 p.m. near Jones Road and Parkside Dr. in Walnut Creek

The FBI presented these first ten cuts as a single, undivided list. But looking at the dates and times, one can see these cuts may have occurred not as discrete events, but as three separate clusters of cuts. The first cluster occurred within a five-hour span; the second occurred nearly simultaneously at two points; and the third occurred within an hour. Each cluster took place after dark, within a single evening. The tenth cut may be a one-off, or it may be connected to the third cluster, as it took place within 14 hours of the eighth and ninth cuts.
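The clustering described above can be sketched as a simple gap-based grouping of the FBI’s timestamps. This is a minimal illustration, not anything the FBI published; the six-hour gap threshold is my own assumption:

```python
from datetime import datetime, timedelta

# Timestamps of the ten cuts from the FBI's list above.
cuts = [
    datetime(2014, 7, 6, 21, 44),   # Berkeley
    datetime(2014, 7, 6, 23, 39),   # Fremont
    datetime(2014, 7, 7, 0, 24),    # Walnut Creek
    datetime(2014, 7, 7, 0, 51),    # Fremont
    datetime(2014, 7, 7, 2, 13),    # San Jose
    datetime(2015, 2, 24, 23, 30),  # Fremont
    datetime(2015, 2, 24, 23, 30),  # Fremont
    datetime(2015, 6, 8, 23, 0),    # Alamo
    datetime(2015, 6, 8, 23, 40),   # Fremont
    datetime(2015, 6, 9, 13, 38),   # Walnut Creek
]

def cluster_by_gap(events, max_gap=timedelta(hours=6)):
    """Group timestamps into clusters separated by more than max_gap."""
    events = sorted(events)
    clusters = [[events[0]]]
    for t in events[1:]:
        if t - clusters[-1][-1] > max_gap:
            clusters.append([t])   # gap too large: start a new cluster
        else:
            clusters[-1].append(t)
    return clusters

print([len(c) for c in cluster_by_gap(cuts)])  # [5, 2, 2, 1]
```

With that threshold the ten cuts fall into exactly the groupings the text describes: three nighttime clusters of five, two, and two cuts, plus the daytime tenth cut standing alone.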

The most recent cable cut, occurring this week, did not fit the pattern of the previous ten. Reports indicate the cut was near Livermore — a new location much farther to the south and east — and only one cut was reported rather than two or more.

Is this latest cut an outlier, or were perpetrators interrupted before they could cut again?

Taking a closer look at the previous cut events, we can see there must have been more than one individual involved in the cuts, and they may have been coordinated.

CryptoWars, the Obfuscation

The US Courts released its semiannual Wiretap Report the other day, which reported that very few of the attempted wiretaps last year were encrypted, with even fewer thwarting law enforcement.

The number of state wiretaps in which encryption was encountered decreased from 41 in 2013 to 22 in 2014. In two of these wiretaps, officials were unable to decipher the plain text of the messages. Three federal wiretaps were reported as being encrypted in 2014, of which two could not be decrypted. Encryption was also reported for five federal wiretaps that were conducted during previous years, but reported to the AO for the first time in 2014. Officials were able to decipher the plain text of the communications in four of the five intercepts.
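The figures just quoted work out to tiny shares of all wiretaps. A quick back-of-the-envelope sketch, taking Fahkoury’s “more than 3,500 wiretaps” as the report’s 2014 total of 3,554 (an assumption for illustration):

```python
# 2014 figures from the Wiretap Report passage quoted above.
total_wiretaps = 3554       # assumed total, per Fahkoury's "more than 3,500"
encrypted = 22 + 3          # state + federal wiretaps that encountered encryption
undecipherable = 2 + 2      # state + federal wiretaps that could not be broken

pct_encrypted = 100 * encrypted / total_wiretaps
pct_unbroken = 100 * undecipherable / total_wiretaps
print(f"{pct_encrypted:.2f}% encountered encryption")   # ~0.70%
print(f"{pct_unbroken:.2f}% could not be deciphered")   # ~0.11%
```

Under those assumptions, well under one percent of 2014 wiretaps encountered encryption at all, and roughly a tenth of a percent actually thwarted law enforcement.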

Motherboard has taken this data and concluded it means the Feds have been overstating their claim they’re “going dark.”

[N]ew numbers released by the US government seem to contradict this doomsday scenario.

[snip]

“They’re blowing it out of proportion,” Hanni Fahkoury, an attorney at the digital rights group Electronic Frontier Foundation (EFF), told Motherboard. “[Encryption] was only a problem in five cases of the more than 3,500 wiretaps they had up. Second, the presence of encryption was down by almost 50 percent from the previous year.

“So this is on a downward trend, not upward,” he wrote in an email.

Much as I’d like to, I’m not sure I agree with Motherboard’s (or Hanni Fahkoury’s) conclusion.

Here’s what the data show since 2012, which was the first year jurisdictions reported being unable to break encryption (2012; 2013):

[Screenshot: table of wiretaps encountering encryption, 2012–2014]

You’ll see lots of parenthetical entries and NRs. That’s because this data is not being reported systematically. Parenthetical references are to encrypted feeds not reported until years after they were set, and usually those have been decrypted by the time they’re reported. NRs show that we have not been getting these numbers, if they exist, from federal law enforcement (and the numbers can’t be zero, as reported here, because FBI has been taking down targets like Silk Road). The reporting on this ought to raise real questions about the quality of the data being reported and perhaps might spark some interest in mandating better reporting of this data so it can be tracked. But it also suggests that — at a time when law enforcement are just beginning to find encryption they can’t break (immediately) — there’s a lot of noise in the data. Does 2013’s 2% of encrypted targets, and the half-percent that couldn’t be broken, represent a big problem? It depends on who the target is — a point I’ll come back to.

Congress will soon have that opportunity (but won’t avail themselves of it).

Even as US Courts were reporting still very low levels of encryption challenges faced by law enforcement, both the Senate Judiciary Committee and the Senate Intelligence Committee announced hearings next Wednesday where Jim Comey will have yet another opportunity to try to present a compelling argument that he should have back doors into our communication. SJC even saw fit to invite witnesses with opposing viewpoints, which the “intelligence” committee saw no need to do.

In an apparent attempt to regain some credibility before these hearings (Jim Comey is nothing if not superb at working the media), Comey went to Ben Wittes to suggest his claimed concern with increasing use of encryption has to do with ISIS’ increasing use of encryption. Ben quotes from Comey’s earlier comments to CNN then riffs on that in light of what Comey just told him in a conversation.

“Our job is to find needles in a nationwide haystack, needles that are increasingly invisible to us because of end-to-end encryption,” Comey said. “This is the ‘going dark’ problem in high definition.”

Comey said ISIS is increasingly communicating with Americans via mobile apps that are difficult for the FBI to decrypt. He also explained that he had to balance the desire to intercept the communication with broader privacy concerns.

“It is a really, really hard problem, but the collision that’s going on between important privacy concerns and public safety is significant enough that we have to figure out a way to solve it,” Comey said.

Let’s unpack this.

As has been widely reported, the FBI has been busy recently dealing with ISIS threats. There have been a bunch of arrests, both because ISIS has gotten extremely good at inducing self-radicalization in disaffected souls worldwide using Twitter and because of the convergence of Ramadan and the run-up to the July 4 holiday.

As has also been widely reported, the FBI is concerned about the effect of end-to-end encryption on its ability to conduct counterterrorism operations and other law enforcement functions. The concern is two-fold: It’s about data at rest on devices, data that is now being encrypted in a fashion that can’t easily be cracked when those devices are lawfully seized. And it’s also about data in transit between devices, data encrypted such that when captured with a lawful court-ordered wiretap, the signal intercepted is undecipherable.

[snip]

What was not clear to me until today, however, was the extent to which the ISIS concerns and the “going dark” concerns have converged. In his Brookings speech, Comey did not focus on counterterrorism in the examples he gave of the going dark problem. In the remarks quoted by CNN, and in his conversation with me today, however, he made clear that the landscape is changing fast. Initial recruitment may take place on Twitter, but the promising ISIS candidate quickly gets moved onto messaging platforms that are encrypted end to end. As a practical matter, that means there are people in the United States whom authorities reasonably believe to be in contact with ISIS for whom surveillance is lawful and appropriate but for whom useful signals interception is not technically feasible.

Now, Ben incorrectly blurs the several roles of FBI here. FBI’s interception of ISIS communiques may be both intelligence and law enforcement. To the extent they’re the former — to the extent they’re conducted under FISA — they won’t show up in US Courts’ annual report.

But they probably should, if Comey is to have any credibility on this front.

Moreover, Ben simply states that “there are people in the United States whom authorities reasonably believe to be in contact with ISIS for whom surveillance is lawful and appropriate.” But there’s no evidence presented to support this. Indeed, most of the so-called ISIS prosecutions have shown 1) where probable cause existed, it largely existed in the clear, in Twitter conversations and other online postings and 2) there may not have been probable cause before FBI ginned it up.

It ought to raise real questions about whether Comey’s going dark problem is a law enforcement one — with FBI unable to access evidence on real criminals — or an intelligence one — with FBI unable to access First Amendment protected speech that nevertheless may be important for understanding the threat ISIS poses domestically. Again, the data is not there, one way or another, but given the law enforcement data, we ought to demand real numbers for intelligence intercepts. Another pertinent question is whether this encrypted data is easily accessible to NSA (ISIS recruiters are almost entirely going to be legitimate NSA targets located overseas) but not to FBI.

And all this presumes that Comey is telling the truth about ISIS and not — as he and just about every member of the Intelligence Community have done routinely — invoking terror threats to win authorities to wield against other kinds of threats, especially hackers (which is not to say hackers aren’t a target, just that the IC likes to pretend its authorities serve an exclusively CT purpose when they clearly do not). The law enforcement data, at least, show that even members of very sophisticated drug distribution networks are using encryption at a really low rate. Is ISIS’ ability to coach potential recruits into using encrypted products on Twitter really that much better, or is Comey really talking about hackers, who more obviously have the technical skills to encrypt their communications?

Thus far, Comey would have you believe that intelligence — counterterrorism — targets encrypt at a much higher rate than even drug targets. But the data also suggest even federal law enforcement (that is, Comey’s agency, among others) aren’t tracking this very effectively, and so can’t present reliable numbers.

Before we go any further in this cryptowar debate, we ought to be able to get real numbers on how serious the problem is.

FBI’s 26-Day Old OPM FLASH Notice

Shane Harris, who has been closely tracking the bureaucratic implications of the OPM hack, has an update describing a “FLASH” notice FBI just sent out to the private sector.

Or rather, FBI just re-sent the FLASH notice they sent on June 5, 26 days earlier, because they realized some recipients (including government contractors working on classified projects) did not have their filters set to accept such notices from the FBI.

The FBI is warning U.S. companies to be on the lookout for a malicious computer program that has been linked to the hack of the Office of Personnel Management. Security experts say the malware is known to be used by hackers in China, including those believed to be behind the OPM breach.

The FBI warning, which was sent to companies Wednesday, includes so-called hash values for the malware, called Sakula, that can be used to search a company’s systems to see if they’ve been affected.

The warning, known as an FBI Liaison Alert System, or FLASH, contains technical details of the malware and describes how it works. While the message doesn’t mention the OPM hack, the Sakula malware is used by Chinese hacker groups, according to security experts. And the FBI message is identical to one the bureau sent companies on June 5, a day after the Obama administration said the OPM had been hacked, exposing millions of government employees’ personal information. Among the recipients of both alerts are government contractors working on sensitive and classified projects.

[snip]

In an email obtained by The Daily Beast, the FBI said it was sending the alert again because of concerns that not all companies had received it the first time. Apparently, some of their email filters weren’t configured to let the FBI message through.

Consider the implications of this.

It is unsurprising that the initial FLASH got stuck in companies’ email filters if the hashes included with the notice were treated as suspicious code by the companies’ anti-malware screens. The message likely looked like malware because, in part, it was. (Of course, this story may now have alerted anyone trying to hack recipients of FBI’s FLASH notices that the FBI wasn’t previously whitelisted by recipients, and probably just got whitelisted, but that’s a matter for another day.)

The delayed FLASH receipt says a great deal about the current state of data-sharing, just as the Senate prepares to debate the Cybersecurity Information Sharing Act, which (Senate boosters claim) companies ostensibly need before they’re willing to share data with the government.

First, it suggests that FBI either did not send out such a FLASH in response to what it learned from the Anthem hack, which presumably would have gone out at least by February (and which, had OPM acted on the alert, might have identified its hack two months before it actually was identified), or that any such FLASH also got stuck in companies’ — and OPM’s — malware filters.

But it also seems to suggest that the private sector — including sensitive government contractors — hasn’t been receiving other FBI FLASHes (presuming their filter settings excluded any notice that included something looking like malware). They either never noticed they weren’t getting them or never bothered to set their filters to receive them.

That may reflect a larger issue, though. As Jennifer Granick has repeatedly noted, key researchers and corporations have not, up to now anyway, seen much value in sharing with the government.

I’ve been told by many entities, corporate and academic, that they don’t share with the government because the government doesn’t share back. Silicon Valley engineers have wondered aloud what value DHS has to offer in their efforts to secure their employer’s services. It’s not like DHS is setting a great security example for anyone to follow. OPM’s Inspector General warned the government about security problems that, left unaddressed, led to the OPM breach.

Perhaps recipients didn’t have their filters set to accept notices from FBI because none of them have ever been useful?

Another factor behind reluctance to share with the government is an unwillingness to get personnel security clearances, though that should not be a factor here.

The implication appears to be, though, that the government was unable — because of recipient behavior and predispositions — to share information on the most important hack of recent years.

We’re about to have a debate about immunizing corporations further, as if that’s the problem. But this delayed FLASH strongly suggests it is not.

Sony Pictures Postmortem Reveals Death by Stupid

[Graphic: FORTUNE, Sony hack / government AV, June 25, 2015]

We already knew Sony Pictures Entertainment’s (SPE) hack was bad. We knew that the parent, Sony Group, had been exposed to cyber attacks of all kinds for years across its subsidiaries, and had been slow to effect real changes to prevent future attacks.

And we knew both Sony Group and SPE shot themselves in the feet, literally asking for trouble by way of bad decisions. Sony Electronics’ 2005 copy protection rootkit scandal and SPE’s utter disregard for geopolitics opened the businesses to risk.

But FORTUNE magazine’s exposé about the hacking of SPE — of which only two of three parts have been published so far — reveals a floundering conglomerate unable to do anything but flail ineffectively.

It’s impossible to imagine any Fortune 500 corporation willing to tolerate working with 1990s technology for any length of time, let alone one which had no fail-over redundancies or backup strategies, no emergency business continuity plan to which they could revert in the event of a catastrophe. But FORTUNE reports SPE had been reduced to using fax machines to distribute information, in large part because many of its computers had been completely wiped by malware used in the attack.

Pause here and imagine what you would do (or perhaps, have done) if your computer was completely wiped, taking even the BIOS. What would you do to get back in business? Based on reporting to date, you’ve now given more thought to this continuity challenge than most of SPE’s management appears to have invested before last November’s hack.

A mind-boggling part of FORTUNE’s exposé is the U.S. government’s reaction to SPE’s hack. The graphic above offers the biggest guffaw, a quote by the FBI’s then-assistant director of its cyber division. Knowing what we know now about the Office of Personnel Management hack, the U.S. government is a less-than-credible expert on hacking prevention. While the U.S. government maintains North Korea was responsible, it’s hard to take them seriously when they’ve failed so egregiously to protect their own turf.