[W]e’re in favor of strong encryption, robust encryption. The country needs it, industry needs it. We just want to make sure we have a trap door and key under some judge’s authority where we can get there if somebody is planning a crime.
– FBI Director Louis Freeh, May 11, 1995

They can promise strong encryption. They just need to figure out how they can provide us plain text.
– FBI General Counsel Valerie Caproni, September 27, 2010

Encryption backdoors were declared dead in 2001. Unfortunately, the proposal has reared its ugly head again. EFF published a reminder about why it was a bad idea then and is still a bad idea now. It’s important enough to quote in its entirety. With elections coming, please vote to protect your privacy rights.


For those who weren’t following digital civil liberties issues in 1995, or for those who have forgotten, here’s a refresher list of why forcing companies to break their own privacy and security measures by installing a back door was a bad idea 15 years ago. We’ll be posting more analysis when more details on the “new” proposal emerge, but this list is a start:

  1. It will create security risks. Don’t take our word for it. Computer security expert Steven Bellovin has explained some of the problems. First, it’s hard to secure communications properly even between two parties. Cryptography with a back door adds a third party, requiring a more complex protocol, and as Bellovin puts it: “Many previous attempts to add such features have resulted in new, easily exploited security flaws rather than better law enforcement access.”

    It doesn’t end there. Bellovin notes:

    Complexity in the protocols isn’t the only problem; protocols require computer programs to implement them, and more complex code generally creates more exploitable bugs. In the most notorious incident of this type, a cell phone switch in Greece was hacked by an unknown party. The so-called ‘lawful intercept’ mechanisms in the switch — that is, the features designed to permit the police to wiretap calls easily — were abused by the attacker to monitor at least a hundred cell phones, up to and including the prime minister’s. This attack would not have been possible if the vendor hadn’t written the lawful intercept code.

    More recently, as security researcher Susan Landau explains, “an IBM researcher found that a Cisco wiretapping architecture designed to accommodate law-enforcement requirements — a system already in use by major carriers — had numerous security holes in its design. This would have made it easy to break into the communications network and surreptitiously wiretap private communications.”

    The same is true for Google, which had its “compliance” technologies hacked by China.

    This isn’t just a problem for you and me and millions of companies that need secure communications. What will the government itself use for secure communications? The FBI and other government agencies currently use many commercial products — the same ones they want to force to have a back door. How will the FBI stop people from un-backdooring their deployments? Or does the government plan to stop using commercial communications technologies altogether?

  2. It won’t stop the bad guys. Users who want strong encryption will be able to get it — from Germany, Finland, Israel, and many other places in the world where it’s offered for sale and for free. In 1996, the National Research Council did a study called “Cryptography’s Role in Securing the Information Society,” nicknamed CRISIS. Here’s what they said:

    Products using unescrowed encryption are in use today by millions of users, and such products are available from many difficult-to-censor Internet sites abroad. Users could pre-encrypt their data, using whatever means were available, before their data were accepted by an escrowed encryption device or system. Users could store their data on remote computers, accessible through the click of a mouse but otherwise unknown to anyone but the data owner; such practices could occur quite legally even with a ban on the use of unescrowed encryption. Knowledge of strong encryption techniques is available from official U.S. government publications and other sources worldwide, and experts understanding how to use such knowledge might well be in high demand from criminal elements. — CRISIS Report at 303

    None of that has changed. And of course, more encryption technology is more readily available today than it was in 1996. (A short code sketch at the end of this article illustrates both the pre-encryption end-run described here and the attack-surface problem from point 1.)

  3. It will harm innovation. In order to ensure that no “untappable” technology exists, we’ll likely see a technology mandate and a draconian regulatory framework. The implications of this for America’s leadership in innovation are dire. Could Mark Zuckerberg have built Facebook in his dorm room if he’d had to build in surveillance capabilities before launch in order to avoid government fines? Would Skype have ever happened if it had been forced to include an artificial bottleneck to allow government easy access to all of your peer-to-peer communications?

    This has especially serious implications for the open source community and small innovators. Some open source developers have already taken a stand against building back doors into software.

  4. It will harm US business. If, thanks to this proposal, US businesses cannot innovate and cannot offer truly secure products, we’re just handing business over to foreign companies who don’t have such limitations. Nokia, Siemens, and Ericsson would all be happy to take a heaping share of the communications technology business from US companies. And it’s not just telecom carriers and VOIP providers at risk. Many game consoles that people can use to play over the Internet, such as the Xbox, allow gamers to chat with each other while they play. They’d have to be tappable, too.
  5. It will cost consumers. Any additional mandates on service providers will require them to spend millions of dollars making their technologies compliant with the new rules. And there’s no real question about who will foot the bill: the providers will pass those costs on to their customers. (And of course, if the government were to pay for it, it would be using taxpayer dollars.)
  6. It will be unconstitutional. Of course, we wouldn’t be EFF if we didn’t point out the myriad constitutional problems. The details of how a cryptography regulation or mandate will be unconstitutional may vary, but there are serious problems with nearly every iteration of a “no encryption allowed” proposal that we’ve seen so far. Some likely problems:
    • The First Amendment would likely be violated by a ban on all fully encrypted speech.
    • The First Amendment would likely not allow a ban of any software that can allow untappable secrecy. Software is speech, after all, and this is one of the key ways we defeated this bad idea last time.
    • The Fourth Amendment would not allow the government to require disclosure of a key to a back door into our houses so that it could read our “papers” in advance of a showing of probable cause, and our digital communications shouldn’t be treated any differently.
    • The Fifth Amendment would be implicated by the required disclosure of private papers and the forced utterance of incriminating testimony.
    • Right to privacy. Both the right to be left alone and informational privacy rights would be implicated.
  7. It will be a huge outlay of tax dollars. As noted below, wiretapping is still a relatively rare tool of government. Yet the tax dollars needed to create a huge regulatory infrastructure staffed with government bureaucrats who can enforce the mandates will be very high. So, the taxpayers would end up paying for more expensive technology, higher taxes, and lost privacy, all for the relatively rare chance that motivated criminals will act “in the clear” by not using encryption readily available from a German or Israeli company or for free online.
  8. The government hasn’t shown that encryption is a problem. How many investigations have been thwarted or significantly harmed by encryption that could not be broken? In 2009, the government reported only one instance of encryption that they needed to break out of 2,376 court-approved wiretaps, and it ultimately didn’t prevent investigators from obtaining the communications they were after.

    The New York Times reports that the government officials pushing for this have only come up with a few examples (and it’s not clear that all of the examples actually involve encryption) and no real facts that would allow independent investigation or confirmation. More examples will undoubtedly surface in the FBI’s PR campaign, but we’ll be watching closely to see if underneath all the scary hype there’s actually a real problem demanding this expensive, intrusive solution.

The real issue with encryption may simply be that the FBI has to use more resources when they encounter it than when they don’t. Indeed, Bellovin argues: “Time has also shown that the government has almost always managed to go around encryption.” (One circumvention that’s worked before: keyloggers.) But if the FBI’s burden is the real issue here, then the words of the CRISIS Report are even truer today than they were in 1996:

It is true that the spread of encryption technologies will add to the burden of those in government who are charged with carrying out certain law enforcement and intelligence activities. But the many benefits to society of widespread commercial and private use of cryptography outweigh the disadvantages.
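
To make EFF’s first two points concrete, here is a toy sketch in Python of how an escrowed (back-doored) encryption scheme concentrates risk, and how trivially pre-encryption defeats it. It uses the Fernet recipe from the pyca “cryptography” package; the escrow arrangement itself is my own illustrative stand-in, not any actual proposal.

```python
# Toy model of escrowed ("back-doored") encryption, and of the
# pre-encryption end-run that defeats it. Illustrative only.
from cryptography.fernet import Fernet

# Point 1: escrow adds a third key holder, enlarging the attack surface.
session_key = Fernet.generate_key()  # shared by the two communicating parties
escrow_key = Fernet.generate_key()   # held "under some judge's authority"

def escrowed_encrypt(plaintext):
    """Encrypt for the recipient AND deposit a copy under the escrow key.
    Whoever compromises escrow_key can read every message from every user."""
    return (Fernet(session_key).encrypt(plaintext),
            Fernet(escrow_key).encrypt(plaintext))

to_recipient, to_escrow = escrowed_encrypt(b"meet at noon")
assert Fernet(escrow_key).decrypt(to_escrow) == b"meet at noon"

# Point 2: a motivated criminal simply pre-encrypts. The private key
# below is never shared, so the escrowed layer yields only ciphertext.
private_key = Fernet.generate_key()
inner = Fernet(private_key).encrypt(b"meet at noon")
to_recipient, to_escrow = escrowed_encrypt(inner)

recovered = Fernet(escrow_key).decrypt(to_escrow)
print(recovered == inner)  # True: "lawful access" recovers useless ciphertext
```

The point of the sketch: the escrow key is a single, catastrophic point of failure (the Greek and Cisco incidents above are the real-world version), while anyone willing to break the law just wraps their plaintext in one extra layer before the mandated system ever sees it.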

This is a bit off the path of information security but I wanted to share an excellent article on why you should distrust 90% of what you read (including, unfortunately, much of the computer security advice out there).

The Atlantic published this interview with Dr John Ioannidis, a medical researcher who has dedicated his career to showing that “much of what medical researchers conclude in their studies is misleading, exaggerated, or flat-out wrong.” This is true even in the ‘gold-standard’ peer-reviewed studies. The biases of funding and publication pressure are too much to overcome. Worse, even when the studies have been overturned, the medical community continues to rely on the old, disproven theories.

While his study and his research are based on medical journals and medical research, his findings are applicable to everything from physics to economics to computer science.

You can also read Dr Ioannidis’ original paper at PLoS Medicine. He lays out a detailed mathematical proof that, “assuming modest levels of researcher bias, typically imperfect research techniques, and the well-known tendency to focus on exciting rather than highly plausible theories, researchers will come up with wrong findings most of the time.” He wrote a follow-up article here specifically discussing the distortion caused by publication practices. I recommend both for anyone with an interest in the scientific method and/or an interest in sorting truth from rumor among the deluge of “good advice” on the internet.
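
For the quantitatively inclined, the core of Ioannidis’ argument fits in a few lines. Below is a minimal sketch of the positive-predictive-value calculation from the paper; the parameter values are my own illustrative picks, not his.

```python
# Positive predictive value of a "statistically significant" finding,
# following Ioannidis, "Why Most Published Research Findings Are False"
# (PLoS Medicine, 2005). R is the pre-study odds that a probed relationship
# is true; alpha is the significance level; beta is the type II error rate
# (power = 1 - beta); u is the fraction of would-be "negative" results
# that bias converts into "positive" ones.

def ppv(R, alpha=0.05, beta=0.2, u=0.0):
    true_pos = (1 - beta) * R + u * beta * R
    false_pos = alpha + u * (1 - alpha)
    return true_pos / (true_pos + false_pos)

# Well-powered study of a plausible hypothesis, no bias:
print(round(ppv(R=0.5), 2))         # ~0.89 -- findings usually true
# Exploratory, long-shot research with modest bias:
print(round(ppv(R=0.1, u=0.2), 2))  # ~0.26 -- most findings false
```

Even modest bias, applied to the long-shot hypotheses that make exciting papers, drives the chance that a published positive finding is actually true well below half.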

For several years now, I have smugly been talking about the weak privacy standards of Google and Facebook, confident that my providers were better than that. Well, it turns out that Yahoo is guilty of the same things. Yes, I use the Yahoo webmail service and I’ve been very happy with it. And, yes, I strongly recommend that everyone have a personal webmail account that is unconnected to their current work email.

Anyway, about three months ago, Yahoo launched several information sharing services. If you use the Yahoo Contacts feature, other people in your address book can see what you’ve been up to – postings, connections and other activities within the Yahoo sites – and you can see information about them.

In principle, I have nothing against features that let us share information with others. My problem is the underhanded way that these companies roll the new features out. I never received any announcement about them and certainly got no training on my options to control the information they would be sharing. Worse, the default settings are “share all”. You have to know to look for the settings and then take deliberate action to restrict the sharing. I didn’t even notice the change for months. If these companies really cared about security, the defaults would be set the other way.

If you are a Yahoo user and you use their Contacts feature, here’s how to lock the program back down:

  1. Log onto your Yahoo Mail account.
  2. Click the Contacts tab at top left.
  3. Click the Tools dropdown and select ‘Seeing Updates from …’
  4. For a full lockdown, uncheck both the master settings at the top of the screen (‘Share my Updates’ and ‘See Updates in Yahoo Mail’).

If you like the sharing but want to restrict it to the people you are actually close with (rather than every random business contact that you’ve ever added to your Blackberry), go through the list and select the ‘Stop Getting Updates’ at the right of the contact’s name. You can also get a little more granular control using the ‘Manage my Updates’ link near the top left of the page. But blocking everything is easier.

The Yahoo Calendar also has some Sharing settings but since I don’t use their calendar feature, I don’t have good advice for how to lock it down. Any suggestions from people who do use it?

Senator Patrick Leahy just introduced the ‘Combating Online Infringement and Counterfeits Act’ (COICA). As the Electronic Frontier Foundation notes in their press release, this is an egregious power grab by the government. This bill would allow state Attorneys General to arbitrarily designate entire internet domains as “infringing” and require domain registrars/registries, ISPs, DNS providers, and others to block Internet users from reaching those domains. Worse, the bill allows the US Justice Department to create its own blacklist with even more intrusive restrictions and fear-inducing penalties, all without any judicial review, much less an actual conviction that something illegal really happened.

The thinly veiled excuse of “copyright protection” ignores the massive potential for abuse by overzealous prosecutors and bureaucrats. It tramples on the First Amendment rights of the domain’s other users, requiring not merely that the specific infringing content be taken down but that everything else on the site – all the blogs, images, and other legitimate content – be made inaccessible as well.
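
For a sense of how blunt an instrument domain-level blocking is, here is a toy sketch of a resolver consulting a blacklist. The domain name and the blacklist here are hypothetical, and real DNS filtering happens inside resolver software, but the granularity problem is the same: the unit of blocking is the entire domain, not the infringing page.

```python
# Toy model of the domain-level blocking COICA contemplates.
# The blacklist entry and hostnames below are hypothetical.

BLACKLIST = {"example-infringing-site.com"}

def resolve(hostname):
    # A blacklisted domain takes every subdomain and page down with it.
    domain = ".".join(hostname.split(".")[-2:])
    if domain in BLACKLIST:
        return None  # resolver refuses to answer (NXDOMAIN in practice)
    return "192.0.2.1"  # placeholder address (TEST-NET-1, RFC 5737)

# One allegedly infringing page blocks the lawful blog hosted beside it:
for host in ("files.example-infringing-site.com",
             "blog.example-infringing-site.com"):
    print(host, "->", resolve(host))
```

Everything hosted under the blocked name disappears along with the allegedly infringing content.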

The US is supposed to be the leader in freedom. This bill sends a message to the rest of the world that we don’t really believe what we say – that censorship is acceptable. This is a very dangerous and patently unconstitutional bill.

Please take a minute to read EFF’s article. But more important, WRITE YOUR SENATOR opposing this bill.

Joshua Gilliland writes an excellent blog on many legal issues. Today’s posting about a recent court case in California is a disturbing story. Please go read the full version.

The issue at hand is the government’s right to track you as you go about your business. The case involved a suspected drug dealer. The police planted a GPS tracking unit on his car and compiled full records of his movements over several days. They found evidence of illegal activity and convicted him. He appealed, arguing that the way the police collected the evidence violated the 4th Amendment.

At the risk of defending a convicted drug dealer, there are some very disturbing aspects of this case.

First is the Court’s determination that bugging your car with a GPS unit is fundamentally the same as bugging it with the older “beeper” technology. GPS is far more intrusive and more capable: it is not limited to proximity, it is always on, and it reports location far more precisely. And while my presence at any one store may be public, there is no easy public way to aggregate that information. So even if an individual trip out of the house is public, I still retain an expectation of privacy in the pattern of trips.

Second is this Court’s determination that your driveway is “public” – that you have no expectation of privacy in a car on your own property. From the available reports, the police entered the suspect’s property to plant the bug. Their argument was that because the gas meter reader and the postman have a right to come to your front door, the police have a right to come onto your property, too. That argument is, in my opinion, weak. A limited right to come onto my property for a defined purpose (and in compliance with an implicit contract for service) does not equate to an unlimited right of access. I do not, for example, sacrifice my right to allege trespassing by vandals just because the postman delivers mail.

The most worrisome point, though, is that both of these concerns could have been made moot if the police had simply asked for a warrant before attaching the bug. The government’s assertion of a right to do this without a warrant is what makes this such a concerning precedent. Like Josh, I hope that the Supreme Court accepts the appeal and overturns this standard, preferably sooner rather than later.