Archive for the ‘privacy’ Category

If you haven’t heard by now, a number of Google executives were convicted in absentia by a court in Italy for failing to police some videos posted by users. In this case, the video was a home movie of several teenagers bullying a peer with Down syndrome. The video was anonymously posted to Google Video, where it stayed for several months. Eventually, some adults noticed it and contacted the police, who investigated and then asked Google to take the video down. By all reports, Google did so within two hours of receiving the notification.

The Italian prosecutors felt that this was not fast enough and argued in court that Google had an affirmative responsibility for the content even though it was posted by others and even though Google does not exercise any control over the content. One self-appointed consumer advocate is proclaiming this a “victory for individual privacy over corporate interest”.

I am an avid privacy activist but I’m not buying it here for several reasons. First, it’s not possible to evaluate all the content that users are posting. About twenty hours of video content are posted to YouTube alone every minute. Add in all the other Web 2.0 sites and you’d need armies of people doing nothing but watching what other people are posting. Nobody could afford that. And even if you tried, that many people just couldn’t do the job without making mistakes. Second, there’s no easy way to tell inappropriate content (like real bullying) from certain types of performance art. That kind of stuff is not to my taste but other people … well, I won’t say they necessarily enjoy it, but they do it. And heaven help you if you censor their artistic content. Third, which set of standards will you apply? Granted, beating up a kid with Down syndrome is bad in pretty much every culture, but there’s nothing philosophically different between this case and the Chinese suppression of political dissent. There is no way to draw the line about what is or is not acceptable.

Some commentators on this case have argued that other users added comments to the site that the video was inappropriate and that should have been enough to require Google to act. Again, I don’t buy it. User feedback and ratings can have a place but they are remarkably susceptible to abuse. False reports are rampant, either as pranks or as retribution for negative ratings on other users’ content. Remember that the Internet is an inherently pseudonymous environment. That is, even if you have to create a username to use a site, you can still create as many usernames as you want and they don’t necessarily have to have any connection to your real identity. If you want to tank a site or skew a vote, just create a thousand or so accounts (often called “sockpuppets”) and have them all paraphrase your original opinion. If you are careful to change your tone and word choice a bit, it’s very difficult to identify this kind of abuse.

It seems to me that the real culprits are the bullies who 1) abused the victim and then 2) posted the video. Google appears to have been a good corporate citizen, acting quickly and responsibly once notified of a problem by the proper authorities. Attempting to require Google or any other host to actively police every bit of content on their site would kill the very idea of user-generated content. YouTube, Twitter, Facebook, MySpace, Wikipedia, … all would be run out of business by this social policy. And we would all be much poorer as a result.

I hope this case gets overturned on appeal. It’s hard to predict, though. European law is far less deferential to the idea of free speech than we are used to in the US. They also have not been very successful at grappling with the implications of applying local standards to global operations. If you expect others to kowtow to your local foibles, you have to be equally ready to defer to all of theirs – a standard that very few communities will tolerate in practice.

As a closing thought, I can’t help wondering if this court case was a smoke-screen. It is suspicious that this case comes right as Google is being sued by the state-run media companies for alleged tolerance of copyright violations on the same site. I feel for the kid who was being bullied but this smells to me more of political grandstanding and strong-arm negotiations than it does of a legitimate privacy case.

I don’t know what’s happening today but suddenly there are multiple stories about airport security “breaches” that aren’t and, more worrying, massive over-reactions on the part of the authorities.

In the first story, a lovesick schmuck walked in the exit path and ducked under a rope at Newark Int’l Airport in order to give his significant other a hug before she got on her plane. The guard who should have prevented this was not at his post. TSA isn’t saying why. They are, however, trying to find the man who gave the hug and threatening criminal charges.

Admittedly, the breach resulted in a huge disruption, not only of air traffic at Newark but also cascading throughout the world as connecting flights were delayed. This was an expensive mistake. But it’s not the fault of the man who jumped the rope. The disruption is directly attributable to the pointless security theater practiced by the TSA. These threats to press charges are a transparent attempt to deflect attention from the fact that their security protocols are expensive, intrusive and, worst of all, inherently ineffective. It might be different if we were actually getting some increased security in exchange for our sacrificed civil liberties but this is just pointless.

The second story is an internal test gone wrong. Slovakian security experts were testing the effectiveness of the bomb-sniffing dogs. To make the test as realistic as possible, they snuck some high explosive into a passenger’s bags after check-in but before the bags went onto the plane. There was no detonator or other means to set it off, just the raw material. The dog successfully found the explosive but the handler apparently got distracted and forgot to take it out before the bags were loaded. The mistake wasn’t found until the plane was in the air toward Ireland. They radioed the pilot, who decided that there was no risk (no detonator, remember?). They also notified the folks at Dublin Airport.

That didn’t stop the Irish security from arresting the innocent man whose bags were used in the test. He was later released (we hope with some kind of apology). The Irish government has focused not on their overreaction but on the “riskiness” of the test, calling it “unprecedented”. Realistic tests are not only accepted but are best practice. Do you really want to train your dog using only fake materials? How will you know whether she’s actually reacting to the right triggers? An explosive-sniffing dog that only reacts to Play-Doh (which looks and feels like C4 and might even smell like it to a human) won’t do any of us much good. Despite the Irish government’s spokesperson’s claims, tests with real materials are normal. Again, deflecting.

The third story is a domestic traveler who wanted to bring home some honey. Knowing that there are new restrictions, he called TSA who confirmed that honey, like other foodstuffs, can be checked in your baggage (though it may not currently be taken as carry-on). TSA claims that the plastic bottles of honey tested positive for TNT and TATP and that two of their screeners had to be “rushed to the hospital” after opening the bottles. Subsequent tests showed no explosives – the two screeners are now being described as “just nervous”. That didn’t stop TSA from yanking the victim off the plane and disrupting travel for hours. All of it pointless, though at least this time TSA is taking a little bit of ownership for their mistake.

NPR ran a report a few days ago talking about the inherent difficulties of looking for bombs instead of looking for terrorists. On any given flight, there are only about a hundred suspects. There are, however, literally tens of thousands of hiding locations for bombs. And new security protocols always address the last threat, never the next threat. Terrorists adapt. Their tactics are not static. Make us take off our shoes – the explosives go in the coffee cup. Ban all liquids – try the underwear.

Next up, carry the explosives in a body cavity. Actually, that’s not even novel – it’s already been used in an Al Qaeda assassination attempt against one of the Saudi princes. And all those fancy whole-body scanners can’t do a thing to stop it.

As a society, we keep hoping that by sacrificing “just this one more” bit of our personal dignity and liberty, we will finally be safe. That’s not and never will be true. The recent failures highlight not tactical failures in the implementation of our security but a wholesale failure in the underlying security strategy. It’s time to rewrite our approach from the ground up.

Two interesting privacy positions came out today, one from the Ohio Supreme Court and one from the Australian Ministry of Communication.

In the Ohio case, the Supreme Court ruled that the police need a warrant to search the contents of your phone. The case comes from a drug bust. From the available evidence, the guy was guilty as sin. Unfortunately, when the police arrested him, they confiscated his phone and then, without either a warrant or his consent, searched it. The trial court allowed the evidence from the warrantless search, citing a 2007 federal court decision that considered a cell phone similar to a “closed container”. (The closed container rule is what lets the police look in your pockets when they arrest you.) For physical items, the closed container rule makes some sense – you need to be sure your prisoner is not still in possession of something that could be used as a weapon. And if you happen to see other evidence while checking for physical threats, at least you had a reasonable justification to be looking.

Now, you could argue that a phone is an “information container”. The trial court did and an appeals court agreed. And so did three of the seven Ohio Supreme Court justices. But four of the justices were unable to make that stretch and I agree with them. A phone or a hard drive may be an information container but the information within it can’t be used as an immediate weapon to threaten the safety of the arresting officers. The justification for a warrantless search is missing. There is no immediacy. So does this mean we have to let drug dealers go free? No, it just means the police need to talk to a judge before they search the phone. They need a warrant, just like they do for almost all other searches. I think this ruling is in keeping with the privacy expectations of most of us.

There is one caveat in the Supreme Court’s ruling – they can search the content of your phone if they believe their safety is in danger. I am at a loss to think of a scenario where a phone would constitute a danger but expect some pretty specious arguments. Overall though, this was a clear win for privacy.

The story from Australia is a lot less promising. The Australian Communications Minister announced today that the government will impose mandatory internet filtering to block “obscene and crime-related websites”. Publishing that content is already illegal in Australia, but the government has no ability to control it when a citizen accesses the content from an overseas server.

If the filter is implemented, it would be the strictest among the world’s democracies. It would put Australia in the ranks with Burma, China, Iran, Syria and North Korea. Unfortunately, the Minister has also already conceded that the filter will be ineffective, despite the success of a recent technological test. Much of the information that he proposes to block is available via peer-to-peer and chat sites, neither of which would be affected by the domain name-based filters which are being proposed. The filters also inevitably block some proportion of legitimate content. The result would be a sweeping grant of power to create a secret blacklist to little or no obvious gain. Electronic Frontiers Australia, a privacy rights group, has challenged the government’s plans, saying “We’re yet to hear a sensible explanation of what this policy is for, who it will help, and why it is worth spending so much taxpayers’ money on.”

In both these cases, it’s easy to empathize with the “tough on crime” position. Drug dealers are evil and obscenity is bad. But the erosion of privacy and other personal liberties is far worse, no matter how well-intentioned. I am heartened that the Ohio Supreme Court found the right decision even though it took an ugly case to bring it to light. I hope that the Australians find their way as well.

CNN recently ran an excellent article asking this question. The article included five case studies on privacy issues being raised by all our new technology. The connecting question was whether and how our old privacy laws apply to this new environment.

To me, the answer is simple. Yes, you are responsible for anything you write, whether you post it on Twitter, a personal blog or by regular mail. If your words would be libelous when published in the newspaper, they are equally libelous published online. (Of course, speaking the truth is the best defense against accusations of libel.)

The problem in my opinion is that being online gives some people an illusion of anonymity. (And, yes, it is an illusion – more on that in future posts.) This illusion encourages some to say things that they would never say in person. This is unacceptable to me. If you have something to say, stand up and be proud. Take all the credit – and all the blame – that your words deserve. Stand behind your words, whether you post them on Facebook or shout them from a soapbox in the village square.

In fairness, there are a few exceptions to that rule. Political dissent can be quite dangerous in some parts of the world. I am lucky enough to live in a country that explicitly protects political speech. Many in this world are not so blessed. True anonymity has a place in that arena and should be protected wherever and however possible. But short of the level of physical danger, you are responsible for what you say and should not expect otherwise.

Most other privacy “conundrums” are equally easy to solve if you fairly apply the old principles to the new environment. The differences are of degree and speed, not in the fundamental principles.

Last week we talked about securely destroying paper-based information. This week, we’ll touch on the electronic.

As we’ve said often before, electronic files don’t really go away when you hit the delete button. In many instances, they can be recovered, often with frightening ease. In a study conducted last year by Kessler Int’l, 40% of the hard drives purchased on eBay contained sensitive or private information, ranging from corporate financial data to web-browsing histories and personal pictures. And while a small proportion required forensic analysis to recover, most of it was easily visible to any casual user.

Here’s what happens when you “delete” a file in Windows.

  1. Since Windows 95, deletion merely moves the file into the Recycle Bin. The file is not deleted and can be recovered by simply opening the Recycle Bin, finding the file and clicking Restore.
  2. When you empty your Recycle Bin, the file is still not deleted. Windows merely erases the tiny pointer that told the computer where on the hard drive the file is located. That makes the file invisible to the operating system but it’s still on the disk. It will eventually get overwritten if/when the computer needs to reuse that space, but it’s completely random when or even whether that overwrite will happen. There are any number of utilities which can search for and recover files in this state, including many that can recover partial files.
    Okay, it’s actually a little bit more complicated than that since, for example, files on your flash drive go straight to step 2 and the Recycle Bin will automatically age files off based on size but the general principle remains – files aren’t really gone just because you hit the delete button.

So how do you make files really go away when you’re done with them?

  • If you are done with the computer, the simplest and most secure way to be sure that your data is safe is to pull the drive, take it into the parking lot and hit it several times with a big hammer. It’s easy, it’s perfectly secure and (guilty pleasure alert) it’s kind of fun. The downside is that you won’t get as much when you donate or resell the shell afterward.
  • To wipe all your data without physically destroying the drive, you can reformat the disk. Open a command prompt (click the Windows Start button, select Run, and type “cmd”), then type “format c:” and hit Enter. Note: This will not only kill the data but will also wipe the operating system and all your programs. (It’s also a good way to kill really persistent viruses.) Windows won’t format the drive it’s currently running from, so to wipe the system drive you’ll need to boot from an installation disc or other media first. Be sure you’re running a full format, not merely the “Quick Format”. Quick Format merely rebuilds the file index mentioned in 2 above, and on versions before Vista even a full format doesn’t actually overwrite your data, so follow up with a wipe utility to be safe.
  • If you’re feeling truly paranoid, you can download any number of eraser or “disk sanitizer” programs that perform DoD grade wipes and overwrites. These will not only delete the data but will overwrite it multiple times, either with all 1s, all 0s, random data or some combination. Good programs are available on the internet for free.
    A few years ago, these were important because a really good forensic expert with an electron microscope could look for small inconsistencies in the drive and recover even overwritten data. Nowadays, that’s not an issue. The tolerances for hard-drive heads have become so tight that there are no inconsistencies to exploit. According to recent research, even a single overwrite is sufficient now.
  • CDs, DVDs and older floppies can be run through the disk-slot of a home shredder. (Shredders with that slot are a little heavier-duty and can handle the resistance. If you don’t have one, look for that feature when it’s time to replace the shredder.)

If you only want to eliminate some files without wiping the entire drive, you’ll need specialized software. I downloaded a program called Eraser but I have to admit that other than a few tests I haven’t used it. I figure that whole-disk encryption is good enough to protect my information until it’s time to get rid of the computer – and then I want to get out the sledgehammer and have some fun.
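For the curious, the core of what a file-level eraser does can be sketched in a few lines of Python. This is my own illustration of the overwrite-then-delete idea, not Eraser’s actual code, and it comes with the usual caveat: on journaling filesystems and especially on SSDs (which remap blocks for wear leveling), an in-place overwrite may never touch the original sectors. That’s exactly why the serious tools work at the whole-device level.

```python
import os

def wipe_file(path, passes=3):
    """Overwrite a file's contents in place, then delete it.

    Illustration only -- journaling filesystems and SSD wear leveling
    can leave copies of the old data elsewhere on the device.
    """
    length = os.path.getsize(path)
    with open(path, "r+b") as f:
        for _ in range(passes):
            f.seek(0)
            f.write(os.urandom(length))  # a pass of random data
            f.flush()
            os.fsync(f.fileno())         # push it through the OS cache to disk
        f.seek(0)
        f.write(b"\x00" * length)        # finish with an all-zeros pass
        f.flush()
        os.fsync(f.fileno())
    os.remove(path)                      # only now unlink the (overwritten) file
```

Compare this with ordinary deletion: the plain delete in step 2 above only removes the pointer, while this overwrites the actual data blocks first.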