Back in 2009, Google CEO and now Executive Chairman Eric Schmidt, already under heavy fire for his company’s strategy to collect, store, and mine every shred of personal data out there, said on CNBC, “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.”
It makes sense. Why worry about surveillance if you haven’t done anything wrong? This, in his unvarnished manner, is what he thinks about privacy. There is none. You don’t need it. You don’t want it. It’s not good for you. It just makes you appear guilty. It’s the philosophy under which police states operate.
Google has no compunction about reading emails of its Gmail users, browsing through user details in its social network services, tracking people throughout their searches, purchases, and reading patterns. It draws conclusions and combines it all with other data into a beautiful whole. For people with Android mobile devices, there is little Google doesn’t know.
But suddenly, Schmidt got all riled up about the privacy issues of devices that Google doesn’t control through its software and that can access and record compromising details of life: civilian drones, including the toy-like “everyman” minidrones, such as multi-rotor helicopters. He wants them banned outright. And if they can’t be banned, he wants them regulated. To make his point, he dragged out an unfortunate example of a neighbor with an axe to grind:
“How would you feel if your neighbor went over and bought a commercial observation drone that they can launch from their back yard,” he said. “It just flies over your house all day. How would you feel about it?” He didn’t like that prospect. Not at all. “It’s got to be regulated,” he said, he whose company fights regulations wherever it encounters them. “It’s one thing for governments, who have some legitimacy in what they’re doing, but have other people doing it … It’s not going to happen.”
An unfortunate example, because an insidious and at once funny Google moment of exactly this type erupted in a village in France. A man was urinating in his yard. We know he did because just then a Street View car drove by. Its camera, mounted on a rooftop post, could see over the closed gate and the perimeter enclosure and caught the hapless dude in flagrante delicto.
He didn’t know it at the time. And he didn’t know it when the scene appeared on Street View. His neighbors discovered the photo of him in his yard, relieving himself, face slightly blurred. It was only after he’d become the laughingstock of his village that he learned about it. Sure, in Schmidt’s surveillance-state words, he “shouldn’t be doing it in the first place.”
So the difference between a Street View car and a drone is one of degree. One can only capture what’s visible from its elevated equipment; the other can fly. One is an essential part of Google’s business model; the other should be banned? Why his sudden handwringing about privacy when it comes to drones? Especially since Google is plowing a fortune into cars that drive themselves – road-bound drones, so to speak. The next step would be devices that fly. The mapping and control software would by then be on the shelf.
In a couple of years, the FAA will take up the delicate matter of drones used by civilians and companies. Perhaps by then, Google Ventures will fund a company that is developing the latest and greatest unmanned multi-rotor helicopters the size of a briefcase to replace the awkward Street View cars. They’d take pictures of the insides of homes, to show what a neighborhood is really like, beyond the facades. Users would love it. Software will blur the faces of the people inside to guard their “privacy,” very helpful, as the hapless dude in France found out. And then Google will oppose vigorously any regulation that doesn’t suit it. Because Airborne Street View would be the next leap forward for Google – and Schmidt must already be fantasizing about it.
Here are some tricks I use to maintain privacy and security on the internet – written in my own manner so that even I can follow the instructions: Windows 7, Internet Explorer, Silverlight, Flash Player, & Java Privacy Settings and Cleanup.
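For the Flash Player part of that cleanup, the core trick is deleting the “Local Shared Objects” (Flash cookies) that Flash stores outside the browser’s own cookie jar, so clearing Internet Explorer’s history leaves them behind. Here is a minimal Python sketch, assuming the default Flash Player storage paths under a Windows 7 user’s APPDATA folder; the function names are my own:

```python
import os
import shutil

# Default Windows 7 locations of Flash Player "Local Shared Objects"
# (Flash cookies). These are the standard install paths; a customized
# setup may store them elsewhere.
LSO_SUBDIRS = [
    r"Macromedia\Flash Player\#SharedObjects",
    r"Macromedia\Flash Player\macromedia.com\support\flashplayer\sys",
]

def flash_cookie_dirs(appdata=None):
    """Return the candidate Flash-cookie directories under APPDATA."""
    appdata = appdata or os.environ.get("APPDATA", "")
    return [os.path.join(appdata, sub) for sub in LSO_SUBDIRS]

def clear_flash_cookies(dry_run=True):
    """List the Flash-cookie folders that exist; delete them unless dry_run."""
    found = []
    for d in flash_cookie_dirs():
        if os.path.isdir(d):
            if not dry_run:
                # Remove the whole folder; Flash recreates it as needed.
                shutil.rmtree(d, ignore_errors=True)
            found.append(d)
    return found
```

Run it with `dry_run=True` first to see what would be removed; Flash’s own Settings Manager offers the same cleanup through its UI, this just makes it scriptable.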
Yesterday the drone regulation bill in the Washington state legislature died, having failed to meet the cutoff date for moving to the House floor. Our lobbyist there thought the bill would have passed both houses had the Democratic leadership allowed it to reach the floor – but the leadership did not. Boeing lobbied against the bill, as did law enforcement.
One of the arguments presented by opponents, our Washington state lobbyist Shankar Narayan reports, was the claim that no regulations are needed for drones because we ought to let the courts work out the privacy issues surrounding drones and deal with any abuses that arise. I have also heard spokespeople for the drone industry association, the AUVSI, making this argument lately. It seems to be emerging as a primary argument of drone-legislation opponents.
This is a weak argument. Let me briefly give five reasons why:
- There is no reason to wait for abuses to happen when they are easily foreseeable. When you put an enormously powerful surveillance technology in the hands of the police and do not place any restrictions on its use, it will be abused, sooner or later, in ways illegal (i.e. by bad apples) and legal (i.e. through officially approved policies that nonetheless violate our Constitution and/or values). Why wait, when we can prevent them before they take place and spare their victims the grief?
- The legal system has always been very slow to adapt to new technology. For example, it took the Supreme Court 40 years to apply the Fourth Amendment to telephone calls. At first the court found in a 1928 decision that because telephone surveillance did not require entering the home, the conversations that travel over telephone wires are not protected. It was not until 1967 that this literal-minded hairsplitting about “constitutionally protected areas” was overturned (with the court declaring that the Constitution “protects people, not places”). Today, technology is moving far faster than it did in the telephone era—but the gears of justice turn just as slowly as they ever have (and maybe slower).
- There are many uncertainties about how our Constitution will be applied by the courts to aerial surveillance. Just as the new technology of the telephone broke the Supreme Court’s older categories of understanding, so too will drones with all their new capabilities bring up new situations that will not fit neatly within existing jurisprudential categories of analysis. For example, how will the courts view the use of drones for routine location tracking? The Supreme Court started to grapple with such questions in its recent decision in the Jones GPS case, but it is far from clear what the ultimate resolution will be. The Supreme Court has ruled before that the Fourth Amendment provides no protection from aerial surveillance, even in one’s backyard surrounded by a high fence, and while the new factors that drones bring to the equation could shift that judgment, we cannot be certain. Legislators should not sit around waiting for cases to come before the courts; they should act to preserve our values now.
- Legislatures often set rules even when the Constitution would seem to cover something. To take just one example: after the Supreme Court issued that 1967 ruling that a warrant was needed to tap someone’s phone, Congress went on to enact detailed standards the government had to follow before it could do so. What it did not do was throw its hands up and say “the court has ruled, if there are any further abuses we can let the courts take care of them.”
- Our courts often defer to the judgments of elected bodies. While the courts’ role is to step in and protect fundamental rights when they are threatened by the majority, they normally show great deference toward the judgments of elected representatives of the people. And for good reason—we live in a democracy, and unless fundamental rights are at stake decisions should be made by our democratic representatives. A legislature acting to protect fundamental rights such as privacy does not threaten such rights, and there is no reason why elected representatives shouldn’t act to protect our fundamental values if they feel that the citizens in their districts want them to.
Let’s hope that state legislators in other states don’t fall for this line of argument.
The Supreme Court recently heard oral argument in Maryland v. King, a case considering the constitutionality of warrantless DNA collection from arrestees. We’ve long warned about the privacy problems with the rise of cheap, easy and fast blanket DNA collection, and filed an amicus brief with the Court urging it to hold the government can only obtain this sensitive genetic material with a search warrant. While it can be fruitless trying to read the tea leaves of oral argument, one specific idea — that technological advances making DNA analysis faster means warrantless collection may be OK — should leave you worried about the fate of privacy going forward in the digital age.
One of the main disagreements surrounding the issue of DNA collection is whether the state is collecting DNA from arrestees for immediate identification — to figure out if they’ve arrested the right person and learn who that person is for purposes of making a bail determination — or for past and future investigation — to solve cold cases and to store DNA for future searches. The state has long claimed it uses DNA for both, while we’ve argued the government simply isn’t able to use DNA collection for immediate identification purposes since there’s currently a delay in analyzing DNA ranging from several days up to a few months. But with the rise of rapid DNA analyzers, which can analyze DNA in 90 minutes, law enforcement is champing at the bit to purchase and install these devices at police stations across the country. When the lawyer challenging the blanket DNA collection argued that using DNA for immediate identification was simply not feasible because of the lengthy delays in DNA analysis, Chief Justice Roberts interrupted to note (PDF):
Now, your brief says, well, the only interest here is the law enforcement interest. And I found that persuasive because of the concern that it’s going to take months to get the DNA back anyway, so they are going to have to release him or not before they know it. But if we are in a position where it now takes 90 minutes or will soon take 90 minutes to get the information back, I think that’s entirely different…
Other members of the court echoed this idea, hinting that if DNA analysis were done faster, then there could be a legitimate identification — as opposed to investigative — need for the practice. And if that were the case, then DNA collection would be no different than fingerprinting, and the police could swab and collect DNA without a search warrant. This would be a dangerous Fourth Amendment precedent.
The reasonableness of a search under the Fourth Amendment has always depended on whether the search is reasonably related in scope to the circumstances that justify the search in the first place. But that determination shouldn’t hinge on how long it takes to do the search, but rather on what the search reveals. And with DNA searches, an enormous amount of sensitive information is being revealed to the government: a person’s entire genome. Ignoring the breadth of this intrusion by focusing on the ease of collection — implicitly believing the easier it is to intrude into a private place, the less protected it is — elevates form over substance to the detriment of the right of privacy enshrined in the Fourth Amendment.
This dangerous thinking extends beyond DNA collection. We’ve already warned about the problems with warrantless home video surveillance and stingrays, or fake cell phone towers which the government has been very secretive about. As technological advances like these allow the government to easily collect and catalog greater amounts of information, courts run the risk of allowing broader and more intrusive searches to pass Fourth Amendment scrutiny simply because of the possibility of exposure. Instead, courts should be focusing on the actual intrusion and people’s expectation that private information will not be exposed, regardless of how technological advances can make government access easier or faster.
The fact that the government can now do something it couldn’t do before doesn’t make it constitutional. In fact, it should be the opposite. As it becomes easier for the government to seize and analyze our data, institutional checks — like a search warrant — on the government’s power are necessary to protect privacy before it becomes a casualty of technological advances.
Google is embroiled in its biggest privacy battle yet in the UK over reportedly tracking users’ online habits. At least 10 UK citizens have begun legal action, with dozens more lining up. According to media estimates, up to 10 million Britons could join in.
Google is accused of evading security settings on Apple’s devices and Safari’s web browser in order to keep tabs on people’s online preferences.
This is the first group claim over privacy issues that the tech giant is facing in the UK, Dan Tench, the lawyer behind the action, told The Guardian.
“It is particularly concerning how Google circumvented security settings to snoop on its users. One of the things about Google is that it is so ubiquitous in our lives and if that’s its approach, then it’s quite concerning,” Tench said.
On top of that there are plans in the works to launch an umbrella privacy action suit, which could potentially bring in millions of people in the UK.
Google executives reportedly received a letter from two users prior to the launch of legal proceedings.
The tech giant is being sued for breaches of privacy and confidence, computer misuse and trespass, and breach of the Data Protection Act 1998.
Claimants want Google to reveal how much data was secretly collected, for how long, and how the information is being used.
The point of the claim is not to make money off Google, but to send a message, argued a privacy campaigner working on the legal claims, Alexander Hanff.
“This lawsuit is about sending a very clear message to corporations that circumventing privacy controls will result in significant consequences. The lawsuit has the potential of costing Google tens of millions, perhaps even breaking 100 million pounds [roughly US$157 million] in damages given the potential number of claimants – making it the biggest group action ever launched in the UK,” Hanff says.
Some users responded by creating a Facebook group titled ‘Safari Users against Google’s Secret Tracking’ to gather support for the new claims against Google. The page promises to hold Google responsible for any privacy breaches.
The group was set up “to provide information for anyone who used the Safari internet browser between September 2011 and February 2012, and who was illegally tracked by Google,” reads the group’s statement. “Any users in the UK may have a claim against Google for this breach of their privacy. Other users, who have set up this group, are taking action against Google to hold them to account.”
The page was created only a day ago, but it has already garnered over one hundred ‘likes’.
One Facebook user, Vitor Costa, commented on the secrecy aspect behind Google’s privacy breaches, questioning “what they are doing with this information.”
The legal action follows a US ruling that approved a $22.5 million fine to penalize Google for a privacy breach between summer 2011 and spring 2012. The fine resulted after allegations that Google secretly kept tabs on millions of Safari web users, while leading them to believe that their online activities could not be traced as long as they did not change the browser’s privacy settings.
The FTC came to the conclusion that Google’s stealth tracking (which allowed the company to bypass Safari’s settings) contradicted its own privacy assurances.
Google is no stranger to accusations of privacy violations. In the past it has faced many allegations, such as keeping tabs on Wi-Fi users with its Street View cars and the privacy failures of Google’s previous social network, Google Buzz.
European Union lawmakers have also continued to pressure Google to boost personal security controls and limit the collection of data without users’ consent.
But new Google services such as the Conversions API, which merges offline consumer info with online intelligence, allowing advertisers to target users based on what they do at the keyboard and at the mall, only raise more privacy questions.
Violating the Privacy Rights of Students
In 2006, Marlyn, a mother who lives in Gwinnett County with her children, was surprised to hear that her son Kyle, a senior at Brookwood High School, had taken the ASVAB test. The ASVAB, or Armed Services Vocational Aptitude Battery, is the military’s entrance exam, given to recruits to determine their aptitude for military occupations. Marlyn does not recall consenting to her son taking the test or to the results being sent to military recruiters. Nor did her son know that the results would be sent to recruiters. Kyle was subsequently contacted by recruiters, and Marlyn had a tough time getting them to stop once Kyle had made a college selection.
Marlyn and Kyle are certainly not alone. In fact, Georgia’s record in terms of protecting the privacy of students who take the ASVAB test has gotten even worse over the years.
With the start of the school year, the ACLU Foundation of Georgia sent a letter to Georgia’s State School Superintendent, Dr. Barge, asking for protection of privacy rights of Georgia’s high school students who take the ASVAB. Even without a student’s or parent’s consent, the ASVAB test may be used to send highly sensitive information about a student to the military for purposes of recruitment. After the administration of the ASVAB test, military representatives may directly communicate with youth to suggest military career paths, based on the individualized profiles ascertained from their test data.
U.S. Military Entrance Processing Command (USMEPCOM) Regulation 601-4 identifies the options schools may choose regarding the administration and release of their students’ ASVAB results. These options include Option 8, which provides high schools and their students with the students’ test results, but does not entail automatically sending the results to military recruiters.
In its letter, the ACLU of Georgia asks that Georgia adopt a statewide policy requiring schools to protect such information – specifically, a policy requiring school officials to select Option 8.
States such as Maryland and Hawaii and cities such as New York City have required that their public schools respect student privacy by enacting laws and policies in which schools must choose Option 8 when the ASVAB test is administered.
According to documents obtained under the Freedom of Information Act by the National Coalition to Protect Student Privacy, not a single high school in Georgia selected Option 8 during the 2010-2011 school year, while the ASVAB test results of more than 26,000 students were marked under Options 1-6, meaning test results and student information may be released to recruiters without prior consent. The data for 2011, covering more than 29,000 students, indicates the same.
If school officials do not select a release option, the school’s Educational Support Specialist will select Option 1, which entails automatically releasing the information to military recruiters. In 2009-2010, 83.9% of the children tested in Georgia were tested under Option 1. This percentage increased to 87.7% in 2010-2011.
This lack of protection for students’ privacy also contravenes the obligations of the U.S. government and the State of Georgia under international law.
The U.S. ratified the Optional Protocol to the Convention on the Rights of the Child on the involvement of children in armed conflict in 2002. The Protocol is therefore binding on the U.S. government and state and local government entities and agents, including Georgia public schools.
As part of the treaty mechanism, the U.S. has to submit a report every four years to the Committee on the Rights of the Child (CRC), the United Nations body that monitors compliance with the Optional Protocol.
The U.S. government’s latest report to the CRC will be reviewed by the Committee in January 2013. The list of issues to be discussed during this review includes the use of the ASVAB test in schools, including the age of children who were given the test and whether parents are able to prevent their children from taking it.
We hope that this will be an opportunity for the U.S. government and Georgia schools to provide needed transparency and to be held accountable to their international obligations as well as obligations to protect the privacy of Georgia students.
Azadeh Shahshahani is the Director of the National Security/Immigrants’ Rights Project at the ACLU of Georgia. She is the president-elect of the National Lawyers Guild.
MIT’s Technology Review has an article today on research that is underway to make extremely sensitive and rapid molecular sensors—aka “artificial noses”—that are so thin they could even be integrated into paper or textiles.
The use of particle detectors and chemical sensors to identify tiny amounts of chemicals or odors is an area that we’ve been keeping an eye on for a while—something we file under “possible future privacy-invasive technologies.” As Technology Review describes it, this technology
rapidly detects volatile organic compounds (VOCs)—gases in our surrounding environment that are produced by a wide variety of sources, everything from household paints to a person’s own skin. Many do not have an odor, but an electronic sensor could alert a user to the presence of harmful chemicals or perhaps indicate that something is off-kilter with a user’s health.
The main context in which Americans have encountered chemical sensors so far is bomb detection — mainly at the airport, when they or their belongings are swabbed and tested for traces of explosives. A “puffer machine” that blows air on passengers standing inside a booth was also tested for a while but found impractical for mass deployment. We’ve never had a problem with particle detectors; as long as they are tuned only to look for explosives, they do not raise substantial privacy concerns, as explosives are not something people normally have. (We have pointed out that there can be questions about their effectiveness, and the importance of treating people who “alarm” properly, given that false positives are probable.)
But such deployments may be only the beginning. Here are some other chemical detection efforts that we have seen already:
• DHS has been working on a scheme to place chemical sensors in cell phones so that every American becomes a roaming chemical sensor able to alert the authorities to the release of chemical toxins resulting from accidents or terrorist plots.
• Companies are selling sensitive drug-sniffing products that go way beyond breathalyzers, such as contactless hand-held scanners that claim to be able to detect trace amounts of drugs on virtually all surfaces, including skin and clothing.
• DHS is also researching the use of body odor as a unique identifier or “odor fingerprint.” In theory, if that panned out, cheap and pervasive sensors could identify you everywhere you go.
• As part of the same project, DHS is also researching their use “as an indicator of deception”—in short, they are pursuing that perennial chimera, a lie detector. While lie detection is a fool’s errand, it’s possible that odor detectors could reveal very crude facts about people’s emotional state.
• Researchers are developing techniques for detecting medical conditions including cancer, asthma, and many other diseases by detecting “trace amounts of distinctive biomarkers in their breath.” (Sounds great in the hands of your doctor; used secretly during a job interview or bank loan application, not so much.)
• Under a pilot program spearheaded by the White House’s “drug czar” in 2006, the government tested sewage from treatment plants in the Washington, D.C. area to measure the amount of trace cocaine that was present. This was done in an effort to estimate the level of drug use in those communities. It did not reveal anything about specific individuals.
The breadth of activity in this area makes it clear that if this technology continues to advance rapidly and becomes cheap and widespread as so many other technologies have in recent years, we will be facing an entirely new set of privacy issues. A whole new range of facts about ourselves (health conditions; emotional state; drug, alcohol and pharmaceutical use; our identity) could become open to unwelcome scrutiny by others (government, employers, insurance companies, nosy neighbors).
Sometimes such technologies get scary very fast; other times they don’t turn out to be a problem. We’ll be watching closely.
Congress is poised to give final passage to legislation that would give a big boost to domestic unmanned aerial surveillance — aka “drones.”
As we explained in our recent report, drone technology is advancing by leaps and bounds, and there is a lot of pent-up demand for drones within the law enforcement community. But domestic deployment of unmanned aircraft for surveillance purposes has largely been blocked so far by the Federal Aviation Administration (FAA), which is rightly concerned about the safety effects of filling our skies with flying robots (which crash significantly more often than manned aircraft).
As we also explained in our report, the FAA is under pressure to loosen the reins and permit broader deployment of drones by government agencies.
One result of that pressure is this legislation (H.R. 658 — see conference report for more details), which authorizes appropriations for the FAA through fiscal 2014. Unfortunately, nothing in the bill would address the very serious privacy issues raised by drone aircraft. This bill would push the nation willy-nilly toward an era of aerial surveillance without any steps to protect the traditional privacy that Americans have always enjoyed and expected.
Congress — and to the extent possible, the FAA — need to impose some rules (such as those we proposed in our report) to protect Americans’ privacy from the inevitable invasions that this technology will otherwise lead to. We don’t want to wonder, every time we step out our front door, whether some eye in the sky is watching our every move.
On Friday, the House gave final passage to the legislation. House approval came on a quite partisan vote, with most Republicans in favor and most Democrats opposing. The Senate is scheduled to take up the bill later today.
Here are details on what the bill would do in terms of drones:
- Require the FAA to simplify and speed up the process by which it issues permission to government agencies to operate drones. It must do this within 90 days. The FAA has already been working on a set of proposed regulations to loosen the rules around drones, reportedly set for release in the spring of 2012.
- Require the FAA to allow “a government public safety agency” to operate any drone weighing 4.4 pounds or less as long as certain conditions are met (within line of sight, during the day, below 400 feet in altitude, and only in safe categories of airspace).
- Require the FAA to establish a pilot project within six months to create six test zones for integrating drones “into the national airspace system.”
- Require the FAA to create a comprehensive plan “to safely accelerate the integration of civil unmanned aircraft systems into the national airspace system.” “Civil” drones means those operated by the private sector; currently it is all but impossible for any non-government entity, except for hobbyists, to get permission to fly drones (for-profit use of drones is banned). Industry groups and their congressional supporters see this as a potential area for growth. Congress specifies that the plan must provide for the integration of drones into the national airspace system “as soon as practicable, but not later than September 30, 2015.” The FAA has nine months to create the plan. The FAA is also required to create a “5-year roadmap for the introduction” of civil drones into the national airspace.
- Require the FAA to publish a final rule within 18 months after the comprehensive plan is submitted, “that will allow” civil operation of small (under 55 pounds) drones in the national airspace, and a proposed rule for carrying out the comprehensive plan.
The bottom line is: domestic drones are potentially extremely powerful surveillance tools, and that power — like all government power — needs to be subject to checks and balances. We hope that Congress will carefully consider the privacy implications that this technology can lead to.
Throughout history, there have been a number of reasons why individuals have taken to writing or producing art under a pseudonym. In the 18th century, James Madison, Alexander Hamilton, and John Jay took on the pseudonym Publius to publish The Federalist Papers. In 19th century England, pseudonyms allowed women–like the Brontë sisters, who initially published under Currer, Ellis, and Acton Bell–to be taken seriously as writers.
Today, pseudonyms continue to serve a range of individuals, and for a variety of reasons. At EFF, we view anonymity as both a matter of free speech and privacy, but in light of International Privacy Day, January 28, this piece will focus mainly on the latter, looking at the ways in which the right to anonymity–or pseudonymity–is truly a matter of privacy.
Privacy from employers
Human beings are complex creatures with multiple interests. As such, many professionals use pseudonyms online to keep their employment separate from their personal lives. One example is the Guardian columnist GrrlScientist who, upon discovering her Google+ account had been deleted for violating Google’s “common name” policy, penned a piece explaining her need for privacy. Another example is prominent Moroccan blogger Hisham Khribchi, who has explained his use of a pseudonym, stating:
When I first started blogging I wanted my identity to remain secret because I didn’t want my online activity to interfere with my professional life. I wanted to keep both as separate as possible. I also wanted to use a fake name because I wrote about politics and I was critical of my own government. A pseudonym would shield me and my family from personal attacks. I wanted to have a comfortable space to express myself freely without having to worry about the police when I visit my family back in Morocco.
Though Khribchi’s reasoning is two-fold, his primary concern–even stronger than his need for protection from his government–was keeping his online life separate from his employment.
Privacy from the political scene
In 2008, an Alaskan blogger known as “Alaska Muckraker” (or AKM) rose to fame for her vocal criticism of fellow Alaskan and then-McCain-running-mate Sarah Palin. Later, after inveighing against a rude email sent to constituents by Alaska State Representative Mike Doogan, AKM was outed–by Doogan–who wrote that his “own theory about the public process is you can say what you want, as long as you are willing to stand behind it using your real name.”
AKM, a blogger decidedly committing an act of journalism, could have had any number of reasons to remain anonymous. As she later wrote:
I might be a state employee. I might not want my children to get grief at school. I might be fleeing from an ex-partner who was abusive and would rather he not know where I am. My family might not want to talk to me anymore. I might alienate my best friend. Maybe I don’t feel like having a brick thrown through my window. My spouse might work for the Palin administration. Maybe I’d just rather people not know where I live or where I work. Or none of those things may be true. None of my readers, nor Mike Doogan had any idea what my personal circumstances might be.
Though Doogan claimed that AKM gave up her right to anonymity when her blog began influencing public policy, he’s wrong. In the United States, the right to anonymity is protected by the First Amendment and must remain so, to ensure both the free expression and privacy rights of citizens.
Similarly, in 2009, Ed Whelan, a former official with the Department of Justice, outed anonymous blogger John Blevins–a professor at the South Texas College of Law–in the National Review, calling him "irresponsible" and a "coward." Blevins took the fall gracefully, later explaining why he had chosen to blog under a pseudonym. Like Khribchi's, Blevins' reasons were numerous: He feared losing tenure and legal clients, but he also feared putting the jobs of family members in the political space at risk.
Privacy from the public eye
A friend of mine–let's call him Joe–is the sibling of a famous celebrity. While he's very proud of his sibling, Joe learned early on that not everyone has his best interests at heart. Therefore, Joe devised a pseudonym to use online in order to protect his own privacy and that of his family.
In Joe’s case, the threat is very real: celebrities are regularly stalked, their houses broken into. His pseudonym keeps him feeling “normal” in his online interactions, while simultaneously protecting his sibling and the rest of his family from invasions of privacy.
Achieving anonymity online
Anonymity and pseudonymity may seem increasingly difficult to achieve online. Not only do companies like Facebook restrict your right to use a pseudonym, but even when you do think you’re anonymous, you might not be–as blogger Rosemary Port found out in 2009 after Google turned over her name in response to a court order.
While we should continue to fight for our privacy under the law, the best thing we can do as users who value our right to anonymity is to use tools like Tor. Anonymous bloggers can use Global Voices Advocacy's online guide to blogging anonymously with WordPress and Tor. And all Internet users should educate themselves about what is–and isn't–private on their online accounts and profiles.
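For readers curious about what "using Tor" means in practice: applications don't talk to the Internet directly, but hand each connection to Tor's local SOCKS5 proxy, which then routes it through an anonymizing circuit. The Python sketch below shows that handoff at the socket level, assuming a Tor daemon running on its default SOCKS port 9050 (the host, port, and function names here are illustrative, not part of any official Tor API). In real use, rely on Tor Browser or a maintained SOCKS library rather than hand-rolled code like this.

```python
import socket
import struct

# Default SOCKS port of a locally running Tor daemon (an assumption;
# the Tor Browser bundle listens on 9150 instead).
TOR_HOST, TOR_PORT = "127.0.0.1", 9050

def socks5_connect_request(host: str, port: int) -> bytes:
    """Build a SOCKS5 CONNECT request (RFC 1928) for a domain name.

    Sending the hostname itself (address type 0x03) rather than a
    pre-resolved IP lets Tor perform the DNS lookup, so the query
    never leaks to the local resolver.
    """
    name = host.encode("ascii")
    return b"\x05\x01\x00\x03" + bytes([len(name)]) + name + struct.pack(">H", port)

def open_via_tor(host: str, port: int) -> socket.socket:
    """Open a TCP connection to (host, port) routed through Tor."""
    s = socket.create_connection((TOR_HOST, TOR_PORT))
    s.sendall(b"\x05\x01\x00")             # greeting: SOCKS v5, one method, "no auth"
    if s.recv(2) != b"\x05\x00":
        s.close()
        raise OSError("Tor refused the no-auth SOCKS5 handshake")
    s.sendall(socks5_connect_request(host, port))
    reply = s.recv(10)                     # version, status, reserved, bound address
    if len(reply) < 2 or reply[1] != 0x00:
        s.close()
        raise OSError("SOCKS5 CONNECT failed")
    return s                               # traffic on this socket now goes via Tor
```

Note that this only anonymizes the transport: application-level identifiers (cookies, account logins, browser fingerprints) can still reveal who you are, which is why purpose-built tools such as Tor Browser matter.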
This January 28 marks International Privacy Day, the day on which the first legally binding international privacy treaty was opened for signature to Member States on January 28, 1981. Countries around the world are celebrating this day with their own events. This year, we are honoring the day by calling attention to recent privacy threats around the world and describing a few of the available tools that allow individuals to protect their privacy and anonymity.
Today, we are calling on governments to repeal mandatory data retention schemes. Mandatory data retention harms individuals’ anonymity, which is crucial for whistle-blowers, investigators, journalists, and for political speech. It creates huge potential for abuse and should be rejected as a serious infringement on the rights and freedoms of all individuals.
It has been six years since the highly controversial Data Retention Directive (DRD) was adopted in the European Union. Conceived in the EU and steamrolled by powerful U.S. and U.K. government lobbies, this mass-surveillance law compels EU-based Internet service providers to collect and retain traffic data revealing who communicates with whom by email, phone, and SMS, including the duration of the communication and the locations of the users. This data is often made available to law enforcement. Europeans have widely criticized the DRD, and year after year, it has inspired some of the largest-ever street protests against excessive surveillance.
The European Commission has begun mounting a defense for this highly controversial mass-surveillance scheme, though it has thus far been unable to show that the DRD is necessary or proportionate. For the DRD to be legal in the EU, any limitation to the right to privacy must be "necessary" to achieve an objective of general interest and "proportionate" to the desired aim. This requirement is important to ensure that the government does not adopt severe measures to address a problem that could otherwise be solved in a way that is less harmful to civil liberties. But the Commission has been arguing that all uses of retained data illustrate that the Directive is "valuable." This doesn't meet the legal standard. Instead, the Commission should provide evidence that in the absence of a mandatory data retention law, traffic data crucial to the investigation of "serious crime" would not have been available to law enforcement.
Despite the European Commission’s efforts to preserve the Directive as-is, a leaked letter confirms that the Commission has been scrambling to conjure evidence for the “need” of a DRD scheme in the European Union. It also underscores the fact that there is no system of oversight that would allow citizens to monitor the impact of the proposed program on their privacy rights. Perhaps the most disquieting detail that has been confirmed by the letter is that service providers have already been storing instant messages, chats, uploads, and downloads. This type of data collection falls outside the scope of the DRD. Moreover, the letter indicates that “unnamed” players seek to broaden the uses of the DRD to include prosecution of copyright infringement including “illegally downloading.” Since this is not a serious crime, this legally falls outside the scope of the DRD.
In response to this leak, EDRI stated, “The leaked document however shows that the Commission can neither prove necessity nor proportionality of the Data Retention Directive – but still wants to keep the Directive.” The leaked letter also disclosed that the EU Commission is evaluating the possibility of amending the Directive. The Commission has commissioned a study into data preservation in the EU and around the world. According to the letter, this exercise is to be completed by May 2012.
Ending Data Retention: Constitutional Challenges
Constitutional courts have begun weighing in on the legality of this mass-surveillance scheme. In a decision celebrated by privacy advocates, the Czech Constitutional Court declared in March 2011 that the Czech data retention law was unconstitutional. Earlier this month, the same Court dealt another blow to data retention by annulling part of the Criminal Procedure Code, which would have enabled law enforcement access to data stored voluntarily by operators. Most importantly, the Czech Court used compelling language in articulating the importance of the protection of traffic data. The Court stated that the collection of traffic data and communication data warranted identical legal safeguards since both have the same “intensity of interference”.
We couldn’t agree more. Sensitive data of this nature demands stronger protection, not an all-access pass. Individuals should not have to worry whether one sort of private information has less protection than another.
I believe that both decisions will help ensure that new legislation enforces the same restrictions as exist for the use of wiretaps. These include strong privacy safeguards for government access to citizens' data, the obligation to inform individuals about the use of their data, and so on.
Several other courts in EU member states have also ruled on the illegality of data retention laws. Earlier, in 2009, the Romanian Constitutional Court rejected the imposition of an ongoing, sweeping traffic data retention program. The Court rightly emphasized that mandatory data retention overturns the presumption of innocence by treating all Romanians as potential suspects. Despite this court decision, a new draft data retention bill was introduced in the Parliament, but the Senate finally rejected it at the end of 2011.
In March 2010, the German Court declared the German mandatory data retention law unconstitutional. The Court ordered the deletion of the collected data and affirmed that data retention could "cause a diffusely threatening feeling of being under observation that can diminish an unprejudiced perception of one's basic rights in many areas." The lawsuit was brought by 34,000 citizens through the initiative of AK Vorrat, the German working group against data retention.
Over in Ireland, the Court is referring to the European Court of Justice the case challenging the legality of the DRD, thanks to the complaint brought by Digital Rights Ireland. The Irish Court acknowledged the importance of defining "the legitimate legal limits of surveillance techniques used by governments", and rightly emphasized that "without sufficient legal safeguards the potential for abuse and unwarranted invasion of privacy is obvious". The courts in Cyprus and Bulgaria have also declared their mandatory data retention laws unconstitutional.
The DRD compels EU member countries to implement the Directive into national law. Fortunately, many member states have not yet done so. The Czech Republic, Germany, Greece, Romania, and Sweden have not adopted this piece of legislation, despite pressure from the European Commission to do so. In Austria, the data retention law will take effect in April 2012. AK Vorrat Austria plans to use all legal means to challenge the legality of the DRD. They have also handed over a petition to the Austrian Parliament asking the government to fight against the DRD at the EU level and to review all existing anti-terror legislation. (If you are Austrian, sign the petition today at zeichnemit.at.) In Slovakia, the NGO European Information Society Institute is opposing the Slovakian data retention implementation law.
Meanwhile, civil society groups are resisting and campaigning against this oppressive data retention law. EDRI, along with EFF and AK Vorrat, has fought to repeal the DRD in favor of targeted collection of traffic data. EDRI has previously reported that Deutsche Telekom, a German telco, illegally used telecommunications traffic and location data to spy on roughly 60 individuals including journalists, managers, and union leaders. They also reported that two major intelligence agencies in Poland used retained traffic and subscriber data to illegally disclose journalistic sources without any judicial oversight. These are only a few examples in which data retention policies have directly threatened individuals’ expression and privacy rights.
The DRD is a threat to Internet privacy and anonymity, and has been proven to violate the privacy rights of 500 million Europeans. EFF, together with EDRI, will keep fighting to repeal the DRD in favor of targeted collection of traffic data.
Mandatory Data Retention in the United States
Two bills introduced in the U.S. Congress in 2009 would have required all Internet providers and operators of WiFi access points to keep records on Internet users for at least two years to assist police investigations. Neither bill became law. Some legislators and law enforcement officials continue to argue, however, that mandatory data retention is necessary to investigate online child pornography and other Internet crimes. In January 2011, the U.S. House of Representatives Judiciary Subcommittee on Crime, Terrorism, and Homeland Security held a hearing that discussed whether Congress should pass legislation that would force ISPs and telecom providers to log Internet user traffic data. In May 2011, H.R. 1981, which would require retention of such traffic data, was introduced in the House of Representatives. This bill is still alive and continues to be a threat to the privacy and anonymity of all Americans. EFF has joined civil liberties and consumer organizations in publicly opposing H.R. 1981. Please join EFF, and help us defeat this bill before it is made law. Contact your Representative now.