A new project aimed at “countering illegal use of the Internet” is making headlines this week. The project, dubbed CleanIT, is funded by the European Commission (EC) to the tune of more than $400,000 and, it would appear, aims to rid the Internet of terrorism.
European Digital Rights, a Brussels-based organization consisting of 32 NGOs throughout Europe (and of which EFF is a member), has recently published a leaked draft document from CleanIT.
On the project’s website, its stated goal is to reduce the impact of the use of the Internet for “terrorist purposes” but “without affecting our online freedom.” While the goal may seem noble enough, the project actually contains a number of controversial proposals that will compel Internet intermediaries to police the Internet and most certainly will affect our online freedom. Let’s take a look at a few of the most controversial elements of the project.
Privatization of Law Enforcement
Under the guise of fighting ‘terrorist use of the Internet,’ the “CleanIT project,” led by the Dutch police, has developed a set of ‘detailed recommendations’ that would compel Internet companies to act as arbiters of what constitutes “illegal” or “terrorist” use of the Internet.
Specifically, the proposal suggests that “legislation must make clear Internet companies are obliged to try and detect to a reasonable degree … terrorist use of the infrastructure” and, even more troubling, “can be held responsible for not removing (user generated) content they host/have users posted on their platforms if they do not make reasonable effort in detection.”
EFF has always expressed concerns about relying upon intermediaries to police the Internet. As an organization, we believe in strong legal protections for intermediaries and, as such, have often held up the United States’ Communications Decency Act, Section 230 (CDA 230) as a positive example of intermediary protection. While even CDA 230’s protections do not extend to truly criminal activities, the definition of “terrorist” is, in this context, vague enough to raise alarm (see conclusion for more details).
Erosion of Legal Safeguards
The recommendations call for the easy removal of content from the Internet without following “more labour intensive and formal” procedures. They suggest new obligations that would compel Internet companies to hand over all necessary customer information for investigation of “terrorist use of the Internet.” This amounts to a serious erosion of legal safeguards: under this regime, authorities need only assert some vague notion of “terrorist use of the Internet” to gain carte blanche to bypass hard-won civil liberties protections.
The recommendations also suggest that knowingly providing hyperlinks to a site that hosts “terrorist content” will be defined as illegal. This would negatively impact a number of different actors, from academic researchers to journalists, and is a slap in the face to the principles of free expression and the free flow of knowledge.
Internet companies under the CleanIT regime would not only be allowed, but in fact obligated to store communications containing “terrorist content,” even when it has been removed from their platform, in order to supply the information to law enforcement agencies.
Material Support and Sanctions
The project also offers guidelines to governments, including the recommendation that governments start a “full review of existing national legislation” on reducing terrorist use of the Internet. This includes a reminder of Council Regulation (EC) No. 881/2002 (art. 1.2), which prohibits Internet services from being provided to designated terrorist entities such as Al Qaeda. It is worth noting that similar legislation exists in the US (see: 18 U.S.C. § 2339B) and has been widely criticized as criminalizing speech in the form of political advocacy.
The guidelines spell out how governments should implement filtering systems to block civil servants from any “illegal use of the Internet.”
Furthermore, governments’ criteria for purchasing policies and public grants will be tied to Internet companies’ track record for reducing the “terrorist use of the Internet.”
Notice and Take Action
Notice and take action policies allow law enforcement agencies (LEAs) to notify Internet companies of “offending” content, which the companies must then remove as fast as possible. This obligates LEAs to determine the extent to which content can be considered “offensive”: an LEA must “contextualize content and describe how it breaches national law.”
The leaked document contains recommendations that would require LEAs to, in some cases, send notice that access to content must be blocked, followed by notice that the domain registration must be ended. In other cases, sites’ security certificates would be downgraded.
Real Identity Policies
Under the CleanIT provisions, all network users, whether in social or professional networks, would be obligated to supply their real identities to service providers (including social networks). This would effectively destroy online anonymity, which EFF believes is crucial for protecting the safety and well-being of activists, whistle-blowers, victims of domestic violence, and many others (for more on that, see this excellent article from Geek Feminism). Notably, the Constitutional Court of South Korea found an Internet “real name” policy to be unconstitutional.
Under the provisions, companies can even require users to provide proof of their identity, and can store users’ contact information in order to provide it to LEAs in the case of an investigation into potential terrorist use of the Internet. The provisions would even require individuals to use a real image of themselves, destroying decades of Internet culture (in addition to, of course, infringing on user privacy).
The plan also calls for semi-automated detection of “terrorist content.” While content would not automatically be removed, any searches for known terrorist organizations’ names, logos or other related content would be automatically detected. This would certainly inhibit research into anything remotely associated with what law enforcement might deem “terrorist content,” and would seriously hinder ordinary student inquiry into current events and history. In effect, any search about terrorism might end up falling within an LEA’s view of terrorist propaganda.
LEA Access to User Content
The document recommends that, at the European level, browsers or operating systems should include a button for reporting terrorist use of the Internet, and suggests governments draft legislation to make this reporting button compulsory for browsers or operating systems.
Furthermore, the document recommends that judges, public prosecutors and (specialized) police officers be able to temporarily remove content that is being investigated.
Frighteningly, one matter up for discussion within the CleanIT provisions is the banning of languages that have not been mastered by “abuse specialists or abuse systems.” The current recommendation contained in the document would make the use of such languages “unacceptable and preferably technically impossible.”
With more than 200 commonly-used languages and more than 6,000 languages spoken globally, it seems highly unlikely that the abuse specialists or systems will expand beyond a select few. For the sake of comparison, Google Translate only works with 65 languages.
At a time when new initiatives to preserve endangered languages are taking advantage of new technologies, it seems shortsighted and even chauvinistic to consider limiting what languages can be used online.
What Is Terrorism, Anyway?
While the document states that the first reference for determining terrorist content will be the UN/EU/national terrorist sanctions lists, it seems that the provisions allow for a broader interpretation of “terrorism.” This is incredibly problematic in a multicultural environment; as the old adage goes, “one man’s terrorist is another man’s freedom fighter.” Even a comparison of the US and EU lists of designated terrorist entities shows discrepancies, and the recent controversy in the US around the de-listing of an Iranian group shows how political such decisions can be.
Overall, we see the CleanIT project as a misguided effort to introduce potentially endless censorship and surveillance that would effectively turn Internet companies into Internet cops. We are also disappointed in the European Commission for funding the project: Given the strong legal protections for free expression and privacy contained in the Charter of Fundamental Rights of the European Union [PDF], it’s imperative that any efforts to track down and prosecute terrorism must also protect fundamental rights. The CleanIT regime, on the other hand, clearly erodes these rights.
- EU proposal to stop terrorist sites even more ridiculous than it sounds (arstechnica.com)
- Leak shows EU’s plans for largescale surveillance of all communications (edri.org)
- Leaked Clean IT Document Is Frightening (webpronews.com)
- Police across Europe will “patrol” Facebook, Google and Twitter for postings supporting terrorism under an EU project says leaked report (familysurvivalprotocol.com)
As we’ve acknowledged before, our lives are increasingly contained on our digital devices, which makes travel—and the decisions we make about what to carry with us—increasingly complicated.
A recent case highlights the increasing obstacles to privacy that travelers face, as well as the increasingly global nature of security theatre: two young travelers to Israel were asked not simply to hand over their laptops for arbitrary searches, but also to log in to their e-mail accounts and allow Israeli officials to search through their e-mail for specific strings and correspondence.
In that particular case, the two young women—both of Palestinian origin—complied with officials’ requests but were nonetheless detained overnight before being deported. In another, similar case, a U.S. citizen who refused access to her email was told she was probably hiding something and was refused entry to the country. Israeli security (Shin Bet) told a reporter that “the actions taken by the agents during questioning were within the organization’s authority according to Israeli law.”
Not unlike travelers to the U.S., travelers to Israel face serious privacy challenges at the border. The government generally has broad authority to search through your personal possessions, including your laptop, for any reason at all. When you cross the border to Israel, the Israeli government retains the authority to question you and examine your belongings, which it interprets as also allowing it to go through your electronic devices and computer files. More recently, authorities have also been known to demand user passwords to online accounts.
As we state in our guide to U.S. border searches:
For doctors, lawyers, and many business professionals, these border searches can compromise the privacy of sensitive professional information, including trade secrets, attorney-client and doctor-patient communications, research and business strategies, some of which a traveler has legal and contractual obligations to protect. For the rest of us, searches that can reach our personal correspondence, health information, and financial records are reasonably viewed as an affront to privacy and dignity and inconsistent with the values of a free society.
EFF recently asked Jonathan Klinger, an Israeli attorney, for his thoughts on the law and government practices that apply to searches at the Israeli border, and here is his analysis.
The Situation at the Israeli Border
At the Israeli border, there are some limited legal protections against the search itself. Based on a collection of experiences, however, it seems that mentioning these protections to border officials can be considered antagonism, and can limit your ability to enter Israel. Those concerned about the security and privacy of the information on their devices at the border should therefore use technological measures in an effort to protect their data. They can also choose not to take private data across the border with them at all, and then use technical measures to retrieve it from abroad.
There is, however, little to prevent a scenario in which one’s email is searched, as refusal to allow the search may result in deportation. With that in mind, concerned travelers should think ahead and review their online accounts before traveling.
Why Can My Devices Be Searched at the Border?
Article 7 of Israel’s Basic Statute of Human Dignity and Freedom states that every person is entitled to his privacy, and that his property may not be searched, except where required under legal authority. This generally means that the government has to show probable cause that a crime has been committed and get a warrant before it can search a location or item in which you have a reasonable expectation of privacy; moreover, a recent Supreme Court ruling stated that there is no such thing as a consensual search, and where there is no probable cause, the state cannot rely on a person’s consent in order to search his possessions. But searches at places where people enter or leave Israel are subject to different statutes. The two applicable statutes are the Aviation Act (Security in Civil Aviation), 1977 and the General Security Service Act, 2002; together, these two acts give two different state authorities the right to search a person’s body and property. However, they do not refer to computer searches at all.
The Aviation Act allows security personnel, police officers, soldiers and members of the civil defense forces to search at border crossings if “the search is required, in [the officer's] opinion, to keep the public’s safety or if he suspects that the person unlawfully carries weapons or explosives, or that the vehicle, the plane or the goods has weapons or explosives.”
Similarly, the General Security Service Act states that, in order to prevent unlawful activities, protect persons, or carry out any other activity that the government has authorized with the approval of the Knesset committee for the Shin Bet, any employee of the Shin Bet (the service) may search a person’s body, property, baggage or other goods and collect information, as long as the person is present.
Only in extreme cases, where there is an object that needs to be seized for a vital role in the Shin Bet’s activity, can the Shin Bet also search without a person’s presence.
However, nothing in these acts authorizes computer searches. The Israeli Ministry of Justice recently proposed a new anti-terror bill, which has yet to pass through the legislative process. This Anti-Terror bill would amend the current General Security Service Act to specifically state that computers may be searched.
How the Government Searches Devices at the Border
There are three government agencies primarily responsible for inspecting travelers and items entering Israel: the General Security Service (Shin Bet), the Customs Authority, and the Immigration Authority.
The law gives the Shin Bet and other officials a great deal of discretion to inspect items coming into the country. There is no official published policy with respect to border searches of electronic devices and accounts. And when recently asked to comment, the Shin Bet stated that its acts are “according to law.”
Recently, the Israeli Foreign Ministry admitted that it used Facebook in order to create a blacklist of activists who were then—along with a number of uninvolved and mistakenly identified individuals—banned entry to the country amidst the Flytilla events. If you are active on one or more social networks and express opinions about Israel, you carry a greater risk of being profiled and selected for search.
Keep in mind that the Shin Bet can keep your computer or copies of your data for “the time required for the seizure.” There is no specific consideration regarding forensic practices and the ways that your computer files may be copied during the seizure. This is unlike the Israeli Criminal Procedure Order (Arrest and Search), 1969, which deals specifically with the forensic procedures of copying computer materials and requires two witnesses for any file duplication.
Under Article 184 of the Israeli customs legislation, any customs official may search any person for contraband or drugs given probable cause. Moreover, the customs official may also request urine, blood or saliva samples and require persons to undress. However, nothing in the law allows them to search through computer materials.
In short, border agents have a lot of latitude to search electronic devices at the border or take them elsewhere for further inspection for a short period of time, whether or not they suspect a traveler has done anything wrong.
We do not have exact numbers or methods for how such searches are handled, and the Shin Bet is exempt from the Israeli Freedom of Information Act. However, the frequency of technology-oriented searches at the border may increase in the future. Researchers and vendors are creating tools to make forensic analysis faster and more effective, and, over time, forensic analysis will require less skill and training. Law enforcement agencies may be tempted to use these tools more often and in more circumstances as their use becomes easier.
Travelers should consider taking the same precautions outlined in EFF’s guide to carrying digital devices across the United States border.
In the wake of a horrific rampage, in which Mohamed Merah (now dead after a 32-hour standoff with police) reportedly murdered three French soldiers, three young Jewish schoolchildren, and a rabbi, President Nicolas Sarkozy of France has begun calling for criminal penalties for citizens who visit web sites that advocate for terror or hate. “From now on, any person who habitually consults Web sites that advocate terrorism or that call for hatred and violence will be criminally punished,” Sarkozy was reported as saying.
Apart from the obvious flaws in Sarkozy’s plan–users can, of course, use anonymizing tools to view the material, or simply access it from a variety of locations to avoid appearing as “habitual” viewers–there are numerous other reasons to be concerned about criminalizing access to information.
First, there’s no guarantee that criminalizing access to hate speech or terrorist content will end the very real problems of hate crime and terrorism. Extremist violence didn’t start with the Internet and it won’t end with it, either.
Second, who defines “hate speech”? In France, that definition includes Holocaust denial, which in the past resulted in Yahoo! discontinuing auctions of Nazi memorabilia (the collectors of which are not, by any stretch, all sympathizers). And negative comments about France’s Muslim community have also resulted in criminal penalties, most notably in the case of actress Brigitte Bardot, who has been convicted five times for “inciting racial hatred.” While Holocaust denial and comments about Muslims such as those made by Bardot may be deplorable, they should not be criminal.
Finally, while Sarkozy is not–yet–calling for websites to be blocked, it wouldn’t be a stretch; after all, France already offers mechanisms for blocking child pornography and “incitement to terrorism and racial hatred.” If Sarkozy were to decide censorship is the answer, one major risk would be overblocking: there’s nary a country in the world that censors the Internet without collateral damage (in Australia, for example, testing on a would-be censorship regime found the site of a dentist blocked, among others).
EFF has serious concerns about the implications of Sarkozy’s comments. When a democratic country such as France decides to censor or criminalize speech, it is not just the French who suffer, but the world, as authoritarian regimes are given easy justification for their own censorship. We urge French authorities to judge crime on action, not expression.