April 16, 2015
In INHUMAN KIND, Motherboard gains exclusive access to a small fleet of US Army bomb disposal robots—the same platforms the military has weaponized—and to a pair of DARPA’s six-foot-tall bipedal humanoid robots. We also meet Nobel Peace Prize winner Jody Williams, renowned physicist Max Tegmark, and others who grapple with the specter of artificial intelligence, killer robots, and a technological precedent forged in the atomic age. It’s a story about the evolving relationship between humans and robots, and what AI in machines bodes for the future of war and the human race.
The United States intelligence community’s research arm is set to launch a program that will dramatically broaden the capabilities of biometric facial recognition software used to establish an individual’s identity.
The Janus program of the Intelligence Advanced Research Projects Agency (IARPA) will begin in April 2014 in an effort to “radically expand the range of conditions under which automated face recognition can establish identity,” according to documents released by the agency over the weekend.
Janus “seeks to improve face recognition performance using representations developed from real-world video and images instead of from calibrated and constrained collections. During daily activities, people laugh, smile, frown, yawn and morph their faces into a broad variety of expressions. For each face, these expressions are formed from unique skeletal and musculature features that are similar through one’s lifetime. Janus representations will exploit the full morphological dynamics of the face to enable better matching and faster retrieval.”
Current facial recognition relies mostly on full-frontal, aligned facial views. But, in the words of Military & Aerospace Electronics, Janus will fuse “the rich spatial, temporal, and contextual information available from the multiple views captured by security cameras, cell phone cameras, news video, and other sources referred to as ‘media in the wild.’”
In addition, Janus will take into account aging and incomplete or ambiguous data for its recognition assessment goals.
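The core idea behind matching faces across many unconstrained views can be illustrated with a toy sketch. This is not IARPA's actual method (which is unpublished); it assumes a hypothetical 128-dimensional face embedding per video frame, pools the frames into one template, and scores it against a gallery of known identities by cosine similarity:

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize(v):
    return v / np.linalg.norm(v)

# Hypothetical gallery: one template embedding per known identity.
gallery = {name: normalize(rng.standard_normal(128))
           for name in ["id_a", "id_b", "id_c"]}

def match(frame_embeddings, gallery, threshold=0.5):
    """Pool per-frame embeddings into one template, then return the
    best-scoring gallery identity (or None if below threshold)."""
    template = normalize(np.mean(frame_embeddings, axis=0))
    scores = {name: float(template @ tmpl) for name, tmpl in gallery.items()}
    best = max(scores, key=scores.get)
    return (best, scores[best]) if scores[best] >= threshold else (None, scores[best])

# Simulate ten noisy video frames of identity "id_b".
frames = [normalize(gallery["id_b"] + 0.05 * rng.standard_normal(128))
          for _ in range(10)]
name, score = match(frames, gallery)
print(name, round(score, 2))
```

The point of the sketch is the pooling step: averaging many imperfect views cancels per-frame noise, which is why "media in the wild" (many expressions, angles, and moments) can outperform a single constrained mugshot.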
IARPA was created in 2006 and is a division of the Office of the Director of National Intelligence. The intelligence agency is modeled after DARPA, the Pentagon’s notorious research arm that fosters technology for future military utilization.
In-Q-Tel, a not-for-profit venture capital firm run by the Central Intelligence Agency, invests in companies that develop facial recognition software.
In an age of ubiquitous surveillance video amid a severe lag of legal protections for privacy, civil liberties advocates are expressing concern.
IARPA’s effort to significantly boost facial recognition capabilities “represents a quantum leap in the amount of surveillance taking place in public places,” said Jay Stanley, a senior policy analyst with the American Civil Liberties Union’s Speech, Privacy and Technology Project, as quoted by USA Today.
Stanley noted that law enforcement and the like could easily run random facial recognition programs over surveillance video to assess the identities of crowds in public places without oversight.
IARPA gave industry representatives a solicitation briefing on the program in June, according to media reports.
Late last month, the Federal Bureau of Investigation published a request for information in developing “a roadmap for the FBI’s future video analytics architecture” as the agency prepares to make its high-tech surveillance abilities all the more powerful.
In September, the Department of Homeland Security tested its Biometric Optical Surveillance System (BOSS) at a junior hockey game in Washington state. When it’s fully operational, BOSS could be used to identify a person of interest among a massive crowd in just seconds.
Over the summer, the state of Ohio admitted it had access to a facial recognition database that included all state-wide driver’s license photos and mug shots without the public’s knowledge.
The Ethics of Deception Detection Research
Last month, a proposal to establish a U.S. Special Operations Command (SOCOM) Center of Excellence for Operational Neuroscience at Yale University died a not-so-quiet death. The broad goal of “operational neuroscience” is to use research on the human brain and nervous system to protect and give tactical advantage to U.S. war fighters in the field. Crucial questions remain unanswered about the proposed center’s mission and the unusual circumstances surrounding its demise. But just as importantly, this episode brings much needed attention to the morally fraught and murky terrain where partnerships between university researchers and national security agencies lie.
A Brief Chronology
Let’s start with what transpired, according to the news reports and official press releases. In late January, the Yale Herald reported that the Department of Defense had awarded $1.8 million to Yale University’s School of Medicine for the creation of the new center under the direction of Yale psychiatrist Charles Morgan III. Descriptions of the proposed center’s work revolved around the teaching of Morgan’s interviewing techniques to U.S. Special Forces in order to improve their intelligence gathering. To heighten the soldiers’ cross-cultural awareness and sensitivity, Morgan reportedly intended to draw volunteer interviewees from New Haven’s immigrant communities.
Such details typically become public only after a university center has been formally established and its funding officially secured. In this case, however, the early news reports – which included statements from director-to-be Morgan – quickly led last month to a widely circulated Yale Daily News op-ed, an online petition, a Facebook page, and protests by students and local groups outraged over reports of Yale’s support for the military center and plans to treat immigrants as “guinea pigs.” According to ABC News/Univision, in response Morgan explained that he was approached by the Defense Department to help “promote better relations between U.S. troops and the people whose villages they work in and around” – by teaching soldiers “better communication skills” and “how to ask non-leading questions, how to listen to what people are saying, how to understand them.”
A public affairs officer for U.S. SOCOM initially confirmed that it was providing funding for the center. Shortly thereafter, Yale University representatives issued a conflicting statement. Characterizing the center as “an educational and research center with a goal of promoting humane and culturally respectful interview practices among a limited number of members of the armed forces, including medics,” they emphasized that no formal proposal had been submitted for academic and ethical review. Yale also noted that volunteer interviewees “selected from diverse ethnic groups” would be protected by university oversight, and that public reports about the center were in part “based on speculation and incomplete information.” Three days later, SOCOM’s spokesperson retracted his previous statement, explaining that the information provided had been incorrect, and that no funds for the center would be forthcoming. Yale confirmed that the center would not be established at the university. Two days later, SOCOM declared that, in fact, they had decided a year earlier not to fund Morgan’s proposal.
Ethical Risks of Operational Neuroscience
The name of the proposed center – the U.S. SOCOM Center of Excellence for Operational Neuroscience – deserves more attention and scrutiny than it has received thus far. The burgeoning interdisciplinary field of operational neuroscience – supported by hundreds of millions of dollars from the Department of Defense – is indisputably much larger and much more worrisome from an ethical perspective than the mere teaching of interview techniques and people skills would suggest. What makes this particular domain of scientific work so controversial is not only its explicit purpose of advancing military goals. The methods by which these ends are pursued are equally disquieting because they raise the specter of “mind control” and threaten our deeply held convictions about personhood and personal autonomy.
In a presentation to the intelligence community five years ago, program manager Amy Kruse from the Defense Advanced Research Projects Agency (DARPA) identified operational neuroscience as DARPA’s latest significant accomplishment, preceded by milestone projects that included the Stealth Fighter, ARPANET, the GPS, and the Predator drone. National security interests in operational neuroscience encompass non-invasive, non-contact approaches for interacting with a person’s central and peripheral nervous systems; the use of sophisticated narratives to influence the neural mechanisms responsible for generating and maintaining collective action; applications of biotechnology to degrade enemy performance and artificially overwhelm cognitive capabilities; remote control of brain activity using ultrasound; indicators of individual differences in adaptability and resilience in extreme environments; the effects of sleep deprivation on performance and circadian rhythms; and neurophysiologic methods for measuring stress during military survival training.
Anthropologist Hugh Gusterson, bioethicist Jonathan Moreno, and other outspoken scholars have offered strong warnings about potential perils associated with the “militarization of neuroscience” and the proliferation of “neuroweapons.” Comparing the circumstances facing neuroscientists today with those faced by nuclear scientists during World War II, Gusterson has written, “We’ve seen this story before: The Pentagon takes an interest in a rapidly changing area of scientific knowledge, and the world is forever changed. And not for the better.” Neuroscientist Curtis Bell has called for colleagues to pledge that they will refrain from any research that applies neuroscience in ways that violate international law or human rights; he cites aggressive war and coercive interrogation methods as two examples.
Research Misapplied: SERE and “Enhanced Interrogation Techniques”
Some may argue that these concerns are overblown, but the risks associated with “dual use” research are well recognized and well documented. Even though a particular project may be designed to pursue outcomes that society recognizes as beneficial and worthy, the technologies or discoveries may still be susceptible to distressing misuse. As a government request for public comment recently highlighted, certain types of research conducted for legitimate purposes “can be reasonably anticipated to provide knowledge, information, products, or technologies that could be directly misapplied to pose a significant threat with broad potential consequences to public health and safety….”
Yale’s Morgan must surely be aware that operational neuroscience research can be used for purposes contrary to its purported intent – as this appears to be what happened with some of his own work. Morgan’s biographical sketch on the School of Medicine website refers to his research on the “psycho-neurobiology of resilience in elite soldiers” and “human performance under conditions of high stress.” Both of these topics are related to his extensive study of the effects of the military’s physically and psychologically grueling Survival, Evasion, Resistance, and Escape (SERE) training program. In SERE training, soldiers are subjected to extreme conditions in order to inoculate them against enemy interrogation should they be captured and subjected to torture by forces that don’t observe international laws prohibiting prisoner abuse. The techniques applied during the trainee’s simulated incarceration and mock interrogations include isolation, stress positions, sleep and food deprivation, loud noises, sexual humiliation, extreme temperatures, confinement in small spaces, and in some cases waterboarding.
Along with colleagues, Morgan has published a series of research articles examining the psychological, physiological, and biological effects of the SERE program. In summarizing key findings of this research, Morgan and his co-authors highlighted the following: the stress induced by SERE is within the range of real-world stress; SERE students recover normally and do not show negative effects from the training; and the mock interrogations do not produce lasting adverse reactions as measured by physiological and biological indicators. However, after reviewing these same studies, the authors of a Physicians for Human Rights report reached a starkly different conclusion: “SERE … techniques, even when used in limited and controlled settings, produce harmful health effects on consenting soldier-subjects exposed to them.” They also emphasized that during the training many students experienced dissociative reactions and hormone level changes comparable to major surgery or actual combat; the post-training assessments were short-term and insufficient to evaluate soldiers for PTSD and related disorders; and the soldiers benefited from knowing that they could end their participation whenever they chose to do so.
SERE research like that conducted by Morgan and his colleagues was subsequently misused by the Bush Administration after the 9/11 terrorist attacks to illegitimately authorize the abuse and torture of national security detainees held at Guantanamo Bay, Bagram Air Base, and CIA “black sites.” The infamous “enhanced interrogation techniques” (EITs) were developed by former SERE psychologists – working for the CIA – who “reverse-engineered” the SERE interrogation tactics. But even more importantly here, a crucial 2002 Office of Legal Counsel “torture memo” asserted that the EITs did not cause lasting psychological harm, and it cited as evidence consultation with interrogation experts and outside psychologists, as well as a review of the “relevant literature” – which plausibly would have included Morgan’s own extensive work in the area. In short, this appears to be a striking and tragic instance where operational neuroscience research, undertaken in a different context, was subsequently appropriated and misapplied for unconscionable purposes. It is worth adding that these prisoners were subjected to indefinite detention without trial and they were not free to discontinue their torturous interrogations at will. Their torture sessions were also substantially longer and the techniques were instituted more frequently and with greater intensity than Morgan’s research subjects experienced.
Morgan’s Deception Detection Research
Another significant area of operational neuroscience research for Morgan has been deception detection – that is, figuring out when someone isn’t being truthful during an interview, or an interrogation. According to his online CV, he has received Department of Defense funding totaling nearly $2 million for this work over the past several years. Research on this same topic reportedly also became an important focus of attention for several intelligence agencies – including the CIA – immediately after the 9/11 attacks. Befitting his expertise and stature in the field, Morgan has been involved in a variety of high-level initiatives designed to bring together university researchers and personnel from the defense and intelligence sectors.
For example, Morgan is among the listed attendees at a July 2003 invitation-only workshop on “The Science of Deception: Integration of Theory and Practice.” The event was co-hosted by the American Psychological Association (APA) and the RAND Corporation, with generous funding from the CIA. The participants discussed various scenarios, including one focused on law enforcement interrogation and debriefing, and another on intelligence gathering. They also explored specific research questions, such as which pharmacological agents affect truth-telling, and whether it might be possible to overwhelm a person’s senses so as to reduce his capacity to engage in deception during an interrogation. Psychologist Jeffrey Kaye has noted that, in a very unusual step, the APA has scrubbed most of the information about this workshop from its website.
In June 2004 Morgan was a participant at another invitation-only workshop – co-sponsored by the Department of Justice, the FBI, and the APA – titled “The Nature and Influence of Intuition in Law Enforcement: Integration of Theory and Practice.” Among the topics examined were the extent to which police officers, intelligence analysts, interrogators, and others can effectively use “intuition” in their work – for instance, in order to detect deception – and how such capabilities might be applied to counterterrorism efforts. The proceedings from this event identify Morgan as “Senior Research Scientist, Behavioral Science Staff, Central Intelligence Agency” – a professional affiliation that does not appear on his online CV.
Morgan is credited with a similar affiliation in the 2006 report “Educing Information,” published by the National Defense Intelligence College. As a member of the Government Experts Committee, Morgan is listed as working for the “Intelligence Technology Innovation Center,” an administrative unit that falls under the CIA. The foreword to the report describes the volume as “a primer on the ‘science and art’ of both interrogation and intelligence gathering.” Included is a chapter on deception detection by Morgan’s close research colleague, psychologist Gary Hazlett. One of Hazlett’s recommendations in the report is that “the United States adopt an aggressive, focused plan to support research and development of enhanced capabilities to validate information and the veracity of sources.” He also notes that the most troubling limitation of deception research thus far is the lack of “various Asian, Middle Eastern, Central and South American, or African populations” as research participants.
Responding to Morgan’s reported plans for a new center at Yale, local advocacy group Junta for Progressive Action issued a statement of concern last month. It noted that, “As a city that has worked to establish itself as a welcoming and inclusive city for immigrants, the idea of targeting immigrants specifically for the purpose of identifying the distinction of how they lie is offensive, disrespectful and out of line with the values of New Haven.” In a recent newspaper report, Morgan called rumors that the proposed center at Yale would teach new interrogation techniques mere “hype and fantasy,” explaining that he instead “suggested to the Army that perhaps some training in people skills – how to talk to and listen to people might be helpful and create better relations.” Even assuming that this reassuring account is true, it’s certainly not unreasonable to question whether deception detection research and training might have been part of the proposed center’s future operational neuroscience agenda.
Classified and Unclassified Research on Campus
There are broader questions beyond those focused specifically on the uncertain details and background surrounding the not-to-be Center of Excellence for Operational Neuroscience at Yale. The unusual sequence of events that unfolded in New Haven last month should ideally serve as a springboard for open discussion of the opportunities and pitfalls associated with research partnerships between universities and national security agencies. To its credit, Yale University has a clear policy that explicitly prohibits its faculty from conducting secret or classified research:
The University does not conduct or permit its faculty to conduct secret or classified research. This policy arises from concern about the impact of such restrictions on two of the University’s essential purposes: to impart knowledge and to enlarge humanity’s store of knowledge. Both are clearly inhibited when open publication, free discussion, or access to research are limited.
But not all academic institutions have such stringent rules, which are necessary to promote full transparency, informed critiques by other scholars and researchers, and constructive engagement beyond the walls of higher education institutions. At the same time, it should be noted that, even at Yale, voluntary faculty members – Morgan’s official status at the university – do not need to disclose research activities that are not being conducted on behalf of Yale.
Some of the most challenging ethical issues remain even when classified research is not conducted on university campuses. As psychologist Stephen Soldz has highlighted, in cases of unclassified research funded by national security agencies, the academic researchers are not necessarily informed about the totality of the projects to which they are contributing. He offers the example of findings from seemingly uncontroversial deception detection studies, which may ultimately become the basis for the capture, indefinite detention, and torturous interrogation of prisoners in undisclosed locations – well beyond the university researchers’ awareness. Soldz also warns that researchers may never know if their campus work has become “part of a vast secret effort to unlock the mystery of mind control and develop techniques for coercive interrogations, as happened to hundreds of behavioral scientists and others in the decades of the CIA’s MKULTRA and other Cold War behavioral science initiatives.” These risks are further exacerbated for psychologists, psychiatrists, and other health professionals for whom a “do no harm” ethic intrinsically poses conflicts with research projects aimed at identifying and destroying those who are considered adversaries.
There are applications of operational neuroscience – such as improved prosthetic limbs for injured veterans and more effective treatments for victims of brain injury – that are compelling in their apparent value and their promotion of human welfare. But other applications raise profound concerns, especially where the defining goals and priorities of a university and its medical researchers and scientists diverge from those of national security and intelligence operatives. Community health sciences professor Michael Siegel – a graduate of Yale’s School of Medicine – emphasized this point when he was interviewed on Democracy Now! last month. Siegel noted: “The practice of medicine was designed to improve people’s health, and the school of medicine should not be taking part in either training or research that is primarily designed to enhance military objectives.”
In this context it’s worthwhile to recall exactly who Morgan envisioned as the trainees for his proposed “people skills” interview project at the medical school: U.S. Special Forces, the highly skilled soldiers often assigned the military’s most difficult and dangerous missions. These forces – over 60,000 strong including military personnel and civilians – are now covertly deployed around the globe. Journalists Dana Priest and William Arkin have described them as “America’s secret army.” Their counterterrorism operations include intelligence-gathering missions and lethal raids – not only in Afghanistan but also in countries where the United States is not at war. They’ve been authorized to keep “kill lists” of individuals who can be assassinated rather than captured, and some have conducted brutal interrogations at secret detention sites. The Army refers to its Special Forces as the “most specialized experts in unconventional warfare.”
At this point, signs clearly indicate that a U.S. SOCOM Center of Excellence for Operational Neuroscience will not be coming to Yale. But it would be a mistake to assume that this research – and the very considerable national security sector funding it attracts – will not find another home. This is why it’s important that the current controversy not be dismissed without fuller engagement and discussion among all stakeholders of pressing practical and ethical considerations – before a similar project appears on another campus or resurfaces in a reconfigured form in New Haven. The prospect of all defense-related neuroscience research being conducted clandestinely by government or corporate entities – away from the public and expert oversight that universities can offer – is far from reassuring, so difficult issues like this must be tackled head-on.
One valuable next step would be an open forum at Yale. Dr. Morgan could have the opportunity to describe in greater detail the nature of his deception detection work and related projects – including his ongoing research in New Haven about which Yale recently claimed it was unaware. Other distinguished scientists, ethicists, and human rights experts could provide their commentaries. And community members, students, faculty, and administrators could offer their own perspectives and pose questions. Such an event would not likely produce consensus, but the sharing of information, the free expression of differing viewpoints, and informed debate are among the most vital functions of a university. Pending further developments, there are very good reasons to be concerned – and confused – about the recent twists and turns surrounding the proposed center at Yale. Many of the most critical questions still await answers.
Roy Eidelson is a clinical psychologist and the president of Eidelson Consulting, where he studies, writes about, and consults on the role of psychological issues in political, organizational, and group conflict settings. He is a past president of Psychologists for Social Responsibility, associate director of the Solomon Asch Center for Study of Ethnopolitical Conflict at Bryn Mawr College, and a member of the Coalition for an Ethical Psychology.
Your digital footprint could be getting a whole lot bigger: Pentagon scientists are searching for a way to transcribe every real-world conversation that happens into computer-readable files.
Robert Beckhusen of Wired’s Danger Room says it wouldn’t be unlike a real-life Twitter feed or an “email archive for everyday speak.”
“Imagine living in a world where every errant utterance you make is preserved together,” Beckhusen writes in an article this week that explores a Defense Department project that’s been undertaken by its Darpa laboratories and is now in the hands of a University of Texas computer scientist named Matt Lease.
Lease has received a few hundred thousand dollars from Darpa — the US military’s Defense Advanced Research Projects Agency — to help find a way to take cell phone conversations, board room meetings and every minuscule real-world back-and-forth and have them digitized.
The project is being called “Blending Crowdsourcing with Automation for Fast, Cheap and Accurate Analysis of Spontaneous Speech,” and Lease will receive $300,000 in all from the government to work on it, funded through the Young Faculty Award he won from Darpa in 2012.
Lease has previously worked with the Pentagon scientists on another project, Effective Affordable Reusable Speech-to-text, or EARS, which had him trying to find a better way to transcribe dialogue into text. Now after winning the respect of Darpa, he’s putting that research to work in hopes of finding a way to streamline all real world conversations into digital transcriptions. And by strategically crowd-sourcing the information, he thinks he might be able to do just that.
“Like other AI [artificial intelligence], it can only go so far, which is based on what the state-of-the-art methodology can do,” Lease tells Wired. “So what was exciting to me is thinking about going back to some of that work and now taking advantage of crowdsourcing and applying that into the mix.”
Lease says he saw both the “need and opportunity to really make conversational speech more accessible, more part of our permanent record instead of being so ephemeral, and really trying to imagine what this world would look like if we really could capture all these conversations and make use of them effectively going forward.”
Wired reports that the end result could mean that conversations and events could be transcribed and edited through crowdsourcing, then eventually and easily be shared with friends, family and colleagues. Once digitized, those dialogues could also be perused for general search purposes. By uploading everything, though, some concerns are quickly showing up. For one, there’s the matter of possible privacy violations brought on by the seemingly constant collection of data. Then, of course, there’s the matter of what is being done with it.
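The blend of automation and crowdsourcing that Lease describes can be sketched in miniature. The pipeline below is purely illustrative (the segment texts, confidence scores, and cutoff are invented): an automatic speech recognizer emits text segments with confidence scores, low-confidence segments are routed to simulated human workers for correction, and the results are stitched back together in order:

```python
# Toy pipeline: an ASR pass yields (text, confidence) segments; segments
# below a confidence cutoff are routed to (simulated) crowd workers for
# correction, then everything is stitched back together in order.

ASR_OUTPUT = [
    ("let's move the meeting", 0.92),
    ("to uh thirsty", 0.41),       # low confidence: garbled audio
    ("at three o'clock", 0.88),
]

def crowd_correct(segment_text):
    # Stand-in for sending the audio clip plus draft text to human workers.
    corrections = {"to uh thirsty": "to Thursday"}
    return corrections.get(segment_text, segment_text)

def transcribe(segments, cutoff=0.6):
    out = []
    for text, conf in segments:
        out.append(crowd_correct(text) if conf < cutoff else text)
    return " ".join(out)

print(transcribe(ASR_OUTPUT))
# → "let's move the meeting to Thursday at three o'clock"
```

Routing only the uncertain segments to humans is what makes the approach “fast, cheap and accurate”: machines handle the bulk, and paid crowd workers handle only the fraction the machines get wrong.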
According to a 2003 memo from the Congressional Research Service, the EARS project that first got Lease involved with the Pentagon was being considered for a rather particular kind of use. That report said that dialogue could be fed into the system by way of telephone conversations so that “the military, intelligence and law enforcement communities” could “extract clues about the identity of speakers.”
For now, Lease won’t even speculate as to why the Pentagon wants him to develop his crowdsourcing project. He agrees, however, that there is an issue with “respecting the privacy rights of multiple people involved.”
The PBS series NOVA, “Rise of the Drones,” recently aired a segment detailing the capabilities of a powerful aerial surveillance system known as ARGUS-IS, essentially a 1.8-gigapixel camera that can be mounted on a drone. As demonstrated in this clip, the system is capable of high-resolution monitoring and recording of an entire city. (The clip was written about in DefenseTech and in Slate.)
In the clip, the developer explains how the technology (which he also refers to with the apt name “Wide Area Persistent Stare”) is “equivalent to having up to a hundred Predators look at an area the size of a medium-sized city at once.”
ARGUS produces a high-resolution video image that covers 15 square miles. It’s all streamed to the ground and stored, and operators can zoom in upon any small area and watch the footage of that spot. Essentially, it is an animated, aerial version of the gigapixel cameras that got some attention for super-high resolution photographs created at Obama’s first inauguration and at a Vancouver Canucks fan gathering.
At first I didn’t think too much about this video because it seemed to be an utterly expected continuation of existing trends in camera power. But since it was brought to my attention, this technology keeps coming back up in my conversations with colleagues and in my thoughts. I think that’s because it is such a concrete embodiment of the “nightmare scenario” for drones, or at least several core elements of it.
First, it’s the culmination of the trend towards ever-more-pervasive surveillance cameras in American life. We’ve been objecting to that trend for years, and many of our public spaces are now under 24/7 video surveillance—often by cameras owned and operated by the police. But even in our most pessimistic moments, I don’t think we thought that every street, empty lot, garden, and field would be subject to video monitoring anytime soon. But that is precisely what this technology could enable. We’ve speculated about self-organizing swarms of drones being used to blanket entire cities with surveillance, but this technology makes it clear that nothing that complicated is required.
Second and more significantly to me, this technology also makes real a key threat that drones pose to privacy that we’ve talked about: the ability to do location tracking. The video shows cars and pedestrians near Quantico, Virginia automatically tagged with colored boxes, which follow them as they move around. As the technology’s developer told NOVA,
Everything that is a moving object is being automatically tracked. The colored boxes represent that the computer has recognized the moving objects. You can see individuals crossing the street, you can see individuals walking in parking lots.
The surveillance potential of such a tracking algorithm attached to such powerful cameras is worth pausing to think about. To identify someone there's no need for face or license-plate recognition (which may be impractical from above anyhow), cell phone tracking, gait recognition, or what have you. Even knowing where a little green square starts and finishes its day can reveal a lot, because it turns out that even relatively rough location information about a person will often identify them uniquely. For example, according to this study, just knowing the zip codes (actually census tracts, which are basically equivalent) of where you work and where you live will uniquely identify 5% of the population, and for half of Americans will place them in a group of 21 people or fewer. If you know the "census blocks" where somebody works and lives (an area roughly the size of a block in a city, but much larger in rural areas), the accuracy is much higher, with at least half the population being uniquely identified.
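To make the re-identification point concrete, here is a toy simulation (all numbers are hypothetical and drawn uniformly at random, which is not how real populations distribute—it is a sketch of the "anonymity set" idea, not a reproduction of the study's figures): assign each simulated person a home tract and a work tract, then count how many people share each (home, work) pair.

```python
from collections import Counter
import random

random.seed(0)

# Hypothetical setup: 100,000 people, each assigned a random
# (home tract, work tract) pair from 400 possible tracts.
N_PEOPLE, N_TRACTS = 100_000, 400
pairs = [(random.randrange(N_TRACTS), random.randrange(N_TRACTS))
         for _ in range(N_PEOPLE)]

# For each person, the "anonymity set" is the number of people
# who share their exact (home, work) pair.
counts = Counter(pairs)
set_sizes = [counts[p] for p in pairs]

unique = sum(1 for s in set_sizes if s == 1)
print(f"uniquely identified by (home, work): {unique / N_PEOPLE:.1%}")
print(f"median anonymity set size: {sorted(set_sizes)[N_PEOPLE // 2]}")
```

Even this crude model shows how quickly a pair of coarse locations shrinks the crowd you can hide in; real data, where home and work locations are far from uniform, behaves differently in the details but the mechanism is the same.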
However, ARGUS-type tracking could be used to get more precise data than that—in many cases, to determine a vehicle’s home address, which pretty much reveals who you are if you’re in a single-family home, and narrows it down pretty well even if you’re in a large apartment building. (Academic papers have been written on inferring home address from location data sets.) Add work address and I expect that would nail virtually everybody. And of course lodged in the data set would be not just where a particular vehicle starts and finishes its day, but all the places it stopped in between—potentially revealing, as we so often point out, an array of information about a person such as their political, religious, and sexual activities.
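The home-address inference those academic papers describe can be sketched in a few lines. This is a hypothetical illustration (the function, the place names, and the heuristic are mine, not from any cited paper): guess "home" as the place a vehicle is most often seen during overnight hours.

```python
from collections import Counter

# Hypothetical sketch: given timestamped (hour, location) sightings
# of one vehicle, infer "home" as the most frequent overnight stop.
def infer_home(sightings, night=(0, 6)):
    overnight = [loc for hour, loc in sightings
                 if night[0] <= hour < night[1]]
    if not overnight:
        return None
    return Counter(overnight).most_common(1)[0][0]

# A made-up day of sightings: parked on "elm-st" in the small hours,
# at "office-park" during the workday.
track = [(2, "elm-st"), (3, "elm-st"), (9, "office-park"),
         (14, "office-park"), (23, "main-st"), (1, "elm-st")]
print(infer_home(track))  # elm-st
```

Real systems would be more careful about multi-day data and ambiguous stops, but the point stands: a simple frequency count over a location track is often enough to put a name to a colored box.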
True, such tracking using ARGUS would be disrupted whenever a subject disappears from aerial view. For example, pedestrians who travel by subway or bus or walk under foliage, or vehicles entering tunnels, would be harder to track. But even there, datamining large data sets collected over time could probably reveal a lot of things about people’s daily patterns and I would bet could eventually be used to identify a surprisingly large number of them. I expect that ARGUS would be used (if it’s not already) to generate a database consisting of location tracks of moving vehicles or pedestrians beginning in one place and ending in another. Think of them as little strings on a map. Some of these strings would stretch from a person’s home to their work, with stops in between, while others might be fragments, interrupted by a tunnel or other obstruction. But even the fragments, when the dimension of time is added to the equation, could probably be correlated together.
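The "correlating fragments" step above can also be sketched simply. This is a hypothetical heuristic of my own, not a description of how ARGUS actually works: link a track that disappears at one point (say, a tunnel entrance) to a track that appears nearby a plausible travel time later.

```python
# Hypothetical fragment-stitching rule: a fragment ending at
# (time, x, y) links to one starting at (time, x, y) if the new
# track appears soon enough and close enough to be the same mover.
def link_fragments(frag_a_end, frag_b_start, max_gap_s=300, max_dist=5):
    (t1, x1, y1), (t2, x2, y2) = frag_a_end, frag_b_start
    dt = t2 - t1                        # seconds between fragments
    dist = abs(x2 - x1) + abs(y2 - y1)  # Manhattan distance in grid cells
    return 0 < dt <= max_gap_s and dist <= max_dist

# A vehicle vanishes at cell (10, 10) at t=100; a track appears at
# (12, 11) at t=160 -- close enough in space and time to merge.
print(link_fragments((100, 10, 10), (160, 12, 11)))  # True
```

With the time dimension in the rule, even strings broken by tunnels or foliage start rejoining into full daily itineraries.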
Of course low-lying clouds or fog might also interfere with aerial tracking, though imaging technologies already in existence could probably be deployed to see through them.
NOVA was not allowed to show images of the ARGUS sensor, and stated that part of the program remained classified, including whether it has yet been deployed. (Though, we know it has been deployed domestically at least once, over Virginia as shown on NOVA. I'm going to assume it has not been deployed domestically in any more routine manner.) But, it is good that the Air Force allowed NOVA to see its capabilities. I'd like to think it's because as Americans, Air Force officials have respect for our country's values and democratic processes and don't want such powerful and potentially privacy-invasive tools to be created in secret. It could also be, however, because the Air Force needs private-sector help in figuring out how to analyze the oceans of data the device can collect (5,000 hours of high-def video per day).
Either way, it’s important for the public to be aware of the kinds of technologies that are out there so that it can better decide how drones should be regulated.
Today EFF posted several thousand pages of new drone license records and a new map that tracks the location of drone flights across the United States.
These records, received as a result of EFF’s Freedom of Information Act (FOIA) lawsuit against the Federal Aviation Administration (FAA), come from state and local law enforcement agencies, universities and—for the first time—three branches of the U.S. military: the Air Force, Marine Corps, and DARPA (Defense Advanced Research Projects Agency).
Military Drone Flights in the United States
While the U.S. military doesn’t need an FAA license to fly drones over its own military bases (these are considered “restricted airspace”), it does need a license to fly in the national airspace (which is almost everywhere else in the US). And, as we’ve learned from these records, the Air Force and Marine Corps regularly fly both large and small drones in the national airspace all around the country. This is problematic, given a recent New York Times report that the Air Force’s drone operators sometimes practice surveillance missions by tracking civilian cars along the highway adjacent to the base.
The records show that the Air Force has been testing out a bunch of different drone types, from the smaller, hand-launched Raven, Puma and Wasp drones designed by AeroVironment in Southern California, to the much larger Predator and Reaper drones responsible for civilian and foreign military deaths abroad. The Marine Corps is also testing drones, though it chose to redact so much of the text from its records that we still don't know much about its programs.
The capabilities of these drones can be astounding. According to a recent Gizmodo article, the Puma AE (“All Environment”) drone can land anywhere, “either in tight city streets or onto a water surface if the mission dictates, even after a near-vertical ‘deep stall’ final approach.” Another drone, Insitu’s ScanEagle, which the Air Force has flown near Virginia Beach, sports an “inertial-stabilized camera turret, [that] allows for the tracking of a target of interest for extended periods of time, even when the target is moving and the aircraft nose is seldom pointed at the target.” Boeing’s A160 Hummingbird, which the Air Force has flown near Victorville, California, is capable of staying in the air for 16-24 hours at a time and carries a gigapixel camera and a “Forester foliage-penetration radar” system designed by the Defense Advanced Research Projects Agency (DARPA). (Apparently, the Army has had a bunch of problems with the Hummingbird crashing and may not continue the program.)
Perhaps the scariest is the technology carried by a Reaper drone the Air Force is flying near Lincoln, Nevada and in areas of California and Utah. This drone uses “Gorgon Stare” technology, which Wikipedia defines as “a spherical array of nine cameras attached to an aerial drone . . . capable of capturing motion imagery of an entire city.” This imagery “can then be analyzed by humans or an artificial intelligence, such as the Mind’s Eye project” being developed by DARPA. If true, this technology takes surveillance to a whole new level.
On a possibly lighter note, DARPA’s 2008 drone looks more like a modified flying lawnchair than an advanced piece of technology. As DARPA notes in its application, this drone has been used recreationally, though probably with someone sitting in it.
Law Enforcement Drones—Some Agencies Release Information While Others Arbitrarily Withhold It
Once again, we see in these records that law enforcement agencies want to use drones to support a whole host of police work. However, many of the agencies are most interested in using drones in drug investigations. For example, the Queen Anne County, Maryland Sheriff’s Department, which is partnering with the federal Department of Justice, Department of Homeland Security and the Navy, applied for a drone license to search farm fields for pot, “surveil people of interest” (including “watching open drug market transactions before initiating an arrest”), and to perform “aerial observation of houses when serving warrants.”
The Gadsden, Alabama Police Department also wanted to use its drone for drug enforcement purposes like conducting covert surveillance of drug transactions, while Montgomery County, Texas wanted to use the camera and "FLIR systems" (thermal imaging) on its ShadowHawk drone to support SWAT and narcotics operations by providing "real time area surveillance of the target during high risk operations." Another Texas law enforcement agency—the Arlington Police Department—also wanted to fly its "Leptron Avenger" drone for narcotics and police missions. Interestingly, the Leptron Avenger can be outfitted with LIDAR (Light Detection And Ranging) technology. While LIDAR can be used to create high-resolution images of the earth's surface, it is also used in high tech police speed guns—raising the question of whether drones will soon be used for minor traffic violations.
More disturbing than these proposed uses is the fact that some law enforcement agencies, like the Orange County, Florida Sheriff's Department and the Mesa County, Colorado Sheriff, have arbitrarily chosen to withhold some or, in Orange County's case, almost all information about their drone flights, including what type of drone they're flying, where they're flying it, and what they want to use it for, claiming that releasing this information would pose a threat to police work. That risk seems extremely far-fetched, given that the other agencies mentioned above and in prior posts have been forthcoming, and that even the US Air Force feels comfortable releasing information about where it's flying drones around the country.
Interesting Drone Uses
Universities and state and local agencies are finding new and creative uses for drones. For example, the Washington State Department of Transportation requested a drone license to help with avalanche control, while the U.S. Department of Energy in Wyoming wanted to use a drone to “monitor fugitive methane emissions.” The University of Michigan requested one license for its “Flying Fish” drone—essentially a buoy that floats on open water but that can reposition itself via flight—and another license for its “YellowTail” drone, which is designed to study “persistent solar-powered flight.”
And while some proposed uses seem unequivocally positive—several agencies like the U.S. Forest Service and the California Department of Forestry want to use drones to help in fighting forest fires—other uses raise the problem of mission creep. For example, the University of Colorado (which the FAA said has received over 200 drone licenses) requested a license in 2008, not just to study meteorological conditions but also to aid “in the study of ad hoc wireless networks with [the drone] acting as communication relays.” And Otter Tail County, Minnesota wanted to use its drone, not only for “engineering and mapping” but also “as requested for law enforcement needs such as search warrant and search and rescue.”
Records Show FAA is Still Concerned About Safety but Reinforce Need for Greater Transparency
Luckily, these records show the FAA is still vigilant about safety. The agency denied Otter Tail's license application because the county couldn't meet the FAA's minimum requirements for pilots and observers and presented an "unacceptable risk" to the National Airspace System. The FAA also denied the Georgia Tech Police Department's application because its drone—the Hornet Micro—was not equipped with an approved sense-and-avoid system, even though Georgia Tech wanted to fly it in the middle of a major helicopter flight route.
However, once again, the records do not show that the FAA had any concerns about drone flights' impact on privacy and civil liberties. This is especially problematic when drone programs like Otter Tail's appear at first glance to be benign but later turn out to support the same problematic law enforcement uses that EFF has been increasingly concerned about. The FAA recently announced it wants to slow down drone integration into US skies due to privacy concerns. We are hopeful this indicates the agency is finally changing its views.
In the meantime, these records further support the need for full transparency in drone licensing. Before the public can properly assess privacy issues raised by drone flights, it must have access to the FAA's records as a whole. It's been over a year and a half since we first filed our FOIA request with the FAA, and we're still waiting for more than half of the agency's drone records. This is unacceptable. Also, law enforcement agencies with active drone licenses like the Orange County Sheriff's Department should not be able to withhold all important information about their drone flights under a specious claim that revealing the information could interfere with a law enforcement investigation. If a huge federal agency like the US Air Force can be up front about when and how it's using drones, so too should law enforcement agencies and other public entities, large and small, around the country. Without this information it's impossible to fully assess the issues raised by drone surveillance.
To download and review any of the documents, follow this link and click on the “FOIA Documents” tab in the middle of the page.
View EFF’s new Map of Domestic Drone Authorizations.
The world is a mysterious place.
Regina Dugan, the director of the Defense Advanced Research Projects Agency (DARPA), is quitting her Pentagon-funded post at the agency, trading it for a 'senior executive' position with internet giant Google.
It makes you wonder just how big and powerful Google is getting, and what they are actually into.
According to Donald Melanson:
The company (Google) has just reported $8.58 billion in gross revenue for the first quarter of 2011, which represents a 27 percent increase over the first quarter of last year, but is actually a bit less than analysts were expecting. That figure also doesn’t include the company’s so-called traffic acquisition costs, however, which totaled $2.04 billion for the quarter and bring the company’s actual revenue down to “just” $6.54 billion. Net income for the quarter was $2.3 billion, which represents a more modest gain from $1.96 billion in the first quarter of 2010. Also cutting into profits quite a bit was Google’s operating expenses, which were up a hefty 33 percent to $2.8 billion — a sizable chunk of which went to the nearly 2,000 new employees the company hired during the quarter.
That’s a pretty hefty take, must be nice.
A Wired excerpt reads:
Dugan’s emphasis on cybersecurity and next-generation manufacturing earned her strong support from the White House, winning her praise from the President and maintaining the agency’s budget even during a period of relative austerity at the Pentagon. Her push into crowdsourcing and outreach to the hacker community were eye-openers in the often-closed world of military R&D. Dugan also won over some military commanders by diverting some of her research cash from long-term, blue-sky projects to immediate battlefield concerns.
“There is a time and a place for daydreaming. But it is not at Darpa,” she told a congressional panel in March 2011 (.pdf). “Darpa is not the place of dreamlike musings or fantasies, not a place for self-indulging in wishes and hopes. Darpa is a place of doing.” For an agency that spent millions of dollars on shape-shifting robots, Mach 20 missiles, and mind-controlled limbs, it was something of a revolutionary statement.
The shift was only one of the reasons why Dugan was a highly polarizing figure within her agency, and in the larger defense research community. The Pentagon's Office of Inspector General (OIG) is also actively investigating hundreds of thousands of dollars' worth of contracts that Darpa gave out to RedX Defense — a bomb-detection firm that Dugan co-founded, and still partially owns. A separate audit is examining a sample of the 2,000 other research contracts Darpa has signed during Dugan's tenure, to "determine the adequacy of Darpa's selection, award, and administration of contracts and grants," according to a military memorandum.
Results of the Inspector General's work haven't been released and, according to her spokesman, the work had "no impact" on Dugan's decision: "The only reason she decided to leave the Pentagon was the allure of working at Google."
So what will Dugan really be working on for Google? Only time will tell.