CLIMATE HUSTLE, hosted by award-winning investigative journalist Marc Morano, reveals the history of climate scares including global cooling; debunks outrageous claims about temperatures, extreme weather, and the so-called “consensus;” exposes the increasingly shrill calls to “act immediately before it’s too late,” and in perhaps the film’s most important section, profiles key scientists who used to believe in climate alarm but have since converted to skepticism.
The movie had a red carpet premiere last December in Paris, and was shown last week in a Congressional briefing.
The film will be aired in 500 theaters in the U.S. (and one in Canada) on May 2 in a one night theater event. Locations and showtimes can be found [here].
Video clips, including trailers, interviews with Morano, and other footage, are found on the Climate Hustle web page [link]. A list of the scientists interviewed in the film is found [here].
An interesting interview with Marc Morano about the film is found [here].
Let me start by discussing my take on Marc Morano, and why I agreed to be interviewed for his movie. I first heard of Marc Morano circa 2006, from Joe Romm. Romm’s take on Morano was basically that of the climate ‘anti-Christ.’ I then put ClimateDepot on my list of blogs to monitor, to check up on what the ‘evil’ side in the climate debate was up to. I slowly built up an understanding of what Morano was doing, and I didn’t regard all of it as negative.
At some point (probably around the time of Climategate) I found myself on the same email list as Marc Morano, and we exchanged a few emails on issues of common interest. Circa 2010 (if my memory serves) I referred to Marc Morano as a ‘demagogue’ (I can’t find this anywhere on the internet). Marc was offended, we discussed this on email, and I raised my concern about his attacks on individual climate scientists that included publishing their email addresses, etc. We declared sort of a truce on this, and we agreed to point out to each other if we spotted inappropriate behaviors.
Subsequently, I’ve met Marc several times, and I have to say I like the guy. He’s smart and he’s funny (he pokes fun at both sides), and as far as I can tell he is honest. When he asked to interview me for the movie, I agreed to do it. The interview itself was really fun. I have no complaints about how I was portrayed in the movie.
I saw an earlier version of the film in November, prior to the Paris premiere. I wasn’t quite sure what to expect, but my initial reaction was relief that there were no goofy or incredible statements about the science. I found the movie to be pretty entertaining and even interesting, especially the narratives developed around silly alarmist statements made by scientists and politicians.
I thought the selection of featured scientists was quite good. It included some new faces that were quite effective – Caleb Rossiter, Robert Giegengack, Richard Tol, Daniel Botkin were especially good.
The budget for this was shoestring; I think it was less than $500K (somewhere I recall seeing a $20M budget for the Merchants of Doubt movie, though this may not be correct). Financials for the Merchants of Doubt movie: $192K at the box office, with an additional $114K from home video sales (JC note: the Merchants of Doubt movie was discussed in this previous post). It will be interesting to see how Climate Hustle does at the box office (and in subsequent home video sales).
I’m sure people will criticize me for participating in this, but then these are the people that have pretty much already sent me to Coventry, so . . . so what.
The key issues surrounding the movie are reflected in these quotes from Randy Olson and Bill Nye:
“I also think [Morano]’s a danger to the efforts of the climate movement”
“I think it will expose your point of view as very much in the minority and very much not in our national interest and the world’s interest.”
Chip Knappenberger tweeted re Nye’s ‘national interest’ statement: “Sounds like Nye should work for the State Department.”
Thomas Jefferson, third President of the United States, expressed this concept of democracy in 1801 in his First Inaugural Address. He said,
All . . . will bear in mind this sacred principle, that though the will of the majority is in all cases to prevail, that will to be rightful must be reasonable; that the minority possess their equal rights, which equal law must protect and to violate would be oppression.
In every genuine democracy today, majority rule is both endorsed and limited by the supreme law of the constitution, which protects the rights of individuals. Tyranny by minority over the majority is barred, but so is tyranny of the majority against minorities.
The perspective in Climate Hustle is arguably a minority perspective, at least in terms of world governments and a select group of scientists. Randy Olson comments on this:
There is a need for opposition voices and questioning. If anyone feels threatened by this movie it would have to mean you’re conceding that the communication skills of the environmental side are really bad — which actually they are, so maybe there should be some cause for concern.
So, I hope some of you will be able to see the movie on May 2, I look forward to your reactions.
The House Energy and Commerce Subcommittee held a hearing on a bill intended to streamline nuclear power regulatory rules, in order to allow safer and more efficient next-generation reactors to replace those being decommissioned.
The Advanced Nuclear Technology Development Act of 2016 (HR 4979), introduced by Representative Bob Latta (R-Ohio), would reduce regulatory hurdles for building advanced reactors and was discussed during a Friday hearing of the House Energy and Commerce Subcommittee. “Advanced” is defined as having significant improvements over contemporary nuclear reactors, such as better “inherent safety features, lower waste yields, greater fuel utilization, superior reliability, resistance to proliferation, and increased thermal efficiency.”
Currently, the Nuclear Regulatory Commission (NRC) demands a complete and final design from potential nuclear developers. This, combined with expensive reviews that developers pay for out of pocket, can saddle potential startups with a multimillion-dollar price tag and no assurance of ever being allowed to operate. The bipartisan panel’s tenor was that this needs to change.
“The future of the nuclear industry needs to start now, and the Nuclear Regulatory Commission needs to be able to provide the certainty the private sector needs to invest in innovative technologies,” Latta said at the hearing. “As the United States looks to the future, more energy will be needed, and nuclear power provides a reliable, clean baseload power option.
“Investment in new technology is already happening, with approximately 50 companies in this country working to develop the next generation of nuclear power. It’s time to ensure that the NRC provides a framework so that innovators and investors can prepare to apply to license these technologies.”
In order to create an environment conducive to investment in next-generation plants, HR 4979 would require the NRC to implement a new framework to streamline nuclear plant licensing, making it more efficient and cost-effective for investors by 2019. The commission would have to submit an implementation plan for such a framework within 180 days of the law’s enactment.
The US’s 99 operational nuclear power plants provide nearly 20 percent of the country’s electricity, but approximately 126,000 megawatts of nuclear generating capacity is set to be retired over the next 15 years. At the same time, the US Energy Information Administration forecasts a need for 287,000 megawatts of new electric capacity by 2040 – on top of the capacity needed to replace the retiring plants.
This reality, combined with the fact that nuclear power produces no greenhouse gases, has led environmentally conscious lawmakers on the committee to make common cause with innovation-minded colleagues worried about falling behind international competitors.
“Our nation will, by necessity, diminish its dependence on fossil fuels in order to fight climate change. And as we do so, we will need to turn more and more to nuclear power,” said Representative Jerry McNerney (D-California), who cosponsored the bill.
The hearing comes at a time of renewed anxiety about aging nuclear infrastructure. Earlier this month, a Manhattan Project-era nuclear storage facility in Washington state leaked up to 3,500 gallons of waste. However, the Washington Department of Ecology said there was no risk to the environment or nearby residents.
The world has had 30 years to assess the consequences for life on Earth of the disaster at Chernobyl.
This is about the same period during which I have studied the effects of radioactive pollution on the planet. It was the radioactive rain in the mountains of North Wales, where I lived in 1986, that brought me into this strange Alice in Wonderland area of science, where people and children die, and the global authorities, advised by physicists, deny what would be obvious to a child at school.
Chernobyl was mentioned as the star that fell to earth in the Book of Revelation. You may laugh, and it may be a coincidence, but the impact of the event has certainly been of biblical proportions. It is a story about the imposition by reductionist science on humanity of a version of the truth constructed from mathematics, not the only one, but perhaps the most important, since it involves the systematic destruction of the genetic basis of life. It is a story of lies, secrecy, power, assassination and money: the vast amounts of money that would be lost if the truth came out.
Shortly after the murder in 1992 of the German Green Party leader and anti-nuclear activist Petra Kelly, the late Prof Ernest Sternglass (the first of the radiation scientist/ activists) told me that Kelly had just struck a deal with a German TV company to run a series demonstrating the true awfulness of the immediate effects of radiation. He said: if the truth came out, all the Uranium and the billions of dollars in Uranium shares would turn into sand. So something like a cover-up had to happen, and it did, continuing the process of chicanery and control of information that began with the nuclear weapons tests of the 50s and 60s. In 1959, as the genetic effects of the atmospheric tests became apparent, the control of the understanding of radiation and health was wrested from the World Health Organization (WHO) and passed to the International Atomic Energy Agency (IAEA).
Since then, no research on the health effects of radiation has been carried out by WHO, which has led to a permanent vigil outside their headquarters in Geneva by the group Independent WHO.
The arguments about the health effects of Chernobyl have mostly centered on cancer. I won’t write much about cancer here. The study of radiation and cancer has many complications, including that the data are often suspect and that the time lag between the original radiation exposure and the cancer diagnosis can be 20 years, in which time a lot can happen, providing ammunition (and opportunity) for those denying causation. Predictions of the global cancer yield of the Chernobyl contamination have ranged from around a million (as predicted independently by the European Committee on Radiation Risk (ECRR), Rosalie Bertell, John Gofman and me), to about 600,000 (Alexey Yablokov), to fewer than a few thousand (the International Commission on Radiological Protection (ICRP), whose risk model is the current basis for all legal constraints on radioactive releases in Europe).
Cancer is caused by genetic damage but takes a while to show. More easily studied is the immediate and direct genetic damage, demonstrated in rates of congenital disease, birth defects and fetal abnormalities at birth, data which are easier to locate. The effects of a sudden increase in radioactive contamination are most easily seen in sudden increases in these indicators. You don’t have to wait 20 years. Out they come after nine months or in aborted fetuses with their heart and central nervous system defects, their lack of hands and feet, their huge hydrocephalic heads, their inside-out organs, their cleft palates, cyclops eyes and the whole range of dreadful and usually fatal conditions. There is no argument, and the affair is in the hands of doctors, not physicists. The physicists of the ICRP base their risk of genetic effects on experiments with mice.
I was in Kiev in 2000 at the WHO conference on Chernobyl. On the podium, conducting the theatricals, were the top men in the IAEA (Abel Gonzalez) and the United Nations Scientific Committee on the Effects of Atomic Radiation (UNSCEAR), represented by Canadian Norman Gentner. No effects can be seen—Abel Gonzalez. Internal radiation is the same as external—Norman Gentner. Happily you can watch this farce as it was videotaped by a Swiss team.
So: cut to the chase, to the fatal assault on the edifice of the current ICRP radiation risk model. In January 2016 Prof. Inge Schmitz-Feuerhake, Dr. Sebastian Pflugbeil and I published a major review paper on the genetic effects of radiation in the prestigious Korean peer-reviewed Journal of Environmental Health and Toxicology.
What the research shows is that in every corner of the ex-Soviet Union and Europe and even further afield where epidemiologists and pediatricians looked, there were large and statistically significant increases in congenital diseases at birth and in babies that were aborted.
The new article recalculates the genetic risk from radiation based upon reports from Germany, Turkey, Greece, Croatia, Egypt, Belarus, Ukraine, Russia, Hungary, Italy, the UK, Scotland, Wales, indeed everywhere anyone looked. There was a sudden jump in birth defects immediately following the contamination from Chernobyl, in proportion to exposure, but only up to the point where the exposure was so great the babies died in the womb or miscarried early in pregnancy. Thus, the relation between exposure level and effect was not a simple one where the birth defects increased with exposure: after a critical level of exposure they leveled off, or indeed fell. Also, since the contamination is still there, women are still giving birth to genetically damaged children some 30 years later. These results, published by many doctors, epidemiologists and researchers in many different journals, show that the effects occurred at levels of contamination that provided ‘doses’, that yardstick of radiation exposure invented by the ICRP, that were very low, often below the natural background dose.
It is worse: from research on the grandchildren of the nuclear test site veterans (also reviewed in the study) it is clear that these effects continue down the generations and will only disappear when an offspring dies without issue and leaves the genome of the human race. And many will, or already have: what causes genetic malformation in the infant causes, at a larger dose, fetal death and infertility. No one can have failed to notice the increase in human infertility that has occurred since the radioactive contamination of the planet began in the 1950s. As ex-US Atomic Energy Commission scientist John Gofman wrote in 1981, “the nuclear industry is waging a war on humanity.”
How can it be possible that the legislative system has got it so wrong? The answer is also given in the paper. It is that the concept of ‘dose’, which may be convenient for physicists as it is simple to compute, really does not address the situation where the substances that deliver the dose are inside the body, often bound chemically to the DNA, which is the acknowledged target for all these genetic effects. It shows that the human genome (and of course that of all life) is exquisitely sensitive to radiation damage from such internal exposures: to Strontium-90, Plutonium-239, Uranium and particularly to the nano-particles containing these radioactive elements which were produced when reactor No 4 blew apart.
The paper shows that the studies of the Hiroshima bomb survivors, upon which the current unsafe radiation laws are based, were faulty: the true comparison group, those not in the city at the time of the bombing, was abandoned when it began to look like there was a real effect. Was this stupidity? Was it a trick? Does someone have to go to jail?
Last month, Prof. Alexey Yablokov, Dr. Alex Rosen and I wrote to the editor of The Lancet, in a recorded delivery letter posted by the Independent WHO in Geneva, requesting space in that influential journal to draw attention to these truths and overturn the false and dangerous structures created by the physicists. Let us all hope that some good will finally come of the disaster—that the real legacy of Chernobyl will be the understanding of the true danger to health of radioactive pollution.
Note: The ECRR has focused on Chernobyl as a major data source for establishing the risk posed by radiation. It has concluded that the current ICRP model is in error by upwards of 300-fold and, for some types of internal exposure, by upwards of 1,000-fold. This means that over the period of the radioactive contamination, more than 60 million people have died from cancer as a result of the releases. The risk model is available on the website http://www.euradcom.org.
Christopher Busby is an expert on the health effects of ionizing radiation. He qualified in Chemical Physics at the Universities of London and Kent, and worked on the molecular physical chemistry of living cells for the Wellcome Foundation. Professor Busby is the Scientific Secretary of the European Committee on Radiation Risk based in Brussels and has edited many of its publications since its founding in 1998. He has held a number of honorary University positions, including Visiting Professor in the Faculty of Health of the University of Ulster. Busby currently lives in Riga, Latvia. See also: http://www.chrisbusbyexposed.org, http://www.greenaudit.org and http://www.llrc.org.
New York Attorney General Eric T. Schneiderman has accused ExxonMobil of lying to the public and investors about the risks of climate change, according to the NY Times. He has launched an investigation and issued a subpoena demanding extensive financial records, emails and other documents.
Massachusetts, the US Virgin Islands, and California are also investigating ExxonMobil. It is interesting that all but one of the attorneys general are Democrats. The remaining attorney general is Claude Walker of the US Virgin Islands who is a Green leaning Independent. So, this is a very partisan investigation, carefully coordinated with anti-fossil fuel activists. How much is there to it?
I’ve reviewed the 22 internal documents from 1977 to 1989 made available by ExxonMobil here. I’ve also reviewed what I could find on 104 publications (most are peer-reviewed) with ExxonMobil personnel as authors or co-authors. For some of the peer-reviewed articles I only had an abstract, and for some I could find the reference but no abstract or text without paying a fee. Below this short essay is an annotated bibliography of all 22 internal documents and 89 of the published papers. The documents are interesting reading; they fill in the history of modern climate science very well. Much of the current climate change debate was already being argued in the same way, and often with the same uncertainties, in 1977.
Between 1977 and the fifth IPCC report in 2013, ExxonMobil Corporate Research in New Jersey investigated the effect of increasing CO2 on climate. If they withheld or suppressed climate research from the public or shareholders, it is not apparent in these documents. Further, if they found any definitive evidence of an impending man-made climate catastrophe, I didn’t see it. The climate researchers at ExxonMobil participated in the second, third, fourth and fifth IPCC assessment reports, making major contributions in mapping the carbon cycle and in climate modeling. They calculated the potential impact of man-made CO2 in several publications. They investigated methods of sequestering CO2 and adapting to climate change. They also investigated several potential biofuels.
The internal documents are generally summaries of published work by outside researchers. Some of the documents are notes from climate conferences or meetings with the DOE (Department of Energy). For many of the internal documents one has to read carefully to separate what is being said by the writer and what he is reporting from outside research. Exxon (and later ExxonMobil) did some original research, particularly making ocean and atmospheric measurements of CO2 from their tankers. But, most of what they produced was by funding research at Columbia University or the Lamont-Doherty Earth Observatory. All of their internal research and the work at Columbia was published as far as I can tell, so it is difficult to accuse them of hiding anything from the public or shareholders.
At the heart of Schneiderman’s accusation, according to the NY Times, is a list of statements made by ExxonMobil executives that he believes contradict the internal memos summarized below. The statements are reported here. In fact, the internal memos and documents listed below do not contradict the ExxonMobil executives in any way. The internal documents and publications all clearly describe the considerable uncertainties in climate science and align with the executives’ statements. Go to the link to see all of them; two of the most notable are quoted below:
Mr. Ken Cohen, ExxonMobil Vice President for Public and Government Affairs, 2015 (Blog Post):
“What we have understood from the outset – and something which over-the-top activists fail to acknowledge — is that climate change is an enormously complicated subject.
“The climate and mankind’s connection to it are among the most complex topics scientists have ever studied, with a seemingly endless number of variables to consider over an incredibly long timespan.”
Duane Levine, Exxon’s manager of Science and Strategy Development, 1989 (Internal Document #21 below)
“In spite of the rush by some participants in the greenhouse debate to declare that the science has demonstrated the existence of [man-made global warming] today, I do not believe such is the case. Enhanced greenhouse is still deeply imbedded in scientific uncertainty, and we will require substantial additional investigation to determine the degree to which its effects might be experienced in the future.”
Even if there were a contradiction between the executives and the ExxonMobil climate researchers, who is to say which of them is wrong? Free speech is a fundamental individual right in the USA and executives are allowed to disagree with their employees. As University of Tennessee Law Professor Glenn Harlan Reynolds has said in USA Today :
Federal law makes it a felony “for two or more persons to agree together to injure, threaten, or intimidate a person in any state, territory or district in the free exercise or enjoyment of any right or privilege secured to him/her by the Constitution or the laws of the United States, (or because of his/her having exercised the same).”
“I wonder if U.S. Virgin Islands Attorney General Claude Walker, or California Attorney General Kamala Harris, or New York Attorney General Eric Schneiderman have read this federal statute. Because what they’re doing looks like a concerted scheme to restrict the First Amendment free speech rights of people they don’t agree with. They should look up 18 U.S.C. Sec. 241.”
ExxonMobil has filed court papers in Texas seeking to block a subpoena issued by the attorney general of the US Virgin Islands Claude Walker. They argue that the subpoena is an unwarranted fishing expedition into ExxonMobil’s internal records.
Environmentalist groups, like the Rockefeller Family Fund and 350.org, are trying to organize a legal attack against ExxonMobil patterned on the attack many organizations led against the tobacco companies. They feel that their presumed imminent man-made climate disaster is being ignored and they want to make ExxonMobil a scapegoat. As Lee Wasserman (Rockefeller Family Fund) said recently, “It’s not really about Exxon.”
Mr. Schneiderman may have made the “error of assuming facts that are not in evidence.” He assumes that man-made greenhouse gases are a significant factor in climate change and that the resulting enhanced climate change is dangerous. Neither assertion has been proven. He also assumes that Exxon’s early research proved these assertions to be true, with little or no doubt. Therefore, Mr. Schneiderman believes the Exxon executives’ claim that there is significant uncertainty around the idea of dangerous man-made climate change is a lie. I do not see any proof of dangerous climate change, man-made or otherwise, in any of the documents below. In peer-reviewed document #55 below, Flannery et al. (1985) suggest that the effect of CO2 on climate, based on geological data from the Cretaceous Period, is 50% or less. Internal document #3 indicates concern that there is a “potential problem amid all the scientific uncertainties.”
Along this line of thought, the ExxonMobil court filing against Mr. Walker and the US Virgin Islands says in part:
“… [ExxonMobil] has “widely and publicly confirmed” that it recognizes “that the risk of climate change and its potential impacts on society and ecosystems may prove to be significant.”
Brian Flannery states in published document #66 below in 2001:
“Although we know the human emissions fairly well, we don’t know the natural emissions well at all. Added to this uncertainty is the fact that natural emissions can change as a result of long-term climate changes.”
The key problem is that ExxonMobil management and most, if not all, of their researchers do not think the idea of dangerous man-made climate change has been proven. Further, one of them said in internal document #3 below: “we have time to evaluate the uncertainties even in a worse-case scenario.” This is still true, especially considering the very slow pace of warming over the last twenty years.
In internal document #3 below, they discuss the potential effect of doubling CO2 in the atmosphere and the discussion is instructive. The CO2 level prior to the industrial revolution (roughly 1840-1850) is unknown. They give two possibilities (260-270 ppm or 290-300 ppm). The temperature increase from 1850 to the end of 2015 is roughly 0.85°C from the HADCRUT 4 dataset and the 5th IPCC Assessment reports 0.85°C from 1880 to 2012. The Exxon researchers did not think a clear anthropogenic signal was detectable in 1979, because at that time the total temperature increase from 1850 had not exceeded 0.5°C, their assumed natural variability. So, they thought man-made warming might be clearly detected by the year 2000.
We are now well past the year 2000 and, according to the data shown in their Table 6 (Internal Document #3), we are on track with their most benign scenario of a temperature increase of 1.3° to 1.7°C per doubling of CO2 (ECS). This assumes an initial concentration of CO2 of 265 to 295 ppm and a natural variability of ±0.5°C. The initial CO2 concentration assumption is reasonable; the assumption of 0.5°C for natural variability may be too low. However, if the assumptions are true, they probably eliminate the possibility of higher climate sensitivity to CO2 (ECS > 2°C). This is also supported by recent empirical estimates of ECS. There are considerable uncertainties in this approach, but they are important to recognize: we don’t know the CO2 level when we started emitting a lot of fossil fuel CO2, we don’t know the net effect on our climate, and we can’t be certain we have seen any impact of man-made CO2 on our climate to date.
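The back-of-the-envelope check behind that "on track" judgment can be sketched in a few lines of Python. This is an illustrative calculation only: it assumes the standard logarithmic relation between CO2 concentration and equilibrium warming (warming = ECS × log2(C/C0)), treats the observed warming as if it were the equilibrium response (ignoring ocean lag), and takes roughly 400 ppm as the 2015 concentration. The ECS and initial-concentration ranges are those from Internal Document #3 discussed above.

```python
import math

def warming_from_co2(ecs, c0, c_now):
    """Equilibrium warming (deg C) for a CO2 rise from c0 to c_now (ppm),
    assuming the standard logarithmic forcing relation:
    dT = ECS * log2(c_now / c0)."""
    return ecs * math.log2(c_now / c0)

# Bracket the benign scenario: ECS of 1.3-1.7 C per doubling,
# pre-industrial CO2 of 265-295 ppm, ~400 ppm by 2015 (illustrative).
for ecs in (1.3, 1.7):
    for c0 in (265.0, 295.0):
        dt = warming_from_co2(ecs, c0, 400.0)
        print(f"ECS={ecs} C, C0={c0:.0f} ppm -> warming {dt:.2f} C")
```

Under these assumptions the computed warming spans roughly 0.6°C to 1.0°C, which brackets the observed 0.85°C; that is the sense in which the low-sensitivity scenario is consistent with the record, subject to the ±0.5°C natural variability caveat.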
Even Brian Flannery, one of the Exxon researchers who has been deeply involved in the IPCC process, stated in internal document #22 below: “While uncertainty exists, science supports the basic idea that man’s actions pose a serious potential threat to climate.” This is the most alarmist statement I could find anywhere, but it still says “potential” and notes that uncertainty exists.
In peer-reviewed paper #25 below, Dr. Kheshgi and Dr. White state in 2001:
“Many previous claims that anthropogenically caused climate change has been detected have utilized models in which uncertainties in the values of some parameters have been neglected (Santer et al. 1996b). In section 5 we have incorporated known parameter uncertainties for an illustrative example by using the proposed methodology for distributed parameter hypothesis testing. The results clearly show that incorporation of parameter uncertainty can greatly affect the conclusions of a statistical study. In particular, inclusion of uncertainty in aerosols forcing would likely lead to rejection of the hypothesis of anthropogenically caused climate change for our illustrative model …”
They are concerned here and in other papers, that the GCM (global circulation climate models) have used fixed parameters for their calculations for variables that actually have a great deal of uncertainty. By fixing these variables across many models, the modelers produce a narrower range of outcomes giving a misleading appearance of consistency and accuracy that does not actually exist.
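The fixed-parameter effect can be made concrete with a deliberately toy Monte Carlo sketch. Every number here is hypothetical and chosen only for illustration, not drawn from any actual GCM: pinning an uncertain input such as aerosol forcing to a single central value collapses the spread of outcomes, while sampling it over its plausible range reveals the spread the fixed treatment hides.

```python
import random
import statistics

random.seed(0)

def toy_warming(co2_forcing, aerosol_forcing, sensitivity=0.5):
    """Toy energy-balance estimate: warming proportional to net forcing.
    All values are illustrative, in W/m^2 and deg C per W/m^2."""
    return sensitivity * (co2_forcing + aerosol_forcing)

N = 10_000
co2 = 1.8  # W/m^2, illustrative CO2 forcing

# Case 1: aerosol forcing fixed at its central value across all runs.
fixed = [toy_warming(co2, -0.9) for _ in range(N)]

# Case 2: aerosol forcing sampled from a wide (hypothetical) uncertainty range.
sampled = [toy_warming(co2, random.uniform(-1.9, -0.1)) for _ in range(N)]

print(f"fixed parameter:   spread = {statistics.pstdev(fixed):.3f} C")
print(f"sampled parameter: spread = {statistics.pstdev(sampled):.3f} C")
```

The fixed-parameter ensemble has zero spread by construction, while the sampled ensemble shows a standard deviation of roughly a quarter degree; a statistical test run only against the artificially narrow ensemble would look far more decisive than the underlying physics warrants, which is Kheshgi and White's point.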
As Professor Judith Curry has often said there is an uncertainty monster at the science-policy interface. The ExxonMobil scientists are very good, they write well and their superiors in ExxonMobil understand what they are saying. Man-made climate change is a potential problem, but it is shrouded in uncertainty because it is an extremely complex research topic with countless variables. The internal and published documents below show that Exxon has worked hard to define the uncertainty and they have even succeeded in reducing the uncertainty in some areas, especially in the carbon cycle. But still, the remaining uncertainty is huge and it covers the range from zero anthropogenic effect to perhaps 4° or 5°C (see publication #7, Kheshgi and White 1993) to this day. Not much different than in 1977 when they got started.
I’ll conclude this post with a quote from internal document #11, the 1982 Exxon Consensus statement. I think it speaks well for ExxonMobil and puts Schneiderman (and many in the media) to shame:
“As we discussed in the August 24 meeting, there is the potential for our research to attract the attention of the popular news media because of the connection between Exxon’s major business and the role of fossil fuel combustion in contributing to the increase of atmospheric CO2. Despite the fact that our results are in accord with most major researchers in the field and are subject to the same uncertainties, it was recognized that it is possible for these results to be distorted or blown out of proportion.
Nevertheless the consensus position was that Exxon should continue to conduct scientific research in this area because of its potential importance in affecting future energy scenarios and to provide Exxon with the credentials required to speak with authority in this area. Furthermore our ethical responsibility is to permit the publication of our research in the scientific literature; indeed to do otherwise would be a breach of Exxon’s public position and ethical credo on honesty and integrity.”
This is the only thing I found in the internal memos that was not published. In 1982 they thought the media might distort their research results or blow them out of proportion (the Uncertainty Monster). Well, that certainly happened. For science to work properly, research outcomes cannot be dictated. All interested parties must be allowed to investigate the problem and publish their results. They must have access to data, computer programs and models that are publicly funded. But, above all, they should not be punished, jailed, intimidated or sued because they are skeptical of a popular scientific thesis. They should be judged only on the quality of their scientific work and not who they work for or who funds them.
This post is excerpted from a longer post The Exxon Climate Papers, that includes links and annotations to 89 documents, including internal documents and published papers.
Bio notes: Andy May worked for Exxon from 1980 to 1985. During part of that time he worked on the Natuna D-Alpha project discussed in some of these documents. He did not work at either the Florham Park, New Jersey Research laboratory or the Linden, New Jersey laboratory where the climate research was done. The views expressed in this essay and bibliography are his own. This was written in his spare time and he received no compensation from anyone for writing and posting it.
WASHINGTON – The US government has sent Special Envoy Amos Hochstein to Kuwait, Qatar, Egypt and Israel to discuss falling oil prices after the failure of the Doha energy talks, the US Department of State announced in a media note on Monday.
“Special Envoy and Coordinator for International Energy Affairs Amos J. Hochstein will be travelling to the region to meet with key interlocutors in Jerusalem, Cairo, Kuwait City and Doha,” the note stated.
As global oil prices remain near record lows, and the United States emerges as a global exporter of liquefied natural gas, Hochstein will be seeking to strengthen US relationships with partners in the region as well as discuss strategies for addressing the market realities of the energy sector, the note explained.
Hochstein will discuss energy security issues in Israel, power generation issues in Egypt, and plans to invest in developing new oil fields and build additional oil refineries in Kuwait, the State Department pointed out.
In Qatar, Hochstein will give a speech emphasizing US support for liquefied natural gas development and its role in reducing global carbon emissions, the note said.
The last few years have seen an alarming increase in claims that tribal peoples have been shown to be more violent than we are. This is supposed to prove that our ancestors were also brutal savages. Such a message has profound implications for how we view human nature – whether or not we see war as innate to the human condition and so, by extension, broadly unavoidable. It also underpins how industrialized society treats those it sees as “backward.” In reality, though, it’s nothing more than an old colonialist belief, masquerading once again as “science.” There’s no evidence to support it.
The American anthropologist, Napoleon Chagnon, is invariably cited in support of this brutal savage myth. He studied the Yanomami Indians of Amazonia from the 1960s onwards (he spells the tribe “Yanomamö”) and you’d be hard pressed to find a book or article on tribal violence which doesn’t refer to his work. Popular writers such as Steven Pinker and Jared Diamond frequently make much of Chagnon’s thesis, so it’s worth giving a thumbnail sketch of why in reality it proves little about the Yanomami, and nothing about human evolution.
First, it’s important to dispatch a red herring from the murky cauldron being cooked up by the brutal savage promoters: They often point to Darkness in El Dorado, a book by Patrick Tierney, which attacked Chagnon’s work, but went too far. Tierney raised the possibility that one of Chagnon’s colleagues may have deliberately introduced a deadly measles epidemic to the Indians. That simply wasn’t true: In fact, the epidemic was inadvertently started by American missionaries. That Tierney was wrong on this single point is now used to claim that all his and other writers’ criticisms of Chagnon have been discredited. They haven’t. In any case, were a single error deemed to negate a whole thesis, then pretty much all science, as well as journalism, the law and a lot else, falls apart.
Anyway, let’s set Tierney aside. For decades, Napoleon Chagnon’s findings have been rejected by almost all of the many other anthropologists who have worked with the Yanomami, and in most countries his work simply isn’t taught. He had rather faded from anthropology in the United States too, until his recent resurgence as the darling of establishment attitudes.
According to Chagnon, brutality is a key driver of human evolution. How did he come upon such a disturbing “discovery”? Basically, he counted how many Yanomami men boasted that they were unokai, and he told us this means they’ve killed people. He then crunched the numbers to show that unokai are as successful in love as they are in war, and that by fathering more children than non-killers, they ensure the next generation is as murderous as they are.
As with any sweeping conclusion in human sciences, there are numerous known unknowns. For example, did Yanomami raiding in the 1960s increase through growing pressure from settler or missionary incursions? (After all, Chagnon used the extremist New Tribes Mission to get into the Yanomami.) Did the influx of outside trade goods, including guns, play a role? Such impacts are difficult to analyze, though some believe they were clearly significant.
But the most significant fact, the extraordinary single error that, in this case, does destroy Chagnon’s thesis in one swoop, is something Chagnon doesn’t tell us – unokai does not just mean “killer.” It’s also the status claimed by everyone who’s ever shot an arrow into a dead body during an inter-village raid (most raids stop after one killing). It describes many other individuals as well, including men who’ve killed an animal thought to be a kind of shamanic embodiment of a human, as well as stay-at-homes who try and cast lethal spells. It even includes those who’ve participated in a ritual during their future wife’s puberty (she also becomes unokai). In other words, many unokai haven’t killed anyone. With this simple fact, every one of Chagnon’s conclusions about “killers” falls apart.
But supposing he was right after all, what would his figures show? What percentage of the population are we talking about? Here the brew gets fishier: Chagnon plays fast and loose with his own data. His autobiography, “Noble Savages,” says that “killers” number “approximately 45 percent of all the living adult males.” Yet even according to his own (shaky) data, that is simply not true: Chagnon’s own figures do not show that 45 percent of men are unokai. He has grossly inflated his percentage by ignoring everyone younger than 25, an age group with far fewer claiming unokai status. Were they included, his percentage would plummet.
Chagnon has been asked about this manipulation for years. When he bothers to reply, he claims he’ll publish new supporting data. We’re still waiting.
So there you have it: That’s the poster boy of the “scientific proof” behind the myth of the brutal savage. The fact that Chagnon’s thesis has been repeatedly demolished in scholarly publications for decades is simply ignored by those who want him to be right. For them to dismiss the many Chagnon critics, to pretend that science is on their side, and to chorus sneeringly “noble savages” whenever Chagnon is criticized, is just facile propaganda.
By the way, if you want to know how many unokai (supposed “killers”) Chagnon managed to winkle out during a quarter century of fieldwork with one of Amazonia’s largest tribes – numbering several thousand – the answer is just 137 men. They could all comfortably fit into a single car on the New York subway. How many of those were actually killers? We’ll never know.
That’s the size of the sample group supposedly proving that tribal peoples live in a state of chronic warfare and, by throwing in more red herrings, that our ancestors did so too. The latter assertion is widely promulgated. It goes like this: The Yanomami are a small-scale tribal (non-state) hunting society, our ancestors were the same, so the Yanomami can teach us about our ancestors because they live in a similar way. And yet the theory fails on several points: For example, no one knows the degree to which our distant ancestors scavenged for meat, rather than actively hunted it. That’s quite a different approach to life, and the Yanomami wouldn’t dream of doing it. In any case, a moment’s informed reflection tells you that no one who inhabited the ice age plains of Eurasia, for example, lived remotely like the tropical rainforest Yanomami of Chagnon’s 1960s.
The real story is more obvious, prosaic and simpler than the Chagnon-created “fierce people” and their supposed “chronic” warfare. The truth is that there are some tribal peoples who have a belligerent reputation, others known for avoiding violence as much as possible, and lots in between. That’s nothing to do with any grasping at mythic noble savages, it’s what anthropologists have actually found.
Despite the growing mythology, the archeological record reveals very little evidence of past violence either (until the growth of big settlements, starting around 10,000 years ago). Researchers Jonathan Haas and Matthew Piscitelli studied descriptions of 2,930 earlier skeletons from 900 different sites worldwide. Apart from a single massacre site of two dozen people in the Sudan, they found “but a tiny number of cases of violence in skeletal remains,” and noted how just four sites in Europe “are mentioned over and over by multiple authors” striving to demonstrate the opposite of what the evidence actually reveals. The archeological record before 10,000 years ago, they conclude, in fact “shows that warfare was the rare exception.”
Much of the other “proof” for the brutal savage advanced by Steven Pinker, Jared Diamond, and other champions of Chagnon, is rife with the selection and manipulation of facts to fit a desired conclusion.
To call this “science” is both laughable and dangerous. These men are desperate to persuade us that they’ve got “proof” for their opinions, which isn’t surprising as they’re nothing more – opinions based on a narrow and essentially self-serving political point of view. They have proved nothing, except to those who want to believe them.
Does it matter? Yes, very much. How we think of tribal peoples dictates how we treat them. Proponents of Chagnon seek to reestablish the myth of the brutal savage which once underpinned colonialism and its land theft. It’s an essentially racist fiction which belongs in the 19th century and, like a flat earth, should have been discarded generations ago. It’s the myth at the heart of the destruction of tribal peoples and it must be challenged.
It’s not just deadly for tribal peoples: It’s dangerous for all of us. False claims that killing is a proven key factor in our evolution are used to justify, even ennoble, the savagery inherent in today’s world. The brutal savage may be a largely invented creature among tribal peoples, but he is certainly dangerously and visibly real much closer to home.
In her oily, cringe-inducing and totally predictable speech to AIPAC on March 21, Hillary Clinton argued that, since (according to her) “anti-Semitism is on the rise across the world… we must repudiate all efforts to malign, isolate and undermine Israel and the Jewish people.” In other words, we must do what we can to shut down any legitimate criticism of Israeli policy. A reliable means of doing so is to conflate said criticism with anti-Semitism and thus vilify the critic in question. This particular strategy has been perfected and institutionalized for decades, and was perhaps best deconstructed by Norman Finkelstein in “The Holocaust Industry.”
By dismissing BDS advocates as irrational, Jew-hating troublemakers, Hillary Clinton, the great bastion of liberalism and progress, makes common cause with the jingoist far right (where she actually belongs). But she also makes common cause with a good chunk of US academia, where criticism of Israel and its atrocities is often met with censorship and intimidation. In a comprehensive report on the subject, Palestine Legal details the extent of the suppression: “From January 2014 through June 2015, Palestine Legal interviewed hundreds of students, academics and community activists who reported being censored, punished, subjected to disciplinary proceedings, questioned, threatened, or falsely accused of anti-Semitism or supporting terrorism for their speech in support of Palestinian rights or criticism of Israeli policies.”
Needless to say, this is a gross violation of First Amendment rights, and it needs to be challenged at every opportunity. The university system is based on the principles of free inquiry and unfettered discourse; absent the open exchange of conflicting ideas and opinions, academia is essentially worthless. When certain viewpoints are institutionally favored, colleges cease to be places of learning and instead become places of indoctrination. Who could desire such a circumstance? Well, apart from authoritarians, fascists, religious fanatics (including Zionists) and Hillary Clinton, it’s becoming more and more apparent that “liberal” student activists do.
On college campuses across the country, students are mobilizing and protesting against institutionalized discrimination. Few on the left would argue that this is a negative development. After all, if nothing else these students are contesting authority—a noble and worthy exercise in itself. However, what do we say when fundamental democratic values like free speech are subordinated to an ideology? This is the precarious situation in which many student activists currently find themselves. It’s bizarre: presumably, the students protesting at places like Yale and the University of Missouri (to take two high-profile examples from last year) would stand with the BDS activists who are targeted and censored by pro-Israel forces. And yet these same students—exhibiting a degree of schizophrenia—would have their own ideological opponents treated in the same fashion.
Take a recent incident. At Emory College in Atlanta, some students used chalk to write “Trump 2016”—and other similarly anodyne messages—throughout the campus. Curiously (or perhaps not at this point), controversy erupted when a number of students declared that they felt physically threatened by the chalk drawings, which were considered by some to be acts of violence. “I thought we were having a KKK rally on campus,” one student reportedly told the Daily Beast. She “legitimately feared for [her] life.” Another student said that “some of us were expecting shootings” and thus “feared walking alone.” They demanded that the Emory administration identify the perpetrators, presumably so some sort of disciplinary action could take place—perhaps a public flogging. When the administration responded with a tepid defense of the anonymous chalkers’ right to free speech, the offended shifted their ire onto the college itself, for failing to provide an adequate safe space. All of which is par for the course by now.
So here we have a conflation of Donald Trump supporters with homicidal white supremacists; of political campaigning with physical violence. This is not dissimilar to the conflation of BDS with anti-Semitism, which plagues Palestinian rights activists everywhere. In fact, it’s closer to the profoundly stupid idea that all Muslims endorse terrorism—a notion that the offended students at Emory surely find abhorrent. There is one obvious distinction that must be made: the censorship of BDS on college campuses comes from the top, while the attempted censorship of Donald Trump supporters comes from the comparatively impotent student body. The former case is a much graver threat to free speech, but that is not an excuse to ignore the latter. Soon enough the student body will hold positions of authority.
ESP seems to be a trait common to advocates of censorship. For example, in a recent pro-Israel memo from the Regents of the University of California, it is contended that “opposition to Zionism often is expressed in ways that are not simply statements of disagreement over politics and policy, but also assertions of prejudice and intolerance toward Jewish people and culture.” Translation: the mind readers at the Regents of the University of California can tell when critics of Israel are actually rabid Jew-haters, and they will adjudicate such cases accordingly. Similarly, the would-be student censors use their clairvoyance to judge when an opinion they don’t like is motivated by race hatred or some other form of bigotry. Support for Donald Trump, as we have already seen, implies a desire to kill minorities. It is therefore no different from real physical violence.
What would happen if an entire college was founded on this line of thinking? A recent petition drawn up by some student activists at Western Washington University spells it out for us. The group calls themselves the Student Assembly for Power and Liberation, which is more than a little ominous-sounding. In their own words: “We are a growing group of students from a multitude of communities and disciplines around campus combatting the systemic oppression embedded within our society that is inevitably upheld through this institution, as it was created to uphold white supremacy at its core.”
Note the aggressively bureaucratic language (the grammar of which unravels throughout the petition). Prolixity of this sort is often employed by postmodernist academics—in whose tradition these students are working—for reasons that aren’t entirely clear. Noam Chomsky once argued that, in general, postmodernism “allows people to take a radical stance—more radical than thou—but to be completely dissociated from anything that’s happening, for many reasons. One reason is nobody can understand a word they’re saying. So they’re already dissociated. It’s kind of like a private lingo.”
Obviously, Michel Foucault these kids are not, but the postmodernist influence is plain to see. It’s like that smug kid in your Creative Writing workshop whose stories are all cheap Bukowski imitations. They don’t really have any idea what they’re doing, but they’re bursting with self-satisfaction nevertheless.
What these students want, and what their petition is meant to facilitate, is the creation of a brand new college: the College of Power and Liberation. The function of this hypothetical college would be the “development of academic programs that are committed to social justice.” The first step in realizing this goal is “a cluster hire of ten tenure-track faculty to teach at the college.” Fair enough. However, there is something of a catch: “the Student Assembly for Power and Liberation will have direct input and decision-making power over the hiring of faculty for the college.”
That’s right—the professors at the College of Power and Liberation are to be hired by the students attending that college. The “power,” then, is to reside entirely in the hands of the student body. Naturally, they also reserve the right to take “disciplinary action” against “everyone in a teaching position within the university.” And it gets weirder. Demanded in part three of the petition is “the creation and implementation of a 15 persxn [sic] paid student committee, The Office for Social Transformation.”
The misspelling of “person” here is deliberate, as is the occasional misspelling of “history” (hxstory) later on. The implication is that these nouns are gendered (person, history) and thus microaggressive residue of an outmoded patriarchal system of thought. Therefore they have been changed. This, I suppose, is an example of the “de-colonial work” for which the College of Power needs “an annually dedicated revenue of $45,000.”
The Office for Social Transformation doesn’t just sound Orwellian—it quite literally is. Here is its express purpose: “to monitor, document, and archive all racist, anti-black, transphobic, cissexist, misogynistic, ablest, homophobic, islamophobic, xenophobic, anti-semitism [sic], and otherwise oppressive behavior on campus.” This oppressive behavior, the petition continues, is regularly found “in faculty curriculum.” By that I assume they mean curriculum including books with controversial subject matter, for instance the novels of James Baldwin and Mark Twain. So much for the English professors who wish to teach the “Adventures of Huckleberry Finn”—a terribly oppressive book.
The petition does not explicitly propose thought crime legislation, but it doesn’t rule it out either. One inevitably wonders about the criteria by which a person’s behavior is judged oppressive (i.e., punishable). For example, what becomes of the student or faculty member who is caught reading Kipling? Surely owning a copy of The Cantos is grounds for disciplinary action—Ezra Pound was a bona fide fascist. Hemingway was anti-Semitic and homophobic: it follows that The Sun Also Rises is beyond the pale. Tolstoy abused his wife, and so reading War and Peace implies an endorsement of misogyny.
Simone de Beauvoir once appealed to the censors of her time: “Must we burn [the Marquis de] Sade?” Indeed we must—and most others, for that matter.
Never fear, though: the College of Power and Liberation has a “three-strike disciplinary system that corresponds to citations that are processed.” Thank heavens for the three-strike disciplinary system, without which people might be fired and expelled unreasonably.
You get the picture. The mini despots comprising the so-called Student Assembly for Power and Liberation are concerned very much with Power and very little with Liberation. Their ultimate goal is to establish a totalitarian microcosm of a state, very far removed from reality, in which power and wealth are concentrated in the hands of a few self-righteous 20-somethings with delusions of grandeur. Because the First Amendment is overrated anyway.
The Holocaust Industry would be proud. And that’s what makes all of this so distressing. If so-called liberal student activists believe in censorship (and many of them evidently do), who can we rely on to challenge the unconstitutional suppression of BDS activism on college campuses? It necessarily devolves into a battle of hypocrites: the right rationalizes their brand of censorship while condemning the left’s, and vice versa. The reality is that both need to be condemned, because both represent explicit attacks on basic democratic principles. The crucial difference, I suppose, is that the Zionists (who know exactly what they’re doing) must be fought, while the overzealous students (who don’t) need merely to be educated. We can and should do both at once.
Michael Howard is a freelance writer from Buffalo, NY. He can be reached at firstname.lastname@example.org .
Those who fear the effects of radiation always focus on cancer. But the most frightening and serious consequences of radiation are genetic.
Cancer is just one small bleak reflection, a flash of cold light from a facet of the iceberg of genetic damage to life on Earth constructed from human folly, power-lust and stupidity.
Cancer is a genetic disease expressed at the cellular level. But genetic effects are transmitted across the generations.
It was Herman Joseph Muller, an American scientist, who discovered the most serious effects of ionizing radiation – hereditary defects in the descendants of exposed parents – in the 1920s. He exposed fruit flies – drosophila – to X-rays and found malformations and other disorders in the following generations.
He concluded from his investigations that low dose exposure, and therefore even natural background radiation, is mutagenic and there is no harmless dose range for heritable effects or for cancer induction. His work was honoured by the Nobel Prize for medicine in 1946.
In the 1950s Muller warned about the effects on the human genetic pool caused by the production of low level radioactive contamination from atmospheric tests. I have his original 1950 report, which is a rare item now.
Muller, as a famous expert in radiation, was designated as a speaker at the ‘Atoms for Peace’ conference in Geneva in 1955, where President Eisenhower announced the large-scale use of nuclear energy (‘too cheap to meter’). But when the organisers became aware that Muller had warned about the deterioration of the human gene pool by the contamination of the planet from the weapon test fallout, his invitation was cancelled.
The Wonderful Wizard of Oz
The protective legislation of western governments does, of course, concede that radiation has such genetic effects. The laws regulating exposure are based on the risk model of the International Commission on Radiological Protection, the ICRP.
The rules say that no one is allowed to receive more than 1mSv of dose in a year from man-made activities. The ICRP’s scientific model for heritable effects is based on mice; this is because ICRP states that there is no evidence that radiation causes any heritable effects in humans.
The dose required to double the risk of heritable damage according to the ICRP is more than 1000mSv. This reliance on mice has followed from the studies of the offspring of those who were present in Hiroshima and Nagasaki by the Japanese/US Atomic Bomb Casualty Commission (ABCC).
These studies were begun in 1952 and assembled groups of people in the bombed cities to compare cancer rates and also birth outcomes in those exposed at different levels according to their distance from the position of the bomb detonation, the hypocentre. The entire citadel of radiation risk is built upon this ABCC rock.
But the rock was constructed with smoke and mirrors and everything about the epidemiology is false. There have been a number of criticisms of the A-Bomb Lifespan Studies of cancer: it was a survivor population, doses were external, residual contamination was ignored, it began seven years after the event, the original zero dose control group was abandoned as being “too healthy”, and many others.
But we are concerned here with the heritable effects, the birth defects, the congenital malformations, the miscarriages and stillbirths. The problem here is that for heritable damage effects to show up, there have to be births. As you increase the exposures to radiation, you quickly obtain sterility and there are no pregnancies. We found this in the nuclear test veterans.
Then at lower doses, damaged sperm results in damaged foetuses and miscarriages. When both mother and father are exposed, there are miscarriages and stillbirths before you see any birth defects. So the dose-response relation is not linear: at the higher doses there are no births in which the effects could appear. The effects all appear at the lowest doses.
Bad epidemiology is easily manipulated
As far as the ABCC studies are concerned, there is another serious (and I would say dishonest) error in the epidemiology. The ABCC researchers discarded their control population in favour of using the low dose group as a control.
This is such bad epidemiology that it should leave any honest reviewer breathless. But there were no reviewers. Or at least no-one seemed to care. Perhaps they didn’t dig deeply enough. In passing, the same method is now being used to assess risk in the huge INWORKS nuclear worker studies and no-one has raised this point there either.
Anyway, the ABCC scientists in charge of the genetic studies found the same levels of adverse birth outcomes in their exposed and their control groups, and concluded that there was no effect from the radiation.
Based on this nonsense, ICRP writes in their latest 2007 risk model, ICRP103, Appendix B.2.01, that “Radiation induced heritable disease has not been demonstrated in human populations.”
But it has. If we move away from this USA controlled, nuclear military complex controlled A-Bomb study and look in the real world we find that Muller was right to be worried. The radioactive contamination of the planet has killed tens of millions of babies, caused a huge increase in infertility, and increased the genetic burden of the human race and life on earth.
And now the truth is out!
In January of this year Prof. Inge Schmitz-Feuerhake, of the University of Bremen, Dr Sebastian Pflugbeil of the German Society for Radioprotection and I published a Special Topic paper in the prestigious peer-reviewed journal Environmental Health and Toxicology. The title is: Genetic Radiation Risks – a neglected topic in the Low Dose debate.
In this paper we collected together all the evidence which has been published outside the single Japanese ABCC study in order to calculate the true genetic effects of radiation exposure. The outcome was sobering, but not unexpected.
Using evidence ranging from Chernobyl to the nuclear test veterans to the offspring of radiographers, we showed clearly that a dose of 1mSv from internal contamination was able to cause a 50% increase in congenital malformations. This identifies an error of a factor of 1,000 in the ICRP model and in the current legislation, and we say so explicitly. The conclusion of the paper states:
Genetically induced malformations, cancers, and numerous other health effects in the children of populations who were exposed to low doses of ionizing radiation have been unequivocally demonstrated in scientific investigations.
Using data from Chernobyl effects we find a new Excess Relative Risk (ERR) for Congenital malformations of 0.5 per mSv at 1mSv falling to 0.1 per mSv at 10mSv exposure and thereafter remaining roughly constant. This is for mixed fission products as defined though external exposure to Cs-137.
Results show that current radiation risk models fail to predict or explain the many observations and should be abandoned. Further research and analysis of previous data is suggested, but prior assumptions of linear dose response, assumptions that internal exposures can be modelled using external risk factors, that chronic and acute exposures give comparable risks and finally dependence on interpretations of the high dose ABCC studies are all seen to be unsafe procedures.
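As a rough arithmetic illustration of how the ERR figures quoted above translate into relative risk, here is a minimal sketch. The two anchor values (0.5 per mSv at 1mSv, 0.1 per mSv at or above 10mSv) are taken from the summary above; the linear interpolation between them is my own assumption for illustration only, not something the paper specifies.

```python
def err_per_msv(dose_msv):
    """Excess Relative Risk per mSv for congenital malformations,
    per the figures quoted above: 0.5/mSv at 1 mSv, falling to
    0.1/mSv at 10 mSv and roughly constant thereafter.
    The interpolation between the two quoted points is assumed."""
    if dose_msv <= 1.0:
        return 0.5
    if dose_msv >= 10.0:
        return 0.1
    # hypothetical linear interpolation between the quoted anchors
    return 0.5 + (dose_msv - 1.0) * (0.1 - 0.5) / (10.0 - 1.0)

def relative_risk(dose_msv):
    """Relative risk RR = 1 + ERR(dose) x dose."""
    return 1.0 + err_per_msv(dose_msv) * dose_msv

# At 1 mSv: RR = 1 + 0.5 * 1 = 1.5, i.e. the 50% increase in
# congenital malformations cited earlier in this piece.
```

Note that the ICRP doubling dose of more than 1000mSv mentioned above corresponds to an ERR of under 0.001 per mSv, which is how the factor-of-1,000 discrepancy arises.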
Most of the evidence is from effects reported in countries contaminated by the Chernobyl accident, not only in Belarus and Ukraine but in wider Europe where doses were less than 1mSv. Other evidence we referred to was from the offspring of the nuclear test veterans.
In a study I published in 2014 of the offspring of members of the British Nuclear Test Veterans Association (BNTVA) we saw a nine-fold excess of congenital disease in the children but also, and unexpectedly, an eight-fold excess in the grandchildren. This raises a new and frightening spectre not anticipated by Herman Muller.
In the last 15 years it has become clear that radiation causes genomic instability: experiments in the laboratory and animal studies show that radiation exposure throws some kind of genetic switch which causes a non-specific increase in general mutation rates.
Up until these genomic instability discoveries it was thought that genetic processes followed the laws of Gregor Mendel: there were specific dominant and recessive gene mutations that were passed down the generations and became diluted through a binomial process as offspring married away.
But radiation scientists and cancer researchers could not square the background mutation rate with the increased risks of cancer with age: the numbers didn’t fit. The discovery of the genomic instability process was the answer to the puzzle: it introduces enough random mutations to explain the observations.
It is this that supplies the horrifying explanation for the continuing high risk of birth defects in Fallujah and other areas where the exposures occurred ten to twenty years ago. Similar several generation effects have been seen in animals from Chernobyl.
Neonatal mortality in the nuclear bomb era
So where does that leave us? What can we do with this? What can we conclude? How can this change anything? Let’s start by looking at the effects of the biggest single injection of these radioactive contaminants, the atmospheric weapons tests of the period 1952 to 1963.
If these caused increases in birth defects and genetic damage we should see something in the data. We do. The results are chilling. If babies are damaged they die at or shortly before birth. This will show up in the vital statistics data of any country which collects and publishes it.
In Fig 1 (above right) I show a graph of first-day (neonatal) mortality rates in the USA from 1936 to 1985. You can see that as social conditions improved there was a fall in the rates between the beginning and end of the period, and we can estimate what the background should have been using a statistical process called regression.
The expected background is shown as a thin blue line. Also superimposed are the concentration of Strontium-90 in milk (in red) and its concentration in the bones of dead infants (in blue). The graph is taken from a paper by the Canadian paediatrician Robin Whyte in the British Medical Journal in 1992, which shows the same effect in neonatal (one-month) mortality and stillbirths in both the USA and the United Kingdom. The doses from the Strontium-90 were less than 0.5 mSv.
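The "expected background" referred to above is simply a trend fitted to the mortality series, against which excess deaths are measured. A minimal sketch of that kind of calculation, using invented numbers rather than the data behind Fig 1, might look like this:

```python
# Sketch of estimating an expected background trend by least-squares
# regression, then measuring excess mortality against it.
# The numbers below are invented for illustration; they are NOT the
# data from Whyte's 1992 BMJ paper.
def linear_fit(xs, ys):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

years = [1940, 1950, 1960, 1970, 1980]
rate = [12.0, 10.0, 9.5, 6.0, 4.0]  # deaths per 1,000 live births (invented)

a, b = linear_fit(years, rate)          # b < 0: rates fall over time
expected_1960 = a + b * 1960            # the fitted "background" for 1960
excess_1960 = rate[2] - expected_1960   # observed minus expected
```

In a real analysis of this kind, a sustained positive excess above the fitted trend during the bomb-test years is what would be read as the fallout signal.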
This is in line with what we found in our paper from Chernobyl and the other examples of human exposures. The issue was first raised by the late Prof Ernest Sternglass, one of the first of the radiation warrior-scientists and a friend of mine. The cover-ups and denials of these effects are part of the biggest public health scandal in human history.
It continues and has come to a venue near you: our study of Hinkley Point showed significantly increased infant mortality downwind of the plant at Burnham-on-Sea, as I wrote in The Ecologist.
It’s official – genetic damage in children is an indicator of harmful exposures to the father
As to what we can do with this new peer-reviewed evidence: we can (and we shall) put it before the Nuclear Test Veterans' Pensions Appeals hearings in the Royal Courts of Justice, tabled for three weeks from 14 June 2016 before a tribunal headed by the High Court judge Sir Nicholas Blake.
I represent two of the appellants in this hearing and will bring in the genetic damage in the children and grandchildren as evidence of genetic damage in the father.
We are calling Inge Schmitz-Feuerhake, the author of the genetic paper, as one expert witness; the judge has conceded that genetic damage in the children is an indicator of harmful exposures to the father. He has also made a disclosure order requiring the University of Dundee to release the veteran questionnaires, which it has now done.
Finally, I must share with you a window into the mind-set of the false scientists who work for the military and the nuclear operators. As the fallout Strontium-90 built up in milk and in children's bones and was being measured, they renamed the units of contamination (picoCuries of Sr-90 per gram of Calcium) 'Sunshine Units'.
Can you imagine? I would ship them all to Nuremberg for that alone.
Retraction Watch: You write that as evidence-based medicine “became more influential, it was also hijacked to serve agendas different from what it originally aimed for.” Can you elaborate?
John Ioannidis: As I describe in the paper, “evidence-based medicine” has become a very common term that is misused and abused by eminence-based experts and conflicted stakeholders who want to support their views and their products, without caring much about the integrity, transparency, and unbiasedness of science.
RW: You also write that evidence-based medicine “still remains an unmet goal, worthy to be attained.” Can you explain further?
JI: The commentary that I wrote gives a personal confession perspective on whether evidence-based medicine currently fulfills the wonderful definition that David Sackett came up with: “integrating individual clinical expertise with the best external evidence”. This is a goal that is clearly worthy to be attained, but, in my view, I don’t see that this has happened yet. Each of us may ponder whether the goal has been attained. I suspect that many/most will agree that we still have a lot of work to do.
RW: You describe yourself as a “failure.” What do you mean?
JI: Well, I still know next to nothing, even though I am always struggling to obtain more solid evidence and even though I always want to learn more. If you add what are probably over a thousand rejections (of papers, grant proposals, nominations, and other sorrowful academic paraphernalia) during my career to-date, I think I can qualify for a solid failure. Nevertheless, I still greatly enjoy my work in science and in evidence-based medicine.
RW: You say that your first grant, which you applied for 17 years ago, was “not even rejected.” Tell us about that grant.
JI: It was a randomized controlled trial of antibiotics versus placebo for acute sinusitis. Hundreds of millions of people were treated with antibiotics without good evidence back then, and hundreds of millions of people continue to be treated with antibiotics even nowadays even though most of them would not need antibiotics. I sent in the application to a public funding agency, but have not heard back yet. Probably they felt that requesting funding for a randomized trial and not going to the industry for such funds was a joke. Many public funding agencies are accustomed to funding only research that clearly has no direct relevance to important, real-life questions, so perhaps they didn’t know where to place my application.
RW: You write that clinical evidence is “becoming an industry advertisement tool” and that “much ‘basic’ science [is] becoming an annex to Las Vegas casinos.” Provocative — what do you mean?
JI: Since clinical research that can generate useful clinical evidence has fallen off the radar screen of many/most public funders, it is largely left up to the industry to support it. The sales and marketing departments in most companies are more powerful than their R&D departments. Hence, the design, conduct, reporting, and dissemination of this clinical evidence becomes an advertisement tool. As for “basic” research, as I explain in the paper, the current system favors PIs who make a primary focus of their career how to absorb more money. Success in obtaining (more) funding in a fiercely competitive world is what counts the most. Given that much “basic” research is justifiably unpredictable in terms of its yield, we are encouraging aggressive gamblers. Unfortunately, it is not gambling for getting major, high-risk discoveries (which would have been nice), it is gambling for simply getting more money.
RW: Studying what ails science doesn’t make you popular with other researchers — until they want to publish with you, of course, as you point out in your piece. But those criticisms can also lump you in with those that you describe as “pseudo-scientists and dogmatists… trying to exploit individuals and populations and attack science.” How do you differentiate your own work?
JI: I definitely can’t complain for lack of popularity. I feel privileged to have worked with thousands of other scientists over the years and to have learnt from them. It is not possible to make everybody happy all the time, but the work of my team is aiming to protect science, defend the scientific method, question dogma, and enhance the capability and efficiency of research methodology and research practices. In this regard, it is at the very opposite pole than those who want to attack science, question the scientific method and promote dogmas.
RW: You’re worried that Cochrane Collaboration reviews — the apex of evidence-based medicine — “may cause harm by giving credibility to biased studies of vested interests through otherwise respected systematic reviews.” Why, and what’s the alternative?
JI: A systematic review that combines biased pieces of evidence may unfortunately give another seal of authority to that biased evidence. Systematic reviews may sometimes be most helpful if, instead of focusing on the summary of the evidence, they highlight the biases that are involved and what needs to be done to remedy the state-of-the-evidence in the given field. This often requires a bird's eye view where hundreds and thousands of systematic reviews and meta-analyses are examined, because then the patterns of bias are much easier to discern as they apply across diverse topics in the same or multiple disciplines. Much of the time, the solution is that, instead of waiting to piece together fragments of biased evidence retrospectively after the fact, one needs to act pre-emptively and make sure that the evidence to be produced will be clinically meaningful and unbiased, to the extent possible. Meta-analyses should become primary research, where studies are designed with the explicit anticipation that they are part of an overarching planned cumulative meta-analysis.
RW: What are your hopes for evidence-based medicine moving forward?
JI: The right ideas are there, and there are many superb scientists and clinicians who want to do the right thing, so I am always cautiously hopeful. We should keep trying.
RW: The essay is really personal and full of interesting stories. We'd like to end with a quote from when you were an early career researcher questioning entrenched research attitudes in Europe:
A senior professor of cardiology told a friend of mine that I should not be too outspoken, otherwise Albanian hit men may strangle me in my office. I replied that they should make sure to get correct instructions to my office – turn left when they come up the stairs. I would feel remorse, if the assassins entered the wrong office and strangled the wrong person.
A new United Nations report published this February has criticised prevalent approaches to countering ‘radicalisation’ as ineffective, conceptually flawed, and more likely to reinforce extremist narratives than prevent them.
The report to the UN Human Rights Council is authored by Ben Emmerson QC, the UN Special Rapporteur on Counter Terrorism and Human Rights.
Emmerson is a leading British lawyer, deputy High Court Judge, and British judge on the Residual Mechanism of the International Criminal Tribunal for Rwanda and the International Criminal Tribunal for the Former Yugoslavia.
His new report to the UN criticises “[m]any programmes directed at radicalisation” for being “based on a simplistic understanding of the process as a fixed trajectory to violent extremism with identifiable markers along the way.”
Despite volumes of research and huge expenditures, he points out, “there are no authoritative statistical data on the pathways towards individual radicalisation.”
To make matters worse, Emmerson concludes, “States have tended to focus on those [approaches] that are most appealing to them, shying away from the more complex issues, including political issues such as foreign policy and transnational conflicts.” This has led to a misguided “focus on religious ideology as the driver of terrorism and extremism”, and an escalating resort to repressive and discriminatory measures targeted at Muslim communities.
Far from preventing extremism, this is fuelling it. Emmerson refers to an earlier warning from the UN Human Rights Commissioner that “any more repressive [an] approach would have the reverse effect of reinforcing the narrative of extremist ideologies”, and warns that this is precisely what is now coming to pass.
80% of terrorism studies are bullshit
Yet this important UN report barely scratches the surface of how truly crap the state of the science is when it comes to understanding what ‘radicalisation’ even is, let alone countering it.
Over thirty years ago, Alex P. Schmid, former Officer-in-Charge of the UN's Terrorism Prevention Branch, and Albert Jongman of Leiden University's PIOOM Foundation (Interdisciplinary Research Programme on Root Causes of Human Rights Violations) reviewed over 6,000 academic studies of terrorism published between 1968 and 1988. Shockingly, as they explained in their seminal book Political Terrorism, they found that "perhaps as much as 80 percent of the literature is not research-based in any rigorous sense."
Of course, that’s a very polite, typically academic way of putting it.
The thing is, when an academic tells you that your work is “not research-based in any rigorous sense”, what she’s basically saying is that your work has very little, if any, scholarly merit. It doesn’t actually make an original contribution to knowledge.
In short, for all intents and purposes, it’s bullshit.
When evidence is lacking: recycle
In any other discipline, academic research that is “not research-based in any rigorous sense” would mean you fail to get your degree — let alone fail to get published in a peer-reviewed journal.
Not in terrorism studies.
In terrorism studies, decades of ‘scholarship’ that is “not research-based in any rigorous sense” is being continuously recycled to regurgitate ‘new’ theories and policy recommendations which, however, have little if any evidential support.
By 2001, Professor Andrew Silke of the University of East London — a counter terrorism specialist who advises the UN and the UK government’s Cabinet Office — wrote in the journal Terrorism and Political Violence that the situation had still barely improved.
Despite decades of scholarship, he concluded, terrorism studies still struggled “in its efforts to explain terrorism or to provide findings of genuine predictive value.”
Most of the ‘scientific’ literature on terrorism, Silke found, recycled information from previous secondary sources, with only about 20% of publications offering genuinely original and novel data.
When Silke updated his analysis of the field in his contribution to the 2009 Routledge anthology, Critical Terrorism Studies: A New Research Agenda, he found that despite some marginal progress, the field was still characterised largely by an over-reliance on secondary sources and a dearth of empirical data.
Numerous other terrorism experts have admitted this problem. A 2006 report by the NATO Programme for Security in Science, Tangled Roots: Social and Psychological Factors in the Genesis of Terrorism, examined 1,535 academic papers on terrorism published between 2000 and 2004. It concluded:
“… a careful review reveals that genuine new data was reported in less than 10% of that subgroup.”
Other reviews have been even more damning. That year, a major study of the literature in Campbell Systematic Reviews concluded that “only 3% of articles from peer-reviewed sources appeared to be based on some form of empirical analysis.” Another 1% consisted of case studies, and the remaining 96% consisted essentially of “thought pieces.”
Which means that a whopping 96% was recycled bullshit.
That was ten years ago, so have things gotten better since then?
Pseudo-science echo chamber
In 2011, Professor Adam Dolnik, Director of Terrorism Studies at the Centre for Transnational Crime Prevention (CTCP) at the University of Wollongong in Australia, observed in Perspectives on Terrorism that the continual dependence on secondary sources means that terrorism studies represents a “highly unreliable closed and circular research system, functioning in a constantly reinforcing feedback loop.”
The continual transmission of contradictory truisms within the field has meant that terrorism experts are not really advancing knowledge of terrorism or extremism, and how to deal with it — they're just repeating the same stale assumptions and prescriptions again and again.
Of course, that's not to say that all terrorism research is useless. There is good research going on — but it is few and far between, and the best work doesn't necessarily influence policy.
In any other discipline, the chronic inability to produce meaningful and original contributions to knowledge would justify wholesale dismissal as the work of cranks and pseudo-scientists.
The one saving grace is that when the best counter-terrorism specialists do apply scientific standards to the field, among their most consistent findings is that the field is full of very serious, beard-stroking, speculative conjecture dressed up as 'theory.'
In 2013, a background note by the International Centre for Counter-Terrorism in The Hague conceded that, despite some important improvements in the gathering of empirical data:
“A lack of research based on primary sources has been one of the major impediments to progress in the field of (counter-) terrorism studies… As numerous leading experts have warned, the consequences of an overreliance on secondary sources of information, such as newspapers, has led to a great amount of theorising based on a perilously small empirical foundation.”
That year, the Scientific Approach to Finding Indicators of and Responses to Radicalisation (SAFIRE) project of the leading Pentagon contractor RAND Europe similarly found that despite offering “numerous insights into the process of violent radicalisation… only a minority of the literature consisted of empirical and/or causal research, which could explain the causes of violent extremism and terrorism.”
Ironically, this has the effect that all these wonderful “insights” may really just be reflections of the prejudices of those involved in the research:
“In other words, one can only have limited confidence that the results from the literature accurately reflect the characteristics of the violent extremist and terrorist population, and not the assumptions and biases of those that have reported the characteristics of violent extremists and terrorists to the researchers.”
This is another polite, academic way of admitting that the bulk of the literature is full of unsubstantiated, self-referential bullshit — while also trying to project a semblance of scholarly credibility.
“The lack of causal research in relation to factors associated with violent extremism and terrorism suggests that the findings from the literature cannot, on the whole, be used to explain what drives people to violent extremism or terrorism or to predict outcomes,” concluded the SAFIRE report.
Translation: the, ahem, “findings” from the literature cannot, on the whole, be treated as actual scientific “findings” that can “explain” or “predict” anything concerning extremism or terrorism.
Forensic psychiatrist and former CIA operations officer Marc Sageman was far more harsh in his 2014 review published in Terrorism and Political Violence.
“Despite over a decade of government funding and thousands of newcomers to the field of terrorist research, we are no closer to answering the simple question of ‘What leads a person to turn to political violence?’” he lamented.
He blamed this “state of stagnation” on government funding of academic research while still withholding access to sensitive primary source information guarded by the intelligence community:
“This has led to an explosion of speculations with little empirical grounding in academia, which has the methodological skills but lacks data for a major breakthrough… Nor has the intelligence community been able to achieve any breakthrough because of the structure and dynamic of this community and its lack of methodological rigor. This prevents creative analysis of terrorism protected from political concerns.”
Sageman’s answer is for governments to give academics access to sensitive data gleaned from the intelligence community, such as evidence from interrogations of detained extremists and terrorists. That’s assuming that, somehow, such intelligence is not itself compromised or politicised.
This was, of course, the case with much of the 9/11 Commission Report. My first book, The War on Freedom: How & Why America was Attacked, September 11, 2001, was among 99 books made available to the 9/11 Commissioners as part of their investigations. It was also the first book read by the Jersey Girls, the well-known group of 9/11 widows who had played a key role in the 9/11 Family Steering Committee, which set out key lines of inquiry for the Commission to explore.
Yet according to NBC News, more than a quarter of the final report’s footnotes refer to interrogations of detainees acquired by torture, including three alleged senior al-Qaeda leaders who were repeatedly waterboarded.
“This has troubling implications for the credibility of the commission’s final report. In intelligence circles, testimony obtained through torture is typically discredited; research shows that people will say anything under threat of intense physical pain.”
One of the main such sources is 'Abu Zubaida', whose real name is Zain al-Abidin Mohamed Hussein. Zubaida in particular is repeatedly referenced throughout the 9/11 Commission Report, where he is described as a key operational planner of the 9/11 attacks, a senior al-Qaeda lieutenant and operations chief, a close associate of Osama bin Laden, and so on.
But according to Daniel Coleman, a 31-year veteran FBI agent intimately familiar with Abu Zubaida’s case, the terror suspect was, in fact, mentally ill:
“Looking at other evidence, including a serious head injury that Abu Zubaida had suffered years earlier, Coleman and others at the FBI believed that he had severe mental problems that called his credibility into question. ‘They all knew he was crazy, and they knew he was always on the damn phone,’ Coleman said, referring to al-Qaeda operatives. ‘You think they’re going to tell him anything?’… Much of the threat information provided by Abu Zubaida, Coleman said, ‘was crap.’”
Coleman would also tell Pulitzer Prize-winning reporter Ron Suskind:
“This guy is insane, certifiable, split personality.”
Zubaida’s claims under torture led to multiple new arrests of other alleged senior al-Qaeda leaders and whole new reams of investigation, much of which also ended up as part of the narrative put out in the 9/11 Commission Report. Dozens of other ‘terror suspects’ detained at Guantanamo (and eventually released without charge) had been linked to him.
Yet by 2009, even the US government was forced to concede that its entire narrative about Zubaida was basically, largely, bullshit. According to the transcript of court proceedings that year:
“The Government has not contended in this proceeding that Petitioner [Abu Zubaida] had any direct role in or advance knowledge of the terrorist attacks of September 11, 2001.”
Er, say what?
The US government went on to concede that Zubaida was not involved in any other previous terrorist attacks, was not actually a member of al-Qaeda, and was not even a member of the Taliban.
“… for purposes of this proceeding the Government has not contended that Petitioner had any personal involvement in planning or executing either the 1998 embassy bombings in Nairobi, Kenya, and Dar-es-Salaam, Tanzania, or the attacks of September 11, 2001…
“… the Government has not contended in this proceeding that Petitioner was a member of al-Qaida or otherwise formally identified with al-Qaida… Respondent does not contend that Petitioner was a ‘member’ of al-Qaida in the sense of having sworn bayat (allegiance) or having otherwise satisfied any formal criteria that either Petitioner or al-Qaida may have considered necessary for inclusion… Nor is the Government detaining Petitioner based on any allegation that Petitioner views himself as part of al-Qaida as a matter of subjective personal conscience, ideology, or worldview… The Government does not contend in its factual return that Petitioner was a ‘member’ of the Taliban…”
The government thus effectively erased the entirety of its own narrative about the operational planning behind the 9/11 terrorist attacks, rendering the vast bulk of the 9/11 Commission Report’s findings null and void.
No wonder Zubaida remains indefinitely detained without charge in Guantanamo — imagine the impact of a trial forcing the US government to release him: it would amount to an official admission that the White House-sanctioned narrative of the 9/11 operation is largely a torture-driven fantasy of CIA and Bush administration sadists.
So what exactly, then, is the US government contending? Not much, according to the court transcript:
“Rather, Respondent’s [the US government’s] detention of Petitioner [Abu Zubaida] is based on conduct and actions that establish Petitioner was ‘part of’ hostile forces and ‘substantially supported’ those forces.”
Whatever this means — and the US government has still refused to charge Zubaida, who is currently detained in Guantanamo — it is somewhat consistent with Suskind’s conclusion:
“Zubaydah was a logistics man, a fixer, mostly for a niggling array of personal items. Like the guy you call who handles the company health plan, or benefits, or the people in human resources. There was almost nothing ‘operational’ in his portfolio. That was handled by the management team. He wasn’t one of them.”
So essentially, this is the sort of deeply compromised ‘intelligence’ that terrorism scholars imagine might help them do more scientifically robust research.
Further translation: it’s pretty much all bullshit. So please give us more funding to try and turn this decades of mounting bullshit into gold; and some more ‘intelligence’ because at least then we can maintain a semblance of credibility by pointing to ‘primary sources.’
Dithering over Violent Extremism (DVE)
No wonder, then, that the policy recommendations that emerge from the field appear to have achieved very little — if not failed dramatically.
This is unambiguously clear from the practical results of a burgeoning sub-field of terrorism studies: preventing or countering violent extremism (PVE or CVE).
Understandably, given governments’ eagerness to be seen by their publics to be ‘doing something’ about terrorism, a veritable industry has ballooned around the urgent task of creating community-level strategies that stop people from being radicalised in the first place.
But like the wider field of terrorism studies, the scientific evidence that prevailing PVE/CVE strategies actually work is thin, to say the least.
You won’t hear government spokespersons admitting this in any formal capacity — but internal assessments and reviews tend to show that privately, the bankruptcy of existing CVE programmes is widely, if reluctantly, recognised.
There have been at least two internal evaluations of the UK Government's flagship Prevent programme — the Channel Project — which tasks public sector workers with detecting signs of potential extremism in schools, hospitals, local authorities, universities, and other institutions. Under Channel, referrals to the police of individuals who appear to be 'vulnerable' to radicalisation result in an assessment process, followed by a social intervention of some sort to prevent radicalisation.
Unfortunately, the government’s formal evaluations of Channel have not been published. But in 2010, internal Home Office slides admitted:
“… hard evidence of intervention projects capability [was] not yet established.”
That year, I’d been invited by a senior police officer in charge of the Channel Project to an internal exercise, to see how the programme worked, and to provide advice on its implementation. One of the problems that stood out, and that I’d highlighted in previous conversations with the officer, is that the model of ‘vulnerability’ to radicalisation was so vague as to potentially encompass most normal people.
'Risk indicators' included amorphous generalities like sudden changes in behaviour — for instance, in the way you dress or appear — an interest in politics or foreign affairs, a sudden interest in religion or activism, or even a shift in a person's circle of friends. Basically, most teenagers.
Legitimate terror suspect
A senior Home Office counter-terrorism official at the exercise with responsibility for managing the national Prevent strategy told me that the vast majority of referrals to Channel would, after being assessed, be dismissed as not requiring any intervention.
“In fact, we see this as a success of the programme,” he said enthusiastically. “The more referrals that come in and are dismissed, the greater the success.”
I looked at him skeptically.
"What you're saying is that millions of pounds of taxpayers' money will be spent on training, organising and assessing pointless referrals of endless numbers of perfectly innocent people who are not at risk of radicalisation," I said.
The Home Office official nodded, still trying to look enthusiastic.
“Meanwhile,” I continued, “people who are actually developing a real interest in violent extremist ideologies, or even planning terrorist activity, are going to adapt very quickly and do everything they can to avoid being referred.”
The official, and his colleagues who were present, were all genuinely perplexed, but the visage of enthusiasm had now been replaced by the sort of strained look of someone who needs to fart really badly but is trying not to let it out loudly.
“Okay, but what else are we supposed to do?” was what the Home Office official eventually told me in exasperation after we discussed this obvious conundrum.
That was six years ago. Sadly, no one at Channel seems to have taken my advice.
A parliamentary inquiry into both Prevent and Channel by the Select Committee for Communities and Local Government did draw extensively on my criticisms as part of its 2010 report, but the Tory-led government simply ignored most of its recommendations.
From a presentation by Dr. Brian Hughes, School of Psychology, National University of Ireland
The lack of evidence that Channel could ever do more than alienate the very communities that need to be engaged applies to the UK Prevent programme more generally.
"It remains exceedingly difficult to gauge the real success of Prevent," concluded a 2015 study in the Journal of Terrorism Research, published by St. Andrews University's Centre for the Study of Terrorism and Political Violence. The paper went on to admit that:
“… very few effective tools aside from anecdotal evidence exist to measure one’s vulnerability to becoming involved in extremism and the effect certain programs may have at reversing such processes… it is unclear as to whether such programmes have actually been successful in deterring extremist ideology.”
Reinforcing Violent Extremism (RVE)
But it’s not just the UK Prevent programme — the entire CVE field is a confused and somewhat redundant mess.
Eight years ago, a team of criminologists at George Mason University systematically reviewed global counter-terrorism strategies. To describe their findings as ‘dire’ would be a disservice to the gravity of their analysis:
“Overall, we found an almost complete absence of evaluation research on counter-terrorism strategies and conclude that counter-terrorism policy is not evidence-based… there is an almost complete absence of evaluation research on counter-terrorism strategies. This is startling given the enormous increases in the development and use of counter-terrorism programs, as well as spending on counter-terrorism activity.”
The few evaluations that did exist either proved that prevailing counter-terrorism strategies didn’t work, or that they were worsening the risk of radicalisation:
“Even more disconcerting was the nature of the evaluations we did find; some programs were shown to either have no discernible effect on terrorism or lead to increases in terrorism…There has been a proliferation of counter-terrorism programs and policies as well as massive increases in expenditures toward combating terrorism. Yet, we know almost nothing about the effectiveness of any of these programs or continue to use programs that we know are ineffective or harmful.”
Yet there has been little, if any, progress since then. In 2014, Steven Heydemann, Vice President of Applied Research on Conflict at the US government-funded United States Institute of Peace, offered the following scathing evaluation of the CVE sector. Despite rapid growth, he wrote:
“CVE has struggled to establish a clear and compelling definition as a field; has evolved into a catch-all category that lacks precision and focus; reflects problematic assumptions about the conditions that promote violent extremism; and has not been able to draw clear boundaries that distinguish CVE programmes from those of other, well-established fields, such as development and poverty alleviation, governance and democratisation, and education.”
One of the most comprehensive reviews of the state of the research on countering extremism was undertaken last year by the National Consortium for the Study of Terrorism and Responses to Terrorism (START) at the University of Maryland. The review was commissioned by the US Department of Homeland Security (DHS) and the US Department of Defense’s (DoD) Strategic Multilayer Assessment (SMA) office, and the resulting document reported straight back to those departments.
The report’s findings are disturbing.
“Most literature addressing counter‐terrorism, counter‐insurgency, and/or countering violent extremism does not include empirical evaluations of specific policies,” it concluded.
“While multiple authors hypothesise that countering extremist narratives is critical to reduce the appeal of violent extremism, there has been very little scholarship in terms of empirical studies to test the efficacy of counter‐narratives in general or of specific strategic communication programs or content.”
As if to hammer this message home, the START report added that there is a serious “shortage of empirical analyses of counter‐terrorism, counter‐insurgency, and CVE policies and programs.” And generally, “within the literature as a whole, there is a shortage of rigorously designed empirical analyses.”
One of START’s flagship projects is the Influencing Violent Extremist Organisations (I-VEO) Knowledge Matrix, an online tool for counter-terrorism practitioners which “identifies and gauges the level of empirical support for more than 180 hypotheses about influencing VEOs [Violent Extremist Organisations], from positive incentives to punitive actions.”
Launched in 2012, the project was described by its lead investigator, Gary Ackerman, as “explicitly designed to be a one-stop shop for capturing and synthesising the breadth of existing scientific knowledge related to influencing VEOs and to highlight those areas where, despite often vigorous assertion, the empirical evidence is lacking.”
Unfortunately, three years later, this US government-funded project showed that most CVE approaches completely lack evidential support.
“As previously found under the I‐VEO effort, many hypotheses regarding influence operations have either not been empirically tested or have been supported merely through anecdotes,” concluded the 2015 START literature review. “This is especially true regarding non‐coercive strategies.”
Amazingly, out of the 183 counter-terrorism methodologies explored in the database, 50 “did not have any relevant empirical evidence to support or contradict” them, while “fifty-seven of the hypotheses had multiple qualitative and/or quantitative studies with contradictory conclusions.”
In the end, there were only six counter-terrorism hypotheses that received “the highest level of empirical support.”
So utterly crap, in other words, is the state of the scientific literature on countering extremism, that policymakers have been left floundering — which is perhaps why they are quite literally making shit up as they go along. In the understated words of the START report to the DoD and DHS:
“The state of play in the academic literature regarding counter‐terrorism, counter‐insurgency, and countering violent extremism creates difficulties for providing robust guidance to policymakers regarding policy options.”
Translation: basically, most of our “findings” consist of unsubstantiated bullshit, so it’s difficult to give governments decent counter-extremism advice which isn’t, well, unsubstantiated bullshit.
The few empirically robust findings that can be extracted from the available data, however, raise serious questions about the direction of prevailing PVE/CVE approaches — and whether they might actually be making things worse.
“A majority of studies with evaluation find that use of coercive methods such as repression (especially when used exclusively and indiscriminately) tend to produce backlash effects,” noted the START report to the Pentagon. “Indiscriminate” repression in particular is “unlikely to work in the long‐term and may produce backlash effects that result in more, rather than less, political violence.”
No shit, Sherlock.
The report also found that the practice known as ‘target hardening’ — visibly strengthening the security of a building or infrastructure to prevent or reduce the risk of a successful attack — “may make attacks on those targets less likely”, but may also “result in violent extremists shifting their targeting strategy rather than reducing their overall level of violence.”
In other words, more intrusive security measures like increasing anti-terror powers, flooding the streets with more cops, and trying to monitor everyone will only lead terrorists to adapt, by innovating new techniques to avoid detection.
How not to make friends and influence extremists
It comes as no surprise, then, that in his report, the UN Special Rapporteur on Counter-Terrorism, Ben Emmerson, documents how PVE/CVE strategies have so far disproportionately targeted and alienated Muslim communities.
Criticising “the elasticity of the term ‘violent extremism’, and the lack of clarity on what leads individuals to embrace violent extremism”, Emmerson concluded that as a consequence, “a wide array of legislative, administrative and policy measures are pursued, which can have a serious negative impact on manifold human rights. In addition, targeted measures to counter violent extremism can stigmatise groups and communities, undermining the support that governments need to successfully implement their programmes, and having a counter-productive effect. They can also be used to limit the space in which civil society operates, and may have a discriminatory impact on women and children.”
This has in some cases served to exacerbate “conditions conducive” to terrorism or violent extremism, including “prolonged unresolved conflicts, dehumanisation of victims of terrorism, lack of rule of law and violations of human rights, ethnic, national and religious discrimination, political exclusion, socio-economic marginalisation and lack of good governance.”
The other empirically robust findings of START’s I-VEO Knowledge Matrix are worth noting. One is that: “If the adversary sees that there are no benefits to restraint, it will work against the deterring party.”
Another is that where there are “multiple” violent extremist organisations, negotiating with just one of them is unlikely to work, as it “may lead to increased bad behavior by VEOs left out of negotiations.”
In other words, there should be obvious incentives for ceasing violence, and those incentives should be offered to all the relevant groups, rather than attempting to play off different groups against each other — which is likely to escalate, not ameliorate, extremism.
Perhaps most significantly, the START database also found abundant empirical evidence that:
“On the whole, positive inducements seem more effective than negative ones in deradicalizing/disengaging.”
The very concept underlying prevailing PVE/CVE approaches, then, is deeply questionable. And finally:
“Political reforms can lower VEO activity.”
This, of course, ties into the issue of developing tangible “benefits” in return for the reduction of violence, and addressing deeper causal factors by inculcating social and political reform.
Other empirical studies provide further grounds for recognising that the current PVE/CVE trend is going nowhere, fast.
One New York University study of terrorist attacks between 1980 and 2008 across 56 countries found that high levels of unemployment correlated significantly with increased instances of terrorism, especially in countries which had already experienced previous terrorist attacks.
A University of Texas empirical analysis of 2,448 suicide terrorist incidents from 1998 to 2010 found that “foreign occupation and government transition are greatly associated with suicide terrorist attacks.”
The researchers also concluded that government social services were an important alleviator of transnational suicide attacks, but that “military spending is not effective at curbing transnational suicide terrorism,” raising questions about providing foreign counter-terrorism assistance “to countries beset with such attacks.”
In 2011, a study by the London School of Economics and University of Essex, published in the Journal of Peace Research, examined terrorist attacks on Americans by foreigners between 1978 and 2005.
Their model showed that US military support to foreign governments had “substantively strong effects on foreign terror on Americans.” A significant rise in military aid, for instance, produced a 135% increase in anti-American terrorism.
The new Government Actions in Terrorist Environments (GATE) dataset provides further insights. A preliminary University of South Carolina study found that with regard to Palestinian terrorism, “when significant, [Israeli] repression is associated with more [Palestinian] terrorism (backlash) and conciliation is associated with less terrorism. This finding is especially strong when the actions are indiscriminate and during the Second Intifada.”
Similar conclusions applied for the wider region:
“For the remaining Middle Eastern countries, repression is either ineffective, or associated with more terrorism; and conciliation is either ineffective or associated with less terrorism.”
Likewise, in Canada, “al-Qaeda inspired extremism was very sensitive to actions by the Canadian Military in Afghanistan.”
Last year, a paper in the Journal of Deradicalization, published by the Modern Security Consulting Group in Berlin, examined the evidence for a causal link between different types of military intervention and the radicalisation process, including the invasions of Afghanistan and Iraq, the US-led drone programme, and coalition strikes against ISIS. The paper, noting that this issue has been largely overlooked in the wider terrorism literature, found that:
“… intervention by a foreign power can encourage the process of radicalisation, or ‘de-pluralisation’ — the developing perception that there exists only one solution, extreme violence — to take place. However, it finds that the type of intervention plays a critical role in determining how individuals experience this process of depluralisation; full-scale intervention can result in a lack of monitoring alongside frustrations (about lost sovereignty for example), a combination which paves the way for radical ideology. Conversely, airstrikes present those underneath with unequal and unassailable power that cannot be fairly fought, fuelling interest in exporting terrorism back to the intervening countries.”
The critical role of state repression, then, is absolutely clear from the available data — but has had little tangible impact on the narcissistic echo chamber of state counter-terrorism strategies.
The state we’re in, but ignore
This suggests that much of what passes for ‘research’ in terrorism studies on the intricacies of the radicalisation process is fundamentally misguided.
Pinpointing multiple pathways to violent radicalisation, numerous overlapping risk-factors and complex interlinked causes has borne little fruit, because the whole enterprise is premised on the assumption that the focal point of analysis should be an internal, psychological change applying to an individual.
As a result, the wider historical, social, cultural, economic, political and ideological context of the radicalisation process has been neglected.
The failure of terrorism studies is not simply a matter of lack of data, but down to the bankruptcy of its very foundational assumptions, the very framing of the problem, and the questionable political and ideological context of that framing: state counter-terrorism policies.
And this brings us back to the central question of empirical analysis. To what extent have self-styled ‘terrorism experts’ really engaged with the empirical reality of terrorism?
The answer is: not at all. The vast bulk of terrorism studies is obsessed exclusively with the radicalisation of individuals, and networks of individuals. But the biggest perpetrators of terrorism, defined as political violence against civilians, are states themselves.
The problem is that primary sources of empirical data on terrorist incidents are either governments themselves, or government-sponsored think tanks and academic groups — which means that they systematically exclude data on terrorist incidents perpetrated by those governments.
A University of Illinois study by political scientist Gregory Holyk — a senior researcher for ABC News’s leading public poll provider, Langer Research Associates — compared the lethality of US state terrorism and non-state terrorism between 1968 and 1978. The study found that “the mean number of people killed in state-sponsored terrorist events was significantly greater than the mean number killed in non-state terrorist events,” and cast doubt on the “focus on non-state terrorism in the literature.”
In her seminal 2009 Routledge study, State Terrorism and Neoliberalism: The North in the South, Professor Ruth Blakeley, Head of the School of Politics and International Relations at the University of Kent, provided a further wealth of evidence on the vast extent to which Western states perpetrated terrorism during and after the Cold War. The death toll of this continuum of terror far outweighs the scale of atrocities by al-Qaeda and ISIS.
Even less palatable is the ongoing role of Western states in allying with other state-sponsors of Islamist terrorism, such as Pakistan, the Gulf states and Turkey — all of which have been implicated by credible primary source data in deliberately supporting extremist terrorist groups — for geopolitical purposes.
The emerging sub-field of ‘critical terrorism studies’, pioneered by the likes of Blakeley and others, is beginning to subject conventional academic discourses on terrorism to critical scrutiny. Researchers are opening up new historical, sociological and empirical approaches to understanding both non-state and state-terrorism.
Unfortunately, there’s still a long way to go before this research is able to do more than shine a somewhat unsavoury light on the mountains of shameless bullshit that have accumulated over the last few decades.
Dr Nafeez Ahmed is an investigative journalist, bestselling author and international security scholar. A former Guardian writer, he writes the ‘System Shift’ column for VICE’s Motherboard, and is a weekly columnist for Middle East Eye.
He is the winner of a 2015 Project Censored Award for Outstanding Investigative Journalism for his Guardian work, and was twice selected in the Evening Standard’s top 1,000 most globally influential Londoners, in 2014 and 2015.
Nafeez has also written and reported for The Independent, Sydney Morning Herald, The Age, The Scotsman, Foreign Policy, The Atlantic, Quartz, Prospect, New Statesman, Le Monde diplomatique, New Internationalist, The Ecologist, Alternet, Counterpunch, Truthout, among others.
He is a Visiting Research Fellow at the Faculty of Science and Technology at Anglia Ruskin University, where he is researching the link between global systemic crises and civil unrest for Springer Energy Briefs.
(Israeli Society in the 21st Century – Immigration, Inequality, and Religious Conflict. Calvin Goldscheider. Brandeis University Press, Waltham, Massachusetts. 2015.)
This work intrigued me as it is obviously supportive of the Israeli position in the Middle East and at a quick glance would illuminate something new about the state of Israel and the State of Israel. Unfortunately it does neither.
Calvin Goldscheider is a professor of sociology at Brown University. Unfortunately, sociology is, in my experience, probably the weakest of the social sciences — not a science at all, really — ranking beneath both political ‘science’ and economics as a field of rational study. My definition of sociology is that it is the art of taking something that could be explained through common sense and common language and transforming it into something pseudo-scientifically profound. This is done through the use of a particular lexicon, and the lengthy creation of repetitious and supposedly neutral academic explanations that are not academically tested.
Having said that, it might be assumed that I would be a ‘hostile’ reviewer; rather, I was simply bored – until I arrived at the end, where Goldscheider concludes “our exploration of emerging Israeli society by unpacking the influence of external factors.”
The boredom derives directly from Goldscheider’s methodology. As he states himself in the preface “The evidence presented in this book is primarily based on the official statistics of Israel located in the Statistical Abstracts of Israel of 2013 and 2014.” In a brief Appendix he reiterates this, saying “I have relied on the excellent statistical materials presented in yearbooks of the Central Bureau of Statistics [named above].”
In essence, he did nothing scientific and no original research, and performed only two tasks: first, writing out longhand all the statistics that would have been far better presented in graphic form (graphs of some kind); and second, writing out very poor analysis at length that could have been handled with brief plain-language annotations under each graph.
The statistical information is obviously very comprehensive and covers many if not most aspects of life in Israel. The sociological lexicon makes any explanation of those statistics repetitive and lacking in common sense. Part of its effect is the sterilization of the information, making it dispassionate, and the creation of a facade of intellectual rigor that makes the ordinary complex.
For example, Goldscheider writes,
“Vulnerability among Arab Israelis stems from the fact that segregation intensifies and magnifies any economic setback and builds deprivations structurally into the socioeconomic environment. The costs of segregation are exacerbated by the economic dependency of Arab Israelis.”
A rather fancy set of terms that seems to say that Arab Israelis are subject to racism. The definition is reiterated on the next page,
“Residential segregation is a structural condition, making deprived communities more likely; combined with social class disadvantage, ethnic segregation concentrates income deprivation in small areas and generates structural discrimination.”
It doesn’t sound like racism, doesn’t look like racism, but if translated into common English, it is racism with all that implies for laws, policing, and opportunities.
Narratives, Lies, and Mythology…
Occasionally within the writing there are short moments of lies, sterilized commentary, and the traditional Israeli narrative. They are not truly surprising but do allow glimpses of how the Israeli narrative can be carried forward so easily in a pseudo-scientific manner:
(1) The Jewish migrants were “working in agriculture to develop barren wastelands.” Not true.
(2) In 1948, “there was an exodus of Arab residents…as territorial control was transferred ….” A good sterilized narrative.
(3) The Jewish migration rests on the “fact that Jews returning to the state of Israel descended from ancestors who had not lived there for almost 2,000 years.” Essentially mythological, without the scientific proof that a ‘science’ should demand.
(4) “…administered territories [do not imply] long term possession or control…since there was a clear recognition that control was “administrative,” not ideological.” This goes against all historical records in particular from Zionists wanting all of Eretz Israel for their homeland.
(5) Further, “The control is political and firmly anchored in history, religion, and legitimacy.” Yes, political, but mainly military, and also economic. Yes, anchored in history, the history of military wars against the Arab indigenous populations. Legitimacy is part of the religious narrative of which the author says the territory is “named by its Hebrew-Judaic origins is part of a gift of God to the Jewish people.” This could lead to many arguments about the biblical legitimacy, as it does internally within Israeli Jews, and externally.
But if it is accepted that the land is “god given,” could it not also be “god taken”? Are the current possessors of the land living the will of a just and peaceful god, or of a god of retribution and violence?
(6) Finally – but not completely – the author mentions “forays from Israel to population centers in Gaza have become routine and costly in human lives, property, economic growth, and trust between neighbors.” Forays!! Umm, perhaps full-out military invasions with aerial support from Apache helicopters and fighter jets. Costly – obviously – but trust? The latter is not even to be considered between Israel and Gaza, as witnessed by the manner in which Gaza has been made into an open-air prison/concentration camp.
Either way, not good.
Even if you are an ardent Jewish Zionist supporter, this is not a good read. It would be much better to go to the Israeli statistical records that are referenced and simply read them. It will save much time and agony from trying to read through a sociological lexicon that speaks volumes but says little.
Along with the poor writing, Israeli Society in the 21st Century provides poor analysis and sterilizes the Israeli narrative of occupation and settlement, not surprising considering its origins.
– Jim Miles is a Canadian educator and a regular contributor/columnist of opinion pieces and book reviews for The Palestine Chronicle. Miles’ work is also presented globally through other alternative websites and news publications.