
Growing Doubt: a Scientist’s Experience of GMOs

By Jonathan R. Latham, PhD | Independent Science News | August 31, 2015

By training, I am a plant biologist. In the early 1990s I was busy making genetically modified plants (often called GMOs for Genetically Modified Organisms) as part of the research that led to my PhD. Into these plants we were putting DNA from various foreign organisms, such as viruses and bacteria.

I was not, at the outset, concerned about the possible effects of GM plants on human health or the environment. One reason for this lack of concern was that I was still a very young scientist, feeling my way in the complex world of biology and of scientific research. Another reason was that we hardly imagined that GMOs like ours would be grown or eaten. So far as I was concerned, all GMOs were for research purposes only.

Gradually, however, it became clear that certain companies thought differently. Some of my older colleagues shared their skepticism with me that commercial interests were running far ahead of scientific knowledge. I listened carefully and I didn’t disagree. Today, over twenty years later, GMO crops, especially soybeans, corn, papaya, canola and cotton, are commercially grown in numerous parts of the world.

Depending on which country you live in, GMOs may be unlabeled and therefore unknowingly abundant in your diet. Processed foods (e.g. chips, breakfast cereals, sodas) are likely to contain ingredients from GMO crops, because they are often made from corn or soy. Most agricultural crops, however, are still non-GMO, including rice, wheat, barley, oats, tomatoes, grapes and beans.

For meat eaters the nature of GMO consumption is different. There are no GMO animals used in farming (although GM salmon has been pending FDA approval since 1993); however, animal feed, especially in factory farms or for fish farming, is likely to be GMO corn and GMO soybeans. In that case the labeling issue, and the potential for impacts on your health, are complicated.

I now believe, as a much more experienced scientist, that GMO crops still run far ahead of our understanding of their risks. In broad outline, the reasons for this belief are quite simple. I have become much more appreciative of the complexity of biological organisms and their capacity for benefits and harms. As a scientist I have become much more humble about the capacity of science to do more than scratch the surface in its understanding of the deep complexity and diversity of the natural world. To paraphrase a cliché, I more and more appreciate that as scientists we understand less and less.

The Flawed Processes of GMO Risk Assessment

Some of my concerns with GMOs are “just” practical ones. I have read numerous GMO risk assessment applications. These are the documents that governments rely on to ‘prove’ their safety. Though these documents are quite long and quite complex, their length is misleading in that they primarily ask (and answer) trivial questions. Furthermore, the experiments described within them are often very inadequate and sloppily executed. Scientific controls are often missing, procedures and reagents are badly described, and the results are often ambiguous or uninterpretable. I do not believe that this ambiguity and apparent incompetence is accidental. It is common, for example, for multinational corporations, whose labs have the latest equipment, to use outdated methodologies. When the results show what the applicants want, nothing is said. But when the results are inconvenient, and raise red flags, they blame the limitations of the antiquated method. This bulletproof logic, in which applicants claim safety no matter what the data shows, or how badly the experiment was performed, is routine in formal GMO risk assessment.

To any honest observer, reading these applications is bound to raise profound and disturbing questions: about the trustworthiness of the applicants and equally of the regulators. They are impossible to reconcile with a functional regulatory system capable of protecting the public.

The Dangers of GMOs

Aside from grave doubts about the quality and integrity of risk assessments, I also have specific science-based concerns over GMOs. I emphasise the ones below because they are important but are not on the lists that GMO critics often make.

Many GMO plants are engineered to contain their own insecticides. These GMOs, which include maize, cotton and soybeans, are called Bt plants. Bt plants get their name because they incorporate a transgene that makes a protein-based toxin (usually called the Cry toxin) from the bacterium Bacillus thuringiensis. Many Bt crops are “stacked,” meaning they contain a multiplicity of these Cry toxins. Their makers believe each of these Bt toxins is insect-specific and safe. However, there are multiple reasons to doubt both safety and specificity. One concern is that Bacillus thuringiensis is all but indistinguishable from the well known anthrax bacterium (Bacillus anthracis). Another reason is that Bt insecticides share structural similarities with ricin. Ricin is a famously dangerous plant toxin, a tiny amount of which was used to assassinate the Bulgarian writer and defector Georgi Markov in 1978. A third reason for concern is that the mode of action of Bt proteins is not understood (Vachon et al 2012); yet, it is axiomatic in science that effective risk assessment requires a clear understanding of the mechanism of action of any GMO transgene. This is so that appropriate experiments can be devised to affirm or refute safety. These red flags are doubly troubling because some Cry proteins are known to be toxic towards isolated human cells (Mizuki et al., 1999). Yet we put them in our food crops.

A second concern follows from GMOs being often resistant to herbicides. This resistance is an invitation to farmers to spray large quantities of herbicides, and many do. As research recently showed, commercial soybeans routinely contain quantities of the herbicide Roundup (glyphosate) that its maker, Monsanto, once described as “extreme” (Bøhn et al 2014).

Glyphosate has been in the news recently because the World Health Organisation no longer considers it a relatively harmless chemical, but there are other herbicides applied to GMOs which are easily of equal concern. The herbicide glufosinate (phosphinothricin, made by Bayer) kills plants because it inhibits the important plant enzyme glutamine synthetase. This enzyme is ubiquitous, however; it is found also in fungi, bacteria and animals. Consequently, glufosinate is toxic to most organisms. Glufosinate is also a neurotoxin of mammals that doesn’t easily break down in the environment (Lantz et al. 2014). Glufosinate is thus a “herbicide” in name only.

Thus, even in conventional agriculture, the use of glufosinate is hazardous; but with GMO plants the situation is worse yet. With GMOs, glufosinate is sprayed onto the crop but its degradation in the plant is blocked by the transgene, which chemically modifies it slightly. This is why the GMO plant is resistant to it; but the other consequence is that when you eat Bayer’s glufosinate-resistant GMO maize or canola, even weeks or months later, glufosinate, though slightly modified, is probably still there (Droge et al., 1992). Nevertheless, though the health hazard of glufosinate is much greater with GMOs, the implications of this science have been ignored in GMO risk assessments of glufosinate-tolerant GMO crops.

A yet further reason to be concerned about GMOs is that most of them contain a viral sequence called the cauliflower mosaic virus (CaMV) promoter (or they contain the similar figwort mosaic virus (FMV) promoter). Two years ago the European Food Safety Authority (EFSA), the EU’s GMO safety agency, discovered that both the CaMV promoter and the FMV promoter had wrongly been assumed by them (for almost 20 years) not to encode any proteins. In fact, the two promoters encode a large part of a small multifunctional viral protein that misdirects all normal gene expression and that also turns off a key plant defence against pathogens. EFSA tried to bury their discovery. Unfortunately for them, we spotted their findings in an obscure scientific journal. This revelation forced EFSA and other regulators to explain why they had overlooked the probability that consumers were eating an untested viral protein.

This list of significant scientific concerns about GMOs is by no means exhaustive. For example, there are novel GMOs coming on the market, such as those using double stranded RNAs (dsRNAs), that have the potential for even greater risks (Latham and Wilson 2015).

The True Purpose of GMOs

Science is not the only grounds on which GMOs should be judged. The commercial purpose of GMOs is not to feed the world or improve farming. Rather, they exist to gain intellectual property (i.e. patent rights) over seeds and plant breeding and to drive agriculture in directions that benefit agribusiness. This drive is occurring at the expense of farmers, consumers and the natural world. US farmers, for example, have seen seed costs nearly quadruple and seed choices greatly narrow since the introduction of GMOs. The fight over GMOs is not of narrow importance. It affects us all.

Nevertheless, specific scientific concerns are crucial to the debate. I left science in large part because it seemed impossible to do research while also providing the unvarnished public scepticism that I believed the public, as ultimate funder and risk-taker of that science, was entitled to.

Criticism of science and technology remains very difficult. Even though many academics benefit from tenure and a large salary, the sceptical process in much of science is largely lacking. This is why risk assessment of GMOs has been short-circuited and public concerns about them are growing. Until the damaged scientific ethos is rectified, both scientists and the public are correct to doubt that GMOs should ever have been let out of any lab.

References

Bøhn, T, Cuhra, M, Traavik, T, Sanden, M, Fagan, J and Primicerio, R (2014) Compositional differences in soybeans on the market: Glyphosate accumulates in Roundup Ready GM soybeans. Food Chemistry 153: 207-215.

Droge W, Broer I, and Puhler A. (1992) Transgenic plants containing the phosphinothricin-N-acetyltransferase gene metabolize the herbicide L-phosphinothricin (glufosinate) differently from untransformed plants. Planta 187: 142-151.

Lantz S et al., (2014) Glufosinate binds N-methyl-D-aspartate receptors and increases neuronal network activity in vitro. Neurotoxicology 45: 38-47.

Latham JR and Wilson AK (2015) Off-target Effects of Plant Transgenic RNAi: Three Mechanisms Lead to Distinct Toxicological and Environmental Hazards.

Mizuki E, et al. (1999) Unique activity associated with non-insecticidal Bacillus thuringiensis parasporal inclusions: in vitro cell-killing action on human cancer cells. J. Appl. Microbiol. 86: 477–486.

Vachon V, Laprade R, Schwartz JL (2012) Current models of the mode of action of Bacillus thuringiensis insecticidal crystal proteins: a critical review. Journal of Invertebrate Pathology 111: 1–12.


Pacific walruses hauled out at Point Lay Alaska again this year

By Susan Crockford | Polar Bear Science | August 28, 2015

A photo of a mass walrus haulout at Point Lay, Alaska, taken a few days ago from a distance, shows thousands of animals. But no one’s counting because apparently, no one’s interested.

[Image: screen capture of the Alaska Dispatch News story on the Point Lay walrus haulout, 27 August]
The picture on the left (above, courtesy Alaska Dispatch News) was taken on 23 August by global warming activist photographer Gary Braasch, the day after a news report appeared in which the US Fish & Wildlife Service and aviation authorities asked the media to approach the USFWS for walrus photos and information, a report that gave no hint that a large haulout of walruses was already in place (22 August 2015, “Federal agencies, Point Lay seek to minimize walrus disturbances”):

“Federal agencies are stepping in to shield a North Slope village from the possibility of a deluge of international attention should a large walrus haulout develop nearby, as it has in years past — agreeing to act as an information clearinghouse on behalf of the Native Village of Point Lay.” [my bold]

Here is what the global warming activist site that published the pictures says about the haulout:

“Thousands of Pacific walrus are coming ashore near Point Lay, NW Arctic coast of Alaska. The huge sea mammals and young began coming up on this barrier island along Kasegaluk Lagoon about August 20, according to local natives. This is one of the earliest known summer haul outs of the walrus along the Alaska coast of the Chukchi Sea, according to wildlife biologists.” [my bold]

They say “thousands.” But the photos reproduced in the Alaska Dispatch News story I read were taken from a greater distance than the famous photo of ~35,000 animals released by government officials last year, and it looks like the total could be as large as, or larger than, the 2014 haulout.

Said a Washington Post story (27 August 2015):

“The U.S. Fish and Wildlife Service confirmed to the Post Wednesday evening that a mass of walruses had “hauled out,” or gathered on shore, near the remote community of Point Lay. The service did not estimate the number or provide images. But photojournalist Gary Braasch has posted dramatic photographs, taken during an Aug. 23 flyover, of what appear to be at least several thousand walruses crowding onto a barrier island.” [my bold]

[Image: NOAA walrus haulout status map, 2011, via PolarBearScience]

According to The Guardian article “Extreme Arctic sea ice melt forces thousands of walruses ashore in Alaska”:

“Braasch has spent about a decade photographing evidence of climate change in Alaska, and had been tracking the movement of tagged walruses through the US Geological Survey mapping projects.

“What they looked like by eye was three brown smudges along the beach. They were not visible as individual animals,” he said. But he said the blown-up images revealed large numbers of animals. “Certainly they were in the low thousands at that point.”

Maybe I’m wrong. I’m not an expert on estimating numbers from aerial photos. But have a look and ask yourself – why didn’t the FWS say there was a haulout last week when they made their announcement about protecting the herd from disturbance?

And now that the cat is out of the bag, why have they not released any official photos or made an official estimate? Considering the hype they generated last year, why the reticence now? It just seems odd to me.

Is it that they don’t want to emphasize how many walruses there really are? That the walruses didn’t all die last year due to global warming? Or did they simply realize that their alarmist rhetoric and the media storm it generated last year made the situation worse for the animals?

Compare this year from the new required great distance (below upper) to an official photo from last year (below lower):

[Image: walrus haulout at Point Lay, 2015 (upper) vs 2014 (lower)]

Apparently, photographer Gary Braasch was “at least a mile away”:

“Braasch said he took precautions, including flying in a small aircraft, and at a distance, “so as not to disturb the herd.”

“I used a very long telephoto lens,” Braasch said. “My pilot says we were at least a mile away.”

The high resolution copy of the photos at the World View of Global Warming website shows lots of animals in the water in the 2015 photo, indicating the walruses were actively feeding, as the animals were last year – not sitting on the beach starving to death. It’s likely that in a few days they’ll be gone, if they aren’t already.

Now the Feds are threatening to press charges against the photographer who flew over the haulout without permission, saying he may have broken their rules (“Feds investigating if photographer flew too near walruses hauled out at Point Lay”).

As far as I know, there is no new population information on walrus that wasn’t available last year, when I covered this topic extensively (Crockford 2014; video below).


Hottest Month Claims

By Ken Haapala | Science and Environmental Policy Project (SEPP) | August 29, 2015

Divergence: It is summertime in the US, and temperatures are warmer. Several readers have asked TWTW for comments on the recent claims that July 2015 was the hottest month ever and similar announcements by certain US government entities, including branches of the National Oceanic and Atmospheric Administration (NOAA) and the National Aeronautics and Space Administration (NASA). These entities are making strong public statements that the globe continues to warm, and the future is dire. A humorist could comment that the closer we are to the 21st session of the Conference of the Parties (COP-21) of the United Nations Framework Convention on Climate Change (UNFCCC) to be held in Paris from November 30 to December 11, the hotter the globe becomes.

However, there are three significant forms of divergence that are being demonstrated. One divergence is the increasing difference between atmospheric temperatures and surface temperatures. The second divergence is the growing difference between temperatures forecast by models and observed temperatures, particularly atmospheric temperatures. This leads to the third divergence, the difference between the activities of what can be called the Climate Establishment and what is observed in nature.

The atmospheric temperatures are reported by two independent entities: the largely NASA-financed UAH group at the University of Alabama in Huntsville, and Remote Sensing Systems (RSS) in California. The surface temperatures are reported by NOAA, NASA, and the Hadley Centre of the UK Met Office combined with the Climatic Research Unit (CRU) of the University of East Anglia. These measurements depend, in part, on the historic record maintained by NOAA’s National Climatic Data Center (NCDC). Unfortunately, for more than two decades the historic record of the surface temperatures has been adjusted numerous times, without adequate records of the details and the effects. The net effect is an inflation of the warming trend, particularly obvious in the US, where excellent historic records continue to exist. The UAH data have also been adjusted, but those adjustments and their effects have been publicly recorded.

The divergence between the temperatures forecast by the global climate models and the observed temperatures is becoming extremely obvious, particularly with the observed atmospheric temperatures. The adjustments to surface temperatures lessen this divergence somewhat, particularly with the latest adjustments by the NCDC, in which superior measurements taken by fixed or floating buoys were inflated to correspond with earlier, inferior measurements taken by ships. The director of NCDC, Tom Karl, was a lead author of the paper announcing this change. As a result, we should see announcements that sea surface temperatures, and global surface temperatures, are increasing, although the increase may be strictly an artifact of human adjustments rather than an occurrence in nature.

The questionable adjustments in reported surface temperatures lead to the third form of increasing divergence – the difference between what is reported by the Climate Establishment and what is occurring in nature. The Climate Establishment can be defined as those who embrace the findings of the UN Intergovernmental Panel on Climate Change (IPCC), particularly the assertion, with a high degree of certainty, that human emissions of carbon dioxide and other greenhouse gases are causing unprecedented and dangerous global warming. That data are adjusted to reflect the IPCC view does not mean that the IPCC view is occurring.

The greenhouse effect takes place in the atmosphere, yet the predicted warming is not being observed there: the satellite data, independently verified by four sets of weather balloon data, clearly show that there has been no significant warming for about 18 years. These data are the most comprehensive temperature data existing and are largely independent of other human influences that bias surface data, such as urbanization (including the building of structures and impervious surfaces) and other changes in land use. Those who broadcast claims of the hottest year ever, based on adjusted surface data, are actually emphasizing the divergence between the science practiced by the Climate Establishment and Nature, and are not engaged in a natural science.

Unfortunately, many government entities and government-funded entities are involved in the Climate Establishment. The leaders of such entities demonstrate a lack of concern for institutional credibility and little respect for the scientific bases on which their institutions were built, or for those who came before them and those who will replace them; they will leave their institutions in an inferior condition rather than strengthen them.

It is important to note that not all government-funded entities are so involved. The National Space Science & Technology Center (NSSTC) at the University of Alabama in Huntsville (UAH), which is largely funded by the federal government (NASA), is a notable exception.


German Media On The Prophets Of NASA: “Prophesizing Gigantic Floods” – 200 Years In The Future!

Pre-Paris hype

By P Gosselin | No Tricks Zone | August 28, 2015

The German media have been buzzing a bit over the recent NASA publication warning of rising sea levels in the future, and telling us that we need to be very worried.

Maybe I’m reading more into the lines than I should, but I get the feeling that the increasingly dubious NASA climate science organization is no longer being taken 100% seriously by some major German outlets, who have started to label NASA scenarios and projections as “prophecies”.

For example Germany’s normally politically correct, devout green NTV here has the article bearing the title: “NASA prophesizes gigantic floods.”

Prophecies are more the sort of things one typically expects to hear from prophets. The trouble today is that anyone who claims to be a prophet or to possess prophet-like powers almost always gets written off as a kook, quack, or charlatan. Moreover, being labeled a prophet doesn’t get you much respect either. So you have to wonder about NTV’s choice of words for the title of its story.

Could NTV journalists really be so dim and naïve as to actually believe in climate prophets?

NTV writes of an organization that seems to fancy itself as having visionary power to see the end of the world. NTV tells us:

“An unavoidable sea level rise of at least one meter in the coming 100 to 200 years is the result of the latest research data.”

The NTV report then cites NASA prophet Tom Wagner:

“NASA scientist Tom Wagner says that when the ice sheets break down on each other, even the risk of a sea level rise of three meters over the coming 100 to 200 years is thinkable.”

Okay, these visions may still be a bit fuzzy, but the NASA scientists (prophets, rather) know almost for sure they are out there. And again the prophecy of doom gets repeated at the end of the article by prophet Steve Nerem:

“‘Things will probably get worse in the future,’ prophesizes Nerem as a result of global warming.”

Again this is the NTV using the word “prophesizes”.

Of course there are only a few teensy-weensy problems with NASA’s prophecies of doom:

1) The hundreds of coastal tide gauges show no acceleration in sea level rise and they show a rise that is much less than what has been measured by the seemingly poorly calibrated satellites,

2) polar sea ice has recovered over the past years,

3) polar temperatures have flattened, or are even declining,

4) global temperatures have flattened, and

5) there’s a growing number of scientists who are now telling us that we should be expecting global cooling over the coming decades.

Moreover, new Greenland data show growing ice (more on this tomorrow).

I’ll let the readers judge for themselves on whether NASA scientists are true prophets, or if they are behaving more like snake oil peddling charlatans.

Myself, I’ve lost all respect for the space organization. It’s become a grossly distorted caricature of what scientific research is about.

200 years in the future… yeah, right!


Long term exposure to tiny amounts of Monsanto’s Roundup may damage liver, kidneys – study

RT | August 29, 2015

Long-term intake of Monsanto’s most popular herbicide, Roundup, even in very small amounts lower than permissible in US drinking water, may lead to kidney and liver damage, a new study claims.

The research, conducted by an international group of scientists from the UK, Italy and France, studied the effects of prolonged exposure to small amounts of the Roundup herbicide and one of its main components – glyphosate.

In their study, published in Environmental Health on August 25, the scientists particularly focused on the influence of Monsanto’s Roundup on gene expression in the kidneys and liver.

In the new two-year study, which extended the findings from one conducted in 2012, the team added tiny amounts of Roundup to water that was given to rats in doses much smaller than allowed in US drinking water.

Scientists say that some of the rats experienced “25 percent body weight loss, presence of tumors over 25 percent bodyweight, hemorrhagic bleeding, or prostration.”

The study’s conclusions indicate that there is an association between wide-scale alterations in liver and kidney gene expression and the consumption of small quantities of Roundup, even at admissible glyphosate-equivalent concentrations. As the dose used is “environmentally relevant in terms of human, domesticated animals and wildlife levels of exposure,” the results potentially have significant health implications for animal and human populations, the study warned.

“There were more than 4,000 genes in the liver and kidneys [of the rats that were fed Roundup] whose levels of expression had changed,” the study’s leading scientist, Michael Antoniou, head of the Gene Expression and Therapy Group at King’s College London, said, as quoted by the Environmental Health News.

“Given even very low levels of exposure, Roundup can potentially result in organ damage when it comes to liver and kidney function,” he added. “The severity we don’t know, but our data say there will be harm given enough time.”

The results of the study have received mixed reviews in the scientific community, although many scientists have expressed their concern about possible negative health effects from Roundup use.

Taking into account that the team “used very low dose levels in drinking water … this study should have some kind of public health influence,” said Nichelle Harriott, the science and regulatory director at Beyond Pesticides, a Washington, DC based nonprofit organization, as quoted by the Environmental Health News.

“We don’t know what to make of such changes, they may be meaningful and may not,” said Bruce Blumberg, a professor from the University of California, who did not take part in the study.

“They can’t say which caused what, but what you have is an association – the group treated with a little Roundup had a lot of organ damage and the gene expression findings supported that,” he added.

Meanwhile, according to the New England Journal of Medicine, the use of glyphosate in herbicides has increased more than 250-fold in the United States over the last 40 years.

Research conducted in 2014 and published in the International Journal of Environmental Research and Public Health linked the use of Monsanto’s Roundup to widespread chronic kidney disease that took the form of an epidemic in Sri Lanka. Another study showed that Monsanto agrochemicals may have caused cellular and genetic diseases in Brazilian soybean workers.

Additionally, the World Health Organization’s International Agency for Research on Cancer recently determined that Roundup’s glyphosate is “probably” carcinogenic to humans.

However, Monsanto has continuously and consistently insisted that its products are safe, citing other research supporting its claims. The latest such study, conducted by the German Federal Institute for Risk Assessment (BfR), deemed Monsanto’s Roundup safe.

So far, Monsanto has made no comment concerning the research conducted by the group led by Michael Antoniou.


Gospel science: We found only one-third of published psychology research is reliable – now what?

What does it mean if the majority of what’s published in journals can’t be reproduced?

By Maggie Villiger | The Conversation | August 27, 2015

The ability to repeat a study and find the same results twice is a prerequisite for building scientific knowledge. Replication allows us to ensure empirical findings are reliable and refines our understanding of when a finding occurs. It may surprise you to learn, then, that scientists do not often conduct – much less publish – attempted replications of existing studies.

Journals prefer to publish novel, cutting-edge research. And professional advancement is determined by making new discoveries, not painstakingly confirming claims that are already on the books. As one of our colleagues recently put it, “Running replications is fine for other people, but I have better ways to spend my precious time.”

Once a paper appears in a peer-reviewed journal, it acquires a kind of magical, unassailable authority. News outlets, and sometimes even scientists themselves, will cite these findings without a trace of skepticism. Such unquestioning confidence in new studies is likely undeserved, or at least premature.

A small but vocal contingent of researchers – addressing fields ranging from physics to medicine to economics – has maintained that many, perhaps most, published studies are wrong. But how bad is this problem, exactly? And what features make a study more or less likely to turn out to be true?

We are two of the 270 researchers who together have just published in the journal Science the first-ever large-scale effort trying to answer these questions by attempting to reproduce 100 previously published psychological science findings.

Attempting to re-find psychology findings

Publishing together as the Open Science Collaboration and coordinated by social psychologist Brian Nosek from the Center for Open Science, research teams from around the world each ran a replication of a study published in one of three top psychology journals: Psychological Science; Journal of Personality and Social Psychology; and Journal of Experimental Psychology: Learning, Memory, and Cognition. To ensure the replication was as exact as possible, research teams obtained study materials from the original authors, and worked closely with these authors whenever they could.

Almost all of the original published studies (97%) had statistically significant results. This is as you’d expect – while many experiments fail to uncover meaningful results, scientists tend only to publish the ones that do.

What we found is that when these 100 studies were run by other researchers, however, only 36% reached statistical significance. This number is alarmingly low. Put another way, only around one-third of the rerun studies came out with the same results that were found the first time around. That rate is especially low when you consider that, once published, findings tend to be held as gospel.

The bad news doesn’t end there. Even when the new study found evidence for the existence of the original finding, the magnitude of the effect was much smaller — half the size of the original, on average.

One caveat: just because something fails to replicate doesn’t mean it isn’t true. Some of these failures could be due to luck, or poor execution, or an incomplete understanding of the circumstances needed to show the effect (scientists call these “moderators” or “boundary conditions”). For example, having someone practice a task repeatedly might improve their memory, but only if they didn’t know the task well to begin with. In a way, what these replications (and failed replications) serve to do is highlight the inherent uncertainty of any single study – original or new.

More robust findings more replicable

Given how low these numbers are, is there anything we can do to predict the studies that will replicate and those that won’t? The results from this Reproducibility Project offer some clues.

There are two major ways that researchers quantify the nature of their results. The first is a p-value, which estimates the probability that the result was arrived at purely by chance and is a false positive. (Technically, the p-value is the chance that the result, or a stronger result, would have occurred even when there was no real effect.) Generally, if a statistical test shows that the p-value is lower than 5%, the study’s results are considered “significant” – most likely due to actual effects.

Another way to quantify a result is with an effect size – not how reliable the difference is, but how big it is. Let’s say you find that people spend more money in a sad mood. Well, how much more money do they spend? This is the effect size.
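
To make these two quantities concrete, here is a minimal Python sketch (not from the original article) using made-up spending numbers for a hypothetical sad-mood group and a neutral-mood group; it computes a p-value with a standard t-test and an effect size as Cohen’s d:

import numpy as np
from scipy import stats

# Hypothetical spending amounts (dollars) for two groups of participants.
sad_mood = np.array([12.0, 15.5, 14.2, 18.0, 16.3, 13.8, 17.1, 15.0])
neutral = np.array([10.1, 12.0, 11.4, 13.2, 12.5, 10.8, 11.9, 12.2])

# p-value from an independent-samples t-test: how surprising the observed
# difference would be if there were no real effect.
t_stat, p_value = stats.ttest_ind(sad_mood, neutral)

# Effect size (Cohen's d): the difference in means scaled by the pooled
# standard deviation - how big the difference is, not how reliable.
n1, n2 = len(sad_mood), len(neutral)
pooled_sd = np.sqrt(((n1 - 1) * sad_mood.var(ddof=1) +
                     (n2 - 1) * neutral.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (sad_mood.mean() - neutral.mean()) / pooled_sd

print(f"p-value   = {p_value:.4f}")    # "significant" if below 0.05
print(f"Cohen's d = {cohens_d:.2f}")   # magnitude of the effect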

We found that the smaller the original study’s p-value and the larger its effect size, the more likely it was to replicate. Strong initial statistical evidence was a good marker of whether a finding was reproducible.

Studies that were rated as more challenging to conduct were less likely to replicate, as were findings that were considered surprising. For instance, if a study shows that reading lowers IQs, or if it uses a very obscure and unfamiliar methodology, we would do well to be skeptical of such data. Scientists are often rewarded for delivering results that dazzle and defy expectation, but extraordinary claims require extraordinary evidence.

Although our replication effort is novel in its scope and level of transparency – the methods and data for all replicated studies are available online – it is consistent with previous work from other fields. Cancer biologists, for instance, have reported replication rates as low as 11%–25%.

We have a problem. What’s the solution?

Some conclusions seem warranted here.

We must stop treating single studies as unassailable authorities of the truth. Until a discovery has been thoroughly vetted and repeatedly observed, we should treat it with the measure of skepticism that scientific thinking requires. After all, the truly scientific mindset is critical, not credulous. There is a place for breakthrough findings and cutting-edge theories, but there is also merit in the slow, systematic checking and refining of those findings and theories.

Of course, adopting a skeptical attitude will take us only so far. We also need to provide incentives for reproducible science by rewarding those who conduct replications and who conduct replicable work. For instance, at least one top journal has begun to give special “badges” to articles that make their data and materials available, and the Berkeley Initiative for Transparency in the Social Sciences has established a prize for practicing more transparent social science.

Better research practices are also likely to ensure higher replication rates. There is already evidence that taking certain concrete steps – such as making hypotheses clear prior to data analysis, openly sharing materials and data, and following transparent reporting standards – decreases false positive rates in published studies. Some funding organizations are already demanding hypothesis registration and data sharing.

Although perfect replicability in published papers is an unrealistic goal, current replication rates are unacceptably low. The first step, as they say, is admitting you have a problem. What scientists and the public now choose to do with this information remains to be seen, but our collective response will guide the course of future scientific progress.


The conceits of consensus

By Judith Curry | Climate Etc. | August 27, 2015

Critiques, the 3%, and is 47 the new 97?

For background, see my previous post The 97% feud.

Cook et al. critiques

At the heart of the consensus controversy is the paper by Cook et al. (2013), which inferred a 97% consensus by classifying abstracts from published papers. The study was based on a search of broad academic literature using casual English terms like “global warming”, which missed many climate science papers but included lots of non-climate-science papers that mentioned climate change – social science papers, surveys of the general public, surveys of cooking stove use, the economics of a carbon tax, and scientific papers from non-climate science fields that studied impacts and mitigation.

The Cook et al. paper has been refuted in the published literature in an article by Richard Tol:  Quantifying the consensus on anthropogenic global warming in the literature: A re-analysis (behind paywall).  Summary points from the abstract:

A trend in composition is mistaken for a trend in endorsement. Reported results are inconsistent and biased. The sample is not representative and contains many irrelevant papers. Overall, data quality is low. Cook’s validation test shows that the data are invalid. Data disclosure is incomplete so that key results cannot be reproduced or tested.

Social psychologist Jose Duarte has a series of blog posts that document the ludicrousness of the selection and categorization of papers by Cook et al., including citations of specific articles that they categorized as supporting the climate change consensus.

From this analysis, Duarte concludes: ignore climate consensus studies based on random people rating journal article abstracts.  I find it difficult to disagree with him on this.

The 3%

So, does all this leave you wondering what the 3% of papers not included in the consensus had to say?  Well, wonder no more. There is a new paper out, published by Cook and colleagues:

Learning from mistakes

Rasmus Benestad, Dana Nuccitelli, Stephan Lewandowsky, Katharine Hayhoe, Hans Olav Hygen, Rob van Dorland, John Cook

Abstract.  Among papers stating a position on anthropogenic global warming (AGW), 97 % endorse AGW. What is happening with the 2 % of papers that reject AGW? We examine a selection of papers rejecting AGW. An analytical tool has been developed to replicate and test the results and methods used in these studies; our replication reveals a number of methodological flaws, and a pattern of common mistakes emerges that is not visible when looking at single isolated cases. Thus, real-life scientific disputes in some cases can be resolved, and we can learn from mistakes. A common denominator seems to be missing contextual information or ignoring information that does not fit the conclusions, be it other relevant work or related geophysical data. In many cases, shortcomings are due to insufficient model evaluation, leading to results that are not universally valid but rather are an artifact of a particular experimental setup. Other typical weaknesses include false dichotomies, inappropriate statistical methods, or basing conclusions on misconceived or incomplete physics. We also argue that science is never settled and that both mainstream and contrarian papers must be subject to sustained scrutiny. The merit of replication is highlighted and we discuss how the quality of the scientific literature may benefit from replication.

Published in Theoretical and Applied Climatology [link to full paper].

A look at the Supplementary Material shows that they considered credible skeptical papers (38 in total) – by Humlum, Scafetta, Solheim and others.

The gist of their analysis is that the authors were ‘outsiders’, not fully steeped in consensus lore and not referencing their preferred papers.

RealClimate has an entertaining post on the paper, Let’s learn from mistakes, where we learn that this paper was rejected by five journals before being published by Theoretical and Applied Climatology. I guess the real lesson from this paper is that you can get any kind of twaddle published, if you keep trying and submit it to different journals.

A consensus on what, exactly?

The consensus inferred from the Cook et al. analysis is a vague one indeed; exactly what are these scientists agreeing on? The ‘97% of the world’s climate scientists agree that humans are causing climate change’ is a fairly meaningless statement unless the relative amount (%) of human caused climate change is specified. Roy Spencer’s 2013 Senate testimony included the following statement:

“It should also be noted that the fact that I believe at least some of recent warming is human-caused places me in the 97% of researchers recently claimed to support the global warming consensus (actually, it’s 97% of the published papers, Cook et al., 2013). The 97% statement is therefore rather innocuous, since it probably includes all of the global warming “skeptics” I know of who are actively working in the field. Skeptics generally are skeptical of the view that recent warming is all human-caused, and/or that it is of a sufficient magnitude to warrant immediate action given the cost of energy policies to the poor. They do not claim humans have no impact on climate whatsoever.”

The only credible way to ascertain whether scientists support the consensus on climate change is through surveys of climate scientists. This point is eloquently made in another post by Joe Duarte: The climate science consensus is 78-84%. Now I don’t agree with Duarte’s conclusion on that, but he makes some very salient points:

Tips for being a good science consumer and science writer. When you see an estimate of the climate science consensus:

  • Make sure it’s a direct survey of climate scientists. Climate scientists have full speech faculties and reading comprehension. Anyone wishing to know their views can fruitfully ask them. Also, be alert to the inclusion of people outside of climate science.
  • Make sure that the researchers are actual, qualified professionals. You would think you could take this for granted in a study published in a peer-reviewed journal, but sadly this is simply not the case when it comes to climate consensus research. They’ll publish anything with high estimates.
  • Be wary of researchers who are political activists. Their conflicts of interest will be at least as strong as that of an oil company that had produced a consensus study – moral and ideological identity is incredibly powerful, and is often a larger concern than money.
  • In general, do not trust methods that rest on intermediaries or interpreters, like people reviewing the climate science literature. Thus far, such work has been dominated by untrained amateurs motivated by political agendas.
  • Be mindful of the exact questions asked. The wording of a survey is everything.
  • Be cautious about papers published in climate science journals, or really in any journal that is not a survey research journal. Our experience with the ERL fraud illustrated that climate science journals may not be able to properly review consensus studies, since the methods (surveys or subjective coding of text) are outside their domains of expertise. The risk of junk science is even greater if the journal is run by political interests and is motivated to publish inflated estimates. For example, I would advise strong skepticism of anything published by Environmental Research Letters on the consensus – they’re run by political people like Kammen.

Is 47 the new 97?

The key question is to what extent climate scientists agree with the key consensus statement of the IPCC:

“It is extremely likely {95%+ certainty} that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. ”

Several surveys of climate scientists have included questions that more or less address the issue of whether humans are the dominant cause of recent warming (discussed in the previous post by Duarte and summarized in my post The 97% feud).

The survey that I like the best is:

Verheggen et al. (2014) Scientists’ views about attribution of climate change. Environmental Science & Technology [link]

Recently, a more detailed report on the survey was made available [link]. Fabius Maximus has a fascinating post New study undercuts key IPCC finding (the text below draws liberally from this post). This survey examines agreement with the keynote statement of the IPCC AR5:

“It is extremely likely {95%+ certainty} that more than half of the observed increase in global average surface temperature from 1951 to 2010 was caused by the anthropogenic increase in greenhouse gas concentrations and other anthropogenic forcings together. ”

The survey examines both facets of the attribution statement – how much warming is caused by humans, and what is the confidence in that assessment.

In response to the question “What fraction of global warming since the mid 20th century can be attributed to human induced increases in atmospheric greenhouse gas concentrations?”, a total of 1,222 of 1,868 respondents (65%) agreed with AR5 that the answer was over 50%. Excluding the 164 (8.8%) “I don’t know” respondents yields 72% agreement with the IPCC.

 


The second question is: “What confidence level would you ascribe to your estimate that the anthropogenic greenhouse gas warming is more than 50%?” Of the 1,222 respondents who said that the anthropogenic contribution was over 50%, 797 (65%) said it was 95%+ certain (which the IPCC defines as “virtually certain” or “extremely likely”).

The 797 respondents who are highly confident that more than 50% of the warming is human-caused are 43% of all 1,868 respondents (47% excluding the “don’t know” group). Hence this survey finds that slightly less than half of the climate scientists surveyed agree with the AR5 keynote statement in terms of confidence in the attribution statement.
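
Since several of the percentages above are derived from the same few counts, a quick arithmetic check can help; the short Python snippet below (added here for illustration, not part of the survey report) simply recomputes them from the raw counts quoted in the text:

# Recompute the survey percentages quoted above from the raw counts.
respondents = 1868   # total respondents to the attribution question
dont_know = 164      # "I don't know" responses
over_half = 1222     # said more than 50% of the warming is human-caused
high_conf = 797      # of those, 95%+ certain

print(f"over 50%, all respondents:           {over_half / respondents:.1%}")
print(f"over 50%, excluding don't-knows:     {over_half / (respondents - dont_know):.1%}")
print(f"95%+ certain, of the over-50% group: {high_conf / over_half:.1%}")
print(f"95%+ certain, all respondents:       {high_conf / respondents:.1%}")
print(f"95%+ certain, excluding don't-knows: {high_conf / (respondents - dont_know):.1%}")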

Whose opinion ‘counts’?

Surveys of actual climate scientists are a much better way to elicit the actual opinions of scientists on this issue. But surveys raise the question of exactly who the experts are on the issue of attribution of climate change. The Verheggen et al. study was criticized in a published comment by Duarte, in terms of the basis for selecting participants to respond to the survey:

“There is a deeper problem. Inclusion of mitigation and impacts papers – even from physical sciences or engineering – creates a structural bias that will inflate estimates of consensus, because these categories have no symmetric disconfirming counterparts. These researchers have simply imported a consensus in global warming. They then proceed to their area of expertise. [These papers] do not carry any data or epistemic information about climate change or its causes, and the authors are unlikely to be experts on the subject, since it is not their field.

Increased public interest in any topic will reliably draw scholars from various fields. However, their endorsement (or rejection) of human-caused warming does not represent knowledge or independent assessments. Their votes are not quanta of consensus, but simply artifacts of career choices, and the changing political climate. Their inclusion will artificially inflate sample sizes, and will likely bias the results.”

Roy Spencer also addresses this issue in his Senate testimony (cited above):

“(R)elatively few researchers in the world – probably not much more than a dozen – have researched how sensitive today’s climate system is based upon actual measurements. This is why popular surveys of climate scientists and their beliefs regarding global warming have little meaning: very few of them have actually worked on the details involved in determining exactly how much warming might result from anthropogenic greenhouse gas emissions.”

The number of real experts on the detection and attribution of climate change is small, only a fraction of the respondents to these surveys. I raised this same issue in the pre-Climate Etc. days in response to the Anderegg et al. paper, in a comment at Collide-a-Scape (referenced by Columbia Journalism Review):

The scientific litmus test for the paper is the AR4 statement: “anthropogenic greenhouse gases have been responsible for “most” of the “unequivocal” warming of the Earth’s average global temperature over the second half of the 20th century”.

The climate experts with credibility in evaluating this statement are those scientists that are active in the area of detection and attribution. “Climate” scientists whose research area is ecosystems, carbon cycle, economics, etc. speak with no more authority on this subject than, say, Freeman Dyson.

I define the 20th century detection and attribution field to include those that create datasets, climate dynamicists that interpret the variability, radiative forcing, climate modeling, sensitivity analysis, feedback analysis. With this definition, 75% of the names on the list disappear. If you further eliminate people that create datasets but don’t interpret the datasets, you have less than 20% of the original list.

Apart from Anderegg’s classification of the likes of Freeman Dyson as not a ‘climate expert’ (since he didn’t have 20 peer reviewed publications that they classed as ‘climate papers’), they also did not include solar – climate experts such as Syun Akasofu (since apparently Akasofu’s solar papers do not count as ‘climate’).

But perhaps the most important point is that of the scientists who are skeptical of the IPCC consensus, a disproportionately large number of these skeptical scientists are experts on climate change detection/attribution. Think Spencer, Christy, Lindzen, etc. etc.

Bottom line: inflating the numbers of ‘climate scientists’ in such surveys attempts to hide that there is a serious scientific debate about the detection and attribution of recent warming, and that scientists who are skeptical of the IPCC consensus conclusion are disproportionately expert in the area of climate change detection and attribution.

Conceits of consensus

And finally, a fascinating article The conceits of ‘consensus’ in Halakhic rhetoric.  Read the whole thing, it is superb.  A few choice excerpts:

The distinguishing characteristic of these appeals to consensus is that the legitimacy or rejection of an opinion is not determined by intrinsic, objective, qualifiable criteria or its merits, but by its adoption by certain people. The primary premise of such arguments is that unanimity or a plurality of agreement among a given collective is halakhically binding on the Jewish population  and cannot be further contested or subject to review.

Just as the appeal to consensus stresses people over logic, subsequent debate will also focus on the merits of individuals and their worthiness to be included or excluded from the conversation. This situation runs the risk of the ‘No True Scotsman’ fallacy whereby one excludes a contradictory opinion on the grounds that no one who could possibly hold such an opinion is worth consideration.

Debates over inclusion and exclusion for consensus are susceptible to social manipulations as well. Since these determinations imply a hierarchy or rank of some sort, attempts which disturb an existing order may be met with various forms of bullying or intimidation – either in terms of giving too much credit to one opinion or individual or not enough deference to another. Thus any consensus reached on this basis would not be based on genuine agreement, but on fear of reprisals. The consensus of the collective may be similarly manipulated through implicit or overt marketing as a way to artificially besmirch or enhance someone’s reputation.

The next premise to consider is the correlation between consensus and correctness such that if most (or all) people believe something to be true, then by the value of its widespread acceptance and popularity, it must be correct. This is a well known logical fallacy known as argumentum ad populum, sometimes called the ‘bandwagon fallacy’. This should be familiar to anyone who has ever been admonished, “if all your friends would jump off a bridge would you follow?” It should also be obvious that at face value that Jews, especially Orthodox Jews, ought to reject this idea as a matter of principle.

Appeals to consensus are common and relatively simple to assert, but those who rely on consensus rarely if ever acknowledge, address, or defend the assumptions inherent in invoking consensus as a source – if not the determinant – of practical Jewish law. As I will demonstrate, appeals to consensus are laden with problematic logical and halakhic assumptions, such that while “consensus” may constitute one factor in determining a specific psak, it is not nearly the definitive halakhic criterion its proponents would like to believe.


Climatologist Dr. Tim Ball On 97% Consensus: “Completely False And Was Deliberately Manufactured”!

By P Gosselin | No Tricks Zone | August 24, 2015

Canadian climate scientist Dr. Tim Ball recently published a new book on climate science: The Deliberate Corruption of Climate Science. What follows later (below) is a short interview with Dr. Ball.

“Government propaganda” … “corrupt science”

In the book Ball writes that the failed predictions of the Intergovernmental Panel on Climate Change (IPCC), coupled with failed alarmist stories such as the complete loss of Arctic sea ice by 2013, are making the public increasingly skeptical of government propaganda about global warming. People were already skeptical because they knew weather forecasts, especially beyond forty-eight hours, were invariably wrong, and so today more people understand there is no substance to global warming claims and that it is based on corrupt science. Now they are asking: Who perpetrated the deception and could a small group of people deceive the world?

In his book The Deliberate Corruption of Climate Science Dr. Ball explains who did it and why.

Ball was among the earlier dissidents and as a result he became the target of media articles and false information promoted by a scurrilous website funded by a chairman of a large environmental foundation. He was a real threat because they couldn’t say he wasn’t qualified.

Dr. Ball has been the subject of three lawsuits from a lawyer operating in British Columbia. For the first one, he decided to avoid the expense of a challenge and so he withdrew what he had written. Then, within nine days, he received two more from the same lawyer suing for defamation because of harsh criticism he made of a climate scientist. At that point, he and his family decided they had to fight back.

As Ball carries on his legal battle, he maintains that the climate deception continues and that the public is paying a high price for completely unnecessary energy and economic policies based on the pseudoscience of the IPCC, not to mention the social toll on communities devastated by job losses.

“Their last effective chance”

Dr. Ball says the rhetoric and stream of misinformation increase as the perpetrators, now including the Pope, build up to their last effective chance to influence an increasingly skeptical world. When the Global Warming theme failed, they tried Climate Change. The Climate Change theme has failed, so now they are trying Climate Disruption as defined by President Obama’s science czar, John Holdren—all to justify expensive government programs. The impetus for a global carbon tax and global governance represents the central theme of a climate conference scheduled for Paris in December 2015, the United Nations Climate Change Conference or COP21.

INTERVIEW

What follows are some questions that Dr. Ball kindly answered:

By what scientific reason do you think CO2’s role is far less?

Water vapor is 95% of the total greenhouse gases by volume, while CO2 is approximately 4%. The human portion is only 3.4% of the total CO2. They try to claim CO2 is more effective, but it’s a false claim called “climate sensitivity”. The number the IPCC use for sensitivity has constantly declined and will reach zero.

What factor has been the most responsible for the warming over the past 25 years?

The same factor as it has always been, changes in the sun. The IPCC dismiss the sun because they only look at variation in radiative output, but that is only one of three ways the Sun affects global climate.

What do you think the global temperature will do over the next few decades?

Decline. The major short-term control of global temperature is variation in the strength of the Sun’s magnetic field. As it varies, it determines the amount of cosmic radiation reaching the Earth. The cosmic radiation creates clouds in the lower atmosphere which, like a shutter in a greenhouse, determine the amount of sunlight reaching the surface and therefore the temperature.

What do you think of the claimed “97% consensus”?

It is completely false and was deliberately manufactured by John Cook at the University of Queensland. There are more detailed analyses of the corruption, but the best layman’s account is at www.forbes.com/sites/alexepstein/.

On a scale of 1 to 10, how honest have the major climate institutes been with the public?

-10. If they knew what was wrong, it is deliberate and criminal. If they didn’t know, they are grossly incompetent.

Other comments by Dr. Ball:

The biggest problem for the public is that they cannot believe an apparent majority of scientists seem to support the IPCC science. The simple answer is that very few are familiar with the science. They, like most of the public, assume other scientists would not distort, manipulate, or do anything other than proper science. When scientists do find out, they are shocked, as exemplified by German meteorologist Klaus-Eckart Puls’s comment:

“Ten years ago I simply parroted what the IPCC told us. One day I started checking the facts and data—first I started with a sense of doubt but then I became outraged when I discovered that much of what the IPCC and the media were telling us was sheer nonsense and was not even supported by any scientific facts and measurements. To this day I still feel shame that as a scientist I made presentations of their science without first checking it.”

August 27, 2015 Posted by | Book Review, Science and Pseudo-Science

Hawaii Sees 10 Fold Increase in Birth Defects After Becoming GM Corn Testing Grounds

By Jay Syrmopoulos | The Free Thought Project | August 24, 2015

Waimea, HI – Doctors are sounding the alarm after noticing a disturbing trend happening in Waimea, on the island of Kauai, Hawaii. Over the past five years, the number of severe heart malformations has risen to more than ten times the national rate, according to an analysis by local physicians.

Pediatrician Carla Nelson, after seeing four of these defects in three years, is extremely concerned with the severe health anomalies manifesting in the local population.

Nelson, along with a number of other local doctors, finds herself at the center of a growing controversy over whether the substantial increase in severe illness and birth defects in Waimea stems from the main cash crop on four of the six islands: genetically modified corn, which has been altered to resist pesticides.

Hawaii has historically been used as a testing ground for almost all GMO corn grown in the United States. Over 90% of the GMO corn grown in the mainland U.S. was first developed in Hawaii, with the island of Kauai hosting the largest testing area.

According to a report in The Guardian:

In Kauai, chemical companies Dow, BASF, Syngenta and DuPont spray 17 times more pesticide per acre (mostly herbicides, along with insecticides and fungicides) than on ordinary cornfields in the US mainland, according to the most detailed study of the sector, by the Center for Food Safety.

That’s because they are precisely testing the strain’s resistance to herbicides that kill other plants. About a fourth of the total are called Restricted Use Pesticides because of their harmfulness. Just in Kauai, 18 tons – mostly atrazine, paraquat (both banned in Europe) and chlorpyrifos – were applied in 2012. The World Health Organization this year announced that glyphosate, sold as Roundup, the most common of the non-restricted herbicides, is “probably carcinogenic in humans”.

Waimea is a small town that lies directly downhill from the 12,000 acres of GMO test fields leased mainly from the state. Spraying takes place often, sometimes every couple of days. Residents have complained that when the wind blows downhill from the fields, the chemicals have caused headaches, vomiting, and stinging eyes.

“Your eyes and lungs hurt, you feel dizzy and nauseous. It’s awful,” local middle school special education teacher Howard Hurst told the Guardian. “Here, 10% of the students get special-ed services, but the state average is 6.3%,” he says. “It’s hard to think the pesticides don’t play a role.”

To add insult to injury, Dow AgroSciences’ main lobbyist in Honolulu until recently actually ran the main hospital in town. Although the hospital is only 1,700 ft from a Syngenta field, it has never done any research into the effects of pesticides on its patients.

Hawaiians have attempted to rein in the industrial chemical/farming machine on four separate occasions over the past two years. On August 9, an estimated 10,000 people marched through Honolulu’s main tourist district to protest the collusion of big business and the state in putting profits over citizens’ health.

“The turnout and the number of groups marching showed how many people are very frustrated with the situation,” native Hawaiian activist Walter Ritte said.

Hawaiians have also attempted to use a ballot initiative to force a moratorium on the planting of GMO crops, according to The Guardian:

In Maui County, which includes the islands of Maui and Molokai, both with large GMO corn fields, a group of residents calling themselves the Shaka Movement sidestepped the company-friendly council and launched a ballot initiative that called for a moratorium on all GMO farming until a full environmental impact statement is completed there.

The companies, primarily Monsanto, spent $7.2m on the campaign ($327.95 per “no” vote, reported to be the most expensive political campaign in Hawaii history) and still lost.
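
Taken at face value, the two figures imply a rough vote count (the article itself does not state the totals):

$7,200,000 ÷ $327.95 per vote ≈ 21,950 “no” votes, or roughly 22,000.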

Again, they sued in federal court, and a judge found that the Maui County initiative was preempted by federal law. Those rulings are also being appealed.

Even amidst strong public pressure, the chemical companies that grow the GMO corn have continued to refuse to disclose the chemicals they are using, as well as the specific amounts of each chemical being used. The industry and its political cronies have continually insisted that pesticides are safe.

“We have not seen any credible source of statistical health information to support the claims,” said Bennette Misalucha, executive director of Hawaii Crop Improvement Association in a written statement distributed by a publicist.

Nelson pointed out that the American Academy of Pediatrics’ report, Pesticide Exposure in Children, found “an association between pesticides and adverse birth outcomes, including physical birth defects,” going on to note that local schools have twice been evacuated and kids sent to the hospital due to pesticide drift. “It’s hard to treat a child when you don’t know which chemical he’s been exposed to.”

Sidney Johnson, a pediatric surgeon at the Kapiolani Medical Center for Women and Children who oversees all children born in Hawaii with major birth defects, says he has noticed that the number of babies born with their abdominal organs outside the body, a rare condition known as gastroschisis, has grown from three a year in the 1980s to about a dozen now, according to The Guardian.

Johnson and a team of medical students have been studying hospital records to determine if any of the parents of the infants with gastroschisis were residing near fields that were undergoing spraying during conception and early pregnancy.

“We have the cleanest water and air in the world,” Johnson said. “You kind of wonder why this wasn’t done before,” he said. “Data from other states show there might be a link, and Hawaii might be the best place to prove it.”

It was recently revealed that these chemical companies, unlike farmers, are allowed to operate under an antiquated, decades-old Environmental Protection Agency permit. The permit was grandfathered in from the days of sugar plantations, when the amounts and toxicities of the chemicals used were significantly lower, and it allows toxic chemicals to be discharged into water. Tellingly, the state of Hawaii has asked for a federal exemption to allow these companies to continue not complying with modern standards.

The ominous reality of collusion between these mega-corporations and the political class in Hawaii has seemingly left the citizens of the state with virtually no ability to safeguard their children’s health. We tread dangerously close to corporate fascism when profits are put above the health of the people.

August 26, 2015 Posted by | Civil Liberties, Corruption, Science and Pseudo-Science

After Decades of Denial National Cancer Institute Finally Admits that “Cannabis Kills Cancer”

By Jay Syrmopoulos | The Free Thought Project | August 21, 2015

After decades of claiming that cannabis has no medicinal value, the U.S. government is finally admitting that cannabis can kill cancer cells.

Although it still claims “there is not enough evidence to recommend that patients inhale or ingest cannabis as a treatment for cancer-related symptoms or side effects of cancer therapy,” the admission that “cannabis has been shown to kill cancer cells in the laboratory” highlights a rapidly changing perspective on medicinal cannabis treatments.

The most recent update to the National Cancer Institute’s (NCI) website included a listing of studies indicating anti-tumor effects of cannabis treatment.

Preclinical studies of cannabinoids have investigated the following activities:

Antitumor activity
• Studies in mice and rats have shown that cannabinoids may inhibit tumor growth by causing cell death, blocking cell growth, and blocking the development of blood vessels needed by tumors to grow. Laboratory and animal studies have shown that cannabinoids may be able to kill cancer cells while protecting normal cells.
• A study in mice showed that cannabinoids may protect against inflammation of the colon and may have potential in reducing the risk of colon cancer, and possibly in its treatment.
• A laboratory study of delta-9-THC in hepatocellular carcinoma (liver cancer) cells showed that it damaged or killed the cancer cells. The same study of delta-9-THC in mouse models of liver cancer showed that it had antitumor effects. Delta-9-THC has been shown to cause these effects by acting on molecules that may also be found in non-small cell lung cancer cells and breast cancer cells.
• A laboratory study of cannabidiol (CBD) in estrogen receptor positive and estrogen receptor negative breast cancer cells showed that it caused cancer cell death while having little effect on normal breast cells. Studies in mouse models of metastatic breast cancer showed that cannabinoids may lessen the growth, number, and spread of tumors.
• A laboratory study of cannabidiol (CBD) in human glioma cells showed that when given along with chemotherapy, CBD may make chemotherapy more effective and increase cancer cell death without harming normal cells. Studies in mouse models of cancer showed that CBD together with delta-9-THC may make chemotherapy such as temozolomide more effective.

The NCI, part of the U.S. Department of Health and Human Services, advises that ‘cannabinoids may be useful in treating the side effects of cancer and cancer treatment’ by smoking, eating it in baked products, drinking herbal teas or even spraying it under the tongue.

The site goes on to list other beneficial uses, which include: anti-inflammatory activity, blocking cell growth, preventing the growth of blood vessels that supply tumors, antiviral activity and relieving muscle spasms caused by multiple sclerosis.

Several scientific studies have given indications of these beneficial properties in the past, and this past April the US government’s National Institute on Drug Abuse (NIDA) revised their publications to suggest cannabis could shrink brain tumors by killing off cancer cells, stating, “marijuana can kill certain cancer cells and reduce the size of others.”

“Evidence from one animal study suggests that extracts from whole-plant marijuana can shrink one of the most serious types of brain tumors,” the NIDA said. “Research in mice showed that these extracts, when used with radiation, increased the cancer-killing effects of the radiation.”

Research on marijuana’s potential as a medicine has been stifled for decades by federal restrictions, even though nearly half of the states and the District of Columbia have legalized medical marijuana in some form.

Although cannabis has been increasingly legalized by states, the federal government still classifies marijuana as a Schedule I drug, along with heroin and ecstasy, defining it as having no accepted medical use and a high potential for abuse.

The vast majority of the $1.4 billion spent on marijuana research by the National Institutes of Health absurdly goes to the study of abuse and addiction, with only $297 million spent on researching potential medical benefits.
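
On those two figures alone, the split is roughly:

$297 million ÷ $1.4 billion ≈ 21% devoted to potential medical benefits, leaving roughly 79% for abuse and addiction research.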

Judging by the spending levels, it seems the feds have a vested interest in keeping public opinion of cannabis negative. Perhaps “Big Pharma” is utilizing their financial influence over politicians in an effort to maintain a stranglehold on the medical treatment market.

August 22, 2015 Posted by | Corruption, Economics, Science and Pseudo-Science

Unspoken Death Toll of Fukushima: Nuclear Disaster Killing Japanese Slowly

Sputnik – 20.08.2015

According to London-based independent consultant on radioactivity in the environment Dr. Ian Fairlie, the health toll from the Fukushima nuclear catastrophe is horrific: about 12,000 workers have been exposed to high levels of radiation (some up to 250 mSv); between 2011 and 2015, about 2,000 died from the effects of evacuations, ill-health and suicide related to the disaster; furthermore, an estimated 5,000 will most likely face lethal cancer in the future, and that is just the tip of the iceberg.

What makes matters even worse, the nuclear disaster and subsequent radiation exposure lie at the root of longer-term health effects such as cancers, strokes, cardiovascular (CVS) diseases, hereditary effects and many more.

Embarrassingly, “[t]he Japanese Government, its advisors, and most radiation scientists in Japan (with some honorable exceptions) minimize the risks of radiation. The official widely-observed policy is that small amounts of radiation are harmless: scientifically speaking this is untenable,” Dr. Fairlie pointed out.

The Japanese government even goes so far as to increase the public limit for radiation in Japan from 1 mSv to 20 mSv per year, while its scientists are making efforts to convince the International Commission on Radiological Protection (ICRP) to accept this enormous increase.

“This is not only unscientific, it is also unconscionable,” Dr. Fairlie stressed, adding that “there is never a safe dose, except zero dose.”

However, while the Japanese government is turning a blind eye to radiogenic late effects, the evidence “is solid”: the RERF Foundation, based in Hiroshima and Nagasaki, is still observing the Japanese atomic bomb survivors and registering nuclear radiation’s long-term effects.

“From the UNSCEAR estimate of 48,000 person Sv [the collective dose to the Japanese population from Fukushima], it can be reliably estimated (using a fatal cancer risk factor of 10% per Sv) that about 5,000 fatal cancers will occur in Japan in the future from Fukushima’s fallout,” he noted.
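
The figure he cites follows from a straightforward linear calculation using the numbers in the quote:

48,000 person-Sv × 0.10 fatal cancers per person-Sv = 4,800, i.e. roughly 5,000 fatal cancers.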

Dr. Fairlie added that, in addition to radiation-related problems, former inhabitants of Fukushima Prefecture suffer from post-traumatic stress disorder (PTSD), depression and anxiety disorders that apparently contribute to increased suicides.

The expert also pointed to the 15 percent drop in the number of live births in the prefecture in 2011, as well as higher rates of early spontaneous abortions and a 20 percent rise in the infant mortality rate in 2012.

“It is impossible not to be moved by the scale of Fukushima’s toll in terms of deaths, suicides, mental ill-health and human suffering,” the expert said.

August 21, 2015 Posted by | Deception, Nuclear Power, Science and Pseudo-Science

US Jails People for Cannabis While Govt Promotes It as Cancer Treatment

Sputnik – August 21, 2015

Cannabis is, in fact, extremely effective in fighting cancer, the US government admitted last week. The drug, illegal throughout most of the United States, is now recommended by the government’s official cancer advice website.

Criminalized by the US federal government since 1937, cannabis is being advertised by the US Department of Health and Human Services as “useful in treating the side effects of cancer and cancer treatment” on the agency’s official cancer advice website.

The National Cancer Institute claims cannabinoids, which are the active chemicals in cannabis, can be smoked, inhaled, eaten in baked products, drank in herbal teas, or even sprayed under the tongue as treatment.

The drug can do even more than just treat side effects. Cannabis can also act as an anti-inflammatory agent, prevent the growth of cancer cells, block the growth of blood vessels that supply tumours, and help relieve muscle spasms caused by multiple sclerosis.

The results were based partially on lab tests which showed the decline of cancer cells in mice after exposure to cannabis.

Some activists in the mass media, as well as Hollywood stars, have long touted the medical benefits of the drug.

In response to the multiple scientific studies which have proven marijuana’s efficacy, the US Food and Drug Administration recently approved two cancer treatment drugs which contain cannabinoids.

Several states, including California, New York, and Maine, have already legalized marijuana for medical purposes. Four states, as well as the District of Columbia, have legalized the drug for recreational use, although it remains prohibited under federal law.

Despite these studies, as well as a general push for decriminalization across the country, the US penal system imprisons a shocking number of individuals for nonviolent crimes related to marijuana.

In 2013 alone, 609,423 individuals were arrested for possession of a substance which is now recommended by the US Department of Health.

Background:

US Study Concludes Marijuana Can Kill Cancer Cells

August 20, 2015 Posted by | Civil Liberties, Science and Pseudo-Science, Timeless or most popular
