Are NYT Climate Alarmists Glassy-Eyed Cultists?

By Francis Menton | Manhattan Contrarian | February 16, 2017

Will Happer is an eminent physicist at Princeton who has chosen (along with his colleague Freeman Dyson) to plant a flag on the skeptic side of the climate debate. I’ve had the pleasure of meeting Happer on a couple of occasions. Recently his name has been floated as a potential candidate for the position of Science Advisor to President Trump. (This is the position that was held by eco-fanatic John Holdren during the Obama presidency.) Although it is not final, and others remain in the running, Happer has said that he will take the position if offered.

Yesterday Happer gave an interview to the Guardian newspaper.  When it came to the issue of “climate change,” Happer didn’t pull any punches. Here is my favorite quote:

“There’s a whole area of climate so-called science that is really more like a cult,” Happer told the Guardian. “It’s like Hare Krishna or something like that. They’re glassy-eyed and they chant. It will potentially harm the image of all science.”

I would only comment that in my experience Hare Krishnas don’t take tens of billions of dollars of government money for themselves, and don’t seek to impose energy poverty on everyone else while they themselves jet around on private jets. Other than that, Happer was spot on.

If you are still considering the question of whether what Happer calls “climate so-called science” is real science versus a cult, you may want to review a few articles from the New York Times about the recent California drought and its end. For example, from August 2015, we have an article headlined “California Drought Is Made Worse by Global Warming, Scientists Say.”

Global warming caused by human emissions has most likely intensified the drought in California by 15 to 20 percent, scientists said on Thursday, warning that future dry spells in the state are almost certain to be worse than this one as the world continues to heat up. . . . The paper provides new scientific support for political leaders, including President Obama and Gov. Jerry Brown of California, who have cited human emissions and the resulting global warming as a factor in the drought.

Or try this one from January 5, 2017 (just six weeks ago!), headlined “A Winery Battles Climate Change.”

After decades in the business, the Jacksons are sensitive to slight variations in the weather, and they are convinced of one thing: It is getting hotter and drier. . . . Climate change is forcing the Jacksons to confront questions both practical and existential: Can you make fine wine with less water? . . . Already, winemakers in the region are noticing distinct changes that signal a hotter, drier future.

And then, of course, things promptly turned around and the rains came — as they always do. Suddenly California is in the news because it has had so much rain that some of its dams are threatened with overflowing. Well, what caused that? You guessed it — climate change! From yesterday’s NYT, here is the lead headline from the National Section: “A Climate Change Warning for California’s Dams.” What, does “climate change” cause both wet and dry?

Scientists have said for years that a warming atmosphere should lead to more intense and frequent storms in many regions.

Now you tell us! As usual, climate change as the cause of everything is the classic unfalsifiable proposition. The word “cult” may be a little over the top, but whatever it is, it sure isn’t science.


New York Times Manipulates NOAA’s Climate Science Scandal

By Marc Morano | Climate Depot | February 12, 2017

If you were only to read the New York Times’ latest article on the most recent climate-change scandal, first reported by David Rose in The Mail on Sunday, you would never know that there was any scandal to speak of in the first place.

Headline: “No Data Manipulation in 2015 Climate Study, Researchers Say.” Well, not all researchers. The background of the data manipulation story revolves around accusations made by John Bates, a recently retired scientist at the National Oceanic and Atmospheric Administration (NOAA). Among his several accusations is that NOAA “rushed to publish a landmark paper that exaggerated global warming and was timed to influence the historic Paris agreement on climate change,” a paper which would have been welcomed with open arms by the Obama administration. On February 4, Bates published a lengthy guest post at the blog Climate Etc. detailing the accusations. Here is a brief list of some of the charges:

1. Climate scientist Tom Karl failed to archive the land temperature data set and thus also failed to “follow the policy of his own Agency [and] the guidelines in Science magazine for dataset archival and documentation.”

2. The authors also chose to “use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard for significance of 95%,” and according to Bates, the authors failed to give a justification for this when pressed.

3. Karl routinely “had his ‘thumb on the scale’ — in the documentation, scientific choices, and release of datasets — in an effort to discredit the notion of a global warming hiatus and rush to time the publication of the paper to influence national and international deliberations on climate policy.” Bates adds, “[a] NOAA NCEI supervisor remarked how it was eye-opening to watch Karl work the co-authors, mostly subtly but sometimes not, pushing choices to emphasize warming.”

4. Experimental datasets were used that were not run through operational readiness review (ORR) and were not archived.

To sum up, the “data manipulation,” as characterized by The Mail on Sunday, consisted of failing to follow proper protocols, selecting datasets that had not been properly vetted, and bending scientific methodology toward a political rather than a purely scientific end.


Exposed: How world leaders were duped into investing billions over manipulated global warming data

By David Rose | The Mail on Sunday | February 4, 2017

The Mail on Sunday today reveals astonishing evidence that the organisation that is the world’s leading source of climate data rushed to publish a landmark paper that exaggerated global warming and was timed to influence the historic Paris Agreement on climate change.

A high-level whistleblower has told this newspaper that America’s National Oceanic and Atmospheric Administration (NOAA) breached its own rules on scientific integrity when it published the sensational but flawed report, aimed at making the maximum possible impact on world leaders including Barack Obama and David Cameron at the UN climate conference in Paris in 2015.

The report claimed that the ‘pause’ or ‘slowdown’ in global warming in the period since 1998 – revealed by UN scientists in 2013 – never existed, and that world temperatures had been rising faster than scientists expected. Launched by NOAA with a public relations fanfare, it was splashed across the world’s media, and cited repeatedly by politicians and policy makers.

But the whistleblower, Dr John Bates, a top NOAA scientist with an impeccable reputation, has shown The Mail on Sunday irrefutable evidence that the paper was based on misleading, ‘unverified’ data.

It was never subjected to NOAA’s rigorous internal evaluation process – which Dr Bates devised.

His vehement objections to the publication of the faulty data were overridden by his NOAA superiors in what he describes as a ‘blatant attempt to intensify the impact’ of what became known as the Pausebuster paper.

His disclosures are likely to stiffen President Trump’s determination to enact his pledges to reverse his predecessor’s ‘green’ policies, and to withdraw from the Paris deal – so triggering an intense political row.

In an exclusive interview, Dr Bates accused the lead author of the paper, Thomas Karl, who was until last year director of the NOAA section that produces climate data – the National Centers for Environmental Information (NCEI) – of ‘insisting on decisions and scientific choices that maximised warming and minimised documentation… in an effort to discredit the notion of a global warming pause, rushed so that he could time publication to influence national and international deliberations on climate policy’.

Dr Bates was one of two Principal Scientists at NCEI, based in Asheville, North Carolina.

Official delegations from America, Britain and the EU were strongly influenced by the flawed NOAA study as they hammered out the Paris Agreement – and committed advanced nations to sweeping reductions in their use of fossil fuel and to spending £80 billion every year on new, climate-related aid projects.

The scandal has disturbing echoes of the ‘Climategate’ affair which broke shortly before the UN climate summit in 2009, when the leak of thousands of emails between climate scientists suggested they had manipulated and hidden data. Some were British experts at the influential Climatic Research Unit at the University of East Anglia.

NOAA’s 2015 ‘Pausebuster’ paper was based on two new temperature datasets – one containing measurements of temperatures at the planet’s surface on land, the other at the surface of the seas.

Both datasets were flawed. This newspaper has learnt that NOAA has now decided that the sea dataset will have to be replaced and substantially revised just 18 months after it was issued, because it used unreliable methods which overstated the speed of warming. The revised data will show both lower temperatures and a slower rate in the recent warming trend.

The land temperature dataset used by the study was afflicted by devastating bugs in its software that rendered its findings ‘unstable’.

The paper relied on a preliminary, ‘alpha’ version of the data which was never approved or verified.

A final, approved version has still not been issued. None of the data on which the paper was based was properly ‘archived’ – a mandatory requirement meant to ensure that raw data and the software used to process it is accessible to other scientists, so they can verify NOAA results.

Dr Bates retired from NOAA at the end of last year after a 40-year career in meteorology and climate science. As recently as 2014, the Obama administration awarded him a special gold medal for his work in setting new, supposedly binding standards ‘to produce and preserve climate data records’.

Yet when it came to the paper timed to influence the Paris conference, Dr Bates said, these standards were flagrantly ignored.

The paper was published in June 2015 by the journal Science. Entitled ‘Possible artifacts of data biases in the recent global surface warming hiatus’, the document said the widely reported ‘pause’ or ‘slowdown’ was a myth.

Less than two years earlier, a blockbuster report from the UN Intergovernmental Panel on Climate Change (IPCC), which drew on the work of hundreds of scientists around the world, had found ‘a much smaller increasing trend over the past 15 years 1998-2012 than over the past 30 to 60 years’. Explaining the pause became a key issue for climate science. It was seized on by global warming sceptics, because the level of CO2 in the atmosphere had continued to rise.

Some scientists argued that the existence of the pause meant the world’s climate is less sensitive to greenhouse gases than previously thought, so that future warming would be slower. One of them, Professor Judith Curry, then head of climate science at the Georgia Institute of Technology, said it suggested that computer models used to project future warming were ‘running too hot’.

However, the Pausebuster paper said while the rate of global warming from 1950 to 1999 was 0.113C per decade, the rate from 2000 to 2014 was actually higher, at 0.116C per decade. The IPCC’s claim about the pause, it concluded, ‘was no longer valid’.
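To make those ‘per decade’ figures concrete, here is a minimal sketch of how such a rate is conventionally computed: fit an ordinary least-squares line to annual temperature anomalies and scale the slope by ten. The anomaly series below is invented for illustration; it is not data from the paper.

```python
# Minimal sketch of how a 'C per decade' warming rate is conventionally
# computed: an ordinary least-squares fit to annual anomalies, with the
# slope scaled by ten. The anomaly series is invented, not the paper's data.
import numpy as np

years = np.arange(2000, 2015)            # the paper's 2000-2014 window
rng = np.random.default_rng(0)
anomalies = 0.0116 * (years - 2000) + rng.normal(0.0, 0.05, years.size)

slope_per_year = np.polyfit(years, anomalies, 1)[0]
# Recovers roughly the 0.0116 C/yr rate used to generate the series.
print(f"trend: {slope_per_year * 10:.3f} C/decade")
```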

The impact was huge and lasting. On publication day, the BBC said the pause in global warming was ‘an illusion caused by inaccurate data’.

One American magazine described the paper as a ‘science bomb’ dropped on sceptics.

Its impact could be seen in this newspaper last month when, writing to launch his Ladybird book about climate change, Prince Charles stated baldly: ‘There isn’t a pause… it is hard to reject the facts on the basis of the evidence.’

Data changed to make the sea appear warmer

The sea dataset used by Thomas Karl and his colleagues – known as Extended Reconstructed Sea Surface Temperatures version 4, or ERSSTv4 – tripled the warming trend over the sea during the years 2000 to 2014, from just 0.036C per decade as stated in version 3 to 0.099C per decade. Individual measurements in some parts of the globe had increased by about 0.1C, and this resulted in the dramatic increase in the overall global trend published by the Pausebuster paper.

But Dr Bates said this increase in temperatures was achieved by dubious means. Its key error was an upwards ‘adjustment’ of readings from fixed and floating buoys, which are generally reliable, to bring them into line with readings from a much more doubtful source – water taken in by ships. This, Dr Bates explained, has long been known to be questionable: ships are themselves sources of heat, readings vary from ship to ship, and the depth of water intake varies according to how heavily a ship is laden – all of which affects temperature readings.

Dr Bates said: ‘They had good data from buoys. And they threw it out and “corrected” it by using the bad data from ships. You never change good data to agree with bad, but that’s what they did – so as to make it look as if the sea was warmer.’

ERSSTv4 ‘adjusted’ buoy readings up by 0.12C. It also ignored data from satellites that measure the temperature of the lower atmosphere, which are also considered reliable. Dr Bates said he gave the paper’s co-authors ‘a hard time’ about this, ‘and they never really justified what they were doing.’
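As a toy illustration of why a constant +0.12C buoy adjustment can steepen a blended trend rather than merely shift its level, consider a record in which the share of buoy observations grows over time. All numbers below are invented; this is not NOAA’s ERSSTv4 processing.

```python
# Toy illustration of how a constant +0.12 C buoy adjustment can steepen a
# blended sea-surface trend when buoys supply a growing share of readings.
# All numbers are invented; this is not NOAA's ERSSTv4 processing.
import numpy as np

years = np.arange(2000, 2015)
true_sst = 0.004 * (years - 2000)                 # assumed underlying signal, C
ship = true_sst + 0.12                            # ships read warm by 0.12 C
buoy = true_sst                                   # buoys taken as unbiased here
buoy_share = np.linspace(0.1, 0.8, years.size)    # buoys gradually dominate

def trend_per_decade(series):
    return np.polyfit(years, series, 1)[0] * 10   # OLS slope, C/decade

raw_blend = buoy_share * buoy + (1 - buoy_share) * ship
adj_blend = buoy_share * (buoy + 0.12) + (1 - buoy_share) * ship

# The raw blend carries a shrinking ship-bias term, which flattens its trend;
# raising buoys to the ship reference removes that term and steepens it.
print(f"raw blend:              {trend_per_decade(raw_blend):+.3f} C/decade")
print(f"buoys raised by 0.12 C: {trend_per_decade(adj_blend):+.3f} C/decade")
```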

Now, some of those same authors have produced the pending, revised new version of the sea dataset – ERSSTv5. A draft of a document that explains the methods used to generate version 5, and which has been seen by this newspaper, indicates the new version will reverse the flaws in version 4, changing the buoy adjustments and including some satellite data and measurements from a special high-tech floating buoy network known as Argo. As a result, it is certain to show reductions in both absolute temperatures and recent global warming.

The second dataset used by the Pausebuster paper was a new version of NOAA’s land records, known as the Global Historical Climatology Network (GHCN), an analysis over time of temperature readings from about 4,000 weather stations spread across the globe.

This new version found past temperatures had been cooler than previously thought, and recent ones higher – so that the warming trend looked steeper. For the period 2000 to 2014, the paper increased the rate of warming on land from 0.15C to 0.164C per decade.

In the weeks after the Pausebuster paper was published, Dr Bates conducted a one-man investigation into this. His findings were extraordinary. Not only had Mr Karl and his colleagues failed to follow any of the formal procedures required to approve and archive their data, they had used a ‘highly experimental early run’ of a programme that tried to combine two previously separate sets of records.

This had undergone the critical process known as ‘pairwise homogeneity adjustment’, a method of spotting ‘rogue’ readings from individual weather stations by comparing them with others nearby.
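The article does not spell the method out, but the pairwise idea can be sketched simply: difference a target station against each of several neighbours, and flag the target when a step change shows up with the same sign and size in every pair. The following is a bare-bones illustration with invented data, not the actual GHCN-M algorithm.

```python
# Bare-bones sketch of the pairwise idea: difference a target station
# against several neighbours and look for a step change common to every
# pair. Data are invented; this is not the actual GHCN-M algorithm.
import numpy as np

rng = np.random.default_rng(1)
n_months = 120
regional = np.cumsum(rng.normal(0.0, 0.1, n_months))   # shared climate signal

neighbours = [regional + rng.normal(0.0, 0.2, n_months) for _ in range(5)]
target = regional + rng.normal(0.0, 0.2, n_months)
target[60:] += 0.8              # artificial inhomogeneity, e.g. a station move

def step_size(diff, k):
    """Mean shift in a difference series before vs. after month k."""
    return diff[k:].mean() - diff[:k].mean()

# The break appears with the same sign and size in every target-neighbour
# pair, so it is the target, not the regional climate, that changed.
shifts = [step_size(target - nb, 60) for nb in neighbours]
print("pairwise shifts:", np.round(shifts, 2))          # all near +0.8
```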

However, this process requires extensive, careful checking which was only just beginning, so that the data was not ready for operational use. Now, more than two years after the Pausebuster paper was submitted to Science, the new version of GHCN is still undergoing testing.

Moreover, the GHCN software was afflicted by serious bugs. They caused it to become so ‘unstable’ that every time the raw temperature readings were run through the computer, it gave different results. The new, bug-free version of GHCN has still not been approved and issued. It is, Dr Bates said, ‘significantly different’ from that used by Mr Karl and his co-authors.

Dr Bates revealed that the failure to archive and make available fully documented data not only violated NOAA rules, but also those set down by Science. Before he retired last year, he continued to raise the issue internally. Then came the final bombshell. Dr Bates said: ‘I learned that the computer used to process the software had suffered a complete failure.’

The reason for the failure is unknown, but it means the Pausebuster paper can never be replicated or verified by other scientists.

The flawed conclusions of the Pausebuster paper were widely discussed by delegates at the Paris climate change conference. Mr Karl had a longstanding relationship with President Obama’s chief science adviser, John Holdren, giving him a hotline to the White House.

Mr Holdren was also a strong advocate of robust measures to curb emissions. Britain’s then Prime Minister David Cameron claimed at the conference that ‘97 per cent of scientists say climate change is urgent and man-made and must be addressed’ and called for ‘a binding legal mechanism’ to ensure the world got no more than 2C warmer than in pre-industrial times.

President Obama stressed his Clean Power Plan at the conference, which mandates American power stations to make big emissions cuts.

President Trump has since pledged to scrap it and to withdraw from the Paris Agreement.

Whatever takes its place, said Dr Bates, ‘there needs to be a fundamental change to the way NOAA deals with data so that people can check and validate scientific results. I’m hoping that this will be a wake-up call to the climate science community – a signal that we have to put in place processes to make sure this kind of crap doesn’t happen again.

‘I want to address the systemic problems. I don’t care whether modifications to the datasets make temperatures go up or down. But I want the observations to speak for themselves, and for that, there needs to be a new emphasis that ethical standards must be maintained.’

He said he decided to speak out after seeing reports in papers including the Washington Post and Forbes magazine claiming that scientists feared the Trump administration would fail to maintain and preserve NOAA’s climate records.

Dr Bates said: ‘How ironic it is that there is now this idea that Trump is going to trash climate data, when key decisions were earlier taken by someone whose responsibility it was to maintain its integrity – and failed.’

NOAA not only failed, but it effectively mounted a cover-up when challenged over its data. After the paper was published, the US House of Representatives Science Committee launched an inquiry into its Pausebuster claims. NOAA refused to comply with subpoenas from the committee chairman, the Texas Republican Lamar Smith, demanding internal emails, and falsely claimed that no one had raised concerns about the paper internally.

Last night Mr Smith thanked Dr Bates ‘for courageously stepping forward to tell the truth about NOAA’s senior officials playing fast and loose with the data in order to meet a politically predetermined conclusion’. He added: ‘The Karl study used flawed data, was rushed to publication in an effort to support the President’s climate change agenda, and ignored NOAA’s own standards for scientific study.’

Professor Curry, now the president of the Climate Forecast Applications Network, said last night: ‘Large adjustments to the raw data, and substantial changes in successive dataset versions, imply substantial uncertainties.’

It was time, she said, that politicians and policymakers took these uncertainties on board.

Last night Mr Karl admitted the data had not been archived when the paper was published. Asked why he had not waited, he said: ‘John Bates is talking about a formal process that takes a long time.’ He denied he was rushing to get the paper out in time for Paris, saying: ‘There was no discussion about Paris.’

They played fast and loose with the figures

He also admitted that the final, approved and ‘operational’ edition of the GHCN land data would be ‘different’ from that used in the paper.

As for the ERSSTv4 sea dataset, he claimed it was other records – such as the UK Met Office’s – which were wrong, because they understated global warming and were ‘biased too low’. Jeremy Berg, Science’s editor-in-chief, said: ‘Dr Bates raises some serious concerns. After the results of any appropriate investigations… we will consider our options.’ He said that ‘could include retracting that paper’. NOAA declined to comment.

It’s not the first time we’ve exposed dodgy climate data, which is why we’ve dubbed it: Climate Gate 2

Dr John Bates’s disclosures about the manipulation of data behind the ‘Pausebuster’ paper are the biggest scientific scandal since ‘Climategate’ in 2009 when, as this paper reported, thousands of leaked emails revealed scientists were trying to block access to data, and using a ‘trick’ to conceal embarrassing flaws in their claims about global warming.

Both scandals suggest a lack of transparency and, according to Dr Bates, a failure to observe proper ethical standards.

Because of NOAA’s failure to ‘archive’ data used in the paper, its results can never be verified.

Like Climategate, this scandal is likely to reverberate around the world, and reignite some of science’s most hotly contested debates.

Has there been an unexpected pause in global warming? If so, is the world less sensitive to carbon dioxide than climate computer models suggest?

And does this mean that truly dangerous global warming is less imminent, and that politicians’ repeated calls for immediate ‘urgent action’ to curb emissions are exaggerated?


Judith Curry has also blogged on the same story.


Climate scientists versus climate data

By John Bates | Climate Etc. | February 4, 2017

A look behind the curtain at NOAA’s climate data center.

I read with great irony recently that scientists are “frantically copying U.S. Climate data, fearing it might vanish under Trump” (e.g., Washington Post 13 December 2016). As a climate scientist formerly responsible for NOAA’s climate archive, the most critical issue in archival of climate data is actually scientists who are unwilling to formally archive and document their data. I spent the last decade cajoling climate scientists to archive their data and fully document the datasets. I established a climate data records program that was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs), which accurately describe the Earth’s changing environment.

The most serious example of a climate scientist not archiving or documenting a critical climate dataset was the study of Tom Karl et al. 2015 (hereafter referred to as the Karl study or K15), purporting to show no ‘hiatus’ in global warming in the 2000s (Federal scientists say there never was any global warming “pause”). The study drew criticism from other climate scientists, who disagreed with K15’s conclusion about the ‘hiatus.’ (Making sense of the early-2000s warming slowdown). The paper also drew the attention of the Chairman of the House Science Committee, Representative Lamar Smith, who questioned the timing of the report, which was issued just prior to the Obama Administration’s Clean Power Plan submission to the Paris Climate Conference in 2015.

In the following sections, I provide the details of how Mr. Karl failed to disclose critical information to NOAA, Science Magazine, and Chairman Smith regarding the datasets used in K15. I have extensive documentation that provides independent verification of the story below. I also provide my suggestions for how we might keep such a flagrant manipulation of scientific integrity guidelines and scientific publication standards from happening in the future. Finally, I provide some links to examples of what well documented CDRs look like that readers might contrast and compare with what Mr. Karl has provided.

Background

In 2013, prior to the Karl study, the National Climatic Data Center [NCDC, now the NOAA National Centers for Environmental Information (NCEI)] had just adopted much improved processes for formal review of Climate Data Records, a process I formulated [link]. The land temperature dataset used in the Karl study had never been processed through the station adjustment software before, which led me to believe something was amiss. When I pressed the co-authors, they said they had decided not to archive the dataset, but did not defend the decision. One of the co-authors said there were ‘some decisions [he was] not happy with’. The data used in the K15 paper were only made available through a web site, not in machine-readable form, and lacked proper versioning and any notice that they were research rather than operational data. I was dumbstruck that Tom Karl, the NCEI Director in charge of NOAA’s climate data archive, would not follow the policy of his own Agency nor the guidelines in Science magazine for dataset archival and documentation.

I questioned another co-author about why they chose to use a 90% confidence threshold for evaluating the statistical significance of surface temperature trends, instead of the standard for significance of 95% — he also expressed reluctance and did not defend the decision. A NOAA NCEI supervisor remarked how it was eye-opening to watch Karl work the co-authors, mostly subtly but sometimes not, pushing choices to emphasize warming. Gradually, in the months after K15 came out, the evidence kept mounting that Tom Karl constantly had his ‘thumb on the scale’—in the documentation, scientific choices, and release of datasets—in an effort to discredit the notion of a global warming hiatus and rush to time the publication of the paper to influence national and international deliberations on climate policy.
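To see what the 90% versus 95% choice does in practice, here is a minimal sketch with invented numbers: the same estimated trend can pass a 90% significance test and fail a 95% one, simply because the critical value is lower. Nothing here reproduces K15’s actual trend or uncertainty calculation.

```python
# Minimal sketch of why the 90% vs 95% choice matters. The trend and its
# standard error are invented; this does not reproduce K15's calculation.
from statistics import NormalDist

trend = 0.116   # C/decade, hypothetical estimate
stderr = 0.065  # hypothetical standard error of that estimate
z = trend / stderr                                    # ~1.78

for conf in (0.90, 0.95):
    crit = NormalDist().inv_cdf(1 - (1 - conf) / 2)   # two-sided critical value
    verdict = "significant" if abs(z) > crit else "not significant"
    print(f"{conf:.0%} threshold: critical z = {crit:.2f} -> trend {verdict}")
# At 90% (z > 1.64) this trend passes; at the standard 95% (z > 1.96) it fails.
```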

Defining an Operational Climate Data Record

For nearly two decades, I’ve advocated that if climate datasets are to be used in important policy decisions, they must be fully documented, subject to software engineering management and improvement processes, and be discoverable and accessible to the public with rigorous information preservation standards. I was able to implement such policies, with the help of many colleagues, through the NOAA Climate Data Record policies (CDR) [link].

Once the CDR program was funded, beginning in 2007, I was able to put together a team and pursue my goals of operational processing of important climate data records emphasizing the processes required to transition research datasets into operations (known as R2O). Figure 1 summarizes the steps required to accomplish this transition in the key elements of software code, documentation, and data.

Figure 1. Research to operations transition process methodology from Bates et al. 2016.

Unfortunately, the NCDC/NCEI surface temperature processing group was split on whether to adopt this process, with scientist Dr. Thomas C. Peterson (a co-author on K15, now retired from NOAA) vigorously opposing it. Tom Karl never required the surface temperature group to use the rigor of the CDR methodology, although a document was prepared identifying what parts of the surface temperature processing had to be improved to qualify as an operational CDR.

Tom Karl liked the maturity matrix so much that he modified the matrix categories so he could claim a number of NCEI products were “Examples of ‘Gold’ standard NCEI Products (Data Set Maturity Matrix Model Level 6).” See his NCEI overview presentation [ncei-overview-2015nov-2], which all NCEI employees were told to use, even though there had never been any maturity assessment of any of the products.

NCDC/NCEI surface temperature processing and archival

In the fall of 2012, the monthly temperature products issued by NCDC were incorrect for 3 months in a row [link]. As a result, the press releases and datasets had to be withdrawn and reissued. Dr. Mary Kicza, then the Associate Administrator of NESDIS (the parent organization of NCDC/NCEI within NOAA), noted that these repeated errors reflected poorly on NOAA and required NCDC/NCEI to improve its software management processes so that such mistakes would be minimized in the future. Over the next several years, NCDC/NCEI had an incident report conducted to trace these errors and recommend corrective actions.

Following those and other recommendations, NCDC/NCEI began to implement new software management and process management procedures, adopting some of the elements of the CDR R2O process. In 2014 an NCDC/NCEI Science Council was formed to review new science activities and to review and approve new science products for operational release. A draft operational readiness review (ORR) was prepared and used for approval of all operational product releases, which was finalized and formally adopted in January 2015. Along with this process, a contractor who had worked at the CMMI Institute (CMMI, Capability Maturity Model Integration, is a software engineering process level improvement training and appraisal program) was hired to improve software processes, with a focus on improvement and code rejuvenation of the surface temperature processing code, in particular the GHCN-M dataset.

The first NCDC/NCEI surface temperature software to be put through this rejuvenation was the pairwise homogeneity adjustment portion of processing for the GHCN-Mv4 beta release of October 2015. The incident report had found that there were unidentified coding errors in the GHCN-M processing that caused unpredictable results and different results every time code was run.

The generic flow of data used in processing of the NCDC/NCEI global temperature product suite is shown schematically in Figure 2. There are three steps to the processing, and two of the three steps are done separately for the ocean versus land data. Step 1 is the compilation of observations either from ocean sources or land stations. Step 2 involves applying various adjustments to the data, including bias adjustments, and provides as output the adjusted and unadjusted data on a standard grid. Step 3 involves application of a spatial analysis technique (empirical orthogonal teleconnections, EOTs) to merge and smooth the ocean and land surface temperature fields and provide these merged fields as anomaly fields for ocean, land and global temperatures. This is the product used in K15. Rigorous ORR for each of these steps in the global temperature processing began at NCDC in early 2014.

Figure 2. Generic data flow for NCDC/NCEI surface temperature products.
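As a reading aid, the three-step flow in Figure 2 can be written out as a pipeline skeleton. This is purely schematic — every function body is an empty stub standing in for NOAA’s actual compilation, adjustment, and EOT-merge code, and all names are mine, not NOAA’s.

```python
# Purely schematic skeleton of the three-step flow in Figure 2.
# All names and structures are illustrative stubs, not NOAA's code.

def compile_observations(source: str) -> dict:
    """Step 1: compile raw readings from ocean sources or land stations."""
    return {"source": source, "readings": []}          # stub

def adjust_and_grid(raw: dict) -> dict:
    """Step 2: apply bias and other adjustments; output adjusted and
    unadjusted data on a standard grid."""
    return {"adjusted": raw, "unadjusted": raw}        # stub

def merge_with_eots(ocean: dict, land: dict) -> dict:
    """Step 3: merge and smooth the ocean and land fields using empirical
    orthogonal teleconnections (EOTs), yielding ocean, land, and global
    anomaly fields."""
    return {"ocean": ocean, "land": land, "global": "merged anomaly fields"}

# Steps 1-2 run separately for ocean and land; step 3 combines them.
ocean_grid = adjust_and_grid(compile_observations("ocean"))
land_grid = adjust_and_grid(compile_observations("land"))
global_product = merge_with_eots(ocean_grid, land_grid)  # product used in K15
```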

In K15, the authors describe that the land surface air temperature dataset included the GHCN-M station data and also the new ISTI (Integrated Surface Temperature Initiative) data that was run through the then operational GHCN-M bias correction and gridding program (i.e., Step 2 of land air temperature processing in Figure 2). They further indicated that this processing and subsequent corrections were ‘essentially the same as those used in GHCN-Monthly version 3’. This may have been the case; however, doing so failed to follow the process that had been initiated to ensure the quality and integrity of datasets at NCDC/NCEI.

The GHCN-M V4 beta was put through an ORR in October 2015; the presentation made it clear that any GHCN-M version using the ISTI dataset should, and would, be called version 4. This is confirmed by parsing the file name actually used on the FTP site for the K15 dataset [link] (NOTE: placing a non-machine-readable copy of a dataset on an FTP site does not constitute archiving a dataset). One file is named ‘box.12.adj.4.a.1.20150119’, where ‘adj’ indicates adjusted (passed through step 2 of the land processing) and ‘4.a.1’ means version 4 alpha run 1; the entire name indicating GHCN-M version 4a run 1. That is, the folks who did the processing for K15 and saved the file actually used the correct naming and versioning, but K15 did not disclose this. Clearly labeling the dataset would have indicated this was a highly experimental early GHCN-M version 4 run rather than a routine, operational update. As such, according to NOAA scientific integrity guidelines, it would have required a disclaimer not to use the dataset for routine monitoring.
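Bates’s reading of that file name can be checked mechanically. Here is a small sketch; the field meanings are inferred from his description above, not taken from any published NOAA naming specification.

```python
# Parse the K15 dataset file name the way Bates reads it. The field
# meanings come from his description above, not from any published
# NOAA naming specification.
name = "box.12.adj.4.a.1.20150119"
parts = name.split(".")   # ['box', '12', 'adj', '4', 'a', '1', '20150119']

status = {"adj": "adjusted (passed step 2 of land processing)"}[parts[2]]
stage = {"a": "alpha", "b": "beta"}[parts[4]]   # 'b' mapping is an assumption
label = f"GHCN-M version {parts[3]} {stage}, run {parts[5]}"

print(status)    # adjusted (passed step 2 of land processing)
print(label)     # GHCN-M version 4 alpha, run 1
print(parts[6])  # 20150119 - apparently a date stamp, 19 January 2015
```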

In August 2014, in response to the continuing software problems with GHCNMv3.2.2 (version of August 2013), the NCDC Science Council was briefed about a proposal to subject the GHCNMv3 software, and particularly the pairwise homogeneity analysis portion, to a rigorous software rejuvenation effort to bring it up to CMMI level 2 standards and resolve the lingering software errors. All software has errors and it is not surprising there were some, but the magnitude of the problem was significant and a rigorous process of software improvement like the one proposed was needed. However, this effort was just beginning when the K15 paper was submitted, and so K15 must have used data with some experimental processing that combined aspects of V3 and V4 with known flaws. The GHCNMv3.X used in K15 did not go through any ORR process, and so what precisely was done is not documented. The ORR package for GHCNMv4 beta (in October 2015) uses the rejuvenated software and also includes two additional quality checks versus version 3.

The question of which version of the GHCN-M software K15 used is further confounded by the fact that GHCNMv3.3.0, the upgrade from version 3.2.2, only went through an ORR in April 2015 (i.e., after the K15 paper was submitted and revised). The GHCN-Mv3.3.0 ORR presentation demonstrated that the GHCN-M version changes between V3.2.2 and V3.3.0 had impacts on rankings of warmest years and trends. The data flow that was operational in June 2015 is shown in figure 3.

Figure 3. Data flow for surface temperature products described in K15 Science paper. Green indicates operational datasets having passed ORR and archived at time of publication. Red indicates experimental datasets never subject to ORR and never archived.

It is clear that the actual nearly-operational release of GHCN-Mv4 beta is significantly different from the version GHCNM3.X used in K15. Since the version GHCNM3.X never went through any ORR, the resulting dataset was also never archived, and it is virtually impossible to replicate the result in K15.

At the time of the publication of K15, the final step in processing the NOAAGlobalTempV4 had been approved through an ORR, but not in the K15 configuration. It is significant that the current operational version of NOAAGlobalTempV4 uses GHCN-M V3.3.0 and does not include the ISTI dataset used in the Science paper. The K15 global merged dataset is also not archived nor is it available in machine-readable form. This is why the two boxes in figure 3 are colored red.

The lack of archival of the GHCN-M V3.X and the global merged product is also in violation of Science policy on making data available [link]. This policy states: “Climate data. Data should be archived in the NOAA climate repository or other public databases”. Did Karl et al. disclose to Science Magazine that they would not be following the NOAA archive policy, would not archive the data, and would provide access only to a non-machine-readable version on an FTP server?

For ocean temperatures, the ERSST version 4 is used in the K15 paper and represents a major update from the previous version. The bias correction procedure was changed and this resulted in different SST anomalies and different trends during the last 15+ years relative to ERSST version 3. ERSSTV4 beta, a pre-operational release, was briefed to the NCDC Science Council and approved on 30 September 2014.

The ORR for ERSSTV4, the operational release, took place in the NCDC Science Council on 15 January 2015. The ORR focused on process; questions about some of the controversial scientific choices made in the production of that dataset will be discussed in a separate post. The review went well and there was only one point of discussion on process. One slide in the presentation indicated that operational release was to be delayed to coincide with the Karl et al. 2015 Science paper release. Several Science Council members objected to this, noting the K15 paper did not contain any further methodological information—all of that had already been published and thus there was no rationale to delay the dataset release. After discussion, the Science Council voted to approve the ERSSTv4 ORR and recommend immediate release.

The Science Council reported this recommendation to the NCDC Executive Council, the highest NCDC management board. In the NCDC Executive Council meeting, Tom Karl did not approve the release of ERSSTv4, noting that he wanted its release to coincide with the release of the next version of GHCNM (GHCNMv3.3.0) and NOAAGlobalTemp. Those products each went through an ORR at NCDC Science Council on 9 April 2015, and were used in operations in May. The ERSSTv4 dataset, however, was still not released. NCEI used these new analyses, including ERSSTv4, in its operational global analysis even though it was not being operationally archived. The operational version of ERSSTv4 was only released to the public following publication of the K15 paper. The withholding of the operational version of this important update came in the middle of a major ENSO event, thereby depriving the public of an important source of updated information, apparently for the sole purpose of Mr. Karl using the data in his paper before making the data available to the public.

So, in every aspect of the preparation and release of the datasets leading into K15, we find Tom Karl’s thumb on the scale pushing for, and often insisting on, decisions that maximize warming and minimize documentation. I finally decided to document what I had found using the climate data record maturity matrix approach. I did this and sent my concerns to the NCEI Science Council in early February 2016 and asked to be added to the agenda of an upcoming meeting. I was asked to turn my concerns into a more general presentation on requirements for publishing and archiving. Some on the Science Council, particularly the younger scientists, indicated they had not known of the Science requirement to archive data and were not aware of the open data movement. They promised to begin an archive request for the K15 datasets that were not archived; however, I have not been able to confirm that they have been archived. I later learned that the computer used to process the software had suffered a complete failure, leading to a tongue-in-cheek joke by some who had worked on it that the failure was deliberate to ensure the result could never be replicated.

Where do we go from here?

I have wrestled for a long time about what to do about this incident. I finally decided that there needs to be systemic change both in the operation of government data centers and in scientific publishing, and I have decided to become an advocate for such change. First, Congress should re-introduce and pass the OPEN Government Data Act. The Act states that federal datasets must be archived and made available in machine readable form, neither of which was done by K15. The Act was introduced in the last Congress and the Senate passed it unanimously in the lame duck session, but the House did not. This bodes well for re-introduction and passage in the new Congress.

However, the Act will be toothless without an enforcement mechanism. For that, there should be mandatory, independent certification of federal data centers. As I noted, the scientists working in the trenches would actually welcome this, as the problem has been one of upper management taking advantage of their position to thwart the existing executive orders and a lack of process adopted within Agencies at the upper levels. Only an independent, outside body can provide the needed oversight to ensure Agencies comply with the OPEN Government Data Act.

Similarly, scientific publishers have formed the Coalition on Publishing Data in the Earth and Space Sciences (COPDESS) with a signed statement of commitment to ensure open and documented datasets are part of the publication process. Unfortunately, they, too, lack any standard checklist that peer reviewers and editors can use to ensure the statement of commitment is actually enforced. In this case, and for assessing archives, I would advocate a metric such as the data maturity model that I and colleagues have developed. This model has now been adopted and adapted by several different groups, applied to hundreds of datasets across the geophysical sciences, and has been found useful for ensuring information preservation, discovery, and accessibility.

Finally, there needs to be a renewed effort by scientists and scientific societies to provide training and conduct more meetings on ethics. Ethics needs to be a regular topic at major scientific meetings, in graduate classrooms, and in continuing professional education. Respectful discussion of different points of view should be encouraged. Fortunately, there is initial progress to report here, as scientific societies are now coming to grips with the need for discussion of and guidelines for scientific ethics.

There is much to do in each of these areas. Although I have retired from the federal government, I have not retired from being a scientist. I now have the luxury of spending more time on these things that I am most passionate about. I also appreciate the opportunity to contribute to Climate Etc. and work with my colleague and friend Judy on these important issues.

Postlude

A couple of examples of how the public can find and use CDR operational products, and what is lacking in a non-operational and non-archived product

  1. NOAA CDR of total solar irradiance – this is the highest quality level. Start at the web site: https://data.nodc.noaa.gov/cgi-bin/iso?id=gov.noaa.ncdc:C00828

Here you will see a fully documented CDR. At the top, we have the general description and how to cite the data. Then below, you have a set of tabs with extensive information. Click each tab to see how it’s done. Note, for example, that under ‘documentation’ you have choices to get the general documentation, processing documents including source code, the data flow diagram, and the algorithm theoretical basis document (ATBD), which includes all the info about how the product is generated, and then associated resources. This also includes a permanent digital object identifier (DOI) that points uniquely to this dataset.

  2. NOAA CDR of mean layer temperature – RSS – one generation behind in documentation but still quite good – https://www.ncdc.noaa.gov/cdr/fundamental/mean-layer-temperature-rss

Here on the left you will find the documents again that are required to pass the CDR operations and archival. Even though it’s a slight cut below TSI in example 1, a user has all they need to use and understand this.

  3. The Karl hiatus paper can be found on NCEI here – https://www.ncdc.noaa.gov/news/recent-global-surface-warming-hiatus

If you follow the quick link ‘Download the Data via FTP’ you go here – ftp://ftp.ncdc.noaa.gov/pub/data/scpub201506/

The contents of this FTP site were entered into the NCEI archive following my complaint to the NCEI Science Council. However, the artifacts for full archival of an operational CDR are not included, so this is not compliant with archival standards.

Biosketch:  

John Bates received his Ph.D. in Meteorology from the University of Wisconsin-Madison in 1986. Post Ph.D., he spent his entire career at NOAA, until his retirement in 2016.  He spent the last 14 years of his career at NOAA’s National Climatic Data Center (now NCEI) as a Principal Scientist, where he served as a Supervisory Meteorologist until 2012.

Dr. Bates’ technical expertise lies in atmospheric sciences, and his interests include satellite observations of the global water and energy cycle, air-sea interactions, and climate variability. His most highly cited papers are in observational studies of long term variability and trends in atmospheric water vapor and clouds.

He received the NOAA Administrator’s Award in 2004 for “outstanding administration and leadership in developing a new division to meet the challenges to NOAA in the area of climate applications related to remotely sensed data”. He was awarded a U.S. Department of Commerce Gold Medal in 2014 for visionary work in the acquisition, production, and preservation of climate data records (CDRs). He has held elected positions at the American Geophysical Union (AGU), including Member of the AGU Council and Member of the AGU Board. He has played a leadership role in data management for the AGU.

He is currently President of John Bates Consulting Inc., which puts his experience and leadership in data management to use in helping clients improve the preservation, discovery, and exploitation of their own and others’ data. He has developed and applied techniques for assessing both organizational and individual data management and applications. These techniques help identify how data can be managed more cost-effectively, and discovered and applied by more users.

David Rose in the Mail on Sunday

David Rose of the UK Mail on Sunday is working on a comprehensive expose of this issue [link].

Here are the comments that I provided to David Rose, some of which were included in his article:

Here is what I think the broader implications are.  Following ClimateGate, I made a public plea for greater transparency in climate data sets, including documentation. In the U.S., John Bates has led the charge in developing these data standards and implementing them.  So it is very disturbing to see the institution that is the main U.S. custodian of climate data treat this issue so cavalierly, violating its own policy. The other concern that I raised following ClimateGate was overconfidence and inadequate assessments of uncertainty.  Large adjustments to the raw data, and substantial changes in successive data set versions, imply substantial uncertainties. The magnitude of these uncertainties influences how we interpret observed temperature trends, ‘warmest year’ claims, and how we interpret differences between observations and climate model simulations. I also raised concerns about bias; here we apparently see Tom Karl’s thumb on the scale in terms of the methodologies and procedures used in this publication.

Apart from the above issues, how much difference do these issues make to our overall understanding of global temperature change? All of the global surface temperature data sets employ NOAA’s GHCN land surface temperatures. The NASA GISS data set also employs the ERSST datasets for ocean surface temperatures. There are global surface temperature datasets, such as Berkeley Earth and HadCRUT, that are relatively independent of the NOAA data sets and that agree qualitatively with the new NOAA data set. However, there remain large, unexplained regional discrepancies between the NOAA land surface temperatures and the raw data. Further, there are some very large uncertainties in ocean sea surface temperatures, even in recent decades. Efforts by the global numerical weather prediction centers to produce global reanalyses, such as the European Copernicus effort, are probably the best way forward for the most recent decades.

Regarding uncertainty, ‘warmest year’, etc. there is a good article in the WSJ : Change would be healthy at U.S. climate agencies (hockeyshtick has reproduced the full article).

I also found this recent essay in phys.org to be very germane:  Certainty in complex scientific research an unachievable goal. Researchers do a good job of estimating the size of errors in measurements but underestimate chance of large errors.

Backstory

I have known John Bates for about 25 years, and he served on the Ph.D. committees of two of my graduate students. There is no one, anywhere, who is a greater champion of data integrity and transparency.

When I started Climate Etc., John was one of the few climate scientists that contacted me, sharing concerns about various ethical issues in our field.

Shortly after publication of K15, John and I began discussing our concerns about the paper. I encouraged him to come forward publicly with his concerns. Instead, he opted to try to work within the NOAA system to address the issues – to little effect. Upon his retirement from NOAA in November 2016, he decided to go public with his concerns.

He submitted an earlier, shorter version of this essay to the Washington Post, in response to the 13 December article (climate scientists frantically copying data). The WaPo rejected his op-ed, so he decided to publish at Climate Etc.

In the meantime, David Rose contacted me about a month ago, saying he would be in Atlanta covering a story about a person unjustly imprisoned [link]. He had an extra day in Atlanta, and wanted to get together. I told him I wasn’t in Atlanta, but put him in contact with John Bates. David Rose and his editor were excited about what John had to say.

I have to wonder how this would have played out if we had issued a press release in the U.S., or if this story was given to pretty much any U.S. journalist working for the mainstream media. Under the Obama administration, I suspect that it would have been very difficult for this story to get any traction. Under the Trump administration, I have every confidence that this will be investigated (but still not sure how the MSM will react).

Well, it will be interesting to see how this story evolves, and most importantly, what policies can be put in place to prevent something like this from happening again.

I will have another post on this topic in a few days.

Being retired sure is liberating . . .


Follow the Money: How to Read Peer-Reviewed Science

By George Wuerthner | CounterPunch | January 30, 2017

I regularly hear or read arguments from agencies compromising our natural heritage that such and such studies support their management decisions. However, often the agencies overlook or ignore contrary science that does not support the policy or management decision.

To give them a break, the average district ranger or even specialists like wildlife biologists, fire managers, and others often do not have time to keep up with the latest science. So, recognize that they may not really know the “best” science.

Yet, the public, and often the media, naively accepts without question the assertions of public agencies as “unbiased” observers. One often hears agencies defend their statements and policies by suggesting if everyone is angry with their positions, they must be “doing something right.”

Well, not everything is a matter of splitting the baby. The Earth is round, not flat or halfway flat. Gravity exists whether you believe it or not—just try jumping off a cliff. Some claims are simply more accurate than others.

Both proponents and opponents of various public policies rely upon scientific studies to give credibility to their positions and boost confidence in their assertions. So how does one determine whose science is reliable?

There are several ways that I decide the relative veracity of scientific research and whether to grant authority to agency representatives—I follow the money.

The Upton Sinclair quote that “It is difficult to get a man to understand something when his salary depends upon his not understanding it” often is a good starting point for determining the accuracy of statements.

I know few foresters, for instance, who are opposed to logging. If you are a forester, your job depends on cutting trees.

The first thing I do is look at the occupation and affiliation of the spokesperson. Obviously if you are reading a study about the safety of smoking cigarettes and the authors work for tobacco companies, this would raise a yellow flag of caution. But it is not always as obvious that there are conflicts.

For instance, one of the ideas we hear surrounding livestock grazing is that grazing can prevent large wildfires by targeted removal of the fuel—grasses—that sustains fires.

There have been several studies that purport to show that “targeted” grazing can halt large fires. So, the first thing I do is look at who the authors are and where they are employed. The studies suggesting that livestock grazing can halt large fires—at least the ones I’m aware of—were all done by people in range departments.

Why is this important? Because if you are a professor or graduate student in a range program, your entire budget and survival as a professor and department depend on maintaining livestock grazing on public lands. Hence you have a vested interest in promoting livestock “benefits,” whether real or imagined, and in minimizing any known negative effects.

Next I look at the journal where the research was published. Not all journals are equal. Some are much more discriminating in the papers accepted for publication. Some journals also have unspoken biases. For instance, the Journal of Rangeland Ecology and Management published by the Society for Range Management is biased towards promoting livestock production.

You will find few papers in this journal that recommend removal of livestock as the best management option, even when the authors document significant resource damage from livestock. They almost always recommend “proper grazing management” as the solution to problems, whether or not proper management can work under field conditions.

Beyond the journal publication, one then must look at the funding source for the study. If you are doing range studies in the western United States, most of your funding will come from livestock organizations and/or federal or state money appropriated to demonstrate why livestock grazing is a beneficial use of the landscape.

In all instances of the above examples, money and jobs dictate the perspective of the individual, and woe to the person who steps over the line and does not promote the accepted policy positions.

So, in the case of targeted grazing, you must read the study’s methods and conclusions critically. Most scientists have integrity—even those who are advocates of grazing, logging, oil extraction or whatever. They do not outright lie or distort their findings.

What they do instead is restrict the kinds of questions they ask, how they set up their experimental design, how they interpret their findings and what they propose for solutions.

For instance, I studied wildlife biology in college. There was not one professor of mine that questioned or even raised the question whether hunting wildlife was appropriate and perhaps harmful to the long-term survival of the animals. (And yes, there is evidence that even “regulated” hunting can negatively impact wildlife).

We simply never discussed this idea because almost all wildlife biology professors get a substantial amount of their research funding from Fish and Game agencies.

As an example of the unexamined assumptions, one of the papers widely referenced by the BLM in its management plans to save sage grouse champions “targeted grazing” to reduce western range fires.

The original study was based on the grazing of several small plots where the cattle were confined by fences and herding. While the grazed areas did have less fuel, the applicability of this confinement-based management to large public lands allotments is questionable; the cost of such confinement would be prohibitive.

So while the research might show that “targeted” grazing can reduce some wildfires on small, intensively managed plots, the practical application of this approach to wide-open public lands allotments is doubtful.

Beyond the costs and the lack of landscape scale application, the study relied on “models” of fire behavior to conclude that grazing would reduce wildfires.

Models are notoriously imprecise. As the saying goes, what goes into the model affects what comes out.

One of the assumptions built into their fire models was a cap on wind speed.

Why is this important? Because nearly all large wildfires are driven by high winds. Under less than high winds, wildfires do not spread rapidly and are easy to control—whether grazed or not.

Models are better than nothing, but they are no substitute for empirical data: direct observation of how real wildfires interact with grazed landscapes.
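To see why capping the wind input matters so much, consider a deliberately crude toy model (my own invented numbers and functional form, not anything from the study): suppose spread rate scales with fine-fuel load times a steep wind response, and crews can hold a fire only below some suppression threshold.

```python
# Toy illustration only -- NOT a published fire model. Grazing is
# assumed to halve fine fuels; the wind response is assumed steep.
SUPPRESSION_LIMIT = 20.0   # arbitrary units; above this, the fire escapes

def spread_rate(fuel, wind_mph):
    """Spread rate as fuel load times an assumed quadratic wind factor."""
    return fuel * (1 + 0.05 * wind_mph ** 2)

for wind in (5, 15, 40):
    for label, fuel in (("ungrazed", 2.0), ("grazed", 1.0)):
        rate = spread_rate(fuel, wind)
        status = "controllable" if rate < SUPPRESSION_LIMIT else "escapes"
        print(f"wind {wind:2d} mph, {label:8s}: rate {rate:6.1f} -> {status}")
```

With the wind capped at 15 mph, this toy model “finds” that grazing turns an escaping fire into a controllable one; at 40 mph both versions escape regardless. A model that never sees high winds will always flatter fuel treatments.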

Finally, in at least one of these studies, the authors admitted in the very final paragraph that their findings only applied to wildfires burning under low to moderate weather conditions.

One had to read the entire paper to find this one line, which is a dead giveaway that targeted grazing is not likely to significantly influence the large wildfires burning across the West. These large wildfires all burn under extreme fire weather conditions.

The same caution applies to other science as well. Nearly all the science supporting thinning/logging to reduce high severity wildfire is done by forestry schools and/or researchers who work for federal or state forestry agencies like the US Forest Service.

For instance, the Oregon State University Forestry School gets 10% of its funding from a tax on logging, which alone would be enough incentive for the department to have a favorable perspective on logging issues, not to mention that timber industry dollars also directly fund some of the department’s research.

No one wants to bite the hand that feeds them.

I hasten to add that this does not mean all research done by forestry schools is tainted, nor that all professors in such departments are minions for industry. Nevertheless, there are often unquestioned assumptions that permeate the science. A reasonable person would exercise caution before accepting all “peer reviewed” science as equally valid.


George Wuerthner has published 36 books including Wildfire: A Century of Failed Forest Policy. He serves on the board of the Western Watersheds Project.

January 30, 2017 Posted by | Corruption, Science and Pseudo-Science, Timeless or most popular | Leave a comment

There’s Nothing Parochial About the Issue of GMO Food Labeling

By Jonathan Latham, PhD | Independent Science News | January 24, 2017

The GMO labeling issue has quieted down some but there is still plenty to discuss. Just this week the USDA proposed its definition of a GMO for labeling purposes and it includes loopholes for gene editing. However, it is also possible for reasonable people to imagine that GMO labeling is a sideshow to the real business of the food movement. After all, most GMO foods and GMO crops are visually indistinguishable from non-GMOs, and tiny non-GMO labels can look pretty irrelevant on the side of a soda bottle containing whole cupfuls of sugar. Last week, Michael Pollan, Olivier de Schutter, Mark Bittman and Ricardo Salvador made that error, calling GMO labeling “parochial.” Granted, they wrote “important but parochial”, but qualifying the significance of GMO labeling in any way was a mistake.

The first issue is that GMOs are legally distinct from non-GMO crop varieties. They possess an enhanced legal status that has enabled GMOs to become a gushing profit centre for agribusiness. These rights not only allow their owners to steer farmers’ herbicide use, which further increases profits; they also allow owners to legally prevent independent research that would otherwise test their advertising claims. The share price of Monsanto reached $142 in 2008, reflecting the enormous profitability of massively increasing seed prices on the back of GMO introductions.

[Image: Monsanto and California Proposition 37]

Those profits have in turn fuelled a set of key agribusiness activities. One was the acquisition of almost the entire independent global seed business, which now resides in very few hands. The second was a cluster of enhanced PR and lobbying activities that were necessary to defend GMOs. Rather than hide in the shadows, agribusiness corporations needed to come out swinging in defence of the indefensible, which necessitated, among other things, a much higher degree of control than previously over teaching content and research at public universities.

Thus their special legal status enabled an unprecedented ability to control both the present and the future of agriculture.

GMOs are also conflated with science and thus progress. They have the intellectual role of presenting agribusiness as the innovative and dynamic frontier of agriculture, in contrast to those people who base their efforts on ecological diversity, local expertise, or deep knowledge. This cutting edge image is key to the agribusiness business model of reaping tax breaks and subsidies (Lima, 2015).

All around the world, taxpayer money supports and subsidises agribusiness, which would not exist without these benefits (Capellesso et al., 2016). In the final analysis, however, the GMOs-as-progress argument is circular: agribusiness is innovative because it uses GMOs, and GMOs are proof of how innovative agribusiness is. Smoke and mirrors, but politicians fall for it every day, delivering massive transfers of wealth every year from the public to the private sector (Lima, 2015).

The biological truth of GMOs is equally disturbing. At one end of the food chain are the crops in the field. Many people have noticed the virtual disappearance of Monarch butterflies. There are three leading explanations of this disappearance. The loss from farmland of their larval host plants, milkweeds, is one possibility; poisoning of their caterpillar larvae after consuming insecticide-filled pollen from Bt insect-resistant GMOs is a second; and toxicity from the neonicotinoid pesticides used to treat GMO seeds is the third. The first two stem directly or indirectly from GMO use in agricultural fields since, before GMOs, milkweeds could not be eradicated from those fields, and now they can be. Most likely, all three causes are real, and along with milkweeds, GMO agriculture has also decimated, or eradicated entirely, many other species.

Monarchs are lovely, but they are not otherwise special. Their significance is as sentinels. Planting milkweeds and pollinator way stations to specially preserve a sentinel species does not rescue an agricultural ecosystem, but it will mask the symptoms. Agribusiness is right now hoping that no one will notice the difference, and that by bringing back monarchs it can obscure the facts of their killing fields.

Internationally too, GMOs threaten to transform agriculture in places like India where millions of people who make a living by labouring in fields could be displaced by herbicide-tolerant crops such as mustard.

At the human consumption end of the food chain, if you live in the US, no one is protecting you from potential health hazards due to GMOs. Makers of GMO crop varieties don’t even have to notify the FDA of a new product. And if the maker deems the product is not a pesticide they don’t have to notify the EPA either. Trump won’t make it worse because it can’t be worse. It is non-partisan contempt for public health.

What are those potential health hazards? One important example is the famous (or infamous) rat study of NK603 corn by the French research group of professor Gilles-Eric Séralini. It is the only long-term study of the effects of GMOs on a mammal. If you ignore the tumours that most people focused on, the study found major kidney and liver dysfunction in the treated animals (Séralini et al., 2014). This dysfunction was evident from biochemical measurements and was also visually apparent under the microscope. These results are of no interest to US regulators, even in principle, since they fall between jurisdictions.

From this we can conclude that GMOs are often harmful, directly and indirectly, and further, that they are the leading edge of the business model of agribusiness.

The question, however, was labeling. Imagine that organic food was not allowed to be labeled. Would there be such an organised and powerful challenge to industrial food? What labeling does for the agriculture and food system is to allow the public to express its dismay and disagreement with the direction of corporate agriculture and assert its democratic right to protect itself. Labeling allows the public to engage with specific policies and products within the vast complexity of the food system and push back in a focused way against corruption and dishonesty, in real time. There aren’t too many chances to do that in America today.

References

Capellesso AJ, Cazella AA, Schmitt Filho AL, Farley J and Martins DA (2016) Economic and environmental impacts of production intensification in agriculture: comparing transgenic, conventional, and agroecological maize crops. Agroecology and Sustainable Food Systems 40: 215–236.

Lima T (2015) Agricultural Subsidies for Non-farm Interests: An Analysis of the US Agro-industrial Complex. Agrarian South: Journal of Political Economy 4(1): 54–84.

Séralini G-E, Clair E, Mesnage R, Gress S, Defarge N, Malatesta M, Hennequin D and Spiroux de Vendômois J (2014) Republished study: long-term toxicity of a Roundup herbicide and a Roundup-tolerant genetically modified maize. Environmental Sciences Europe 26: 14. DOI: 10.1186/s12302-014-0014-5

January 24, 2017 Posted by | Deception, Environmentalism, Science and Pseudo-Science, Timeless or most popular | | 1 Comment

NOAA’s Tornado Fraud

By Paul Homewood | Not A Lot Of People Know That | January 15, 2017

https://www.ncdc.noaa.gov/sotc/tornadoes/201613

According to NOAA, the number of tornadoes has been steadily growing since the 1950s, despite a drop in numbers in the last five years.

They show the above chart prominently in their Tornadoes – Annual 2016 Report.

However, they know full well that it is meaningless to compare current data with the past, as they explain themselves in the section Historical Records and Trends, which is hidden away on their own website:

One of the main difficulties with tornado records is that a tornado, or evidence of a tornado must have been observed. Unlike rainfall or temperature, which may be measured by a fixed instrument, tornadoes are short-lived and very unpredictable. If a tornado occurs in a place with few or no people, it is not likely to be documented. Many significant tornadoes may not make it into the historical record since Tornado Alley was very sparsely populated during the 20th century.

Much early work on tornado climatology in the United States was done by John Park Finley in his book Tornadoes, published in 1887. While some of Finley’s safety guidelines have since been refuted as dangerous practices, the book remains a seminal work in tornado research. The University of Oklahoma created a PDF copy of the book and made it accessible at John Finley’s Tornadoes (link is external).

Today, nearly all of the United States is reasonably well populated, or at least covered by NOAA’s Doppler weather radars. Even if a tornado is not actually observed, modern damage assessments by National Weather Service personnel can discern if a tornado caused the damage, and if so, how strong the tornado may have been. This disparity between tornado records of the past and current records contributes a great deal of uncertainty regarding questions about the long-term behavior or patterns of tornado occurrence. Improved tornado observation practices have led to an increase in the number of reported weaker tornadoes, and in recent years EF-0 tornadoes have become more prevalent in the total number of reported tornadoes. In addition, even today many smaller tornadoes still may go undocumented in places with low populations or inconsistent communication facilities.

With increased National Doppler radar coverage, increasing population, and greater attention to tornado reporting, there has been an increase in the number of tornado reports over the past several decades. This can create a misleading appearance of an increasing trend in tornado frequency. To better understand the variability and trend in tornado frequency in the United States, the total number of EF-1 and stronger, as well as strong to violent tornadoes (EF-3 to EF-5 category on the Enhanced Fujita scale) can be analyzed. These tornadoes would have likely been reported even during the decades before Doppler radar use became widespread and practices resulted in increasing tornado reports. The bar charts below indicate there has been little trend in the frequency of the stronger tornadoes over the past 55 years.

[Chart: annual counts of EF-1 and stronger tornadoes]

[Chart: annual counts of EF-3 to EF-5 tornadoes]

https://www.ncdc.noaa.gov/climate-information/extreme-events/us-tornado-climatology/trends

Of course, it is nonsensical to claim that “the bar charts below indicate there has been little trend in the frequency of the stronger tornadoes over the past 55 years” – there has clearly been a large reduction.

Note as well that they have not even bothered to update the graph for 2015. Could it be they would rather the public did not find out the truth?

Meanwhile, over at the Storm Prediction Center (SPC) you can see that, when allowance is made for changing reporting procedures, last year may well have had the lowest number of tornadoes on record.

[Chart: SPC annual tornado counts, adjusted for changes in reporting]

http://www.spc.noaa.gov/wcm/#data

The SPC is also part of NOAA, but is the department that actually deals with tornado events and data on a day to day basis. As such, they tend to be more interested in the facts, rather than a political agenda.
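The idea behind that allowance for reporting changes is, in essence, detrending. Here is a minimal sketch of the general approach (my own assumed form, run on synthetic stand-in counts rather than the SPC’s actual data, which sit behind the link above): fit a linear trend to the raw annual counts, treat the trend as reporting efficiency rather than climate, and rescale every year to the final year’s reporting level before ranking.

```python
import numpy as np

years = np.arange(1954, 2017)

# Synthetic stand-in for raw annual tornado counts: an upward,
# reporting-driven trend plus noise, with no underlying climate trend.
rng = np.random.default_rng(0)
raw = 600 + 12 * (years - years[0]) + rng.normal(0, 120, years.size)

# Fit the linear trend -- assumed to reflect reporting efficiency --
# then rescale each year's count to the final year's reporting level.
slope, intercept = np.polyfit(years, raw, 1)
trend = slope * years + intercept
adjusted = raw * trend[-1] / trend

print("lowest adjusted years:", years[np.argsort(adjusted)[:5]])
```

After the rescaling, a recent quiet year can rank as the lowest on record even though its raw count exceeds the raw counts of the 1950s.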

While we still await the final numbers and classifications for last year, what we do know is that there was no EF-5. Indeed, the last occurrence was the Moore, OK tornado in May 2013.

It is unusual to go nearly four years without one, as there have been 59 since 1953, effectively one a year on average.

The bottom line is that the NOAA headline graph is grossly dishonest. Indeed, if a company published something like that in their Annual Accounts, they would probably end up in jail!

NOAA themselves know all of this full well.

Which raises the question – why are they perpetuating this fraud?

January 16, 2017 Posted by | Deception, Fake News, Science and Pseudo-Science | , | 1 Comment

Beyond Physics: Advanced Biology and Climate Change

By Clive Hambler | Climate Etc. | January 16, 2017

Reflections on the stabilization of Earth’s climate by life.

People frequently believe the claim that basic physics, established in the 19th Century, is sufficient to predict that Earth will warm in response to increasing CO2. However, I argue here that negative feedbacks due to life (‘Gaia’) may have stabilized the planet’s climate — on geological timescales and in recent decades. The biology of any such stabilization is far from settled, with a mechanistic understanding delayed by evolutionary debate. I conclude that even with such advanced biology we have little power to predict global climate changes.

There is a basic flaw in the basic physics argument of climate change: biology. Indeed, just one word should be enough to cast doubt on all models of the atmosphere: “oxygen”. No educated person is unaware of one aspect of Earth’s basic biology: most atmospheric oxygen results from living organisms. Physics and chemistry therefore cannot explain atmospheric composition or properties. Basic chemistry would leave the planet a rusty ball (like Mars or Venus). So, as James Lovelock articulated in his Gaia hypothesis in the 1970s, the properties of our atmosphere result from the tight coupling of living and non-living components (biota and abiota). Earth’s obvious and massive departure from chemical equilibrium is unique in the solar system. So, if it’s easy to understand that life is central to atmospheric chemistry, why have many people found it much harder to understand that life could be pivotal in atmospheric energy and climate? And if life is so intimately involved, predictive models would need to include it — which I’ll argue they can’t because the biology is too complex.

An initial response, I anticipate, will be that oxygen is not a climatically-active gas, because it is not radiatively active. However, that does not weaken the argument that life changes Earth far from the state which non-biological “basic” science would predict — an example of the planetary power of life. Moreover, few realise that oxygen could have major implications for the long-term temperature trajectory of the planet, if it is helping to keep Earth wet. This controversial idea was discussed in meetings on Gaia in Oxford in the 1990s, postulating that in the absence of life and oxygen, the splitting of water by sunlight would eventually lead to desiccation of the planet (as hydrogen bled away into space). Photo-dissociation might be offset by the presence of atmospheric oxygen, scavenging hydrogen and restoring water. If so, the dominant climatically-active gas in the atmosphere — water — also owes its abundance to life.

Whether the planet is wet due to life requires further study and discussion. Fortunately my argument — that life is largely missing from the models — does not depend on this. What is more important is that people who believe basic physics is sufficient to predict climate should consider cloud condensation.

It is very widely accepted that clouds are hard to model, yet central to understanding climate sensitivity to CO2. It is not even known if the overall cloud feedback effect in a warming world is positive or negative. Indeed, the IPCC (2013) state: “Clouds and aerosols continue to contribute the largest uncertainty to estimates and interpretations of the Earth’s changing energy budget….some aspects of the overall cloud response vary substantially among models…”.

The basic physics of absorption and emission of infrared radiation have been combined with complex and uncertain physics to estimate that doubling of CO2 would warm the Earth by about one degree Celsius. Feedbacks involving water vapour and clouds are required to invoke larger climate changes from a doubling of CO2. Unsurprisingly, cloud feedbacks estimated from models vary substantially. Cloud-related feedbacks could be net positive (because condensed water emits infrared radiation). Cloud-related feedbacks could be net negative (because clouds reflect sunlight back into space). Further, cloud processes and convection induce and modify complex atmospheric motions, from very small scales to planetary scales. The uncertainty of cloud behaviour might eventually be tractable with complex physical models for a lifeless planet (which somehow retained water), but I think that the uncertainty is amplified to unmanageable levels on our biologically-active Earth.
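The arithmetic behind that “about one degree” figure is short enough to show (standard textbook numbers, not taken from this essay):

```python
import numpy as np

# Radiative forcing for doubled CO2, using the common logarithmic fit
# (Myhre et al. 1998), and the approximate no-feedback ("Planck-only")
# response of roughly 0.3 K per W/m^2.
F_2x = 5.35 * np.log(2)   # W m^-2, about 3.7
planck_response = 0.3     # K per W m^-2, approximate no-feedback value

print(f"forcing for 2xCO2:    {F_2x:.2f} W/m^2")
print(f"no-feedback warming:  {planck_response * F_2x:.1f} K")  # about 1.1 K
```

Everything beyond that roughly one degree comes from the feedback terms, which is exactly where the cloud uncertainty lives.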

It was James Lovelock who identified a potentially huge impact of life on the climate. No wonder, then, that he now argues that “anybody who tries to predict more than five to ten years is a bit of an idiot, because so many things can change unexpectedly”. Consider this: some unknown fraction of the cloud of this planet, of unknown type and altitude and climate activity, is produced for unknown reasons by unknown numbers of living species with unknowable population dynamics. If there are any modellers who think this is tractable, I hope they will indicate how in the Comments below.

How, how much, and why is life involved in cloud formation? Nobody knows. I’ll outline a few of these unsettled elements of the science of climate change.

The question of “how” life is involved is the simplest: some species release chemicals that become cloud condensation nuclei (CCN), without which water remains a vapour. Some species secrete a gas, DMS (dimethyl sulphide), which seeds some clouds. Some plants secrete gases with similar properties, including Volatile Organic Compounds such as isoprene and pinene. Clouds are often observed rising over rainforest trees and other forests. It has been known for hundreds of years that some forests create rainfall (and I hypothesize that life in lakes similarly creates some of the clouds associated with them).

Unfortunately, “how much” cloud is created by life is unknown, a problem worsened by the paucity of data on how much of each type of cloud cover there is and was (particularly before satellite observations). Some argue that life creates a substantial fraction of the global cloud cover, others less – and the fraction will vary through time.

Why life creates clouds remains unknown, but two fascinating evolutionary reasons have been proposed. Hamilton and Lenton (1998) suggested that “microbes fly with their clouds”. This is a proposal I expect many scientists will too readily dismiss — even if they understand the track record of Hamilton as the biologist central to modern evolutionary theory (through his initially controversial ideas). However, the ‘selfish’ reason microbes of oceans, forests (and lakes?) secrete a cloud-forming gas (at metabolic cost) could be to generate latent heat of condensation, thence uplift of air — and thus dispersal of life to sites with more opportunities. And a plausible reason for plants to generate clouds is that they use rainfall. Predictions that clouds should increase when plankton become stressed (such as by nutrient deficiency or irradiance) will require long-term and large-scale observation.

I guess climate modellers will counter that they have performed sensitivity analyses, and that life and its interactions with clouds are not needed to predict the future climate accurately enough, or have small effects. Such arguments might have convinced me whilst models appeared to fit the unadjusted observations. However, several inexplicable (but biologically evident) warmer periods in the Holocene and Eemian damage climate model credibility. It is not possible to do sensitivity analysis for an element of a system if there is no reliable baseline against which to measure the effects of manipulations.

Biology is very poorly represented in all of ‘climate science’, be it the mechanisms, ecological effects or policy response. Tellingly, the IPCC Assessment Report (2013) calls its first volume ‘The Physical Science Basis’. As one of the few scientists publishing on the evolutionary mechanisms of ‘Gaia’, I know that very little attention has been paid to this topic. Perhaps if Bill Hamilton were still alive and researching the stability of the Earth system, things would be different. Because Lovelock’s original version of Gaia has an evolutionary flaw, I redefined Gaia as “planetary stability due to life”, and worked with Hamilton and Peter Henderson to seek mechanisms compatible with evolutionary biology. (Amongst the reasons few biologists have taken an interest in Gaia are that the original theory and models, such as ‘Daisyworld’, had an evolutionary bias, required ‘group-selection’, or implied natural selection amongst communities or planets.) Instead, Hamilton, Henderson and I looked for negative feedbacks through two biological processes: (i) ecology (density-dependent population growth); and (ii) evolution (frequency-dependent selection – a mechanism also postulated by Richard Dawkins in The Extended Phenotype in 1982). The frequency of cloud-producing living organisms (abundance or biomass) is likely to be responsive to CO2, generating positive and/or negative biological feedbacks (Canney & Hambler, 2002, Biological Feedback, in: The Encyclopedia of Global Change).
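For readers who have not met it, the stabilization Lovelock described is easiest to see in Daisyworld itself. Below is a minimal sketch of the Watson and Lovelock (1983) model (my own code, using commonly quoted parameter values, and subject to the evolutionary criticisms noted above): black and white daisies alter local and planetary albedo, and their competition holds the surface temperature near their growth optimum across a wide range of solar luminosities.

```python
import numpy as np

SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 917.0         # mean solar flux on Daisyworld, W m^-2
Q = 2.06e9        # heat-redistribution parameter, K^4 (a commonly used value)
GAMMA = 0.3       # daisy death rate
A_BARE, A_WHITE, A_BLACK = 0.5, 0.75, 0.25   # albedos
T_OPT, K_GROWTH = 295.5, 3.265e-3            # growth optimum (K) and curvature

def growth(t_local):
    """Parabolic growth response, falling to zero far from the optimum."""
    return max(1.0 - K_GROWTH * (t_local - T_OPT) ** 2, 0.0)

def steady_state(lum, a_w=0.01, a_b=0.01, steps=2000, dt=0.05):
    """Euler-integrate daisy cover at a given solar luminosity multiplier."""
    for _ in range(steps):
        bare = max(1.0 - a_w - a_b, 0.0)
        albedo = bare * A_BARE + a_w * A_WHITE + a_b * A_BLACK
        t_e4 = S * lum * (1.0 - albedo) / SIGMA        # planetary T^4
        t_w = (Q * (albedo - A_WHITE) + t_e4) ** 0.25  # local temperatures
        t_b = (Q * (albedo - A_BLACK) + t_e4) ** 0.25
        a_w += dt * a_w * (bare * growth(t_w) - GAMMA)
        a_b += dt * a_b * (bare * growth(t_b) - GAMMA)
        a_w, a_b = max(a_w, 0.01), max(a_b, 0.01)      # retain a seed stock
    return t_e4 ** 0.25

for lum in np.arange(0.7, 1.4, 0.1):
    t_dead = (S * lum * (1.0 - A_BARE) / SIGMA) ** 0.25
    print(f"L={lum:.1f}  dead planet {t_dead - 273.15:5.1f} C   "
          f"with daisies {steady_state(lum) - 273.15:5.1f} C")
```

On the dead planet the temperature simply tracks luminosity; with daisies it stays within a few degrees of the growth optimum over much of the range, which is the regulation at issue.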

At the risk of adding yet another failure to the litany of failed climate predictions, I predict climate models will struggle to include biology. No amount of physics, basic or complex, will overcome this deficiency. It is not possible to model population changes of even one species of organism several generations into the future. The unpredictability of complex systems is well known in ecosystems – as Robert May and colleagues demonstrated in the 1970s for multi-species fisheries. Populations of species that influence each other’s survival, reproduction or dispersal in ways related to abundance are often likely to exhibit ‘deterministic chaos’, in which simple equations with time lags generate seemingly random population fluctuations. Even two species coupled through the Lotka-Volterra differential equations may show such behaviour. Imagine the problems, then, of modelling millions, billions or even trillions of microbial ‘species’ on Earth – when not even the number of species is known, let alone each of their requirements and climatic influences. Whether multi-species systems have more predictable emergent stability remains to be seen; this would make incorporation of ecology into climate models easier. Such stability is being investigated by Peter Henderson in the ‘Dam World’ model of Gaia he created with Bill Hamilton (Canney & Hambler, 2013, Conservation).
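A few lines of code make the problem concrete (the textbook logistic map, my own choice of illustration rather than anything from the fisheries literature): two populations that start a millionth apart become completely uncorrelated within a few dozen generations.

```python
# Logistic map x -> r*x*(1-x); at r = 4.0 the dynamics are chaotic.
r = 4.0
x, y = 0.600000, 0.600001   # initial populations differing by one part in a million
for generation in range(1, 26):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if generation % 5 == 0:
        print(f"gen {generation:2d}: x={x:.4f}  y={y:.4f}  gap={abs(x - y):.4f}")
```

If one species governed by one equation behaves like this, long-range prediction of coupled populations of cloud-forming organisms is a far harder proposition.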

Modelling changes in plankton becomes even more implausible when one considers the responses to changing CO2: ‘ocean acidification’ might boost plankton through improved bicarbonate availability, and thence even cool the planet through DMS-induced clouds. Or it might harm plankton through metabolic costs, reducing calcification and weakening a carbon sink, thereby creating a positive feedback. The population and metabolic consequences of interactions (including those between warming water, CO2 outgassing, pH changes, thermoclines, and nutrient and carbon dioxide availability for photosynthesis) are not known for any planktonic species, let alone entire hyper-complex marine ecosystems. Even if population changes could be predicted, we could not predict their cloud production behaviour — or the overall effect on albedo or convection.

It should come as no surprise to scientists and the public that wildlife has climate impacts – yet few realise how large these can be. When and if people accept that life can greatly change the chemistry of the atmosphere, they may be ready for another logical step. In this paradigm, temperature drives life drives CO2 levels. As Murry Salby (2012) deduced (Physics of the Atmosphere and Climate), CO2 lags temperature on a wide range of timescales (including glacial to interglacial oscillations, the last few hundred years, decades, and within a year). About 5% of the CO2 emitted to the atmosphere each year is from human activities, leaving ample scope for minor changes (perhaps in solar activity) to change the major biological sinks and sources of this gas and overwhelm human influences on radiative forcing. Perhaps the paradigm shift required to understand causality in climate is comparable to discovering the ancient nature of fossils, or plate tectonics, or neo-Darwinism, or the inhibitory models of plant succession. I’ve witnessed and taught through some of these shifts, so know how hard they are.

The ecology and evolution of negative feedbacks and Gaia might provide a framework to reconcile climate data and theory – but with very different theory to the basic physics of the climate. Instead, climate becomes — as many others have noted — a perhaps intractable and wicked problem. Prediction and attribution of useful climate detail may be beyond any science. If ‘the pause’ continues, or the world now cools or warms, we may never know why. It might be that negative biological and other feedbacks prevented runaway warming in the past, and have already begun to act. Or solar activity might be driving the carbon cycle, stifling CO2 increase. Or both. If extinction rates continue to rise, such feedback may collapse — a perverse outcome of climate policy that destroys habitat. We hear a lot about high risk justifying high expenditure on reducing CO2 emissions, despite low probability of such risk. If we applied those expenditures to protecting the biological component of climate, we would conserve the climatically-active ecosystems — not, perversely, destroy them through renewable energy impacts and opportunity costs.

I anticipate many of the suggestions above will raise calls for publication in journals. Perhaps that’s the way physics works. Yet many key biological advances have been published in books or informal articles. Some of Hamilton’s ideas were published only in less formal articles and in a film on clouds (which very few people have watched). Moreover, conventional peer review demonstrably does not work well in some areas of climate science.

I thank Judith Curry for yet another brave move in hosting this entry. I hope policy makers will focus on no-regrets actions (such as protecting forests and marine life) which are relatively cheap and would work even if I’m wrong.


Link to essay published in the Bulletin of the British Ecological Society:  ‘Thank you for Gaia’, by Clive Hambler [hambler-bes-gaia-paper]

Biosketch.  Clive has been an Oxford College Lecturer in biology at Merton, St Anne’s, Pembroke and Oriel. He joined Hertford in 1998 and is the college’s director of studies for Human Sciences. He works in Oxford’s faculties of Zoology, Geography and Anthropology. He is coauthor of the acclaimed book Conservation, published by Cambridge University Press (see reviews).

January 16, 2017 Posted by | Science and Pseudo-Science, Timeless or most popular | | 2 Comments

Beware Anti-“Pseudo-Science” Agitation

By Denis G. Rancourt, PhD | Activist Teacher | December 26, 2016

I was asked to write this short article to be published in the January newsletter of the Society for Academic Freedom and Scholarship (SAFS). A longer version of the article, with references, will be published in a 2017 SAFS conference proceeding.

If we accept an operational definition of “pseudo-science” as whatever any critic of so-called “pseudo-science” probably means, then vehement criticisms of the said “pseudo-sciences” are generally made for one of four reasons:

  1. To invalidate unworthy ideas, as part of the normal course of science itself — a classic example is the 1989 case of “cold fusion” and its fallout, in the field of condensed matter physics and chemistry
  2. To celebrate and maintain the middle-class belief that modern society is based on scientific knowledge; to fight against idolatry in the realm of ideas; to participate in improving public discourse and consciousness
  3. To provide false legitimacy for problematic areas of establishment science that survive owing to systemic financial and professional interests — the preeminent example being establishment medicine (see below)
  4. To attack a legitimate criticism of a dominant scientific position (collateral attack by appeal to authority or “consensus”, using denigration)

Thus, the full array of motives for engaging in the sport of “pseudo-science” bashing spans a spectrum: from good scientific practice, to ordinary social behaviour in structured society, to support for organized fraud, to outright base competition that is incompatible with the science ideal. Here, I outline the last three reasons.

Popular support for establishment science as state religion

Given the epidemic lack of understanding of science concepts, it is not surprising that there is a wide array of beliefs that are at odds with the school lessons about science, including: astrology, “intelligent design”, “free energy”, “orgone”, “creation biology”, and homeopathy.

Realistically, virtually all citizens are entirely unable to critically evaluate what we take as being scientific truth, regarding public policy and regulatory questions. Thus, “public education” means state propaganda. We are reduced to “scientists have concluded” or “there is a scientific consensus that” and so on.

Systemically, from an operational perspective, establishment science is a state religion. It is not anchored in empirical evidence that can be evaluated by the non-expert individual using reason and intellectual discernment. It frames and supports the established order. It provides legitimacy to government programs. It purports to appease our deepest quests for meaning, and supplies a creationist mythology (cosmology, string theory, and so on). Its high priests are venerated and occupy top ranks in the class hierarchy.

Ordinary well-educated citizens have invested in many beliefs delivered by establishment science, and have integrated these beliefs into their personal identities. It is therefore natural that middle-class and professional-class individuals have a learned and reflexive impulse to attack “pseudo-science”. These attacks can be individual or can coalesce via the animal behavioural collective phenomenon known as mobbing.

Legitimacy for problematic areas of establishment science

A stunning example is the organized barrage of criticism and legislation against “alternative medicine” that is largely benign and harmless, intended to imply that establishment medicine — said to be scientifically sound — is the only trustworthy system for repairing individual health.

The problem here is that establishment medicine is anything but shaped by objectively evaluated empirical evidence, and anything but scientifically sound. The eminent medical researcher Dr. John P.A. Ioannidis has demonstrated that “most published research findings are false”.

In North America, between 6% and 8% of citizens will be killed by medical errors of all types. In just one area of establishment medicine, Professor Dr. Peter C. Gøtzsche has come to the point of flatly concluding that long-term use of psychiatric drugs causes more harm than good. In his words, based on a decade of research: “Psychiatric drugs are responsible for the deaths of more than half a million people aged 65 and older each year in the Western world, as I show below. Their benefits would need to be colossal to justify this, but they are minimal. … Overstated benefits and understated deaths …”

Attacking legitimate criticisms of establishment positions

Climate science has major domestic and geopolitical implications. It is routine to attack critics as immoral or crazy, and for influential actors and groups to seek legal instruments of intimidation and enforcement. The Wikipedia list of “pseudo-sciences” includes “climate change denial”.

This is a remarkable inclusion because several high-profile establishment climate scientists expressly reject the so-called “consensus”, including: Judy Curry (Georgia Institute of Technology), Richard Lindzen (MIT), Hendrik Tennekes (Royal Dutch Meteorological Institute), Nir Shaviv (Racah Institute of Physics), Craig D. Idso (Center for the Study of Carbon Dioxide and Global Change), and many others. Furthermore, detailed studies contradict claims that industrial-era CO2 has had a causal effect on climate and extreme-weather events.

Conclusion

Agitation against “pseudo-science” rests on two illegitimate, interrelated societal mechanisms. Institutionally, it is propaganda (by word and by action) intended to legitimize and impose establishment science. Individually, it serves to preserve the identity-tied personal investment in the teachings of establishment science.

For those of us who cling to the ideal of the university, a review of anti-“pseudo-science” agitation should lead us to support a strict meaning of academic freedom, which does not admit institutional suppression or containment of any chosen research direction and expression. We must trust that actual freedoms of research and expression lead to the best that society can be, through the discourse that arises, whatever that discourse will be.


Denis Rancourt is a former tenured full professor of physics at the University of Ottawa, Canada. He has published over 100 articles in leading scientific journals, and writes social theory articles. He is the author of the book Hierarchy and Free Expression in the Fight Against Racism.

January 8, 2017 Posted by | Corruption, Science and Pseudo-Science, Timeless or most popular | Leave a comment

Birds migrating earlier as temperatures rise

By Paul Homewood | Not A Lot Of People Know That | December 30, 2016

This one was doing the rounds yesterday.

From the BBC:

Migrating birds are arriving at their breeding grounds earlier as global temperatures rise, a study has found.

Birds have reached their summer breeding grounds on average about one day earlier per degree of increasing global temperatures, according to the research by Edinburgh University.

The study looked at hundreds of species across five continents.

It is hoped it will help scientists predict how different species may respond to future environmental change.

Reaching their summer breeding grounds at the wrong time – even by a few days – may cause birds to miss out on maximum availability of vital resources such as food and nesting places.

Late arrival to breeding grounds may, in turn, affect the timing of offspring hatching and their chances of survival.

Long-distance migrants, which are shown to be less responsive to rising temperatures, may suffer most as other birds gain advantage by arriving at breeding grounds ahead of them.
Flowering and breeding

Takuji Usui, of Edinburgh University’s school of biological sciences, said: “Many plant and animal species are altering the timing of activities associated with the start of spring, such as flowering and breeding.

“Now we have detailed insights into how the timing of migration is changing and how this change varies across species.

“These insights may help us predict how well migratory birds keep up with changing conditions on their breeding grounds.”

The study examined how various species, which take flight in response to cues such as changing seasonal temperatures and food availability, have altered their behaviour over time and with increasing temperatures.

The researchers examined records of migrating bird species dating back almost 300 years.

The study drew upon records from amateur enthusiasts and scientists, including notes from 19th-century American naturalist Henry David Thoreau.

Species that migrate huge distances – such as the swallow and pied flycatcher – and those with shorter migrations – such as the lapwing and pied wagtail – were included in the research.

The study, published in Journal of Animal Ecology, was supported by the Natural Environment Research Council.

So, let’s get this straight.

One day earlier for each degree of global warming. That means birds are migrating a whole day earlier than they did in the 19th century.

And we are supposed to be concerned about this?

In fact, given the inter-annual variability, I simply do not believe that these results have any statistical significance whatsoever. The error margins must dwarf the results.
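A quick simulation makes that worry concrete, at least for any single migration record (the numbers here are my own assumptions, not the study’s: arrival dates scattering with a standard deviation of about five days, fifty years of observations, and a one-day shift between the two halves of the record). Pooling hundreds of species and records is what lets a study claim significance that no single record could support.

```python
import numpy as np

rng = np.random.default_rng(1)
n_years, sd, shift, trials = 50, 5.0, 1.0, 10_000

detected = 0
for _ in range(trials):
    early = rng.normal(100.0, sd, n_years // 2)          # first 25 years, mean day 100
    late = rng.normal(100.0 - shift, sd, n_years // 2)   # later years, 1 day earlier
    se = np.sqrt(early.var(ddof=1) / early.size + late.var(ddof=1) / late.size)
    detected += (early.mean() - late.mean()) / se > 2.0  # crude two-sample t-test

print(f"1-day shift detected in ~{100 * detected / trials:.0f}% of trials")
```

Under these assumptions the one-day shift is detected only around a tenth of the time: for one species at one site, the signal is buried in the noise.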

But here’s the thing. Birds have been adapting to changing climate for millennia. It is not the climate that forces them to do anything. Quite the reverse in fact. Birds will adopt the strategy that is most beneficial for them.

The longer they can stay at their summer breeding grounds, the better it is for them, as it allows more time for them to raise their chicks.

The project was funded by the NERC. Isn’t it time we stopped wasting taxpayers’ money on such rubbish?

January 1, 2017 Posted by | Mainstream Media, Warmongering, Science and Pseudo-Science | , , | Leave a comment

Skeptical Climate Scientists Coming In From the Cold

By James Varney | RealClearInvestigations | December 31, 2016

In the world of climate science, the skeptics are coming in from the cold.

Researchers who see global warming as something less than a planet-ending calamity believe the incoming Trump administration may allow their views to be developed and heard. This didn’t happen under the Obama administration, which denied that a debate even existed. Now, some scientists say, a more inclusive approach – and the billions of federal dollars that might support it – could be in the offing.

“Here’s to hoping the Age of Trump will herald the demise of climate change dogma, and acceptance of a broader range of perspectives in climate science and our policy options,” Georgia Tech scientist Judith Curry wrote this month at her popular Climate Etc. blog.

William Happer, professor emeritus of physics at Princeton University and a member of the National Academy of Sciences, is similarly optimistic. “I think we’re making progress,” Happer said. “I see reassuring signs.”

Despite harsh criticism of their contrarian views, a few scientists like Happer and Curry have pointed to evidence that global warming is less pronounced than predicted. They have also argued that this slighter warming would bring positive developments along with problems. For the first time in years, skeptics believe they can find a path out of the wilderness into which they’ve been cast by the “scientific consensus.” As much as they desire a more open-minded reception by their colleagues, they are hoping even more that the spigot of government research funding – which dwarfs all other sources – will trickle their way.

President-elect Donald Trump, who has called global warming a “hoax,” has chosen for key cabinet posts men whom the global warming establishment considers lapdogs of the oil and gas industry: former Texas Gov. Rick Perry to run the Energy Department; Attorney General Scott Pruitt of Oklahoma to run the Environmental Protection Agency; and Exxon chief executive Rex Tillerson as secretary of state.

But while general policy may be set at the cabinet level, significant and concrete changes would likely be spelled out below those three – among the very bureaucrats the Trump transition team might have had in mind when, in a move some saw as intimidation, it sent a questionnaire to the Energy Department this month (later disavowed) trying to determine who worked on global warming.

It isn’t certain that federal employees working in various environmental or energy sector-related agencies would willingly implement rollbacks of regulations, let alone a redirection of scientific climate research, but the latter prospect heartens the skeptical scientists. They cite an adage: You only get answers to the questions you ask.

“In reality, it’s the government, not the scientists, that asks the questions,” said David Wojick, a longtime government consultant who has closely tracked climate research spending since 1992. If a federal agency wants models that focus on potential sea-level rise, for example, it can order them up. But it can also shift the focus to how warming might boost crop yields or improve drought resistance.

While it could take months for such expanded fields of research to emerge, a wider look at the possibilities excites some scientists. Happer, for one, feels emboldened in ways he rarely has throughout his career because, for many years, he knew his iconoclastic climate conclusions would hurt his professional prospects.

When asked if he would voice dissent on climate change if he were a younger, less established physicist, he said: “Oh, no, definitely not. I held my tongue for a long time because friends told me I would not be elected to the National Academy of Sciences if I didn’t toe the alarmists’ company line.”

That sharp disagreements are real in the field may come as a shock to many people, who are regularly informed that climate science is settled and those who question this orthodoxy are akin to Holocaust deniers. Nevertheless, new organizations like the CO2 Coalition, founded in 2015, suggest the debate is more evenly matched intellectually than is commonly portrayed. In addition to Happer, the CO2 Coalition’s initial members include scholars with ties to world-class institutions like MIT, Harvard and Rockefeller University. The coalition also features members of the American Geophysical Union and the American Meteorological Society, along with policy experts from the Manhattan Institute, the George C. Marshall Institute and Tufts University’s Fletcher School.

With such voices joining in, the debate over global warming might shift. Until now, it’s normally portrayed as enlightened scholars vs. anti-science simpletons. A more open debate could shift the discussion to one about global warming’s extent and root causes.

Should a scientific and research funding realignment occur, it could do more than shatter what some see as an orthodoxy stifling free inquiry. Bjorn Lomborg, who has spent years analyzing potential solutions to global warming, believes that a more expansive outlook toward research is necessary because too much government funding has become expensive and ineffective corporate welfare. Although not a natural scientist, the social scientist Lomborg considers climate change real but not cataclysmic.

“Maybe now we’ll have a smarter conversation about what actually works,” Lomborg told RealClearInvestigations. “What has been proposed costs a fortune and does very little. With more space opening up, we can invest more into research and development into green energy. We don’t need subsidies to build something. They’ve been throwing a lot of money at projects that supposedly will cut carbon emissions but actually accomplish very little. That’s not a good idea. The funding should go to universities and research institutions; you don’t need to give it to companies to do it.”

Such new opportunities might, in theory, calm a field tossed by acrimony and signal a détente in climate science. Yet most experts are skeptical that a kumbaya moment is at hand. The mutual bitterness instilled over the years, the research money at stake, and the bristling hostility toward Trump’s appointees could actually exacerbate tensions.

“I think that the vast ‘middle’ will want and seek a more collegial atmosphere,” Georgia Tech’s Curry told RealClearInvestigations. “But there will be some hardcore people (particularly on the alarmed side) whose professional reputation, funding, media exposure, influence etc. depends on cranking up the alarm.”

Michael E. Mann, another climate change veteran, is also doubtful about a rapprochement. Mann, director of the Earth System Science Center at Penn State and author of the “hockey stick” graph, which claims a sharp uptick in global temperatures over the past century, believes ardently that global warming is a dire threat. He concluded a Washington Post op-ed this month with this foreboding thought: “The fate of the planet hangs in the balance.” Mann acknowledges a brutal war of words has engulfed climate science. But in an e-mail exchange with RealClearInvestigations, he blamed opponents led by “the Koch brothers” for the polarization.

Mann did hint, however, there may be some room for discussion.

“In that poisonous environment it is difficult to have the important, more nuanced and worthy debate about what to do about the problem,” he wrote. “There are Republicans like Arnold Schwarzenegger, Bob Inglis and George Shultz trying to create space for that discussion, and that gives me hope. But given that Donald Trump is appointing so many outright climate deniers to key posts in this administration, I must confess that I – and many of my fellow scientists – are rather concerned.”

Neither side of the debate has been immune from harsh and sinister attacks. Happer said he stepped down from the active faculty at Princeton in part “to deal with all this craziness.” Happer and Mann, like several other climate scientists, have gotten death threats. They provided RealClearInvestigations with some of the e-mails and voice messages they have received.

“You are an educated Nazi and should hang from the neck,” a critic wrote Happer in October 2014.

“You and your colleagues who have promoted this scandal ought to be shot, quartered and fed to the pigs along with your whole damn families,” one e-mailed Mann in Dec. 2009.

Similar threats have bedeviled scientists and writers across the climate research spectrum, from Patrick Michaels, a self-described “lukewarmer” who dealt with death threats at the University of Virginia before moving to the Cato Institute, to Rajendra Pachauri, who protested anonymous death threats while heading the United Nations Intergovernmental Panel on Climate Change (IPCC).

Putting such ugliness aside, some experts doubt that the science will improve even if the Trump administration asks new research questions and funding spreads to myriad proposals. Richard Lindzen, the Alfred P. Sloan Professor of Meteorology at MIT and a member of the National Academy of Sciences who has long questioned climate change orthodoxy, is skeptical that a sunnier outlook is upon us.

“I actually doubt that,” he said. Even if some of the roughly $2.5 billion in taxpayer dollars currently spent on climate research across 13 different federal agencies now shifts to scientists less invested in the calamitous narrative, Lindzen believes groupthink has so corrupted the field that funding should be sharply curtailed rather than redirected.

“They should probably cut the funding by 80 to 90 percent until the field cleans up,” he said. “Climate science has been set back two generations, and they have destroyed its intellectual foundations.”

The field is cluttered with entrenched figures who must toe the established line, he said, pointing to a recent congressional report that found the Obama administration got a top Department of Energy scientist fired and generally intimidated the staff to conform with its politicized position on climate change.

“Remember this was a tiny field, a backwater, and then suddenly you increased the funding to billions and everyone got into it,” Lindzen said. “Even in 1990 no one at MIT called themselves a ‘climate scientist,’ and then all of a sudden everyone was. They only entered it because of the bucks; they realized it was a gravy train. You have to get it back to the people who only care about the science.”

January 1, 2017 Posted by | Corruption, Science and Pseudo-Science | , | Leave a comment

100% Of US Warming Is Due To NOAA Data Tampering

By Tony Heller | The Deplorable Climate Science Blog | December 28, 2016

Climate Central just ran this piece, which the Washington Post picked up on. They claimed the US was “overwhelmingly hot” in 2016, and that temperatures have risen 1.5°F since the 19th century.

The U.S. Has Been Overwhelmingly Hot This Year | Climate Central

The first problem with their analysis is that the US had very little hot weather in 2016. The percentage of hot days was below average, and ranked 80th since 1895. Only 4.4% of days were over 95°F, compared with the long term average of 4.9%. Climate Central is conflating mild temperatures with hot ones.

They also claim US temperatures rose 1.5°F since the 19th century, which is what NOAA shows.

Climate at a Glance | National Centers for Environmental Information (NCEI)

The problem with the NOAA graph is that it is fake data. NOAA creates the warming trend by altering the data. The NOAA raw data shows no warming over the past century.

The adjustments being made are almost exactly 1.5°F, which is the claimed warming in the article.

The adjustments correlate almost perfectly with atmospheric CO2. NOAA is adjusting the data to match global warming theory. This is known as PBEM (Policy-Based Evidence Making).

The hockey stick of adjustments since 1970 is due almost entirely to NOAA fabricating missing station data. In 2016, more than 42% of their monthly station data was missing, so they simply made it up. This is easy to identify because they mark fabricated temperatures with an “E” in their database.
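Anyone can check the share of estimated records for themselves. Here is a sketch of the idea, assuming the GHCN-M style fixed-width layout used by the USHCN v2.5 monthly files (an 11-character station id, a 4-character year, then twelve value-plus-flag groups); the filename and column offsets below are assumptions to verify against NOAA’s README before trusting any output.

```python
from collections import defaultdict

# Fraction of monthly TAVG values carrying the "E" (estimated) flag,
# by year. Fixed-width offsets follow the assumed USHCN v2.5 layout.
flagged, total = defaultdict(int), defaultdict(int)

with open("ushcn_v2.5_tavg.txt") as f:        # hypothetical merged station file
    for line in f:
        year = int(line[12:16])
        for m in range(12):
            group = line[16 + 9 * m : 25 + 9 * m]   # 6-char value + 3 flag chars
            value, dmflag = group[:6].strip(), group[6:7]
            if value != "-9999":                    # skip missing values
                total[year] += 1
                flagged[year] += (dmflag == "E")

for year in sorted(total)[-5:]:
    print(year, f"{100 * flagged[year] / total[year]:.1f}% estimated")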

When presented with my claims of fraud, NOAA typically tries to arm-wave it away with these two complaints.

  1. They use gridded data and I am using un-gridded data.
  2. They “have to” adjust the data because of Time Of Observation Bias and station moves.

Both claims are easily debunked. The only effect that gridding has is to lower temperatures slightly. The trend of gridded data is almost identical to the trend of un-gridded data.

Time of Observation Bias (TOBS) is a real problem, but it is very small. TOBS is based on the idea that if you reset a min/max thermometer too close to the afternoon maximum, you will double-count warm temperatures (and vice versa if the thermometer is reset in the morning). Their claim is that during the hot 1930s most stations reset their thermometers in the afternoon.
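The double-counting mechanism itself is easy to simulate (invented numbers, purely to show the effect): an afternoon reset leaves today’s peak registered on the instrument, so tomorrow’s recorded maximum is the larger of the two days.

```python
import numpy as np

rng = np.random.default_rng(2)
days = 3650   # ten synthetic years

# Synthetic daily maxima: a seasonal cycle plus weather noise.
doy = np.arange(days)
true_max = 20 + 10 * np.sin(2 * np.pi * doy / 365) + rng.normal(0, 3, days)

# Afternoon reset: the reading for day d is max(day d, day d-1),
# because yesterday's peak is still on the instrument at reset time.
afternoon_max = np.maximum(true_max[1:], true_max[:-1])

bias = afternoon_max.mean() - true_max[1:].mean()
print(f"warm bias from afternoon resets: {bias:+.2f} C")
```

The toy version overstates the effect (a real observer does not carry over every day), but it shows why the direction of the bias depends on reset time.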

This is easy to test by using only the stations which did not reset their thermometers in the afternoon during the 1930s. The pattern is almost identical to that of all stations: no warming over the past century. Note that the graph below tends to show too much warming due to morning TOBS.

NOAA’s own documents show that the TOBS adjustment is small (0.3°F) and goes flat after 1990.

https://www.ncdc.noaa.gov/img/climate/research/ushcn/ts.ushcn_anom25_diffs_pg.gif

Gavin Schmidt at NASA explains very clearly why the US temperature record does not need to be adjusted.

You could throw out 50 percent of the station data or more, and you’d get basically the same answers.

One recent innovation is the set up of a climate reference network alongside the current stations so that they can look for potentially serious issues at the large scale – and they haven’t found any yet.

NASA – NASA Climatologist Gavin Schmidt Discusses the Surface Temperature Record

NOAA has always known that the US is not warming.

U.S. Data Since 1895 Fail To Show Warming Trend – NYTimes.com

All of the claims in the Climate Central article are bogus. The US is not warming and 2016 was not a hot year in the US. It was a very mild year.

December 29, 2016 Posted by | Deception, Mainstream Media, Warmongering, Science and Pseudo-Science, Timeless or most popular | , , | 1 Comment