Aletho News

ΑΛΗΘΩΣ

Root Cause Analysis of the Modern Warming

By Matt Skaggs | Climate Etc. | October 23, 2014

For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works.

The concept of attribution is important in descriptive science, and is a key part of engineering. Engineers typically use the term “root cause analysis” rather than attribution. There is nothing particularly clever about root cause methodology, and once someone is introduced to the basics, it all seems fairly obvious. It is really just a system for keeping track of what you know and what you still need to figure out.

I have been performing root cause analysis throughout my entire, long career, generally in an engineering setting. The effort consists of applying well established tools to new problems. This means that in many cases, I am not providing subject matter expertise on the problem itself, although it is always useful to understand the basics. Earlier in my career I also performed laboratory forensic work, but these days I am usually merely a facilitator. I will refer to those that are most knowledgeable about a particular problem as the “subject matter experts” (SMEs).

This essay consists of three basic sections. First I will briefly touch on root cause methodology. Next I will step through how a fault tree would be conducted for a topic such as the recent warming, including showing what the top tiers of the tree might look like. I will conclude with some remarks about the current status of the attribution effort in global warming. As is typical for a technical blog post, I will be covering a lot of ground while barely touching on most topics, but I promise that I will do my best to explain the concepts as clearly and concisely as I can.

Part 1: Established Root Cause Methodology

Definitions and Scope

Formal root cause analysis requires very clear definitions and scope to avoid chaos. It is a tool specifically for situations in which we have detected an effect with no obvious cause, but discerning the cause is valuable in some way. This means that we can only apply our methodology to events that have already occurred, since predicting the future exploits different tools. We will define an effect subject to attribution as a significant excursion from stable output in an otherwise stable system. One reason this is important is that a significant excursion from stable behavior in an otherwise stable system can be assumed to have a single root cause. Full justification of this is beyond the scope of this essay, but consider that if your car suddenly stops progressing forward while you are driving, the failure has a single root cause. After having no trouble for a year, the wheel does not fall off at the exact same instant that the fuel pump seizes. I will define a “stable” system as one in which significant excursions are so rare in time that they can safely be assumed to have a single root cause.

Climate science is currently engaged in an attribution effort pertaining to a recent temperature excursion, which I will refer to as the “modern warming.” For purposes of defining the scope of our attribution effort, we will consider the term “modern warming” to represent the rise in global temperature since 1980. This is sufficiently precise to prevent confusion; we can always go back and tweak this date if the evidence warrants.

Choosing a Tool from the Toolbox 

There are two basic methods to conclusively attribute an effect to a cause. The short route to attribution is to recognize a unique signature in the evidence that can only be explained by a single root cause. This is familiar from daily life; the transformer in front of your house shorted and there is a dead black squirrel hanging up there. The need for a systematic approach such as a fault tree only arises when there is no black squirrel. We will return to the question of a unique signature later, after discussing what an exhaustive effort would look like.

Once we have determined that we cannot simply look at the outcome of an event and see the obvious cause, and we find no unique signature in the data, we must take a more systematic approach. The primary tools in engineering root cause analysis are the fault tree and the cause map. The fault tree is the tool of choice for when things fail (or more generally, execute an excursion), while the cause map is a better tool for when a process breaks down. The fault tree asks “how?,” while the cause map asks “why?” Both tools are forms of logic trees with all logical bifurcations mapped out. Fault trees can be quite complex with various types of logic gates. The key attributes of a fault tree are accuracy, clarity, and comprehensiveness. What does it mean to be comprehensive? The tree must address all plausible root causes, even ones considered highly unlikely, but there is a limit. The limit concept here is euphemistically referred to as “comet strike” by engineers. If you are trying to figure out why a boiler blew up, you are not obligated to put “comet strike” on your fault tree unless there is some evidence of an actual comet.

Since we are looking at an excursion in a data set, we choose the fault tree as our basic tool. The fault tree approach looks like this:

  1. Verify that a significant excursion has occurred.
  2. Collect sufficient data to characterize the excursion.
  3. Assemble the SMEs and brainstorm possible root causes for the excursion.
  4. Build a formal fault tree showing all the plausible causes. If there is any dispute about plausibility, put the prospective cause on the tree anyway.
  5. Apply documented evidence to each cause. This generally consists of direct observations and experimental results. Parse the data as either supporting or refuting a root cause, and modify the fault tree accordingly.
  6. Determine where evidence is lacking, develop a plan to generate the missing evidence. Consider synthetically modeling the behavior when no better evidence is available.
  7. Execute plan to fill all evidence blocks. Continue until all plausible root causes are refuted except one, and verify that the surviving root cause is supported by robust evidence.
  8. Produce report showing all of the above, and concluding that the root cause of the excursion was the surviving cause on the fault tree.

I will be discussing these steps in more detail below.
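To make the bookkeeping concrete, here is a minimal sketch (in Python, my own illustration rather than anything taken from the essay) of the structure a fault tree effort has to maintain: candidate causes arranged in a tree, evidence findings attached to each as supporting or refuting, and a check for whether exactly one plausible cause survives. The node names and finding labels are invented for the example.

```python
from dataclasses import dataclass, field

@dataclass
class CauseNode:
    """One box on the fault tree: a candidate cause with attached evidence."""
    name: str
    children: list = field(default_factory=list)
    supporting: list = field(default_factory=list)  # evidence finding IDs
    refuting: list = field(default_factory=list)

    def leaves(self):
        if not self.children:
            return [self]
        return [leaf for child in self.children for leaf in child.leaves()]

def unrefuted_leaves(root):
    """Still-plausible root causes: leaves with no refuting evidence."""
    return [leaf for leaf in root.leaves() if not leaf.refuting]

# Hypothetical top tier; the analysis is resolved only when one leaf survives.
tree = CauseNode("temperature excursion", children=[
    CauseNode("heat gained increased"),
    CauseNode("heat lost decreased"),
    CauseNode("capacitance discharge"),
])
tree.children[1].refuting.append("finding #3")
print([leaf.name for leaf in unrefuted_leaves(tree)])  # two causes remain unresolved
```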

The Epistemology of Attribution Evidence

As we work through a fault tree, we inevitably must weigh the value of various forms of evidence. Remaining objective here can be a challenge, but we do have some basic guidelines to help us.

The types of evidence used to support or refute a root cause are not all equal. The differences can be expressed in terms of “fidelity.” When we examine a failed part or an excursion in a data set, our direct observations are based upon evidence that has perfect fidelity. The physical evidence corresponds exactly to the effect of the true root cause upon the system of interest. We may misinterpret the evidence, but the evidence is nevertheless a direct result of the true root cause that we seek. That is not true when we devise experiments to simulate the excursion, nor is it true when we create synthetic models.

When we cannot obtain conclusive root cause evidence by direct observation of the characteristics of the excursion, or direct analysis of performance data, the next best approach is to simulate the excursion by performing input/output (I/O) experimentation on the same or an equivalent system. This requires that we make assumptions about the input parameters, and we cannot assume that our assumptions have perfect fidelity to the excursion we are trying to simulate. Once we can analyze the results of the experiment, we find that it either reproduced our excursion of interest, or it did not. Either way, the outcome of the experiment has high fidelity with respect to the input as long as the system used in test has high fidelity to the system of interest. If the experiment based upon our best guess of the pertinent input parameters does not reproduce the directly-observed characteristics of the excursion, we do not discard the direct observations in favor of the experiment results. We may need to go back and double check our interpretation, but if the experiment does not create the same outcome as the actual event, it means we chose the wrong input parameters. The experiment serves to refute our best guess. The outcomes from experimentation obviously sit lower on an evidence hierarchy than direct observations.

The fidelity of synthetic models is limited in exactly the same way with respect to the input parameters that we plug into the model. But models have other fidelity issues as well. When we perform our experiments on the same system that had the excursion (which is ideal if it is available), or on an equivalent system, we take great care to assure that our test system responds the same way to input as the original system that had the excursion of interest. We can sometimes verify this directly. In a synthetic model, however, an algorithm is substituted for the actual system, and there will always be assumptions that go into the algorithm. This adds up to a situation in which we are unsure of the fidelity of our input parameters, and unsure of the fidelity of our algorithm. The compounded effect of this uncertainty is that we do not apply the same level of confidence to model results that we do to observations or experiment results. So in summary, and with everything else being equal, direct observation will always trump experimental results, and experimental results will always trump model output. Of course, there is no way to conduct meaningful experiments on analogous climates, so one of the best tools is not of any use to us.

Similar objective value judgments can be made about the comparison of two data sets. When we look at two curves and they both seem to show an excursion that matches in onset, duration and amplitude, we consider that to be evidence of correlation. If the wiggles also closely match, that is stronger evidence. Two curves that obviously exhibit the same onset, magnitude, and duration prior to statistical analysis will always be considered better evidence than two curves that can be shown to be similar after sophisticated statistical analysis. The less explanation needed to correlate two curves, the stronger the evidence of correlation.

Sometimes we need to resolve plausible root causes but lack direct evidence and cannot simulate the excursion of interest by I/O testing. Under these circumstances, model output might be considered if it meets certain objective criteria. When attribution of a past event is the goal, engineers shun innovation. In order for model output to be considered in a fault tree effort, the model requires extensive validation, which means the algorithm must be well established. There must be a historical record of input parameters and how changes in those parameters affected the output. Ideally, the model will have already been used successfully to make predictions about system behavior under specific circumstances. Models can be both sophisticated and quite trustworthy, as we see with the model of planetary motion in the solar system. Also, some very clever methods have been developed to substitute for prior knowledge. An example is the Monte Carlo method, which can sometimes tightly constrain an estimation of output without robust data on input. Similarly, if we have good input and output data, we can sometimes develop a useful empirical relationship for the system behavior without really knowing much about how the system works. A simple way to think of this is to consider three types of information: input data, system behavior, and output data. If you know two of the three, you have some options for approximating the third. But if you only have adequate information on one or fewer of these, your model approach is underspecified. Underspecified model simulations are on the frontier of knowledge and we shun their use on fault trees. To be more precise, simulations from underspecified models are insufficiently trustworthy to adequately refute root causes that are otherwise plausible.
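As a toy illustration of “knowing two of the three” (my own sketch, not from the essay): given historical input and output records for a system, but no knowledge of its internals, we can fit an empirical relationship and use it to approximate behavior, within the range the data supports. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.uniform(0.0, 10.0, size=200)               # e.g. fuel flow readings
outputs = 3.2 * inputs + 5.0 + rng.normal(0, 0.5, 200)  # observed system response

# Empirical "system behavior": a least-squares fit, no physics assumed.
slope, intercept = np.polyfit(inputs, outputs, deg=1)
print(f"estimated response ~ {slope:.2f} * input + {intercept:.2f}")

# With input data and fitted behavior in hand, the third quantity (output for
# a new input) can be approximated -- but extrapolating beyond the data is
# exactly the underspecified territory the essay warns about.
print("predicted output at input = 7:", slope * 7 + intercept)
```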

Part 2: Established Attribution Methodology Applied to the Modern Warming

Now that we have briefly covered the basics of objective attribution and how we look at evidence, let’s apply the tools to the modern warming. Recall that attribution can only be applied to events in the past or present, so we are looking at only the modern warming, not the physics of AGW. A hockey stick shape in a data set provides a perfect opportunity, since the blade of the stick represents a significant excursion from the shaft of the stick, while the shaft represents the stable system that we need to start with.

I mentioned at the beginning that it is useful for an attribution facilitator to be familiar with the basics of the science. While I am not a climate scientist, I have put plenty of hours into keeping up with climate science, and I am capable of reading the primary literature as long as it is not theoretical physics or advanced statistics. I am familiar with the IPCC Assessment Report (AR) sections on attribution, and I have read all the posts at RealClimate.org for a number of years. I also keep up with some of the skeptical blogs including Climate Etc., although I rarely enter the comment fray. I did a little extra reading for this essay, with some help from Dr. Curry. This is plenty of familiarity to act as a facilitator for attribution on a climate topic. Onward to the root cause analysis.

Step 1: Verify that a significant excursion has occurred.

Here we want to evaluate the evidence that the excursion of interest is truly beyond the bounds of the stability region for the system. When we look at mechanical failures, Step 1 is almost never a problem; there is typically indisputable visual evidence that something broke. In electronics, a part will sometimes seem to fail in a circuit but meet all of the manufacturer’s specifications after it is removed. When that happens, we shift our analysis to the circuit, and the component originally suspected of causing the failure becomes a refuted root cause.

In looking at the modern warming, we first ask whether there are similar multi-decadal excursions in the past millennium of unknown cause. We also need to consider the entire Holocene. While most of the available literature states that the modern excursion is indeed unprecedented, this part of the attribution analysis is not a democratic process. We find that there is at least one entirely plausible temperature reconstruction for the last millennium that shows comparable excursions. Holocene reconstructions suggest that the modern warming is not particularly significant. We find no consensus as to the cause of the Younger Dryas, the Minoan, Roman, and Medieval warmings, or the Little Ice Age, all of which may constitute excursions of at least similar magnitude. I am not comfortable with this because we need to understand the mechanisms that made the system stable in the first place before we can meaningfully attribute a single excursion.

When I am confronted with a situation like this in my role as facilitator, I have a discussion with my customer as to whether they want to expend the funds to continue the root cause effort, given the magnitude of uncertainty regarding the question of whether we even have a legitimate attribution target. I have grave doubts that we have survived Step 1 in this process, but let’s assume that the customer wants us to continue.

Step 2. Collect sufficient data to characterize the excursion.

The methodology can get a little messy here. Before we can meaningfully construct a fault tree, we need to carefully define the excursion of interest, which usually means studying both the input and output data. However, we are not really sure of what input data we need since some may be pertinent to the excursion while other data might not. We tend to rely upon common sense and prior knowledge as to what we should gather at this stage, but any omissions will be caught during the brainstorming so we need not get too worried.

The excursion of interest is in temperature data. We find that there is a general consensus that a warming excursion has occurred. The broad general agreement about trends in surface temperature indices is sufficient for our purposes.

The modern warming temperature excursion exists in the output side of the complex process known as “climate.” A fully characterized excursion would also include robust empirical input data, which for climate change would be tracking data for the climate drivers. When we look for input data at this stage, we are looking for empirical records of the climate both prior to and during the modern warming. We do not have a full list yet, but we know that greenhouse gases, aerosols, volcanoes, water vapor, and clouds are all important. Rather than continue on this topic here, I will discuss it in more detail after we construct the fault tree below. That way we can be specific about what input data we need.

Looking for a Unique Signature

Now that we have chosen to consider the excursion as anomalous and sufficiently characterized, this is a good time to look for a unique signature. Has the modern warming created a signature that is so unique that it can only be associated with a single root cause? If so, we want to know now so that we can save our customer the expense of the full fault tree that we would build in Steps 3 and 4.

Do any SMEs interpret some aspect of the temperature data as a unique signature that could not possibly be associated with more than one root cause? It turns out that some interpret the specific spatio-temporal heterogeneity pattern as being evidence that the warming was driven by the radiation absorbed by increased greenhouse gas (GHG) content in the atmosphere. Based upon what I have read, I don’t think there is anyone arguing for a different root cause creating a unique signature in the modern warming. The skeptic arguments seem to all reside under a claim that the signature is not unique, not that it is unique to something other than GHG warming. So let’s see whether we can take our shortcut to a conclusion that an increase in GHG concentration is the sole plausible root cause due to a unique data signature.

Spatial heterogeneity would be occurring up to the present day, and so can be directly measured. I have seen two spatial pattern claims about GHG warming, 1) the troposphere should warm more quickly, and 2) the poles should warm more quickly. Because this is important, I have attempted to track these claims back through time. The references mostly go back to climate modeling papers from the 1970s and 1980s. In the papers, I was unable to find a single instance where any of the feedbacks thought to enhance warming in specific locations were associated solely with CO2. Instead, some are associated with any GHG, while others such as arctic sea ice decrease occur due to any persistent warming. Nevertheless, the attribution chapter in IPCC AR 5 contains a paragraph that seems to imply that enhanced tropospheric warming supports attribution of the modern warming to anthropogenic CO2. I cannot make the dots connect. But here is one point that cannot be overemphasized: the search for a unique signature in the modern warming is the best hope we have for resolving the attribution question.

Step 3. Assemble the SMEs and brainstorm plausible root causes for the excursion.

Without an overwhelmingly strong argument that we have a unique signature situation, we must do the heavy lifting involved with the exhaustive approach. Of course, I am not going to be offered the luxury of a room full of climate SMEs, so I will have to attempt this myself for the purposes of this essay.

Step 4. Build a Formal Fault Tree

An attribution analysis is a form of communication, and the effort is purpose-driven in that we plan to execute a corrective action if that is feasible. As a communication tool, we want our fault tree to be in a form that makes sense to those that will be the most difficult to convince, the SMEs themselves. And when we are done, we want the results to clearly point to actions we may take. With these thoughts in mind, I try to find a format that is consistent with what the SMEs already do. Also, we need to emphasize anthropogenic aspects of causality because those are the only ones we can change. So we will base our fault tree on an energy budget approach similar to a General Circulation Model (GCM), and we will take care to ensure that we separate anthropogenic effects from other effects.

GCMs universally, at least as far as I know, use what engineers call a “control volume” approach to track an energy budget. In a control volume, you can imagine an infinitely thin and weightless membrane surrounding the globe at the top of the atmosphere. Climate scientists even have an acronym for the location “top of the atmosphere,” TOA. Energy that migrates inside the membrane must equal energy that migrates outside the membrane over very long time intervals, otherwise the temperature would ramp until all the rocks melted or everything froze. In the rather unusual situation of a planet in space, the control volume is equivalent to a “control mass” equation in which we would track the energy budget based upon a fixed mass. Our imaginary membrane defines a volume but it also contains all of the earth/atmosphere mass. For simplicity, I will continue with the term “control volume.”

The control volume equation in GCMs is roughly equivalent to:

[heat gained] – [heat lost] = [temperature change]

This is just a conceptual equation because the terms on the left are in units of energy, while the units on the right are in degrees of temperature. The complex function between the two makes temperature an emergent property of the climate system, but we needn’t get too wrapped up in this. Regardless of the complexity hidden behind this simple equation, it is useful to keep in mind that each equation term (and later, each fault tree box) represents a single number that we would like to know.

There is a bit of housekeeping we need to do at this point. Recall that we are only considering the modern warming, but we can only be confident about the fidelity of our control volume equation when we consider very long time intervals. To account for the disparity in duration, we need to consider the concept of “capacitance.” A capacitor is a device that will store energy under certain conditions, but then discharge that energy under a different set of conditions. As an instructive example, the argument that the current hiatus in surface temperature rise is being caused by energy storage in the ocean is an invocation of capacitance. So to fit our approach to a discrete time interval, we need the following modification:

[heat gained] + [capacitance discharge] – [heat lost] – [capacitance recharge] = [modern warming]

Note that now we are no longer considering the entire history of the earth, we are only considering the changes in magnitude during the modern warming interval. Our excursion direction is up, so we discard the terms for a downward excursion. Based upon the remaining terms in our control volume equation, the top tier of the tree is this:

[Slide 1: top tier of the fault tree]

From the control volume standpoint, we have covered heat that enters our imaginary membrane, heat that exits the membrane, and heat that may have been stashed inside the membrane and is only being released now. I should emphasize that this capacitance in the top tier refers to heat stored inside the membrane prior to the modern warming that is subsequently released to create the modern warming.

This top tier contains our first logical bifurcation. The two terms on the left, heat input and heat loss, are based upon a supposition that annual changes in forcing will manifest soon enough that the change in temperature can be considered a direct response. This can involve a lag as long as the lag does not approach the duration of the excursion. The third term, capacitance, accounts for the possibility that the modern warming was not a direct response to a forcing with an onset near the onset of our excursion. An alternative fault tree can be envisioned here with something else in the top tier, but the question of lags must be dealt with near the top of the tree because it constitutes a basic division of what type of data we need.
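A minimal numerical sketch of the bookkeeping implied by the control volume equation (my own illustration; the magnitudes are placeholders, not measured values): each fault tree box ultimately needs to supply one number in units of energy for the interval of interest, and the first-tier question is simply which term changed between a baseline interval and the excursion interval.

```python
def net_energy_change(heat_gained_j, heat_lost_j,
                      capacitance_discharge_j=0.0, capacitance_recharge_j=0.0):
    """Net energy retained inside the control volume over one interval (joules)."""
    return (heat_gained_j + capacitance_discharge_j
            - heat_lost_j - capacitance_recharge_j)

# Placeholder numbers for a baseline interval and the excursion interval.
baseline = net_energy_change(heat_gained_j=1.000e24, heat_lost_j=1.000e24)
excursion = net_energy_change(heat_gained_j=1.000e24, heat_lost_j=0.998e24,
                              capacitance_discharge_j=0.001e24)
print(baseline, excursion)  # which term moved decides which leg of the tree to pursue
```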

The next tier could be based upon basic mechanisms rooted in physics, increasing the granularity:

[Slide 2: second tier of the fault tree, basic physical mechanisms]

The heat input leg represents heat entering the control volume, plus the heat generated inside. We have a few oddball prospective causes here that rarely see the light of day; the heat generated by anthropogenic combustion and geothermal heat are a couple of them. In this case, it is my understanding that there is no dispute that any increases above prior natural background combustion (forest fires, etc.) and geothermal releases are trivial. We put these on the tree to show that we have considered them, but we need not waste time here. Under heat loss, we cover all the possibilities with the two basic mechanisms of heat transfer, radiation and conduction. Conduction is another oddball. The conduction of heat to the vacuum of space is relatively low and would be expected to change only slightly, in rough accordance with the temperature at TOA. With conduction changes crossed off, a decrease in outward radiation would be due to a decreased albedo, where albedo represents reflection across the entire electromagnetic spectrum. A control volume approach allows us to lump convection in with conduction. The last branch in our third tier is the physical mechanism by which a temperature excursion occurs due to heat being released from a reservoir, which is a form of capacitance discharge.

I normally do not start crossing off boxes until the full tree is built. However, if we cross off the oddballs here, we see that the second tier of the tree decomposes to just three mechanisms, solar irradiance increase, albedo decrease, and heat reservoir release. This comes as no revelation to climate scientists.

This is as far as I am going in terms of building the full tree, because the next tier gets big and I probably would not get it right on my own. Finishing it is an exercise left to the reader! But I will continue down the “albedo decrease” leg until we reach anthropogenic CO2-induced warming, the topic du jour. A disclaimer: I suspect that this tier could be improved by the scrutiny of actual SMEs.

[Slide 3: the albedo decrease leg expanded down to anthropogenic CO2]

The only leg shown fully expanded is the one related to CO2; the reader is left to envision the entire tree if each leg were to be expanded in a similar manner. The bottom left corner of this tree fragment shows anthropogenic CO2-induced warming in proper context. Note that we could have separated anthropogenic effects at the first tier of the tree, but then we would have two almost identical trees.

Once every leg is completed in this manner, the next phase of adding evidence begins.

Step 5. Apply documented evidence to each cause.

Here we assess the available evidence and decide whether it supports or refutes a root cause. The actual method used is often dictated by how much evidence we are dealing with. One simple way is to make a numbered list of evidence findings. Then when a finding supports a root cause, we can add that number to the fault tree block in green. When the same finding refutes a different root cause, we can add the number to the block in red. All findings must be mapped across the entire tree.

The established approach to attribution looks at the evidence based upon the evidence hierarchy and exploits any reasonable manner of simplification. The entire purpose of a control volume approach is to avoid having to understand the complex relationship that exists between variables within the control volume. For example, if you treat an engine as a control volume, you can put flow meters on the fuel and air intakes, a pressure gauge on the exhaust, and an rpm measurement on the output shaft. With those parameters monitored, and a bit of historical data on them, you can make very good predictions about the trend in rpm of the engine based upon changes in inputs without knowing very much about how the engine translates fuel into motion. This approach does not involve any form of modeling and is, as I mentioned, the rationale for using control volume in the first place.

The first question the fault tree asks of us is captured in the first tier. Was the modern warming caused by a direct response to higher energy input, a direct response to lower energy loss, or as a result of heat stored during an earlier interval being released? If we consider this question in light of our control volume approach (we don’t really care how energy gets converted to surface temperature), we see that we can answer the question with simple data in units of energy, watts or joules. Envision data from, say, 1950 to 1980, in terms of energy. We might find that for the 30-year interval, heat input was x joules, heat loss was y joules, and capacitance release was z joules.   Now we compare that to the same data for the modern warming interval. If any one of the latter numbers is substantially more than the corresponding earlier numbers x, y, or z, we have come a long way already in simplifying our fault tree. A big difference would mean that we can lop off the other legs. If we see big changes in more than one of our energy quantities, we might have to reconsider our assumption that the system is stable.

In order to resolve the lower tiers, we need to take our basic energy change data and break it down by year, so joules/year. If we had reasonably accurate delta joules/year data relating to the various forcings, we could wiggle match between the data and the global temperature curve. If we found a close match, we would have strong evidence that forcings have an important near-term effect, and that (presumably) only one root cause matches the trend. If no forcing has an energy curve that matches the modern warming, we must assume capacitance complicates the picture.
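A hedged sketch of the “wiggle matching” step (again my own illustration; the forcing names, series and numbers are invented): compare each candidate forcing’s delta joules/year series against the temperature curve and see which, if any, tracks the excursion.

```python
import numpy as np

years = np.arange(1980, 2011)
temperature_anomaly = np.linspace(0.0, 0.6, years.size)    # placeholder excursion

candidate_forcings = {                                      # hypothetical delta joules/year series
    "forcing_A": np.linspace(0.0, 1.0, years.size),         # steadily rising
    "forcing_B": np.sin(np.linspace(0.0, 6.0, years.size)), # oscillating, no trend
}

for name, series in candidate_forcings.items():
    r = np.corrcoef(series, temperature_anomaly)[0, 1]
    print(f"{name}: correlation with temperature curve = {r:+.2f}")
# Per the evidence hierarchy above, a match in onset, duration and amplitude that
# is obvious before any statistics would be the stronger evidence; the correlation
# coefficient is only a sanity check, not a substitute.
```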

Let’s consider how this would work. Each group of SMEs would produce a simple empirical chart for their fault tree block estimating how much energy was added or lost during a specific year within the modern warming, ideally based upon direct measurement and historical observation. These graphs would then be the primary evidence blocks for the tree. Some curves would presumably vary around zero with no real trend, others might decline, while others might increase. The sums roll up the tree. If the difference between the “heat gained” and “heat lost” legs shows a net positive upward trend in energy gained, we consider that as direct evidence that the modern warming was driven by heat gained rather than capacitance discharge. If those two legs sum to near zero, we can assume that the warming was caused by capacitance discharge. If the capacitance SMEs (those that study El Nino, etc.) estimate that a large discharge likely occurred during the modern warming, we have robust evidence that the warming was a natural cycle.

Step 6. Determine where evidence is lacking…

Once all the known evidence has been mapped, we look for empty blocks. We then develop a plan to fill those blocks as our top priority.

I cannot find the numbers to fill in the blocks in the AR documents. I suspect that the data does not exist for the earlier interval, and perhaps cannot even be well estimated for the modern warming interval.

Step 7. Execute plan to fill all evidence blocks.

Here we collect evidence specifically intended to address the fault tree logic. That consists of energy quantities from both before and during the modern warming. Has every effort been made to collect empirical data about planetary albedo prior to the modern warming? I suspect that this is a hopeless situation, but clever SMEs continually surprise me.

In a typical root cause analysis, we continue until we hopefully have just one unrefuted cause left. The final step is to exhaustively document the entire process. In the case of the modern warming, the final report would carefully lay out the necessary data, the missing data, and the conclusion that until and unless we can obtain the missing data, the root cause analysis will remain unresolved.

Part 3: The AGW Fault Tree, Climate Scientists, and the IPCC: A Sober Assessment of Progress to Date

I will begin this section by stating that I am unable to assess how much progress has been made towards resolving the basic fault tree shown above. That is not for lack of trying; I have read all the pertinent material in the IPCC Assessment Reports (ARs) on a few occasions. When I read these reports, I am bombarded with information concerning the CO2 box buried deep in the middle of the fault tree. But even for that box, I am not seeing a number that I could plug into the equations above. For other legs of the tree, the ARs are even more bewildering. If climate scientists are making steady progress towards being able to estimate the numbers to go in the control volume equations, I cannot see it in the AR documents.

How much evidence is required to produce a robust conclusion about attribution when the answer is not obvious? For years, climate scientists have followed reasoning that goes from climate model simulations to expert opinion, declaring that to be sufficient. But that is not how attribution works. Decomposition of a fault tree requires either a unique signature, or sufficient data to support or refute every leg of the tree (not every box on the tree, but every leg). At one end of the spectrum, we would not claim resolution if we had zero information, while at the other end, we would be very comfortable with a conclusion if we knew everything about the variables. The fault tree provides guidance on the sufficiency of the evidence when we are somewhere in between. My customers pay me to reach a conclusion, not muck about with a logic tree. But when we lack the basic data to decompose the fault tree, maintaining my credibility (and that of the SMEs as well) demands that we tell the customer that the fault tree cannot be resolved because we lack sufficient information.

The curve showing CO2 rise and the curve showing the modern global temperature rise do not look the same, and signal processing won’t help with the correlation. Instead, there is hypothesized to be a complex function involving capacitance that explains the primary discrepancy, the recent hiatus. But we still have essentially no idea how much capacitance has contributed to historical excursions. We do not know whether there is a single mode of capacitance that swamps all others, or whether there are multiple capacitance modes that go in and out of phase. Ocean capacitance has recently been invoked as perhaps the most widely endorsed explanation for the recent hiatus in global warming, and there is empirical evidence of warming in the ocean. But invoking capacitance to explain a data wiggle down on the fifth tier of a fault tree, when the general topic of capacitance remains unresolved in the first tier, suggests that climate scientists have simply lost the thread of what they were trying to prove. The sword swung in favor of invoking capacitance to explain the hiatus turns out to have two edges. If the system is capable of exhibiting sufficient capacitance to produce the recent hiatus, there is no valid argument against why it could not also have produced the entire modern warming, unless that can be disproven with empirical data or I/O test results.

Closing Comments

Most of the time when corporations experience a catastrophe such as a chemical plant explosion resulting in fatalities, they look to outside entities to conduct the attribution analysis. This may come as a surprise given the large sums of money at stake and the desire to influence the outcome, but consider the value of a report produced internally by the corporation. If the report exonerates the corporation of all culpability, it will have zero credibility. Sure, they can blame themselves to preserve their credibility, but their only hope of a credible exoneration is if it comes from an independent entity. In the real world, the objectivity of an independent study may still leave something to be desired, given the fact that the contracted investigators get their paycheck from the corporation, but the principle still holds. I can only assume when I read the AR documents that this never occurred to climate scientists.

The science of AGW will not be settled until the fault tree is resolved to the point that we can at least estimate a number for each leg in our fault tree based upon objective evidence. The tools available have thus far not been up to the task. With so much effort put into modelling CO2 warming while other fault tree boxes are nearly devoid of evidence, it is not even clear that the available tools are being applied efficiently.

The terms of reference for the IPCC are murky, but it is clear that it was never set up to address attribution in any established manner. There was no valid reason to not use an established method, facilitated by an entity with expertise in the process, if attribution was the true goal. The AR documents are position papers, not attribution studies, as exemplified by the fact that supporting and refuting arguments cannot be followed in any logical manner and the arguments do not roll up into any logical framework. If AGW is really the most important issue that we face, and the science is so robust, why would climate scientists not seek the added credibility that could be gained from an independent and established attribution effort?


The Long Battle Over Pesticides, Birth Defects and Mental Impairment

By Dr. JANETTE D. SHERMAN, MD | CounterPunch | October 24, 2014

The recent number of articles in the popular press concerning loss of intellect among children exposed to chlorpyrifos is important. Although in-home use of chlorpyrifos was restricted in the U.S. in 2000, it is widely used in agriculture and poses a serious risk to the health and intellect of people working and living in proximity to treated fields. Detectable levels of chlorpyrifos in New York City children raise the question of exposure via food.

Across the U.S. we learn that students are doing poorly in school, with blame often placed on the teachers and their unions. Are teachers no longer competent to teach, or have children been “dumbed-down” by exposure to this neurotoxin?

The State of California is considering restriction on use, but is prepared for strong opposition from the pesticide and big agricultural industries.

Back in the “Dark Ages” – a mere 50 years ago – when I was a medical student and intern at Wayne State University, I rotated through Children’s Hospital in Detroit. It was staffed by some of the most thoughtful and kind physician/professors I have ever met. I attended a clinic named “FLK” otherwise known as Funny Looking Kid clinic. There we saw children who had abnormal looking faces, abnormal body parts, and, often impaired intelligence. Many of the children required complicated medical care, but I don’t recall much discussion as to why they had these abnormalities that had dramatically cut short their futures and altered the lives of their families.

Realizing you have given birth to a child with birth defects is devastating – not only for the child, but for the family, and for society in general. If the child survives infancy, it means being “different” and having to cope with disability, and with having to learn alternative ways to function. For many families, it means 24/7 care of a child who can never live independently. For society the costs can be enormous – surgery (often multiple), medications, social services, special education, special equipment, then alternative living arrangements, if and when family cannot care for their child, now grown to a non-functional adult.

Although the neurotoxicity of pesticides has been known for decades, several national magazines have recently named the pesticide chlorpyrifos (Dursban/Lorsban) as an agent causing loss of intelligence, as well as birth defects and structural brain damage.

Dr. James Hamblin’s article in the March 2014 issue of The Atlantic, titled “The Toxins that Threaten Our Brains,” listed 12 commonly used chemicals, including chlorpyrifos, which is marketed as Dursban and Lorsban. The exposures described in the Atlantic article were urban, so we do not know exactly how widespread this epidemic is, especially if we do not include agricultural areas such as California, Hawaii and the Midwest.

That same month, The Nation published articles by Susan Freinkel “Poisoned Politics” and Lee Fang “Warning Signs” who reported adverse effects from exposure to Dursban and Lorsban.

Dr. Hamblin’s article generously cites Drs. Philip Landrigan of Mt. Sinai in New York City and Philippe Grandjean of Harvard, who warn that a “silent pandemic” of toxins has been damaging the brains of unborn children.

Dr. Landrigan chaired a 1998 meeting of the Collegium Ramazzini International Scientific Conference, held in Carpi, Italy. In attendance was Dr. Grandjean, whose research found methylmercury to be a hazard to brain development. Dr. Richard Jackson, from the U.S. CDC, was also in attendance, along with other U.S. governmental and university members.

At that Collegium Ramazzini International Scientific Conference, on October 25, 1998, I presented definitive data in my paper: “Chlorpyrifos (Dursban) exposure and birth defects: report of 15 incidents, evaluation of 8 cases, theory of action, and medical and social aspects.” This presentation followed my earlier publications beginning in 1994 wherein I reported damage to the unborn from the same pesticide.

The Ramazzini organization sent my paper to the European Journal of Oncology for publication. Since my paper reported birth defects, not cancer, the paper has received little notice, but the attendees, including the EPA, have known of the findings for 16 years.

Currently a new battle is occurring in Hawaii over the use of pesticides, especially by Dow AgroSciences, DuPont Pioneer, BASF Plant Science, and Syngenta on the island of Kauai, where giant seed companies develop Genetically Modified Organisms (GMOs) and other specialized seeds. The pesticides used there include alachlor, atrazine, chlorpyrifos, methomyl, metolachlor, permethrin and paraquat. Paul Koberstein of Cascadia Times estimates that more than 2000 pounds of chlorpyrifos are used per acre per year on Kauai, compared to an average of less than 0.025 pounds per acre on the U.S. mainland.

In addition to Hawaii, affected areas in California include the Imperial Valley and other intensive agricultural regions, where workers and families are exposed and pesticide use is extensive. Using the Koberstein data, annual use of chlorpyrifos in California is approximately 1500 pounds per acre.

Neurological Damage: Before and After Birth

Birth defects arise as a result of two mechanisms – damage to a gene, prior to fertilization, or damage to the growing cells of the fetus after life in the womb has begun. Differing from genetic damage, such as occurs in Down syndrome or Trisomy-21, the latter damage results from exposure of the developing fetus to agents called teratogens. For many years Mongolism was the name applied to children with growth delays, similar facial and hand features and intellectual deficits.

Chlorpyrifos is a unique pesticide. It is a combination of an organophosphate and a trichlorinated pyridinol (TCP). TCP is not only the feedstock used in the manufacture of chlorpyrifos, but also a contaminant in the product, and a metabolic breakdown product that is known to cause central nervous system abnormalities (hydrocephaly and dilated brain ventricles), and other abnormalities (cleft palate, skull and vertebral abnormalities) in fetuses, as reported by Dow Chemical Co.

In March 1995, I was asked to fly to Arkansas to see a child whose mother had been exposed to the pesticide Dursban (chlorpyrifos) early in the pregnancy of her daughter.

Mrs. S. had been working in a bank when, in mid-March 1991, she noticed a man spraying the baseboards behind the station where she worked as a teller. She said she asked the man if it was okay to be in the area since she was pregnant, and the man told her it was “perfectly safe.” She said the spraying had occurred around 4 PM, that she worked at the bank until 6:30 PM, and that when she went home that evening she had nausea and a “bit of headache.” She said she returned to work the next day, felt nausea, but worked most of the day. An electrical fire at the drive-in window followed the pesticide event, and a technician used a fogger that sprayed a “citrus-like” chemical intended to deodorize the smoke odor. Mrs. S. said she worked at the bank until about April of that year, and then worked at a credit union until her daughter was born in September.

When Mrs. S. was about five months pregnant she had an ultrasound, which showed that her baby had enlarged ventricles in her brain. Further examination revealed absence of the septum pellucidum, a central portion of her brain. Mrs. S. had additional follow up at a university center as well as with her own physician that showed normal amniocentesis and normal chromosomes.

Both Mr. & Mrs. S. said that caring for the daughter A. has been a severe financial and emotional drain, sometimes requiring them to be up 72 hours to try to soothe A’s crying. A. had surgery to repair her cleft lip when she was six months old, and repair of her cleft palate and left eyelid when she was a year old.

Both cleft lip and palate can now be repaired (in areas with skilled surgeons, and insurance or other funds), but until they are, the child has difficulty feeding and risks poor nutrition and upper respiratory and lung problems as a result of aspiration of food.

Additional diagnostic procedures indicated that A has a cleft left eye (failure of her eye to fuse during development), and she cannot blink her eye or move the left side of her face.

A was unable to sit up on her own by the time she was a year old and had to have food pureed until she was two. As A neared her 4th birthday, her parents realized that she could not hear, and they began a program of sign language with the aid of a speech therapist.

A’s brother B. was born two years later and is well; he was sleeping through the night by the time he was two weeks of age.

I was given a tour of the bank where Mrs. S. worked by its Senior Vice-President, and to minimize stress to A, I examined her in the office and in the presence of her pediatrician. I also accompanied her parents to their home so that I could observe A there.

A was a small-boned child who walked with a wide-based, unsteady gait and who made audible sounds, but no language content. Her head was enlarged with hydrocephaly and a small bruise due to a recent, commonly occurring fall.

Her abnormalities included the following, characteristic of findings in the other children: low-set, tapering ears, wide-spaced nipples, and frequent infections.

This litany is not to horrify readers, but to bring to attention the burdens imposed upon this child, her parents, and society as a whole. I evaluated seven more children; two families each had two children with similar, but more severe, medical conditions.

With the exception of child #1, the seven children were profoundly retarded, were diapered, could not speak, and required feeding.

I first met C & D in 1996, along with their parents and handsome, healthy older brother, at their attractive home on the West Coast. Both D (a girl) and C (a boy) were lying flat, diapered, mouths open, fists clenched, staring into space, and being fed by bottle. Even today, looking at the photographs reminds me what an enormous burden was dealt to that family.

Ultimately I evaluated eight children, and identified seven more, reported by Dow Chemical Co., the manufacturer, to EPA on November 2, 1994, with reporting delays of as long as seven years from when the corporation first learned of them. I obtained the reports via a Freedom of Information request (FOI) from EPA. The reports were labeled with the revealing name: “DERBI” – or – “Dow Elanco Research Business Index.”

When I saw seven more children, all of whom looked like siblings (much as Trisomy-21 or Down syndrome children do), it became clear to me that the cause was linked to Dursban, the pre-natal exposure common to each.

Among the Dursban-exposed children, all 8 had both brain and palate abnormalities, seven had wide-spaced nipples and growth retardation, six had low vision or blindness, six had genital abnormalities, five had brain atrophy and external ear abnormalities, and four had absence of the corpus callosum, the critical connection between the two hemispheres of the brain. Chromosomal studies were normal in all 8 families. All families reported stress and an enormous financial burden to care for their children.

In addition to the children with birth defects, I also evaluated a number of families and a group of adults who had been exposed at their work site. Of the workers, all 12 complained of headache, and three of dizziness. Eight had findings of central nervous system damage, and six had peripheral nervous system damage. The patients reported upper respiratory and chest symptoms, as well as nausea, vomiting, diarrhea, and four had incontinence. The families also reported abnormalities and deaths in their household pets.

In February 1996, my deposition in the first case was taken by three groups of attorneys representing the defendants, two principally defending Dow Elanco. I was questioned for three 8-hour days. Ultimately a list of 565 exhibits was accumulated that included over 10,000 pages of materials that I supplied and relied upon for my opinion. These materials included Dow documents and correspondence, EPA documents, legal depositions, basic embryology, biochemistry and toxicology of chlorpyrifos, medical records of other exposed children, patents, books, articles, etc, etc.

Chlorpyrifos was designed to be neurotoxic in action. It is an interesting pesticide, in that it has not only an organophosphate portion, but also three chlorine atoms attached to a pyridinol ring. This ring is trichloropyridinol (TCP), a significant hazard because it is fat-soluble and persistent, lasting up to 18 years as claimed by Dow Chemical Co. TCP also forms the body of trichlorophenoxyacetic acid, part of Agent Orange, also linked to birth defects and cancer. Although the war ended in 1975, Agent Orange continues as a risk to the Vietnamese, and to military troops that were stationed there.

According to multiple Dow documents, TCP is the feedstock for production of chlorpyrifos, a contaminant in the product, and a metabolic breakdown product. TCP has been demonstrated to cause central nervous system anomalies (hydrocephaly and dilated brain ventricles) as well as cleft palate, skull and vertebral abnormalities in the fetus at doses nontoxic to the mother, similar to the defects seen in affected children.

That TCP caused birth defects was known by Dow in 1987, but not reported to EPA until five years later, in 1992. TCP is used to manufacture chlorpyrifos and, as such, comes under regulation of Section 8(e) of the Toxic Substances Control Act (TSCA), rather than the Federal Insecticide, Fungicide, and Rodenticide Act (FIFRA). Though there is a regulatory difference, TSCA states very clearly that “any person who manufactures, processes or distributes in commerce a chemical substance or mixture, or who obtains information which reasonably supports the conclusion that such substance or mixture presents a substantial risk of injury to health or the environment, shall immediately inform the Administrator of such information.” From 1976 to 1982, I was a member of a 16-person Advisory Committee to the EPA for TSCA, Chairman of the Risk-Benefit Assessment Group from 1977 to 1979, and a member of the Carcinogen Policy Sub-group from 1977 to 1981. It was clear that risks and benefits do not accrue to the same party. In the case of chlorpyrifos, the risks are to the unaware public, and the benefits to the corporation.

The Legal System is Not the Same as the Justice System

Bernard P. Whetstone was a well-established attorney who handled the initial birth defects case in Little Rock, Arkansas, and was aware of another case in that state. Mr. Whetstone was a “Southern Gentleman” with a soft drawl who had earned both a bachelor and doctorate of jurisprudence, and started practice in 1934. Beginning in 1995, he worked with Davidson and Associates until he retired in 1999 at age 86. Mr. Whetstone died in 2001.

I was required to appear in court in Little Rock, where Judge Eisley ruled that I was not qualified. It is hard to believe that 10,000 pages of documents are not adequate, but that opinion was softened by the fact that he ruled that none of the plaintiff’s experts were qualified. Another physician/toxicology expert and I evaluated additional patients (adults) who developed multiple adverse effects, including central nervous system damage, so Dow, employing the Eisley decision, argued successfully in other court jurisdictions that we were not qualified to give an opinion.

The main Dow law firm was Barnes and Thornburg of Indianapolis, where DowElanco and its co-parent Eli Lilly are located. Eli Lilly is a manufacturer of both pharmaceuticals and pesticides. Barnes & Thornburg has over 500 attorneys in 12 cities and appeared to be very well staffed and funded.

A recent news release noted that William W. Wales, who spent more than 30 years in the legal department of The Dow Chemical Company and Dow AgroSciences LLC, had joined Barnes & Thornburg LLP’s Indianapolis office as a partner in the firm’s litigation and corporate departments. “Bill’s depth and breadth of experience in a variety of matters will be a tremendous asset to many of our clients who are dealing with similar issues,” said Joseph G. Eaton, Vice Chair of the firm’s Litigation Department and Co-Chair of the Toxic Tort Practice Group. Joseph Eaton is one of the attorneys who took my extensive deposition. They were the most aggressive law firm I had ever encountered, and I have testified in more than 700 depositions and/or court appearances.

In defense of their product, the Dow attorneys argued that there were no reports of levels of pesticides used or existing levels – a questionable tactic, since the corporation has never suggested or requested that such records be obtained.

Although the EPA stopped home use of Dursban in 2000, Lorsban is widely used in agriculture, on ornamentals, and places where women, the unborn and children are exposed. For many, this exposure is without their knowledge or consent. How is this allowed to happen?

Is it successful advertising, recommendations from county and state agricultural agents, or an inept or politically adept EPA? Recall that after September 11, 2001, the then administrator of the U.S. Environmental Protection Agency and former governor of New Jersey, Christie Whitman, said on September 13, 2001, “EPA is greatly relieved to have learned that there appears to be no significant levels of asbestos dust in the air in New York City.”

A week later, Whitman said: “Given the scope of the tragedy from last week, I am glad to reassure the people of New York and Washington, DC that their air is safe to breathe and their water is safe to drink.”

In 2008, the U. S. EPA named Dow as an Energy Star Partner of the Year for excellence in energy management and reductions in greenhouse gas emissions.

Dow’s fleet of skilled lawyers have managed to save Dow from liability, when they achieved a reversal of a $925 million judgment for the contamination of the area around Rocky Flats, the Colorado facility that produced plutonium triggers for hydrogen bombs. And, a lawsuit filed by Vietnamese, damaged by Agent Orange against Dow and Monsanto was dismissed.

Dow is a multinational corporation and the third largest chemical manufacturer in the world, with earnings of more than $57 billion in 2013. In addition to the manufacture of insecticides, herbicides, fungicides, and genetically modified seeds, Dow also manufactures multiple plastics, polystyrene, polyurethane, synthetic rubber, and bisphenol-A, as well as many other chemicals.

What are the chances that the use of Lorsban will be curtailed in the agricultural areas of Hawaii, California and elsewhere? Given what we know of the financial strength of the Dow Corporation, the weakness of the EPA, and our paid-for Congress, it does not look promising.

The Burden of Brain Damage 

If the top corporate officials were required to care for one of these severely brain-damaged children for a week, would it change their minds about the ethics of manufacturing chlorpyrifos and corporate profits?

There is not a teacher who can teach brain-damaged children to read and do math, which raises the larger question: is children’s failure to learn due to poor teachers, or to subtle brain damage? If children are being damaged to various degrees, profoundly in the case of the 15 children cited in my research, or with “mild” learning and/or behavioral problems ranging from decreased IQ to Asperger’s, hyperactivity and autism, how much is attributable to exposure to pesticides such as Dursban/Lorsban? If we blame poor teaching and teachers’ unions, but do not stop the use of brain-damaging pesticides, where does that leave our U.S. society as a source of creativity and intellect in this world?

Note: All of my chlorpyrifos/Dursban documents have been accepted and will be archived at the National Library of Medicine, along with my other scientific, medical and legal research.

Janette D. Sherman, M. D. is the author of Life’s Delicate Balance: Causes and Prevention of Breast Cancer and Chemical Exposure and Disease, and is a specialist in internal medicine and toxicology. She edited the book Chernobyl: Consequences of the Catastrophe for People and Nature, written by A. V. Yablokov, V. B. Nesterenko and A. V. Nesterenko, published by the New York Academy of Sciences in 2009. Her primary interest is the prevention of illness through public education. She can be reached at: toxdoc.js@verizon.net and www.janettesherman.com

October 24, 2014 Posted by | Deception, Economics, Environmentalism, Science and Pseudo-Science, Timeless or most popular

Another IPCC modeling failure – so THAT’s where the atmospheric methane went

By Anthony Watts | Watts Up With That? | October 15, 2014

[Figure: IPCC AR5 draft Fig. 1-7 (atmospheric methane)]
From Oregon State University – Scientists discover carbonate rocks are unrecognized methane sink

CORVALLIS, Ore. – Since the first undersea methane seep was discovered 30 years ago, scientists have meticulously analyzed and measured how microbes in the seafloor sediments consume the greenhouse gas methane as part of understanding how the Earth works.

The sediment-based microbes form an important methane “sink,” preventing much of the chemical from reaching the atmosphere and contributing to greenhouse gas accumulation. As a byproduct of this process, the microbes create a type of rock known as authigenic carbonate, which while interesting to scientists was not thought to be involved in the processing of methane.

That is no longer the case. A team of scientists has discovered that these authigenic carbonate rocks also contain vast amounts of active microbes that take up methane. The results of their study, which was funded by the National Science Foundation, were reported today in the journal Nature Communications.

“No one had really examined these rocks as living habitats before,” noted Andrew Thurber, an Oregon State University marine ecologist and co-author on the paper. “It was just assumed that they were inactive. In previous studies, we had seen remnants of microbes in the rocks – DNA and lipids – but we thought they were relics of past activity. We didn’t know they were active.

“This goes to show how the global methane process is still rather poorly understood,” Thurber added.

Lead author Jeffrey Marlow of the California Institute of Technology and his colleagues studied samples of authigenic carbonate collected off the coasts of the Pacific Northwest (Hydrate Ridge), northern California (Eel River Basin) and Central America (the Costa Rica margin). The rocks range in size and distribution from small pebbles to carbonate “pavement” stretching dozens of square miles.

“Methane-derived carbonates represent a large volume within many seep systems and finding active methane-consuming archaea and bacteria in the interior of these carbonate rocks extends the known habitat for methane-consuming microorganisms beyond the relatively thin layer of sediment that may overlay a carbonate mound,” said Marlow, a geobiology graduate student in the lab of Victoria Orphan of Caltech.

These assemblages are also found in the Gulf of Mexico as well as off Chile, New Zealand, Africa, Europe – “and pretty much every ocean basin in the world,” noted Thurber, an assistant professor (senior research) in Oregon State’s College of Earth, Ocean, and Atmospheric Sciences.

The study is important, scientists say, because the rock-based microbes potentially consume a huge amount of methane. The microbes were less active than those found in the sediment, but were more abundant – and the areas they inhabit are extensive, making their potential importance enormous. Studies have found that only approximately 3-6 percent of the methane in the atmosphere comes from marine sources – and this number is so low because microbes in the ocean sediments consume some 60-90 percent of the methane that would otherwise escape.

Now those ratios will have to be re-examined to determine how much of the methane sink can be attributed to microbes in rocks versus those in sediments. The distinction is important, the researchers say, because it is an unrecognized sink for a potentially very important greenhouse gas.

“We found that these carbonate rocks located in areas of active methane seeps are themselves more active,” Thurber said. “Rocks located in comparatively inactive regions had little microbial activity. However, they can quickly activate when methane becomes available.

“In some ways, these rocks are like armies waiting in the wings to be called upon when needed to absorb methane.”

The ocean contains vast amounts of methane, which has long been a concern to scientists. Marine reservoirs of methane are estimated to total more than 455 gigatons, and may hold as much as 10,000 gigatons of carbon in methane. A gigaton is approximately 1.1 billion tons.

By contrast, all of the planet’s gas and oil deposits are thought to total about 200-300 gigatons of carbon.
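
For scale, taking the figures above at face value and treating both reservoir estimates as carbon equivalents (an approximation), the marine methane reservoir would hold very roughly 1.5 to 50 times as much carbon as all known gas and oil deposits:

$$ \frac{455}{300} \approx 1.5, \qquad \frac{10{,}000}{200} = 50. $$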

~~~

Science and Technology Illustration – Monterey Bay


Illustration of methane mound on seafloor near Santa Monica Bay.
Credit: Kelly Lance ©2013 MBARI

October 15, 2014 Posted by | Science and Pseudo-Science, Timeless or most popular

Climate Science: Is it Currently Designed to Answer Questions?

By Prof. Richard S. Lindzen | Global Research | September 22, 2014

Program in Atmospheres, Oceans and Climate, Massachusetts Institute of Technology (MIT). First published by Global Research on 30 November 2009.

Abstract

For a variety of inter-related cultural, organizational, and political reasons, progress in climate science and the actual solution of scientific problems in this field have moved at a much slower rate than would normally be possible.

Not all these factors are unique to climate science, but the heavy influence of politics has served to amplify the role of the other factors. By cultural factors, I primarily refer to the change in the scientific paradigm from a dialectic opposition between theory and observation to an emphasis on simulation and observational programs. The latter serves to almost eliminate the dialectical focus of the former.

Whereas the former had the potential for convergence, the latter is much less effective. The institutional factor has many components. One is the inordinate growth of administration in universities and the consequent increase in importance of grant overhead. This leads to an emphasis on large programs that never end. Another is the hierarchical nature of formal scientific organizations whereby a small executive council can speak on behalf of thousands of scientists as well as govern the distribution of ‘carrots and sticks’ whereby reputations are made and broken. The above factors are all amplified by the need for government funding.

When an issue becomes a vital part of a political agenda, as is the case with climate, then the politically desired position becomes a goal rather than a consequence of scientific research. This paper will deal with the origin of the cultural changes and with specific examples of the operation and interaction of these factors. In particular, we will show how political bodies act to control scientific institutions, how scientists adjust both data and even theory to accommodate politically correct positions, and how opposition to these positions is disposed of.

1. Introduction.

Although the focus of this paper is on climate science, some of the problems pertain to science more generally. Science has traditionally been held to involve the creative opposition of theory and observation wherein each tests the other in such a manner as to converge on a better understanding of the natural world. Success was rewarded by recognition, though the degree of recognition was weighted according to both the practical consequences of the success and the philosophical and aesthetic power of the success. As science undertook more ambitious problems, and the cost and scale of operations increased, the need for funds undoubtedly shifted emphasis to practical relevance though numerous examples from the past assured a strong base level of confidence in the utility of science. Moreover, the many success stories established ‘science’ as a source of authority and integrity. Thus, almost all modern movements claimed scientific foundations for their aims. Early on, this fostered a profound misuse of science, since science is primarily a successful mode of inquiry rather than a source of authority.

Until the post World War II period, little in the way of structure existed for the formal support of science by government (at least in the US which is where my own observations are most relevant). In the aftermath of the Second World War, the major contributions of science to the war effort (radar, the A-bomb), to health (penicillin), etc. were evident. Vannevar Bush (in his report, Science: The Endless Frontier, 1945) noted the many practical roles that validated the importance of science to the nation, and argued that the government need only adequately support basic science in order for further benefits to emerge. The scientific community felt this paradigm to be an entirely appropriate response by a grateful nation. The next 20 years witnessed truly impressive scientific productivity which firmly established the United States as the creative center of the scientific world.

The Bush paradigm seemed amply justified. (This period and its follow-up are also discussed by Miller, 2007, with special but not total emphasis on the NIH (National Institutes of Health).) However, something changed in the late 60’s. In a variety of fields it has been suggested that the rate of new discoveries and achievements slowed appreciably (despite increasing publications) [2], and it is being suggested that either the Bush paradigm ceased to be valid or that it may never have been valid in the first place. I believe that the former is correct. What then happened in the 1960’s to produce this change?

It is my impression that by the end of the 60’s scientists, themselves, came to feel that the real basis for support was not gratitude (and the associated trust that support would bring further benefit) but fear: fear of the Soviet Union, fear of cancer, etc. Many will conclude that this was merely an awakening of a naive scientific community to reality, and they may well be right. However, between the perceptions of gratitude and fear as the basis for support lies a world of difference in incentive structure. If one thinks the basis is gratitude, then one obviously will respond by contributions that will elicit more gratitude. The perpetuation of fear, on the other hand, militates against solving problems. This change in perception proceeded largely without comment. However, the end of the cold war, by eliminating a large part of the fear-base forced a reassessment of the situation. Most thinking has been devoted to the emphasis of other sources of fear: competitiveness, health, resource depletion and the environment.

What may have caused this change in perception is unclear, because so many separate but potentially relevant things occurred almost simultaneously. For one thing, the space race reinstituted the model of large-scale focused efforts such as the moon landing program. For another, the 60’s saw the first major postwar funding cuts for science in the US. The budgetary pressures of the Vietnam War may have demanded savings someplace, but the fact that science was regarded as, to some extent, dispensable, came as a shock to many scientists. So did the massive increase in management structures and bureaucracy which took control of science out of the hands of working scientists. All of this may be related to the demographic pressures resulting from the baby boomers entering the workforce and the post-Sputnik emphasis on science. Sorting this out goes well beyond my present aim which is merely to consider the consequences of fear as a perceived basis of support.

Fear has several advantages over gratitude. Gratitude is intrinsically limited, if only by the finite creative capacity of the scientific community. Moreover, as pointed out by a colleague at MIT, appealing to people’s gratitude and trust is usually less effective than pulling a gun. In other words, fear can motivate greater generosity. Sputnik provided a notable example in this regard; though it did not immediately alter the perceptions of most scientists, it did lead to a great increase in the number of scientists, which contributed to the previously mentioned demographic pressure. Science since the sixties has been characterized by the large programs that this generosity encourages.

Moreover, the fact that fear provides little incentive for scientists to do anything more than perpetuate problems, significantly reduces the dependence of the scientific enterprise on unique skills and talents. The combination of increased scale and diminished emphasis on unique talent is, from a certain point of view, a devastating combination which greatly increases the potential for the political direction of science, and the creation of dependent constituencies. With these new constituencies, such obvious controls as peer review and detailed accountability, begin to fail and even serve to perpetuate the defects of the system. Miller (2007) specifically addresses how the system especially favors dogmatism and conformity.

The creation of the government bureaucracy, and the increasing body of regulations accompanying government funding, called, in turn, for a massive increase in the administrative staff at universities and research centers. The support for this staff comes from the overhead on government grants, and, in turn, produces an active pressure for the solicitation of more and larger grants [3].

One result of the above appears to have been the deemphasis of theory because of its intrinsic difficulty and small scale, the encouragement of simulation instead (with its call for large capital investment in computation), and the encouragement of large programs unconstrained by specific goals [4]. In brief, we have the new paradigm where simulation and programs have replaced theory and observation, where government largely determines the nature of scientific activity, and where the primary role of professional societies is the lobbying of the government for special advantage.

This new paradigm for science and its dependence on fear based support may not constitute corruption per se, but it does serve to make the system particularly vulnerable to corruption. Much of the remainder of this paper will illustrate the exploitation of this vulnerability in the area of climate research. The situation is particularly acute for a small weak field like climatology. As a field, it has traditionally been a subfield within such disciplines as meteorology, oceanography, geography, geochemistry, etc. These fields, themselves are small and immature. At the same time, these fields can be trivially associated with natural disasters. Finally, climate science has been targeted by a major political movement, environmentalism, as the focus of their efforts, wherein the natural disasters of the earth system, have come to be identified with man’s activities – engendering fear as well as an agenda for societal reform and control. The remainder of this paper will briefly describe how this has been playing out with respect to the climate issue.

2. Conscious Efforts to Politicize Climate Science

The above described changes in scientific culture were both the cause and effect of the growth of ‘big science,’ and the concomitant rise in importance of large organizations. However, all such organizations, whether professional societies, research laboratories, advisory bodies (such as the national academies), government departments and agencies (including NASA, NOAA, EPA, NSF, etc.), and even universities are hierarchical structures where positions and policies are determined by small executive councils or even single individuals. This greatly facilitates any conscious effort to politicize science via influence in such bodies where a handful of individuals (often not even scientists) speak on behalf of organizations that include thousands of scientists, and even enforce specific scientific positions and agendas. The temptation to politicize science is overwhelming and longstanding. Public trust in science has always been high, and political organizations have long sought to improve their own credibility by associating their goals with ‘science’ – even if this involves misrepresenting the science.

Professional societies represent a somewhat special case. Originally created to provide a means for communication within professions – organizing meetings and publishing journals – they also provided, in some instances, professional certification, and public outreach. The central offices of such societies were scattered throughout the US, and rarely located in Washington. Increasingly, however, such societies require impressive presences in Washington where they engage in interactions with the federal government. Of course, the nominal interaction involves lobbying for special advantage, but increasingly, the interaction consists in issuing policy and scientific statements on behalf of the society. Such statements, however, hardly represent independent representation of membership positions. For example, the primary spokesman for the American Meteorological Society in Washington is Anthony Socci who is neither an elected official of the AMS nor a contributor to climate science. Rather, he is a former staffer for Al Gore.

Returning to the matter of scientific organizations, we find a variety of patterns of influence. The most obvious to recognize (though frequently kept from public view), consists in prominent individuals within the environmental movement simultaneously holding and using influential positions within the scientific organization. Thus, John Firor long served as administrative director of the National Center for Atmospheric Research in Boulder, Colorado. This position was purely administrative, and Firor did not claim any scientific credentials in the atmospheric sciences at the time I was on the staff of NCAR. However, I noticed that beginning in the 1980′s, Firor was frequently speaking on the dangers of global warming as an expert from NCAR. When Firor died last November, his obituary noted that he had also been Board Chairman at Environmental Defense– a major environmental advocacy group – from 1975-1980 [5].

The UK Meteorological Office also has a board, and its chairman, Robert Napier, was previously the Chief Executive for World Wildlife Fund – UK. Bill Hare, a lawyer and Campaign Director for Greenpeace, frequently speaks as a ‘scientist’ representing the Potsdam Institute, Germany’s main global warming research center. John Holdren, who currently directs the Woods Hole Research Center (an environmental advocacy center not to be confused with the far better known Woods Hole Oceanographic Institution, a research center), is also a professor in Harvard’s Kennedy School of Government, and has served as president of the American Association for the Advancement of Science among numerous other positions including serving on the board of the MacArthur Foundation from 1991 until 2005. He was also a Clinton-Gore Administration spokesman on global warming.

The making of academic appointments to global warming alarmists is hardly a unique occurrence. The case of Michael Oppenheimer is noteworthy in this regard. With few contributions to climate science (his postdoctoral research was in astro-chemistry), and none to the physics of climate, Oppenheimer became the Barbara Streisand Scientist at Environmental Defense [6]. He was subsequently appointed to a professorship at Princeton University, and is now, regularly, referred to as a prominent climate scientist by Oprah (a popular television hostess), NPR (National Public Radio), etc. To be sure, Oppenheimer did coauthor an early absurdly alarmist volume (Oppenheimer and Robert Boyle, 1990: Dead Heat, The Race Against the Greenhouse Effect), and he has served as a lead author with the IPCC (Intergovernmental Panel on Climate Change) [7].

One could go on at some length with such examples, but a more common form of infiltration consists in simply getting a couple of seats on the council of an organization (or on the advisory panels of government agencies). This is sufficient to veto any statements or decisions that they are opposed to. Eventually, this enables the production of statements supporting their position – if only as a quid pro quo for permitting other business to get done. Sometimes, as in the production of the 1993 report of the NAS, Policy Implications of Global Warming, the environmental activists, having largely gotten their way in the preparation of the report where they were strongly represented as ‘stake holders,’ decided, nonetheless, to issue a minority statement suggesting that the NAS report had not gone ‘far enough.’ The influence of the environmental movement has effectively made support for global warming, not only a core element of political correctness, but also a requirement for the numerous prizes and awards given to scientists. That said, when it comes to professional societies, there is often no need at all for overt infiltration since issues like global warming have become a part of both political correctness and (in the US) partisan politics, and there will usually be council members who are committed in this manner.

The situation with America’s National Academy of Science is somewhat more complicated. The Academy is divided into many disciplinary sections whose primary task is the nomination of candidates for membership in the Academy [8]. Typically, support by more than 85% of the membership of any section is needed for nomination. However, once a candidate is elected, the candidate is free to affiliate with any section. The vetting procedure is generally rigorous, but for over 20 years, there was a Temporary Nominating Group for the Global Environment to provide a back door for the election of candidates who were environmental activists, bypassing the conventional vetting procedure. Members, so elected, proceeded to join existing sections where they hold a veto power over the election of any scientists unsympathetic to their position. Moreover, they are almost immediately appointed to positions on the executive council, and other influential bodies within the Academy. One of the members elected via the Temporary Nominating Group, Ralph Cicerone, is now president of the National Academy. Prior to that, he was on the nominating committee for the presidency. It should be added that there is generally only a single candidate for president. Others elected to the NAS via this route include Paul Ehrlich, James Hansen, Steven Schneider, John Holdren and Susan Solomon.

It is, of course, possible to corrupt science without specifically corrupting institutions. For example, the environmental movement often cloaks its propaganda in scientific garb without the aid of any existing scientific body. One technique is simply to give a name to an environmental advocacy group that will suggest to the public, that the group is a scientific rather than an environmental group. Two obvious examples are the Union of Concerned Scientists and the Woods Hole Research Center [9,10]. The former conducted an intensive advertising campaign about ten years ago in which they urged people to look to them for authoritative information on global warming.

This campaign did not get very far – if only because the Union of Concerned Scientists had little or no scientific expertise in climate. A possibly more effective attempt along these lines occurred in the wake of Michael Crichton’s best selling adventure, State of Fear, which pointed out the questionable nature of the global warming issue, as well as the dangers to society arising from the exploitation of this issue. Environmental Media Services (a project of Fenton Communications, a large public relations firm serving left wing and environmental causes; they are responsible for the alar scare as well as Cindy Sheehan’s anti-war campaign.) created a website, realclimate.org, as an ‘authoritative’ source for the ‘truth’ about climate. This time, real scientists who were also environmental activists, were recruited to organize this web site and ‘discredit’ any science or scientist that questioned catastrophic anthropogenic global warming.

The web site serves primarily as a support group for believers in catastrophe, constantly reassuring them that there is no reason to reduce their worrying. Of course, even the above represent potentially unnecessary complexity compared to the longstanding technique of simply publicly claiming that all scientists agree with whatever catastrophe is being promoted. Newsweek already made such a claim in 1988. Such a claim serves at least two purposes. First, the bulk of the educated public is unable to follow scientific arguments; ‘knowing’ that all scientists agree relieves them of any need to do so. Second, such a claim serves as a warning to scientists that the topic at issue is a bit of a minefield that they would do well to avoid.

The myth of scientific consensus is also perpetuated on Wikipedia, where climate articles are vetted by William Connolley, who regularly runs for office in England as a Green Party candidate. No deviation from the politically correct line is permitted.

Perhaps the most impressive exploitation of climate science for political purposes has been the creation of the Intergovernmental Panel on Climate Change (IPCC) by two UN agencies, UNEP (United Nations Environmental Program) and WMO (World Meteorological Organization), and the agreement of all major countries at the 1992 Rio Conference to accept the IPCC as authoritative. Formally, the IPCC summarizes the peer reviewed literature on climate every five years. On the face of it, this is an innocent and straightforward task. One might reasonably wonder why it takes hundreds of scientists five years of constant travelling throughout the world in order to perform this task. The charge to the IPCC is not simply to summarize, but rather to provide the science with which to support the negotiating process whose aim is to control greenhouse gas levels. This is a political rather than a scientific charge. That said, the participating scientists have some leeway in which to reasonably describe matters, since the primary document that the public associates with the IPCC is not the extensive report prepared by the scientists, but rather the Summary for Policymakers which is written by an assemblage of representatives from governments and NGOs, with only a small scientific representation [11, 12].

3. Science in the service of politics

Given the above, it would not be surprising if working scientists would make special efforts to support the global warming hypothesis. There is ample evidence that this is happening on a large scale. A few examples will illustrate this situation. Data that challenges the hypothesis are simply changed. In some instances, data that was thought to support the hypothesis is found not to, and is then changed. The changes are sometimes quite blatant, but more often are somewhat more subtle. The crucial point is that geophysical data is almost always at least somewhat uncertain, and methodological errors are constantly being discovered. Bias can be introduced by simply considering only those errors that change answers in the desired direction. The desired direction in the case of climate is to bring the data into agreement with models, even though the models have displayed minimal skill in explaining or predicting climate. Model projections, it should be recalled, are the basis for our greenhouse concerns. That corrections to climate data should be called for, is not at all surprising, but that such corrections should always be in the ‘needed’ direction is exceedingly unlikely. Although the situation suggests overt dishonesty, it is entirely possible, in today’s scientific environment, that many scientists feel that it is the role of science to vindicate the greenhouse paradigm for climate change as well as the credibility of models. Comparisons of models with data are, for example, referred to as model validation studies rather than model tests.

The first two examples involve paleoclimate simulations and reconstructions. Here, the purpose has been to show that both the models and the greenhouse paradigm can explain past climate regimes, thus lending confidence to the use of both to anticipate future changes. In both cases (the Eocene about 50 million years ago, and the Last Glacial Maximum about 18 thousand years ago), the original data were in conflict with the greenhouse paradigm as implemented in current models, and in both cases, lengthy efforts were made to bring the data into agreement with the models.

In the first example, the original data analysis for the Eocene (Shackleton and Boersma, 1981) showed the polar regions to have been so much warmer than the present that a type of alligator existed on Spitzbergen, as did flora and fauna in Minnesota that could not have survived frosts. At the same time, however, equatorial temperatures were found to be about 4K colder than at present. The first attempt to simulate the Eocene (Barron, 1987) assumed that the warming would be due to high levels of CO2, and using a climate GCM (General Circulation Model), Barron obtained relatively uniform warming at all latitudes, with the meridional gradients remaining much as they are today. This behavior continues to be the case with current GCMs (Huber, 2008). As a result, paleoclimatologists have devoted much effort to ‘correcting’ their data, but, until very recently, they were unable to bring temperatures at the equator higher than today’s (Schrag, 1999, Pearson et al, 2000). However, the latest paper (Huber, 2008) suggests that the equatorial data no longer constrains equatorial temperatures at all, and any values may have existed. All of this is quite remarkable since there is now evidence that current meridional distributions of temperature depend critically on the presence of ice, and that the model behavior results from improper tuning wherein present distributions remain even when ice is absent.

The second example begins with the results of a major attempt to observationally reconstruct the global climate of the last glacial maximum (CLIMAP, 1976). Here it was found that although extratropical temperatures were much colder, equatorial temperatures were little different from today’s. There were immediate attempts to simulate this climate with GCMs and reduced levels of CO2. Once again the result was lower temperatures at all latitudes (Bush and Philander, 1998a,b), and once again, numerous efforts were made to ‘correct’ the data. After much argument, the current position appears to be that tropical temperatures may have been a couple of degrees cooler than today’s. However, papers appeared claiming far lower temperatures (Crowley, 2000). We will deal further with this issue in the next section where we describe papers that show that the climate associated with ice ages is well described by the Milankovich Hypothesis that does not call for any role for CO2.

Both of the above examples probably involved legitimate corrections, but only corrections that sought to bring observations into agreement with models were initially considered, thus avoiding the creative conflict between theory and data that has characterized the past successes of science. To be sure, however, the case of the Last Glacial Maximum shows that climate science still retains a capacity for self-correction.

The next example has achieved a much higher degree of notoriety than the previous two. In the first IPCC assessment (IPCC, 1990), the traditional picture of the climate of the past 1100 years was presented. In this picture, there was a medieval warm period that was somewhat warmer than the present, as well as the little ice age that was cooler. The presence of a period warmer than the present in the absence of any anthropogenic greenhouse gases was deemed an embarrassment for those holding that present warming could only be accounted for by the activities of man. Not surprisingly, efforts were made to get rid of the medieval warm period (According to Deming, 2005, Jonathan Overpeck, in an email, remarked that one had to get rid of the medieval warm period. Overpeck is one of the organizers in Appendix 1.).

The most infamous effort was that due to Mann et al (1998, 1999 [13]) which used primarily a few handfuls of tree ring records to obtain a reconstruction of Northern Hemisphere temperature going back eventually a thousand years that no longer showed a medieval warm period. Indeed, it showed a slight cooling for almost a thousand years culminating in a sharp warming beginning in the nineteenth century. The curve came to be known as the hockey stick, and featured prominently in the next IPCC report, where it was then suggested that the present warming was unprecedented in the past 1000 years. The study immediately encountered severe questions concerning both the proxy data and its statistical analysis (interestingly, the most penetrating critiques came from outside the field: McIntyre and McKitrick, 2003, 2005a,b). This led to two independent assessments of the hockey stick (Wegman,2006, North, 2006), both of which found the statistics inadequate for the claims. The story is given in detail in Holland (2007).

Since the existence of a medieval warm period is amply documented in historical accounts for the North Atlantic region (Soon et al, 2003), Mann et al countered that the warming had to be regional but not characteristic of the whole northern hemisphere. Given that an underlying assumption of their analysis was that the geographic pattern of warming had to have remained constant, this would have invalidated the analysis ab initio without reference to the specifics of the statistics. Indeed, the 4th IPCC (IPCC, 2007) assessment no longer featured the hockey stick, but the claim that current warming is unprecedented remains, and Mann et al’s reconstruction is still shown in Chapter 6 of the 4th IPCC assessment, buried among other reconstructions. Here too, we will return to this matter briefly in the next section.

The fourth example is perhaps the strangest. For many years, the global mean temperature record showed cooling from about 1940 until the early 70′s. This, in fact, led to the concern for global cooling during the 1970′s. The IPCC regularly, through the 4th assessment, boasted of the ability of models to simulate this cooling (while failing to emphasize that each model required a different specification of completely undetermined aerosol cooling in order to achieve this simulation (Kiehl, 2007)). Improvements in our understanding of aerosols are increasingly making such arbitrary tuning somewhat embarrassing, and, no longer surprisingly, the data has been ‘corrected’ to get rid of the mid 20th century cooling (Thompson et al, 2008). This may, in fact, be a legitimate correction (http://www.climateaudit.org/?p=3114). The embarrassment may lie in the continuous claims of modelers to have simulated the allegedly incorrect data.

The fifth example deals with the fingerprint of warming. It has long been noted that greenhouse warming is primarily centered in the upper troposphere (Lindzen, 1999) and, indeed, models show that the maximum rate of warming is found in the upper tropical troposphere (Lee, et al, 2007). Lindzen (2007) noted that temperature data from both satellites and balloons failed to show such a maximum. This, in turn, permitted one to bound the greenhouse contribution to surface warming, and led to an estimate of climate sensitivity that was appreciably less than found in current models. Once the implications of the observations were clearly identified, it was only a matter of time before the data were ‘corrected.’ The first attempt came quickly (Vinnikov et al, 2006) wherein the satellite data was reworked to show large warming in the upper troposphere, but the methodology was too blatant for the paper to be commonly cited [14]. There followed an attempt wherein the temperature data was rejected, and where temperature trends were inferred from wind data (Allen and Sherwood, 2008).

Over sufficiently long periods, there is a balance between vertical wind shear and meridional temperature gradients (the thermal wind balance), and, with various assumptions concerning boundary conditions, one can, indeed, infer temperature trends, but the process involves a more complex, indirect, and uncertain procedure than is involved in directly measuring temperature. Moreover, as Pielke et al (2008) have noted, the results display a variety of inconsistencies. They are nonetheless held to resolve the discrepancy with models. More recently, Solomon et al (2009) have claimed further “corrections” to the data.
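
For reference, the balance being invoked is the standard thermal wind relation; to a good approximation, and in height coordinates, the zonal wind u obeys

$$ f\,\frac{\partial u}{\partial z} \approx -\frac{g}{T}\,\frac{\partial T}{\partial y}, $$

where f is the Coriolis parameter, g is gravity, T is temperature and y is the meridional coordinate. A trend in vertical wind shear can therefore be inverted, given assumptions about boundary conditions, for a trend in the meridional temperature gradient, which is the indirect route to temperature trends described above.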

The sixth example takes us into astrophysics. Since the 1970′s, considerable attention has been given to something known as the Early Faint Sun Paradox. This paradox was first publicized by Sagan and Mullen (1972). They noted that the standard model for the sun robustly required that the sun brighten with time so that 2-3 billion years ago, it was about 30% dimmer than it is today (recall that a doubling of CO2 corresponds to only a 2% change in the radiative budget). One would have expected that the earth would have been frozen over, but the geological evidence suggested that the ocean was unfrozen. Attempts were made to account for this by an enhanced greenhouse effect. Sagan and Mullen (1972) suggested an ammonia rich atmosphere might work. Others suggested an atmosphere with as much as several bars of CO2 (recall that currently CO2 is about 380 parts per million of a 1 bar atmosphere).
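
The expectation of a frozen early Earth follows from elementary radiative balance. With solar constant S, planetary albedo α and Stefan-Boltzmann constant σ, the effective emission temperature is

$$ T_e = \left( \frac{S\,(1-\alpha)}{4\sigma} \right)^{1/4}, $$

so, holding albedo and the greenhouse effect fixed as a simple illustration, a 30% dimmer sun lowers T_e by a factor of 0.7^(1/4) ≈ 0.92, roughly 20 K relative to a present-day value near 255 K.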

Finally, Kasting and colleagues tried to resolve the paradox with large amounts of methane. For a variety of reasons, all these efforts were deemed inadequate [15] (Haqqmisra et al, 2008). There followed a remarkable attempt to get rid of the standard model of the sun (Sackman and Boothroyd, 2003). This is not exactly the same as altering the data, but the spirit is the same. The paper claimed to have gotten rid of the paradox. However, in fact, the altered model still calls for substantial brightening, and, moreover, does not seem to have gotten much acceptance among solar modelers.

My last specific example involves the social sciences. Given that it has been maintained since at least 1988 that all scientists agree about alarming global warming, it is embarrassing to have scientists objecting to the alarm. To ‘settle’ the matter, a certain Naomi Oreskes published a paper in Science (Oreskes, 2004) purporting to have surveyed the literature and not to have found a single paper questioning the alarm (Al Gore offers this study as proof of his own correctness in “Inconvenient Truth.”). Both Benny Peiser (a British sociologist) and Dennis Bray (an historian of science) noted obvious methodological errors, but Science refused to publish these rebuttals with no regard for the technical merits of the criticisms presented [16]. Moreover, Oreskes was a featured speaker at the celebration of Spencer Weart’s thirty years as head of the American Institute of Physics’ Center for History of Physics. Weart, himself, had written a history of the global warming issue (Weart, 2003) where he repeated, without checking, the slander taken from a screed by Ross Gelbspan (The Heat is On) in which I was accused of being a tool of the fossil fuel industry. Weart also writes with glowing approval of Gore’s Inconvenient Truth. As far as Oreskes’ claim goes, it is clearly absurd [17]. A more carefully done study revealed a very different picture (Schulte, 2007).

The above examples do not include the most convenient means whereby nominal scientists can support global warming alarm: namely, the matter of impacts. Here, scientists who generally have no knowledge of climate physics at all, are supported to assume the worst projections of global warming and imaginatively suggest the implications of such warming for whatever field they happen to be working in. This has led to the bizarre claims that global warming will contribute to kidney stones, obesity, cockroaches, noxious weeds, sexual imbalance in fish, etc. The scientists who participate in such exercises quite naturally are supportive of the catastrophic global warming hypothesis despite their ignorance of the underlying science [18].

4. Pressures to inhibit inquiry and problem solving

It is often argued that in science the truth must eventually emerge. This may well be true, but, so far, attempts to deal with the science of climate change objectively have been largely forced to conceal such truths as may call into question global warming alarmism (even if only implicitly). The usual vehicle is peer review, and the changes imposed were often made in order to get a given paper published. Publication is, of course, essential for funding, promotion, etc. The following examples are but a few from cases that I am personally familiar with. These, almost certainly, barely scratch the surface. What is generally involved is simply the inclusion of an irrelevant comment supporting global warming accepted wisdom. When the substance of the paper is described, it is generally claimed that the added comment represents the ‘true’ intent of the paper. In addition to the following examples, Appendix 2 offers excellent examples of ‘spin control.’

As mentioned in the previous section, one of the reports assessing the Mann et al Hockey Stick was prepared by a committee of the US National Research Council (a branch of the National Academy) chaired by Gerald North (North, 2006). The report concluded that the analysis used was totally unreliable for periods more than about 400 years ago. In point of fact, the only basis for the 400 year choice was that this brought one to the midst of the Little Ice Age, and there is essentially nothing surprising about a conclusion that we are now warmer. Still, without any basis at all, the report also concluded that despite the inadequacy of the Mann et al analysis, the conclusion might still be correct. It was this baseless conjecture that received most of the publicity surrounding the report.

In a recent paper, Roe (2006) showed that the orbital variations in high latitude summer insolation correlate excellently with changes in glaciation – once one relates the insolation properly to the rate of change of glaciation rather than to the glaciation itself. This provided excellent support for the Milankovich hypothesis. Nothing in the brief paper suggested the need for any other mechanism. Nonetheless, Roe apparently felt compelled to include an irrelevant caveat stating that the paper had no intention of ruling out a role for CO2.

Choi and Ho (2006, 2008) published interesting papers on the optical properties of high tropical cirrus that largely confirmed earlier results by Lindzen, Chou and Hou (2001) on an important negative feedback (the iris effect – something that we will describe later in this section) that would greatly reduce the sensitivity of climate to increasing greenhouse gases. A proper comparison required that the results be normalized by a measure of total convective activity, and, indeed, such a comparison was made in the original version of Choi and Ho’s paper. However, reviewers insisted that the normalization be removed from the final version of the paper which left the relationship to the earlier paper unclear.

Horvath and Soden (2008) found observational confirmation of many aspects of the iris effect, but accompanied these results with a repetition of criticisms of the iris effect that were irrelevant and even contradictory to their own paper. The point, apparently, was to suggest that despite their findings, there might be other reasons to discard the iris effect. Later in this section, I will return to these criticisms. However, the situation is far from unique. I have received preprints of papers wherein support for the iris was found, but where this was omitted in the published version of the papers.

In another example, I had originally submitted a paper mentioned in the previous section (Lindzen, 2007) to American Scientist, the periodical of the scientific honorary society in the US, Sigma Xi, at the recommendation of a former officer of that society. There followed a year of discussions, with an editor, David Schneider, insisting that I find a coauthor who would illustrate why my paper was wrong. He argued that publishing something that contradicted the IPCC was equivalent to publishing a paper that claimed that ‘Einstein’s general theory of relativity is bunk.’ I suggested that it would be more appropriate for American Scientist to solicit a separate paper taking a view opposed to mine. This was unacceptable to Schneider, so I ended up publishing the paper elsewhere. Needless to add, Schneider has no background in climate physics. At the same time, a committee consisting almost entirely of environmental activists, led by Peter Raven, the ubiquitous John Holdren, Richard Moss, Michael MacCracken, and Rosina Bierbaum, was issuing a joint Sigma Xi – United Nations Foundation (the latter headed by former Senator and former Undersecretary of State Tim Wirth [19] and founded by Ted Turner) report endorsing global warming alarm, to a degree going far beyond the latest IPCC report. I should add that simple disagreement with conclusions of the IPCC has become a common basis for rejecting papers for publication in professional journals – as long as the disagreement suggests reduced alarm. An example will be presented later in this section.

Despite all the posturing about global warming, more and more people are becoming aware of the fact that global mean temperatures have not increased statistically significantly since 1995. One need only look at the temperature records posted on the web by the Hadley Centre. The way this is acknowledged in the literature forms a good example of the spin that is currently required to maintain global warming alarm. Recall that the major claim of the IPCC 4th Assessment was that there was a 90% certainty that most of the warming of the preceding 50 years was due to man (whatever that might mean). This required the assumption that what is known as natural internal variability (ie, the variability that exists without any external forcing and represents the fact that the climate system is never in equilibrium) is adequately handled by the existing climate models.

The absence of any net global warming over the last dozen years or so, suggests that this assumption may be wrong. Smith et al (2007) (Smith is with the Hadley Centre in the UK) acknowledged this by pointing out that initial conditions had to reflect the disequilibrium at some starting date, and when these conditions were ‘correctly’ chosen, it was possible to better replicate the period without warming. This acknowledgment of error was accompanied by the totally unjustified assertion that global warming would resume with a vengeance in 2009 [20]. As 2009 approaches and the vengeful warming seems less likely to occur, a new paper came out (this time from the Max Planck Institute: Keenlyside et al, 2008) moving the date for anticipated resumption of warming to 2015. It is indeed a remarkable step backwards for science to consider models that have failed to predict the observed behavior of the climate to nonetheless have the same validity as the data [21].

Tim Palmer, a prominent atmospheric scientist at the European Centre for Medium Range Weather Forecasting, is quoted by Fred Pearce (Pearce, 2008) in the New Scientist as follows: “Politicians seem to think that the science is a done deal,” says Tim Palmer. “I don’t want to undermine the IPCC, but the forecasts, especially for regional climate change, are immensely uncertain.” Pearce, however, continues “Palmer .. does not doubt that the Intergovernmental Panel on Climate Change (IPCC) has done a good job alerting the world to the problem of global climate change. But he and his fellow climate scientists are acutely aware that the IPCC’s predictions of how the global change will affect local climates are little more than guesswork. They fear that if the IPCC’s predictions turn out to be wrong, it will provoke a crisis in confidence that undermines the whole climate change debate. On top of this, some climate scientists believe that even the IPCC’s global forecasts leave much to be desired. …” Normally, one would think that undermining the credibility of something that is wrong is appropriate.

Even in the present unhealthy state of science, papers that are overtly contradictory to the catastrophic warming scenario do get published (though not without generally being substantially watered down during the review process). They are then often subject to the remarkable process of ‘discreditation.’ This process consists in immediately soliciting attack papers that are published quickly as independent articles rather than comments. The importance of this procedure is as follows. Normally such criticisms are published as comments, and the original authors are able to respond immediately following the comment. Both the comment and reply are published together. By publishing the criticism as an article, the reply of the original authors is published only as a correspondence, which is usually delayed by several months, and the critics are then permitted an immediate response to that reply. As a rule, the reply of the original authors is ignored in subsequent references.

In 2001, I published a paper (Lindzen, Chou and Hou) that used geostationary satellite data to suggest the existence of a strong negative feedback that we referred to as the Iris Effect. The gist of the feedback is that upper level stratiform clouds in the tropics arise by detrainment from cumulonimbus towers, that the radiative impact of the stratiform clouds is primarily in the infrared where they serve as powerful greenhouse components, and that the extent of the detrainment decreases markedly with increased surface temperature. The negative feedback resulted from the fact that the greenhouse warming due to the stratiform clouds diminished as the surface temperature increased, and increased as the surface temperature decreased – thus resisting the changes in surface temperature. The impact of the observed effect was sufficient to greatly reduce the model sensitivities to increasing CO2, and it was, moreover, shown that models failed to display the observed cloud behavior. The paper received an unusually intense review from four reviewers.
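
As a minimal sketch of how such a feedback alters sensitivity (the notation here is generic, not taken from the paper): if a doubling of CO2 would by itself produce a warming ΔT0, a net feedback factor f changes the response to

$$ \Delta T = \frac{\Delta T_0}{1 - f}, $$

so a substantially negative f, as inferred for the iris effect, pushes ΔT well below ΔT0, whereas a net positive f amplifies it.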

Once the paper appeared, the peer review editor of the Bulletin of the American Meteorological Society, Irwin Abrams, was replaced by a new editor, Jeffrey Rosenfeld (holding the newly created position of Editor in Chief), and the new editor almost immediately accepted a paper criticizing our paper (Hartmann and Michelsen, 2002), publishing it as a separate paper rather than a response to our paper (which would have been the usual and appropriate procedure). In the usual procedure, the original authors are permitted to respond in the same issue. In the present case, the response was delayed by several months. Moreover, the new editor chose to label the criticism as follows: “Careful analysis of data reveals no shrinkage of tropical cloud anvil area with increasing SST.”

In fact, this criticism was easily dismissed. The claim of Hartmann and Michelsen was that the effect we observed was due to the intrusion of midlatitude non-convective clouds into the tropics. If this were true, then the effect should have diminished as one restricted observations more closely to the equator, but as we showed (Lindzen, Chou and Hou, 2002), exactly the opposite was found. There were also separately published papers (again violating normal protocols allowing for immediate response) by Lin et al, 2002 and Fu, Baker and Hartmann, 2002, that criticized our paper by claiming that since the instruments on the geostationary satellite could not see the thin stratiform clouds that formed the tails of the clouds we could see, we were not entitled to assume that the tails existed. Without the tails, the radiative impact of the clouds would be primarily in the visible where the behavior we observed would lead to a positive feedback; with the tails the effect is a negative feedback. The tails had long been observed, and the notion that they abruptly disappeared when not observed by an insufficiently sensitive sensor was absurd on the face of it, and the use of better instruments by Choi and Ho (2006, 2008) confirmed the robustness of the tails and the strong dominance of the infrared impact. However, as we have already seen, virtually any mention of the iris effect tends to be accompanied by a reference to the criticisms, a claim that the theory is ‘discredited,’ and absolutely no mention of the responses. This is even required of papers that actually support the iris effect.

Vincent Courtillot et al (2007) encountered a similar problem. (Courtillot, it should be noted, is the director of the Institute for the Study of the Globe at the University of Paris.) They found that time series for magnetic field variations appeared to correlate well with temperature measurements – suggesting a possible non-anthropogenic source of forcing. This was immediately criticized by Bard and Delaygue (2008), and Courtillot et al were given the conventional right to reply which they did in a reasonably convincing manner. What followed, however, was highly unusual. Raymond Pierrehumbert (a professor of meteorology at the University of Chicago and a fanatical environmentalist) posted a blog supporting Bard and Delaygue, accusing Courtillot et al of fraud, and worse. Alan Robock (a coauthor of Vinnikov et al mentioned in the preceding section) perpetuated the slander in a letter circulated to all officers of the American Geophysical Union. The matter was then taken up (in December of 2007) by major French newspapers (LeMonde, Liberation, and Le Figaro) that treated Pierrehumbert’s defamation as fact. As in the previous case, all references to the work of Courtillot et al refer to it as ‘discredited’ and no mention is made of their response. Moreover, a major argument against the position of Courtillot et al is that it contradicted the claim of the IPCC.

In 2005, I was invited by Ernesto Zedillo to give a paper at a symposium he was organizing at his Center for Sustainability Studies at Yale. The stated topic of the symposium was Global Warming Policy After 2012, and the proceedings were to appear in a book to be entitled Global Warming: Looking Beyond Kyoto. Only two papers dealing with global warming science were presented: mine and one by Stefan Rahmstorf of the Potsdam Institute. The remaining papers all essentially assumed an alarming scenario and proceeded to discuss economics, impacts, and policy. Rahmstorf and I took opposing positions, but there was no exchange at the meeting, and Rahmstorf had to run off to another meeting. As agreed, I submitted the manuscript of my talk, but publication was interminably delayed, perhaps because of the presence of my paper. In any event, the Brookings Institution (a centrist Democratic Party think tank) agreed to publish the volume. When the volume finally appeared (Zedillo, 2008), I was somewhat shocked to see that Rahmstorf’s paper had been modified from what he presented, and had been turned into an attack not only on my paper but on me personally [22]. I had received no warning of this; nor was I given any opportunity to reply. Inquiries to the editor and the publisher went unanswered. Moreover, the Rahmstorf paper was moved so that it immediately followed my paper. The reader is welcome to get a copy of the exchange, including my response, on my web site (Lindzen-Rahmstorf Exchange, 2008), and judge the exchange for himself.

One of the more bizarre tools of global warming revisionism is the posthumous alteration of skeptical positions.

Thus, the recent deaths of two active and professionally prominent skeptics, Robert Jastrow (the founding director of NASA’s Goddard Institute for Space Studies, now headed by James Hansen) and Reid Bryson (a well-known climatologist at the University of Wisconsin), were accompanied by obituaries suggesting deathbed conversions to global warming alarm.

The death of another active and prominent skeptic, William Nierenberg (former director of the Scripps Institution of Oceanography), led to the creation of a Nierenberg Prize that is annually awarded to an environmental activist. The most recent recipient was James Hansen, whom Nierenberg detested.

Perhaps the most extraordinary example of this phenomenon involves a paper by Singer, Starr, and Revelle (1991). In this paper, it was concluded that we knew too little about climate to implement any drastic measures. Revelle, it may be recalled, was the professor that Gore credits with introducing him to the horrors of CO2 induced warming. There followed an intense effort, led by a research associate at Harvard, Justin Lancaster, in coordination with Gore staffers, to have Revelle’s name posthumously removed from the published paper. It was claimed that Singer had pressured an old and incompetent man to allow his name to be used. To be sure, everyone who knew Revelle felt that he had been alert until his death. There followed a lawsuit by Singer, in which the court found in Singer’s favor. The matter is described in detail in Singer (2003).

Occasionally, prominent individual scientists do publicly express skepticism. The means for silencing them are fairly straightforward.

Will Happer, director of research at the Department of Energy (and a professor of physics at Princeton University), was simply fired from his government position after expressing doubts about environmental issues in general. His case is described in Happer (2003).

Michael Griffin, NASA’s administrator, publicly expressed reservations concerning global warming alarm in 2007. This was followed by a barrage of ad hominem attacks from individuals including James Hansen and Michael Oppenheimer. Griffin has since stopped making any public statements on this matter.

Freeman Dyson, an acknowledged great in theoretical physics, managed to publish a piece in the New York Review of Books (Dyson, 2008) in which, in the course of reviewing books by Nordhaus and Zedillo (the latter having been referred to earlier), he expressed cautious support for the existence of substantial doubt concerning global warming. This was followed by a series of angry letters as well as condemnation, including ad hominem attacks, on the realclimate.org web site. Given that Dyson is retired, however, there seems little more that global warming enthusiasts can do. However, we may hear of a deathbed conversion in the future.

5. Dangers for science and society

This paper has attempted to show how changes in the structure of scientific activity over the past half century have led to extreme vulnerability to political manipulation. In the case of climate change, these vulnerabilities have been exploited to a remarkable extent. The dangers that the above situation poses for both science and society are too numerous to be discussed in any sort of adequate way in this paper. It should be stressed that the climate change issue, itself, constitutes a major example of the dangers intrinsic to the structural changes in science.

As concerns the specific dangers pertaining to the climate change issue, we are already seeing that the tentative policy moves associated with ‘climate mitigation’ are contributing to deforestation, food riots, potential trade wars, inflation, energy speculation and overt corruption as in the case of ENRON (one of the leading lobbyists for Kyoto prior to its collapse). There is little question that global warming has been exploited by many governments and corporations (and not just by ENRON; Lehman Brothers, for example, was also heavily promoting global warming alarm, and relying on the advice of James Hansen, etc.) for their own purposes, but it is unclear to what extent such exploitation has played an initiating role in the issue. The developing world has come to realize that the proposed measures endanger its legitimate hopes of escaping poverty, and, in the case of India, this realization has, encouragingly, led to an assessment of climate issues independent of the ‘official’ wisdom (Government of India, 2008 [23]).

For purposes of this paper, however, I simply want to note briefly the specific implications for science and its interaction with society. Although society is undoubtedly aware of the imperfections of science, it has rarely encountered a situation such as the current global warming hysteria, where institutional science has so thoroughly committed itself to policies that call for massive sacrifices in well-being worldwide. Past scientific errors did not lead the public to discard the view that science on the whole was a valuable effort. However, the extraordinarily shallow basis for the commitment to climate catastrophe, and the widespread tendency of scientists to use unscientific means to arouse the public’s concerns, are becoming increasingly evident, and the result could be a reversal of the trust that arose from the triumphs of science and technology during the World War II period.

Further, the reliance by the scientific community on fear as a basis for support may, indeed, have severely degraded the ability of science to usefully address problems that need addressing. It should also be noted that not all the lessons of the World War II period have been positive. Massive crash programs such as the Manhattan Project are not appropriate to all scientific problems. In particular, such programs are unlikely to be effective in fields where the basic science is not yet in place. Rather, they are best suited to problems where the needs are primarily in the realm of engineering.

Although the change in scientific culture has played an important role in making science more vulnerable to exploitation by politics, the resolution of specific issues may be possible without explicitly addressing the structural problems in science. In the US, where global warming has become enmeshed in partisan politics, there is a natural opposition to exploitation which is not specifically based on science itself. However, the restoration of the traditional scientific paradigm will call for more serious efforts. Such changes are unlikely to come from any fiat, nor are they likely to be implemented by the large science bureaucracies that have helped create the problem in the first place. A potentially effective approach would be to change the incentive structure of science. The current support mechanism for science is one in which the solution of a scientific problem is rewarded by ending support.

This hardly encourages the solution of problems or the search for actual answers. Nor does it encourage meaningfully testing hypotheses. The alternative calls for a measure of societal trust, patience, and commitment to elitism that hardly seems consonant with contemporary attitudes. It may, however, be possible to make a significant beginning by carefully reducing the funding for science. Many scientists would be willing to accept a lower level of funding in return for greater freedom and stability. Other scientists may find the trade-off unacceptable and drop out of the enterprise. The result, over a period of time, could be a gradual restoration of a better incentive structure.

One ought not underestimate the institutional resistance to such changes, but the alternatives are proving to be much worse. Some years ago, I described some of what I have discussed here at a meeting in Erice (Lindzen, 2005). Richard Garwin (whom some regard as the inventor of the H-bomb) rose indignantly to state that he did not want to hear such things. Quite frankly, I also don’t want to hear such things. However, I fear that ignoring such things will hardly constitute a solution, and a solution may be necessary for the sake of the scientific enterprise.

Acknowledgments. The author wishes to thank Dennis Ambler, Willie Soon, Lubos Motl and Nigel Lawson for useful comments and assistance.

Notes

1. This paper was prepared for a meeting sponsored by Euresis (Associazione per la promozione e la diffusione della cultura e del lavoro scientifico) and the Templeton Foundation on Creativity and Creative Inspiration in Mathematics, Science, and Engineering: Developing a Vision for the Future. The meeting was held in San Marino from 29-31 August 2008. Its Proceedings are expected to be published in 2009.
2. At some level, this is obvious. Theoretical physics is still dealing with the standard model though there is an active search for something better. Molecular biology is still working off of the discovery of DNA. Many of the basic laws of physics resulted from individual efforts in the 17th-19th Centuries. The profound advances in technology should not disguise the fact that the bulk of the underlying science is more than 40 years old. This is certainly the case in the atmospheric and oceanic sciences. That said, it should not be forgotten that sometimes progress slows because the problem is difficult. Sometimes, it slows because the existing results are simply correct as is the case with DNA. Structural problems are not always the only factor involved.
3. It is sometimes thought that government involvement automatically implies large bureaucracies, and lengthy regulations. This was not exactly the case in the 20 years following the second world war. Much of the support in the physical sciences came from the armed forces for which science support remained a relatively negligible portion of their budgets. For example, meteorology at MIT was supported by the Air Force. Group grants were made for five year periods and renewed on the basis of a site visit. When the National Science Foundation was created, it functioned with a small permanent staff supplemented by ‘rotators’ who served on leave from universities for a few years. Unfortunately, during the Vietnam War, the US Senate banned the military from supporting non-military research (Mansfield Amendment). This shifted support to agencies whose sole function was to support science. That said, today all agencies supporting science have large ‘supporting’ bureaucracies.
4. In fairness, such programs should be distinguished from team efforts which are sometimes appropriate and successful: classification of groups in mathematics, the human genome project, etc.
5. A personal memoir from Al Grable sent to Sherwood Idso in 1993 is interesting in this regard. Grable served as a Department of Agriculture observer to the National Research Council’s National Climate Board. Such observers are generally posted by agencies to boards that they are funding. In any event, Grable describes a motion presented at a Board meeting in 1980 by Walter Orr Roberts, the director of the National Center for Atmospheric Research, and by Joseph Smagorinsky, director of NOAA’s Geophysical Fluid Dynamics Laboratory at Princeton, to censure Sherwood Idso for criticizing climate models with high sensitivities due to water vapor feedbacks (in the models), because of their inadequate handling of cooling due to surface evaporation. A member of that board, Sylvan Wittwer, noted that it was not the role of such boards to censure specific scientific positions since the appropriate procedure would be to let science decide in the fullness of time, and the matter was dropped. In point of fact, there is evidence that models do significantly understate the increase of evaporative cooling with temperature (Held and Soden, 2006). Moreover, this memoir makes clear that the water vapor feedback was considered central to the whole global warming issue from the very beginning.
6. It should be acknowledged that Oppenheimer has quite a few papers with climate in the title – especially in the last two years. However, these are largely papers concerned with policy and advocacy, assuming significant warming. Such articles probably constitute the bulk of articles on climate. It is probably also fair to say that such articles contribute little if anything to understanding the phenomenon.
7. Certain names and organizations come up repeatedly in this paper. This is hardly an accident. In 1989, following the public debut of the issue in the US in Tim Wirth’s and Al Gore’s famous Senate hearing featuring Jim Hansen associating the warm summer of 1988 with global warming, the Climate Action Network was created. This organization of over 280 ENGOs has been at the center of the climate debates since then. The Climate Action Network is an umbrella NGO that coordinates the advocacy efforts of its members, particularly in relation to the UN negotiations. Organized around seven regional nodes in North and Latin America, Western and Eastern Europe, South and Southeast Asia, and Africa, CAN represents the majority of environmental groups advocating on climate change, and it has embodied the voice of the environmental community in the climate negotiations since it was established.
The founding of the Climate Action Network can be traced back to the early involvement of scientists from the research ENGO community. These individuals, including Michael Oppenheimer from Environmental Defense, Gordon Goodman of the Stockholm Environmental Institute (formerly the Beijer Institute), and George Woodwell of the Woods Hole Research Center, were instrumental in organizing the scientific workshops in Villach and Bellagio on ‘Developing Policy Responses to Climate Change’ in 1987 as well as the Toronto Conference on the Changing Atmosphere in June 1988. It should be noted that the current director of the Woods Hole Research Center is John Holdren. In 1989, several months after the Toronto Conference, the emerging group of climate scientists and activists from the US, Europe, and developing countries were brought together at a meeting in Germany, with funding from Environmental Defense and the German Marshall Fund. The German Marshall Fund is still funding NGO activity in Europe: http://www.gmfus.org/event/detail.cfm?id=453&parent_type=E (Pulver, 2004).
8. The reports attributed to the National Academy are not, to any major extent, the work of Academy Members. Rather, they are the product of the National Research Council, which consists of a staff of over 1000 who are paid largely by the organizations soliciting the reports. The committees that prepare the reports are mostly scientists who are not Academy Members, and who serve without pay.
9. One might reasonably add the Pew Charitable Trust to this list. Although they advertise themselves as a neutral body, they have merged with the National Environmental Trust, whose director, Philip Clapp, became deputy managing director of the combined body. Clapp (the head of the legislative practice of a large Washington law firm, and a consultant on mergers and acquisitions to investment banking firms), according to his recent obituary, was ‘an early and vocal advocate on climate change issues and a promoter of the international agreement concluded in 1997 in Kyoto, Japan. Mr. Clapp continued to attend subsequent global warming talks even after the US Congress did not ratify the Kyoto accord.’
10. John Holdren has defended the use of the phrase ‘Research Center’ since research is carried out there with sponsorship by the National Science Foundation, the National Oceanic and Atmospheric Administration, and NASA. However, it is hardly uncommon to find sponsorship of the activities of environmental NGOs by federal funding agencies.
11. Appendix 1 is the invitation to the planning session for the 5th assessment. It clearly emphasizes strengthening rather than checking the IPCC position. Appendix 2 reproduces a commentary by Stephen McIntyre on the recent Ofcom findings concerning a British TV program opposing global warming alarmism. The response of the IPCC officials makes it eminently clear that the IPCC is fundamentally a political body. If further evidence were needed, one simply has to observe the fact that the IPCC Summary for Policymakers will selectively cite results to emphasize negative consequences. Thus the summary for Working Group II observes that global warming will result in “Hundreds of millions of people exposed to increased water stress.” This, however, is based on work (Arnell, 2004) which actually shows that by the 2080s the net global population at risk declines by up to 2.1 billion people (depending on which scenario one wants to emphasize)! The IPCC further ignores the capacity to build reservoirs to alleviate those areas it projects as subject to drought (I am indebted to Indur Goklany for noting this example).
12. Appendix 3 is a recent op-ed from the Boston Globe, written by the aforementioned John Holdren. What is interesting about this piece is that what little science it invokes is overtly incorrect. Rather, it points to the success of the above process of taking over scientific institutions as evidence of the correctness of global warming alarmism. The three atmospheric scientists who are explicitly mentioned are chemists with no particular expertise in climate itself. While Holdren makes much of the importance of expertise, he fails to note that he, himself, is hardly a contributor to the science of climate. Holdren and Paul Ehrlich (of Population Bomb fame; in that work he predicted famine and food riots for the US in the 1980s) are responsible for the I=PAT formula. Holdren somewhat disingenuously claims that this is merely a mathematical identity, where I is environmental impact, P is population, A is GDP/P and T is I/GDP (the algebra is written out immediately after these notes). However, in popular usage, A has become affluence and T has become technology (viz Schneider, 1997; see also Wikipedia).
13. The 1998 paper actually only goes back to 1400 CE, and acknowledges that there is no useful resolution of spatial patterns of variability going further back. It is the 1999 paper that then goes back 1000 years.
14. Of course, Vinnikov et al did mention it. When I gave a lecture at Rutgers University in October 2007, Alan Robock, a professor at Rutgers and a coauthor of Vinnikov et al declared that the ‘latest data’ resolved the discrepancy wherein the model fingerprint could not be found in the data.
15. Haqq-Misra, a graduate student at the Pennsylvania State University, is apparently still seeking greenhouse solutions to the paradox.
16. The refusal was not altogether surprising. The editor of Science, at the time, was Donald Kennedy, a biologist (and colleague of Paul Ehrlich and Stephen Schneider, both also members of Stanford’s biology department), who had served as president of Stanford University. His term, as president, ended with his involvement in fiscal irregularities such as charging to research overhead such expenses as the maintenance of the presidential yacht and the provision of flowers for his daughter’s wedding – offering peculiar evidence for the importance of grant overhead to administrators. Kennedy had editorially declared that the debate concerning global warming was over and that skeptical articles would not be considered. More recently, he has published a relatively pure example of Orwellian double-speak (Kennedy, 2008) wherein he called for better media coverage of global warming, where by ‘better’ he meant more carefully ignoring any questions about global warming alarm. As one might expect, Kennedy made extensive use of Oreskes’ paper. He also made the remarkably dishonest claim that the IPCC Summary for Policymakers was much more conservative than the scientific text.
17. Oreskes, apart from overt errors, merely considered support to consist in agreement that there had been some warming, and that anthropogenic CO2 contributed part of the warming. Such innocent conclusions have essentially nothing to do with catastrophic projections. Moreover, most of the papers she looked at didn’t even address these issues; they simply didn’t question these conclusions.
18. Perhaps unsurprisingly, The Potsdam Institute, home of Greenpeace’s Bill Hare, now has a Potsdam Institute for Climate Impact Research.
19. Tim Wirth chaired the hearing where Jim Hansen rolled out the alleged global warming relation to the hot summer of 1988 (viz Section 2). He is noted for having arranged for the hearing room to have open windows to let in the heat so that Hansen would be seen to be sweating for the television cameras. Wirth is also frequently quoted as having said “We’ve got to ride the global warming issue. Even if the theory of global warming is wrong, we will be doing the right thing — in terms of economic policy and environmental policy.”
20. When I referred to the Smith et al paper at a hearing of the European Parliament, Professor Schellnhuber of the Potsdam Institute (which I mentioned in the previous section with respect to its connection to Greenpeace) loudly protested that I was being ‘dishonest’ by not emphasizing what he referred to as the main point in Smith et al: namely that global warming would return with a vengeance.
21. The matter of ‘spin control’ warrants a paper by itself. In connection with the absence of warming over the past 13 years, the common response is that 7 of the last 10 warmest years in the record occurred during the past decade. This is actually to be expected, given that we are in a warm period, and the temperature is always fluctuating. However, it has nothing to do with trends.
22. The strange identification of the CO2 caused global warming paradigm with general relativity theory, mentioned earlier in this section, is repeated by Rahmstorf. This repetition of odd claims may be a consequence of the networking described in footnote 7.
23. A curious aspect of the profoundly unalarming Indian report is the prominent involvement in the preparation of the report by Dr. Rajendra Pachauri (an economist and long term UN bureaucrat) who heads the IPCC. Dr. Pachauri has recently been urging westerners to reduce meat consumption in order to save the earth from destruction by global warming.
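
For readers who want to see why, with the definitions given in note 12, I=PAT reduces to a tautology, here is the algebra written out (a purely illustrative restatement of those definitions, not an addition to Holdren's formulation):

$$ I \;=\; P \cdot A \cdot T \;=\; P \cdot \frac{\mathrm{GDP}}{P} \cdot \frac{I}{\mathrm{GDP}} \;=\; I. $$

Defined this way, the equation carries no empirical content of its own; whatever predictive force it appears to have comes only from the looser popular readings of A as ‘affluence’ and T as ‘technology.’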

References

Allen, R.J. and S.C. Sherwood (2008) Warming maximum in the tropical upper troposphere deduced from thermal winds, Nature 25 May 2008; doi:10.1038/ngeo208 1-5

Arnell, N.W. (2004) Climate change and global water resources: SRES emissions and socio-economic scenarios, Global Environmental Change, 14, 31-52.

Bard, E. and G. Delaygue (2008) Comment on “Are there connections between the Earth’s magnetic field and climate?” Earth and Planetary Science Letters, 265, 302–307

Barron, E.J. (1987) Eocene Equator-to-Pole Surface Ocean Temperatures: A Significant Climate Problem? Paleoceanography, 2, 729–739

Bush, A.B.G. and S.G.H. Philander (1998a) The late Cretaceous: simulation with a coupled atmosphere-ocean general circulation model. Paleoceanography 12 495-516

Bush, A.B.G. and S.G.H. Philander (1998b) The role of ocean-atmosphere interactions in tropical cooling during the last glacial maximum. Science 279 1341-1344

Bush, V. (1945) Science: the Endless Frontier. http://www.nsf.gov/about/history/vbush1945.htm

Choi, Y.-S., and C.-H. Ho (2006), Radiative effect of cirrus with different optical properties over the tropics in MODIS and CERES observations, Geophysical Research Letters, 33, L21811, doi:10.1029/2006GL027403

Choi, Y.-S., and C.-H. Ho (2008), Validation of the cloud property retrievals from the MTSAT-1R imagery using MODIS observations, International Journal of Remote Sensing, accepted.

Chou, M.-D., R.S. Lindzen, and A.Y. Hou (2002b) Comments on “The Iris hypothesis: A negative or positive cloud feedback?” J. Climate, 15, 2713-2715.

CLIMAP Project (1976) The surface of the ice-age Earth. Science 191:1131-1136

Courtillot, V., Y. Gallet, J.-L. Le Mouël, F. Fluteau, and A. Genevey (2007) Are there connections between the Earth’s magnetic field and climate? Earth and Planetary Science Letters 253 328–339

Crichton, M. (2004) State of Fear, Harper Collins, 624 pp.

Crowley, T. J. (2000) CLIMAP SSTs re-revisited. Climate Dynamics 16:241-255

Deming, D. (2005) Global warming, the politicization of science, and Michael Crichton’s State of Fear, Journal of Scientific Exploration, 19, 247-256.

Dyson, F. (2008) The Question of Global Warming, New York Review of Books, 55, No. 10, June 12, 2008.

Fu, Q., Baker, M., and Hartmann, D.L. (2002) Tropical cirrus and water vapor: an effective Earth infrared iris feedback? Atmos. Chem. Phys., 2, 31–37

Gelbspan, R. (1998) The Heat is On, Basic Books, 288 pp.

Government of India (2008) National Action Plan on Climate Change, 56pp.

Happer, W. (2003) Harmful Politicization of Science in Politicizing Science: The Alchemy of Policymaking edited by Michael Gough, Hoover Institution 313 pp (pp 27-48).

Haqq-Misra, J.D., S.D. Domagal-Goldman, P. J. Kasting, and J.F. Kasting (2008) A Revised, hazy methane greenhouse for the Archean Earth. Astrobiology in press

Hartmann, D. L., and M. L. Michelsen (2002) No evidence for iris. Bull. Amer. Meteor. Soc., 83, 249–254.

Held, I.M. and B.J. Soden (2006) Robust responses of the hydrological cycle to global warming, Journal of Climate, 19, 5686-5699.

Holland, D. (2007) Bias And Concealment in the IPCC Process: The “Hockey-Stick” Affair and its Implications, Energy & Environment, 18, 951-983.

Horvath, A., and B. Soden (2008) Lagrangian Diagnostics of Tropical Deep Convection and Its Effect upon Upper-Tropospheric Humidity, Journal of Climate, 21(5), 1013–1028

Huber, M. (2008) A Hotter Greenhouse? Science 321 353-354

IPCC, 1990: Climate Change: The IPCC Scientific Assessment [Houghton, J.T. et al. (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 362 pp.

IPCC, 1996: Climate Change 1995: The Science of Climate Change. Contribution of Working Group I to the Second Assessment Report of the Intergovernmental Panel on Climate Change [Houghton et al. (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 572 pp.

IPCC, 2001: Climate Change 2001: The Scientific Basis. Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change [Houghton, J.T., et al. (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 881 pp.

IPCC, 2007: Climate Change 2007: The Physical Science Basis. Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change [Solomon et al. (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. (Available at http://www.ipcc.ch/)

Keenlyside, N. S., M. Latif, J. Jungclaus, L. Kornblueh and E. Roeckner (2008) Advancing decadal-scale climate prediction in the North Atlantic sector. Nature 453 84-88

Kennedy, D., 2008: Science, Policy, and the Media, Bulletin of the American Academy of Arts & Sciences, 61, 18-22.

Kiehl, J.T. (2007) Twentieth century climate model response and climate sensitivity. Geophys. Res. Lttrs., 34, L22710, doi:10.1029/2007GL031383

Lee, M.I., M.J. Suarez, I.S. Kang, I. M. Held, and D. Kim (2008) A Moist Benchmark Calculation for the Atmospheric General Circulation Models, J.Clim., in press.

Lin, B., B. Wielicki, L. Chambers, Y. Hu, and K.-M. Xu, (2002) The iris hypothesis: A negative or positive cloud feedback? J. Climate, 15, 3–7.

Lindzen, R.S. (1999) The Greenhouse Effect and its problems. Chapter 8 in Climate Policy After Kyoto (T.R. Gerholm, editor), Multi-Science Publishing Co., Brentwood, UK, 170pp.

Lindzen, R.S. (2005) Understanding common climate claims. in Proceedings of the 34th International Seminar on Nuclear War and Planetary Emergencies, R. Raigaini, editor, World Scientific Publishing Co., Singapore, 472pp. (pp. 189-210)

Lindzen, R.S. (2007) Taking greenhouse warming seriously. Energy & Environment, 18, 937-950.

Lindzen, R.S., M.-D. Chou, and A.Y. Hou (2001) Does the Earth have an adaptive infrared iris? Bull. Amer. Met. Soc., 82, 417-432.

Lindzen, R.S., M.-D. Chou, and A.Y. Hou (2002) Comments on “No evidence for iris.” Bull. Amer. Met. Soc., 83, 1345–1348

Lindzen-Rahmstorf Exchange (2008) http://www-eaps.mit.edu/faculty/lindzen/L_R-Exchange.pdf

Mann, M.E., R.S. Bradley, and M.K. Hughes (1998) Global-scale temperature patterns and climate forcing over the past six centuries, Nature, 392, 779-787.

Mann, M.E., Bradley, R.S. and Hughes, M.K. (1999) Northern Hemisphere Temperatures During the Past Millennium: Inferences, Uncertainties, and Limitations, Geophysical Research Letters, 26, 759-762.

McIntyre, S. and R. McKitrick (2003) Corrections to the Mann et al. (1998) proxy data base and Northern hemispheric average temperature series, Energy and Environment, 14, 751-771.

McIntyre, S. and R. McKitrick (2005a) The M&M critique of MBH98 Northern hemisphere climate index: Update and implications, Energy and Environment, 16, 69-100.

McIntyre, S. and R. McKitrick (2005b) Hockey sticks, principal components, and spurious significance, Geophysical Research Letters, 32, L03710, doi:10.1029/2004GL021750

Miller, D.W. (2007) The Government Grant System: Inhibitor of Truth and Innovation? J. of Information Ethics, 16, 59-69

National Academy of Sciences (1992) Policy Implications of Greenhouse Warming: Mitigation, Adaptation, and the Science Base, National Academy Press, 944 pp.

North, G.R. (chair) (2006) Committee on Surface Temperature Reconstructions for the Last 2,000 Years, National Research Council, National Academies Press

Oppenheimer, M. and R.Boyle (1990) Dead Heat, The Race Against the Greenhouse Effect, Basic Books, 288 pp.

Oreskes, N. (2004) The scientific consensus on climate change. Science, 306, 1686.

Pearce, F. (2008) Poor forecasting undermines climate debate. New Scientist, 01 May 2008, 8-9

Pearson, P.N., P.W. Ditchfield, J. Singano, K.G. Harcourt-Brown, C.J. Nicholas, R.K. Olsson, N.J. Shackleton & M.A. Hall (2000) Warm tropical sea surface temperatures in the Late Cretaceous and Eocene epochs. Nature 413 481-487

Pielke Sr., R.A., T.N. Chase, J.R. Christy and B. Herman (2008) Assessment of temperature trends in the troposphere deduced from thermal winds. Nature (submitted)

Pulver, Simone (2004). Power in the Public Sphere: The battles between Oil Companies and Environmental Groups in the UN Climate Change Negotiations, 1991-2003. Doctoral dissertation, Department of Sociology, University of California, Berkeley

Roe, G. (2006) In defense of Milankovitch. Geophys. Res. Ltrs., 33, L24703, doi:10.1029/2006GL027817

Schneider, S.H., (1997) Laboratory Earth, Basic Books, 174pp.

Sackmann, J. and A.I. Boothroyd (2003) Our sun. V. A bright young sun consistent with helioseismology and warm temperatures on ancient earth and mars. The Astrophysical Journal, 583:1024-1039

Sagan, C. and G. Mullen. (1972) Earth and Mars: evolution of atmospheres and surface temperatures. Science, 177, 52-56.

Schrag, D.P. (1999) Effects of diagenesis on isotopic record of late Paleogene equatorial sea surface temperatures. Chem. Geol., 161, 215-224

Schulte, K.-M. (2008) Scientific consensus on climate? Energy and Environment, 19 281-286

Shackleton, N., and A. Boersma, (1981) The climate of the Eocene ocean, J. Geol. Soc., London, 138, 153-157.

Singer, S.F. (2003) The Revelle-Gore Story: Attempted Political Suppression of Science, in Politicizing Science: The Alchemy of Policymaking edited by Michael Gough, Hoover Institution 313 pp (pp 283-297).

Singer, S.F., C. Starr, and R. Revelle (1991), “What To Do About Greenhouse Warming: Look Before You Leap,” Cosmos 1 28–33.

Smith, D.M., S. Cusack, A.W. Colman, C.K. Folland, G.R. Harris, J.M. Murphy (2007) Improved Surface Temperature Prediction for the Coming Decade from a Global Climate Model. Science, 317, 796-799

Soon, W., S. Baliunas, C. Idso, S. Idso, and D. Legates (2003) Reconstructing climatic and environmental changes of the past 1000 years: a reappraisal. Energy and Environment, 14, 233-296

Thompson, D.W.J., J.J. Kennedy, J.M. Wallace and P.D. Jones (2008) A large discontinuity in the mid-twentieth century in observed global-mean surface temperature. Nature 453 646-649

Vinnikov, K.Y., N.C. Grody, A. Robock, R.J. Stouffer, P.D. Jones, and M.D. Goldberg (2006) Temperature trends at the surface and in the troposphere. J. Geophys. Res., 111, D03106, doi:10.1029/2005JD006392

Weart, S. (2003) The Discovery of Global Warming, Harvard University Press, 228 pp.

Wegman, E.J. et al., (2006): Ad Hoc Committee report on the “Hockey Stick” global climate reconstruction, commissioned by the US Congress House Committee on Energy and Commerce, http://republicans.energycommerce.house.gov/108/home/07142006_Wegman_Report.pdf

Zedillo, E., editor (2008) Global Warming: Looking Beyond Kyoto. Brookings Institution Press, 237 pp.


Media hyped walrus climate scare stories debunked

By Marc Morano | Climate Depot | October 1, 2014

The October 1, 2014 Associated Press article linking the walrus gathering to melting sea ice lacks historical perspective and contains serious spin that would lead readers to erroneous conclusions about walruses and the climate.

Zoologist Dr. Susan Crockford weighs in:

Mass haulouts of Pacific walrus and stampede deaths are not new, not due to low ice cover

‘The attempts by WWF and others to link this event to global warming is self-serving nonsense that has nothing to do with science…this is blatant nonsense and those who support or encourage this interpretation are misinforming the public.’

First off, walruses are not endangered. According to the New York Times, “the Pacific walrus remains abundant, numbering at least 200,000 by some accounts, double the number in the 1950s.”

The AP article, titled “35,000 walrus come ashore in northwest Alaska,” claims that “the gathering of walrus on shore is a phenomenon that has accompanied the loss of summer sea ice as the climate has warmed.” The AP even includes the environmental group World Wildlife Fund to ramp up climate hype. “It’s another remarkable sign of the dramatic environmental conditions changing as the result of sea ice loss,” said Margaret Williams, managing director of the group’s Arctic program, by phone from Washington, D.C.


But the AP is recycling its own climate stories on walruses. See: 2009: AP: Walruses Gather as Ice Melts in the Arctic Sea (Sept. 17, 2009). Also see the fact check on “melting” Arctic sea ice. See: Paper: ‘Myth of arctic meltdown’: Stunning satellite images show ice cap has grown by an area twice the size of Alaska in two years – Despite Al Gore’s prediction it would be ICE-FREE by now

The media and green groups are implying that walruses hanging out by the tens of thousands is a new phenomenon, due to melting Arctic ice. But dating back to at least 1604, there have been reports of large walrus gatherings or haulouts.

Excerpt:

“Walruses became only really known in Europe after the 1604 expedition to the Kola Peninsula of the ship “Speed” of Muscovy Company, commanded by Stephen Bennet. On the way back to England the Speed reached what some years before a Dutch expedition had named “Bear Island”. The crew of the Speed discovered a haulout numbering about a thousand walruses on the island’s northern coast.”

According to a National Geographic article in 2007, walrus populations were not endangered. See: “While scientists lack a firm population estimate for the species, researchers have encountered herds as large as 100,000 in recent years.”

Even the green activist group, the WWF, admits walrus ‘hangouts’ of tens of thousands are not unprecedented.

A 2009 WWF blog report noted: “WWF Polar Bear coordinator Geoff York returned on 17 September from a trip along the Russian coast and saw a haul out there with an estimated 20,000 walruses near Ryrkaipiy (on the Chukchi Peninsula).”

AP’s own reporting debunks walrus claims

Are 35,000 walruses gathering in “haulouts” on the shoreline, with many being stampeded to death, really that unusual? The answer is no!

The AP reported on 40,000 walruses in a haulout just 7 years ago in a single location. See: AP 12/14/2007: “40,000 in one spot” – “As a result, walruses came ashore earlier and stayed longer, congregating in extremely high numbers, with herds as big as 40,000 at Point Shmidt, a spot that had not been used by walruses as a “haulout” place for a century, scientists said.”

As climate blogger Tom Nelson asked in a December 28, 2007 analysis: “Are you saying that that spot *was* used as a haulout in earlier years?”

Nelson noted the media reported that “Walruses are vulnerable to stampedes when they gather in such large numbers. The appearance of a polar bear, a hunter or a low-flying airplane can send them rushing to the water.”

Nelson then asked: “Are stampedes ever caused by the appearance of researchers or low-flying research planes?”

Walrus stampede deaths drop dramatically from 3,000 to 50?

The October 1, 2014 AP article notes with obvious concern for the walrus species: “Observers last week saw about 50 carcasses on the beach from animals that may have been killed in a stampede…”

Fifty walrus carcasses? That number is a significant improvement over 2007, when a reported 3,000 dead walruses were discovered during the late summer and fall on the Russian side of the Arctic, according to the AP’s own earlier reporting. See: 2007: ‘3,000 walruses die in stampedes tied to climate’

Are walrus stampede deaths declining in recent years? It is difficult to say based on reports, but going from a high of 3,000 deaths in 2007 (for a whole season) to 50 deaths in 2014 at a single location does not appear to be an alarming trend. Why does the AP fail to put any historical perspective on its climate scare stories, especially when the AP’s own reporting from seven years ago calls its claims into question?

The next issue is whether or not sea ice extent is critical to walruses in late summer and fall. According to this report, ice extent is not critical. As Nelson noted in 2007:

“When I read this in the (2007) ‘walrus’ Wikipedia entry, I’m also not convinced that lack of summer ice is necessarily a big deal.”

2007 Wikipedia entry:

“In the non-reproductive season (late summer and fall) walruses tend to migrate away from the ice and form massive aggregations of tens of thousands of individuals on rocky beaches or outcrops.” [Note: This line has been omitted from the Wikipedia entry in 2014]

Walrus stampede deaths benefit polar bears

In addition, a 2007 WWF post inadvertently noted that the carcasses of stampeded walruses may actually be a great benefit to polar bears.

“Last fall some 20,000-30,000 animals were piled up there. No one has actually counted them all, but the Vankarem residents are certain the number is growing…In early winter, when the ice is re-forming and walruses leave the beach, up to 100 carcasses remain behind. These blubbery animals offer a perfect meal for wandering and hungry polar bears… In mid-November, a truck driver alerted the patrol to bear tracks on the beach. The wave had begun. For the next three weeks, bears making their way along the coast stopped to graze on the carcasses at this so-called “feeding point” instead of proceeding to the village. At one time alone, Sergey and his team counted 96 bears feeding on the walrus. In total they estimated that 185 bears had been circulating with a six mile radius around the village.”

The stampeded remains of 100 walruses fed up to 185 polar bears!

But despite the easily accessible historical data on walruses, the WWF, the AP and other media in 2014 continue to spin the haulouts as evidence of “climate change.”

Margaret Williams, WWF’s managing director of the Arctic program, said in a September 18, 2014 article:

“The massive concentration of walruses onshore—when they should be scattered broadly in ice-covered waters—is just one example of the impacts of climate change on the distribution of marine species in the Arctic.”

Is the WWF correct? Should walruses be “scattered broadly in ice-covered waters”? Not exactly. As Tom Nelson (@tan123) noted on Twitter: “If walrus haulouts are a new thing, why was this walrus haulout sanctuary established in 1960”

According to the Alaskan government, walrus haulouts are not unusual; they have long been recognized, and islands have been set aside for such gatherings.

Excerpt:

“The Walrus Islands State Game Sanctuary (WISGS), protects a group of seven small craggy islands and their adjacent waters in northern Bristol Bay, approximately 65 miles southwest of Dillingham. The WISGS includes Round Island, Summit Island, Crooked Island, High Island, Black Rock and The Twins. The WISGS was established in 1960 to protect one of the largest terrestrial haulout sites in North America for Pacific walrus (Odobenus rosmarus divergens).”

The Alaskan government report noted that haulouts of 14,000 walruses in a single day were not unusual.

“Each summer large numbers of male walruses haul out on exposed, rocky beaches. Round Island is one of four major terrestrial haulouts in Alaska; the others are Capes Peirce (Togiak NWR), Newenham (Togiak NWR), and Seniavin (near Port Moller). Male walrus return to these haulouts every spring as the ice pack recedes northward, remaining in Bristol Bay to feed they haul out at these beach sites for several days between each feeding foray. The number of walrus using the island fluctuates significantly from year to year. However, up to 14,000 walrus have been counted on Round Island in a single day.”

Hunters have relied on large haulouts of walruses. This report details how walruses were “predictably present” and made for “clean and efficient butchering.”

Excerpt:

“Qayassiq was especially important for walrus hunting because it was accessible in good weather; walruses were predictably present on the beach during the preferred fall hunt; and the beach is rocky, not sandy, promoting clean and efficient butchering. Hunting on haulouts was a highly organized activity.”

###

Zoologist Dr. Susan Crockford:

‘The attempts by WWF and others to link this event to global warming is self-serving nonsense that has nothing to do with science… this is blatant nonsense and those who support or encourage this interpretation are misinforming the public.’ – Large haulouts of walruses — such as the one making news at Point Lay, Alaska on the Chukchi Sea (and which happened before back in 2009) — are not a new phenomenon for this region over the last 45 years and thus cannot be due to low sea ice levels. Nor are deaths by stampede within these herds (composed primarily of females and their young) unusual, as a brief search of the literature reveals. At least two documented incidents like this have occurred in the recent past: one in 1978, on St. Lawrence Island and the associated Punuk Islands and the other in 1972, on Wrangell Island (Fay and Kelly 1980, excerpts below)… Here is how the WWF is spinning this recent gathering at Point Lay:

“We are witnessing a slow-motion catastrophe in the Arctic,” said Lou Leonard, WWF’s vice president for climate change.

Crockford summed it up: “this is blatant nonsense and those who support or encourage this interpretation are misinforming the public.”

Related Link:

Tom Nelson’s 2007 report: About those walrus stampedes – FRIDAY, DECEMBER 28, 2007

Google currently shows about 14,000 hits for “walruses stampedes”.

Excerpts from a typical scare story, along with my comments:

The giant, tusked mammals typically clamber onto the sea ice to rest, or haul themselves onto land for just a few weeks at a time.

Ok, so it’s not unusual for them to haul up on land. Google shows a lot of pictures of them on land.

As a result, walruses came ashore earlier and stayed longer, congregating in extremely high numbers, with herds as big as 40,000 at Point Shmidt, a spot that had not been used by walruses as a “haulout” for a century, scientists said.

Are you saying that that spot *was* used as a haulout in earlier years?

Walruses are vulnerable to stampedes when they gather in such large numbers. The appearance of a polar bear, a hunter or a low-flying airplane can send them rushing to the water.

Are stampedes ever caused by the appearance of researchers or low-flying research planes?

Sure enough, scientists received reports of hundreds and hundreds of walruses dead of internal injuries suffered in stampedes. Many of the youngest and weakest animals, mostly calves born in the spring, were crushed.

Biologist Anatoly Kochnev of Russia’s Pacific Institute of Fisheries and Oceanography estimated 3,000 to 4,000 walruses out of a population of perhaps 200,000 died, or two or three times the usual number on shoreline haulouts.

Were anecdotal reports of “hundreds and hundreds” used to come up with the estimate of 3,000 to 4,000? How much actual counting was done? What’s the baseline number of annual stampede deaths? Is anyone checking that any animals found dead were killed in stampedes, rather than dying from some other cause?

No large-scale walrus die-offs were seen in Alaska during the same period, apparently because the animals congregated in smaller groups on the American side of the Bering Strait, with the biggest known herd at about 2,500.

So when a walrus herd of 2,500 is panicked, stampede deaths are not a big deal, but when the herd reaches tens of thousands, we can expect lots of stampede deaths?
—–
It seems to me that more walruses worldwide may die from hunting than from stampedes. Note an excerpt from this Sea World link:

As the Pacific walrus population grew, annual subsistence catches by indigenous Arctic peoples ranged from about 3,000 to 16,000 walruses per year until about 1990, and then decreased to an average of 5,789 animals per year from 1996 to 2000.

A related paragraph is here:

Pacific walrus meat has been used for the past 40 years to feed foxes which are kept on government-subsidised fur farms in Chukotka. One estimate made by natives was of an annual kill of 10,000-12,000 walruses per year, but this may have been overstated. Recent investigations have found that much of the meat is left to waste and that there are no markets for the resultant fox furs. Fox farming operations in Chukotka are currently in decline due to economic recession. Local unemployment caused by the general economic situation and the closure of the farms has however led to a recent increase in illegal head-hunting.

Some more background information is in this 2007 WWF post:

Last fall some 20,000-30,000 animals were piled up there. No one has actually counted them all, but the Vankarem residents are certain the number is growing.

In early winter, when the ice is re-forming and walruses leave the beach, up to 100 carcasses remain behind. These blubbery animals offer a perfect meal for wandering and hungry polar bears.

As soon as the walruses departed, the polar bear patrol spent several days working to collect the remains of walruses killed in the stampedes. Using a tractor, they carted the carcasses six miles west of the village, anticipating that the bears would come from the west in the fall. In the end, they scattered some 80 walruses around selected sites — and then they waited.

In mid-November, a truck driver alerted the patrol to bear tracks on the beach. The wave had begun. For the next three weeks, bears making their way along the coast stopped to graze on the carcasses at this so-called “feeding point” instead of proceeding to the village. At one time alone, Sergey and his team counted 96 bears feeding on the walrus. In total they estimated that 185 bears had been circulating with a six mile radius around the village.

My comments: 80-100 dead walruses out of 20,000-30,000 hauled out on land seems quite low, if Kochnev’s estimate of 3,000-4,000 total stampede deaths is correct (remember, his estimate is based on a population of maybe 200,000, many of which are not hauled out in huge herds).
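
To make that comparison concrete, here is a minimal arithmetic sketch (Python, purely illustrative; it is not part of Nelson’s original post and simply treats the figures quoted above as low/high bounds):

```python
# Rough mortality fractions implied by the figures quoted above (illustrative only).

vankarem_dead = (80, 100)           # carcasses reported behind the Vankarem haulout
vankarem_hauled = (20_000, 30_000)  # animals reported hauled out there

kochnev_dead = (3_000, 4_000)       # Kochnev's estimate of total stampede deaths
population = (200_000, 200_000)     # rough Pacific walrus population figure quoted

def frac_range(dead, base):
    """Low/high mortality fraction: fewest deaths over the largest base, and vice versa."""
    return dead[0] / base[1], dead[1] / base[0]

v_lo, v_hi = frac_range(vankarem_dead, vankarem_hauled)
k_lo, k_hi = frac_range(kochnev_dead, population)

print(f"Vankarem haulout: {v_lo:.2%} to {v_hi:.2%} of the animals hauled out there")
print(f"Kochnev estimate: {k_lo:.2%} to {k_hi:.2%} of the entire population")
```

On these figures the Vankarem haulout lost roughly 0.3-0.5 percent of the animals hauled out there, whereas Kochnev’s estimate amounts to 1.5-2 percent of the entire population; since only part of that population was hauled out in huge herds, the implied loss rate at the haulouts would have to be higher still.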

Also, if polar bear numbers are so threatened by global warming, what are 185 of them doing within six miles of the village?

When I read stuff like this, I’m also not completely convinced that walruses are threatened with extinction:

… researchers have encountered herds as large as 100,000 in recent years…

When I read this in the “walrus” Wikipedia entry, I’m also not convinced that lack of summer ice is necessarily a big deal:

“In the non-reproductive season (late summer and fall) walruses tend to migrate away from the ice and form massive aggregations of tens of thousands of individuals on rocky beaches or outcrops.”

In the same entry, when I read this, I’m not convinced that polar bears really need year-round sea ice in order to feed successfully:

Polar bears hunt walruses by rushing at beached aggregations and consuming those individuals that are crushed or wounded in the sudden mass exodus, typically younger or infirm animals.

Some video of polar bears successfully hunting walruses is here and here. I don’t see any ice in that first hunting scene.


You Have Depression

By Martha Rosenberg | CounterPunch | September 26, 2014

Are you depressed? Have you lost interest in things you used to enjoy? Are you eating or sleeping too much? Big Pharma hopes so! The push to convince people who are dealing with job, family, relationship and money problems that they actually have “depression” has resulted in almost one in four American women in their 40s and 50s taking antidepressants. Ka-ching.

Psychiatry is often accused of not being “real medicine” because the diseases it diagnoses cannot be proved on blood tests and brain scans. That’s why this month’s announcement of the “first blood test to diagnose major depression in adults” is good news for psychiatrists and Big Pharma. Developed by Northwestern Medicine® scientists, the test, announced this month, “provides the first objective, scientific diagnosis for depression,” says Northwestern.

“The blood test can also predict which therapies would be most successful for patients, and lays the groundwork for one day identifying people who are especially vulnerable to depression — even before they’ve gone through a depressive episode,” gushes a Huffington Post article. No kidding. Treating people “at risk” of heart disease, asthma, osteoporosis, GERD and other conditions is Pharma’s marketing plan. Patients never know if they would have gotten the disorders and will stay on the drugs for decades.

Another Pharma plan to sell antidepressants is “pimping suicide.” Groups like the American Foundation for Suicide Prevention present our national suicide rate as an “antidepressant deficiency” and cite a “stigma” that keeps people away from depression medication–like the 25 percent of older women who are on them right now.

Yet, despite a huge chunk of the population being on antidepressants, suicide is up not down. And in the military, where antidepressant use is rife, suicide is way up, including among those who never deployed. Left out of the American Foundation for Suicide Prevention’s marketing materials is mention of the “black box” warnings on antidepressants that say “Antidepressants increased the risk compared to placebo of suicidal thinking and behavior (suicidality) in children, adolescents, and young adults…Patients of all ages who are started on antidepressant therapy should be monitored appropriately and observed closely for clinical worsening, suicidality, or unusual changes in behavior.”

Antidepressants have other risks besides suicide. They can cause weight gain, sexual dysfunction, passivity and general complacency about life. When antidepressants quit working or don’t work to begin with, the “depression” is called “treatment resistant” and more drugs are added. Their side effects–and symptoms if a patient tries to discontinue–are often taken as “proof” of the initial depression. The result is people who were never depressed being on the drugs for years.

The truth is antidepressants can increase or diminish the risk of suicide, though both suicide and meds are way up. Another truth is the American Foundation for Suicide Prevention received $100,000 from Eli Lilly in 2011 and $50,000 in 2010 and was led for a time by psychiatrist Charles Nemeroff who was found by Congress to have failed to disclose at least $1.2 million in Pharma income to Emory University.

So, yes, the pills do treat depression–depression of Big Pharma profits.


An unsettled climate

By Judith Curry | Climate Etc. | September 21, 2014

In a press conference last week, UN Secretary-General Ban Ki-moon stated: “Action on climate change is urgent. The more we delay, the more we will pay in lives and in money.” The recently appointed UN Messenger of Peace Leonardo DiCaprio stated, “The debate is over. Climate change is happening now.”

These statements reflect a misunderstanding of the state of climate science and the extent to which we can blame adverse consequences such as extreme weather events on human caused climate change. The climate has always changed and will continue to change. Humans are adding carbon dioxide to the atmosphere, and carbon dioxide and other greenhouse gases have a warming effect on the climate. However, there is enduring uncertainty beyond these basic issues, and the most consequential aspects of climate science are the subject of vigorous scientific debate: whether the warming since 1950 has been dominated by human causes, and how the climate will evolve in the 21st century due to both natural and human causes. Societal uncertainties further cloud the issues as to whether warming is ‘dangerous’ and whether we can afford to radically reduce carbon dioxide emissions.

At the heart of the recent scientific debate on climate change is the ‘pause’ or ‘hiatus’ in global warming – the period since 1998 during which global average surface temperatures have not increased. This observed warming hiatus contrasts with the expectation from the 2007 IPCC Fourth Assessment Report that warming would proceed at a rate of 0.2°C per decade in the early decades of the 21st century. The warming hiatus raises serious questions as to whether the climate model projections for the 21st century have much utility for decision making, given uncertainties in climate sensitivity to carbon dioxide, future volcanic eruptions and solar activity, and the multidecadal and century scale oscillations in ocean circulation patterns.
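
A simple back-of-the-envelope calculation, using only the figures in the paragraph above (the exact value depends on the dataset and the end dates chosen), illustrates the size of that gap:

$$ \Delta T_{\mathrm{expected}} \approx 0.2\,^{\circ}\mathrm{C/decade} \times 1.6\ \mathrm{decades} \approx 0.3\,^{\circ}\mathrm{C} $$

of warming would have been expected between 1998 and 2014 at the projected rate, against a surface temperature record that, as noted above, has shown essentially no increase over that period.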

A key argument in favor of emission reductions is concern over the accelerating cost of weather disasters. The accelerating cost is associated with increasing population and wealth in vulnerable regions, and not with any increase in extreme weather events, let alone any increase that can be attributed to human caused climate change. The IPCC Special Report on Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation found little evidence that supports an increase in extreme weather events that can be attributed to humans. There seems to be a collective ‘weather amnesia’, where the more extreme weather of the 1930s and 1950s seems to have been forgotten.

Climate science is no more ‘settled’ than anthropogenic global warming is a ‘hoax’. I am concerned that the climate change problem and its solution have been vastly oversimplified. Deep uncertainty beyond the basics is endemic to the climate change problem, which is arguably characterized as a ‘wicked mess.’ A ‘wicked’ problem is complex with dimensions that are difficult to define and changing with time. A ‘mess’ is characterized by the complexity of interrelated issues, with suboptimal solutions that create additional problems.

Nevertheless, the premise of dangerous anthropogenic climate change is the foundation for a far-reaching plan to reduce greenhouse gas emissions. Elements of this plan may be argued as important for associated energy policy reasons, economics, and/or public health and safety. However, claiming an overwhelming scientific justification for the plan based upon anthropogenic global warming does a disservice both to climate science and to the policy process. Science doesn’t dictate to society what choices to make, but science can assess which policies won’t work and can provide information about uncertainty that is critical for the decision making process.

Can we make good decisions under conditions of deep uncertainty about climate change? Uncertainty in itself is not a reason for inaction. Research to develop low-emission energy technologies and energy-efficiency measures are examples of ‘robust’ policies that have little downside while also providing ancillary benefits beyond reducing greenhouse gas emissions. However, attempts to modify the climate through reducing CO2 emissions may turn out to be futile. The hiatus in warming observed over the past 16 years demonstrates that CO2 is not a control knob on climate variability on decadal time scales. Even if CO2 mitigation strategies are successful and climate model projections are correct, an impact on the climate would not be expected until the latter part of the 21st century. Solar variability, volcanic eruptions and long-term ocean oscillations will continue to be sources of unpredictable climate surprises.

Whether or not anthropogenic climate change is exacerbating extreme weather events, vulnerability to extreme weather events will continue owing to increasing population and wealth in vulnerable regions. Climate change (regardless of whether the primary cause is natural or anthropogenic) may be less important in driving vulnerability in most regions than increasing population, land use practices, and ecosystem degradation. Regions that find solutions to current problems of climate variability and extreme weather events and address challenges associated with an increasing population are likely to be well prepared to cope with any additional stresses from climate change.

Oversimplification, claiming ‘settled science’ and ignoring uncertainties not only undercuts the political process and dialogue necessary for real solutions in a highly complex world, but acts to retard scientific progress. It’s time to recognize the complexity and wicked nature of the climate problem, so that we can have a more meaningful dialogue on how to address the complex challenges of climate variability and change.

Related essays

While I was in the midst of preparing my essay, Steve Koonin’s WSJ op-ed Climate Science is Not Settled was published. Most of Koonin’s points are very similar to what I have been saying; the main difference relates to decision making under deep uncertainty. Koonin states “We are very far from the knowledge needed to make good climate policy.” I argue that there are strategies for decision making under deep uncertainty that can be useful for the climate change problem, particularly if you are not trying to solve the problem of extreme weather events by reducing carbon dioxide emissions. But overall I am thrilled by Koonin’s op-ed; since he operates higher in the scientific and policy food chain than I do, his voice adds much gravitas to the message that I think needs to get out regarding climate science and policy. I would also like to add that Koonin chairs the APS Subcommittee that is reviewing the APS climate change policy statement (see my previous post on the APS Workshop, where I met Koonin).

In the midst of the ‘mad crowd’ in New York City attending the People’s Climate March, sober people are trying to figure out ways to broaden the policy debate on climate change and do a better job of characterizing the uncertainty of climate change (both the science itself and the media portrayal of the science). There is concern that the institutions of science are so mired in advocacy on the topic of dangerous anthropogenic climate change that the checks and balances in science, particularly with regard to minority perspectives, are broken.

Richard Lindzen’s CATO essay Reflections on Rapid Response to Unjustified Climate Alarm discusses the kickoff of CATO’s new center on rapid response to climate alarmism. Anthony Watts has announced the formation of a new professional society, The Open Atmospheric Society, for meteorologists and climatologists, with a new open-access journal. Both of these efforts emphasize public communication. I’m not sure what kind of impact either of these efforts will have, but I wish them well.

My thinking is that we need more voices from influential scientists like Steve Koonin, along with a more mature framing of the climate science problem and a decision-making framework that allows for dissent and examines a broader spectrum of solutions and approaches.

September 23, 2014 | Economics, Science and Pseudo-Science, Timeless or most popular

10 Climate Myths Busted (in 60 seconds!)

By James Corbett | corbettreport.com | December 9, 2013

Myth #1. The earth is warming!

On what time scale? 16 years? 2,000 years? 10,000 years? 420,000 years? 65 million years? (Answer: None of the above)

Myth #2. This year was the hottest year ever!

Was that before or after NASA and NOAA altered the temperature record to make recent years warmer?

Myth #3. 97% of scientists agree on global warming!

You mean 97% of 77 scientists in an unscientific online poll?

Myth #4. Sea levels are rising!

Yes…at a rate of 7 inches per century.

Myth #5. Hurricanes are increasing!

US landfalling hurricanes are at their lowest intensity in a century. (Bonus fact: Accumulated Cyclone Energy is at a 30-year low!)

Myth #6. But…polar bears!

The polar bear population has quintupled in six decades and the USGS admits their numbers are near “historic highs.”

Myth #7. Climategate was hype and it’s been debunked.

The UK Information Commissioner found the Climategate scientists guilty of breaking the law by hiding data from the public.

Myth #8. Models project a temperature increase of over 2 degrees in this century.

And these same models overestimated warming over the past 15 years by 400%.

Myth #9. Weather is not climate.

Actually, yes, weather is not climate. And that is true when it’s hot outside, too.

Myth #10. Climate denial is a well-funded conspiracy.

Actually, the reverse. The global warming industry has generated over $140 billion in government grants and a $315 billion carbon market, and is expected to generate tens of trillions more in government-sponsored investment in the coming decades.

September 20, 2014 | Science and Pseudo-Science, Timeless or most popular, Video

Obama’s Lonely Climate Summit – world leaders are staying home

Watts Up With That? | September 16, 2014

Eric Worrall writes: The imminent climate summit in New York is rapidly turning into an utter embarrassment for President Obama and UN Secretary-General Ban Ki-moon, in addition to becoming a bit of a punishment round for national deputy leaders.

Aussie PM Tony Abbott today defended his decision not to hop on an earlier flight to America to attend the UN climate conference in New York, saying he has more important matters to attend to, such as running the country.

Chinese President Xi Jinping and Indian Prime Minister Narendra Modi have also indicated they likely won’t attend the summit.

Canadian Prime Minister Stephen Harper has indicated he will not be attending.

Even Angela Merkel, Chancellor of über-green Germany, will not be attending the UN climate summit.

September 16, 2014 | Science and Pseudo-Science

John Kerry: Scriptures command America to respond to global warming

“Our faiths are inextricably linked on any number of things that we must confront and deal with in policy concepts today. Our faiths are inextricably linked on the environment. For many of us, respect for God’s creation also translates into a duty to protect and sustain his first creation, Earth, the planet,”

“Confronting climate change is, in the long run, one of the greatest challenges that we face, and you can see this duty or responsibility laid out in Scriptures clearly, beginning in Genesis. And Muslim-majority countries are among the most vulnerable. Our response to this challenge ought to be rooted in a sense of stewardship of Earth, and for me and for many of us here today, that responsibility comes from God.”

September 5, 2014 | Science and Pseudo-Science, Timeless or most popular, Video

Hey U.N. – show us your tipping points!

Pathetic handwaving double-down from the UN

By Anthony Watts | Watts Up With That? | August 27, 2014

Eric Worrall writes: A number of MSM outlets are carrying news of a “leaked” UN document, which claims that global warming may be causing irreversible damage.

According to the Bloomberg version of the leak story:

“Global warming already is affecting “all continents and across the oceans,” and further pollution from heat-trapping gases will raise the likelihood of “severe, pervasive and irreversible impacts for people and ecosystems,”

The problem with this vapid handwaving nonsense is that it is so vague. I mean, in the good old days, alarmists made interesting predictions:

Snowfalls are now just a thing of the past

Al Gore’s ice-free Arctic (in 5 years!)

Rain will never fill Australian reservoirs again

The great thing about bold predictions is they are easily falsified – all you have to do is wait a few years, then point and laugh.

The survivors of that golden age of bold stupidity are far too timid – they issue vague predictions of calamity which won’t occur until long after we are all safely dead, and promises that if we wait a few decades we might see something worrying.

I mean, seriously folks, is this the best you can do? Can even the most rabid alarmists get enthused by such a pathetic effort?

August 27, 2014 | Deception, Mainstream Media, Warmongering, Science and Pseudo-Science
