Manufacturing consensus: the early history of the IPCC

By Judith Curry | Climate Etc. | January 3, 2018

Short summary: scientists sought political relevance and allowed policy makers to put a big thumb on the scale of the scientific assessment of the attribution of climate change.

Bernie Lewin has written an important new book:

SEARCHING FOR THE CATASTROPHE SIGNAL: The Origins of the Intergovernmental Panel on Climate Change

The importance of this book is reflected in its acknowledgements, in context of assistance and contributions from early leaders and participants in the IPCC:

This book would not have been possible without the documents obtained via Mike MacCracken and John Zillman. Their abiding interest in a true and accurate presentation of the facts prevented my research from being led astray. Many of those who participated in the events here described gave generously of their time in responding to my enquiries, they include Ben Santer, Tim Barnett, Tom Wigley, John Houghton, Fred Singer, John Mitchell, Pat Michaels . . . and many more.

You may recall a previous Climate Etc. post, Consensus by Exhaustion, on Lewin’s 5-part series on Madrid 1995: The last day of climate science.

Read the whole book; it is well worth it. My summary focuses on Chapters 8-16, in the context of the themes of ‘detection and attribution’, ‘policy cart in front of the scientific horse’ and ‘manufacturing consensus’. Annotated excerpts from the book are provided below.

The 1970s energy crisis

In a connection that I hadn’t previously made, Lewin provides historical context for the focus on CO2 research in the 1970s, motivated by the ‘oil crisis’ and concerns about energy security. There was an important debate surrounding whether coal or nuclear power should be the replacement for oil. From Chapter 8:

But in the struggle between nuclear and coal, the proponents of the nuclear alternative had one significant advantage, which emerged as a result of the repositioning of the vast network of government-funded R&D laboratories within the bureaucratic machine. It would be in these ‘National Laboratories’ at this time that the Carbon Dioxide Program was born. This surge of new funding meant that research into one specific human influence on climate would become a major branch of climatic research generally. Today we might pass this over for the simple reason that the ‘carbon dioxide question’ has long since come to dominate the entire field of climatic research—with the very meaning of the term ‘climate change’ contracted accordingly.

This focus was NOT driven by atmospheric scientists:

The peak of interest in climate among atmospheric scientists was an international climate conference held in Stockholm in 1974 and a publication by the ‘US Committee for GARP’ [GARP is Global Atmospheric Research Programme] the following year. The US GARP report was called ‘Understanding climate change: a program for action’, where the ‘climate change’ refers to natural climatic change, and the ‘action’ is an ambitious program of research.

[There was] a coordinated, well-funded program of research into potentially catastrophic effects before there was any particular concern within the meteorological community about these effects, and before there was any significant public or political anxiety to drive it. It began in the midst of a debate over the relative merits of coal and nuclear energy production [following the oil crisis of the 1970’s]. It was coordinated by scientists and managers with interests on the nuclear side of this debate, where funding due to energy security anxieties was channelled towards investigation of a potential problem with coal in order to win back support for the nuclear option.

The emergence of ‘global warming’

In February 1979, at the first ever World Climate Conference, meteorologists would for the first time raise a chorus of warming concern. The World Climate Conference may have drowned out the cooling alarm, but it did not exactly set the warming scare on fire.

While the leadership of UNEP (UN Environmental Programme) became bullish on the issue of global warming, the bear prevailed at the WMO (World Meteorological Organization). When UNEP’s request for climate scenario modelling duly arrived with the WCRP (World Climate Research Programme) committee, they balked at the idea: computer modelling remained too primitive and, especially at the regional level, no meaningful results could be obtained. Proceeding with the development of climate scenarios would only risk the development of misleading impact assessments.

It wasn’t long before scientific research on climate change became marginalized in the policy process, in the context of the precautionary principle:

At Villach in 1985, at the beginning of the climate treaty movement, the rhetoric of the policy movement was already breaking away from its moorings in the science. Doubts raised over the wildest speculation were turned around, in a rhetoric of precautionary action: we should act anyway, just in case. With the onus of proof reversed, the research can continue while the question remains (ever so slightly) open.

Origins of the IPCC

With regard to the origins of the IPCC:

Jill Jäger gave her view that one reason the USA came out in active support for an intergovernmental panel on climate change was that the US Department of State thought the situation was ‘getting out of hand’, with ‘loose cannons’ out ‘potentially setting the agenda’, when governments should be doing so. An intergovernmental panel, so this thinking goes, would bring the policy discussion back under the control of governments. It would also bring the science closer to the policymakers, unmediated by policy entrepreneurs. After an intergovernmental panel agreed on the science, so this thinking goes, they could proceed to a discussion of any policy implications.

While the politics were already making the science increasingly irrelevant, Bert Bolin and John Houghton brought a focus back to the science:

Within one year of the first IPCC session, its assessment process would transform from one that would produce a pamphlet sized country representatives’ report into one that would produce three large volumes written by independent scientists and experts at the end of the most complex and expensive process ever undertaken by a UN body on a single meteorological issue. The expansion of the assessment, and the shift of power back towards scientists, came about at the very same time that a tide of political enthusiasm was being successfully channelled towards investment in the UN process, with this intergovernmental panel at its core.

John Houghton (Chair of Working Group I) moved the IPCC towards a model more along the lines of an expert-driven review: he nominated one or two scientific experts—‘lead authors’—to draft individual chapters and he established a process through which these would be reviewed at lead-author meetings.

The main change was that it shifted responsibility away from government delegates and towards practising scientists. The decision to recruit assessors who were leaders in the science being assessed also opened up another problem, namely the tendency for them to cite their own current work, even where unpublished.

However, the problem of marginalization of the science wasn’t going away:

With the treaty process now run by career diplomats, and likely to be dominated by unfriendly southern political agitators, the scientists were looking at the very real prospect that their climate panel would be disbanded and replaced when the Framework Convention on Climate Change came into force.

And many scientists were skeptical:

With the realisation that there was an inexorable movement towards a treaty, there was an outpouring of scepticism from the scientific community. This chorus of concern was barely audible above the clamour of the rush to a treaty and it is now largely forgotten.

At the time, John Zillman presented a paper to a policy forum that tried to give those engaged in the policy debate some insight into just how different the view was from inside the research community. Zillman stated:

. . . that the greenhouse debate has now become decoupled from the scientific considerations that had triggered it; that there are many agendas but that they do not include, except peripherally, finding out whether and how climate might change as a result of enhanced greenhouse forcing and whether such changes will be good or bad for the world.

To give some measure of the frustration rife among climate researchers at the time, Zillman quoted the director of WCRP. It was Pierre Morel, he explained, who had ‘driven the international climate research effort over the past decade’. A few months before Zillman’s presentation, Morel had submitted a report to the WCRP committee in which he assessed the situation thus:

The increasing direct involvement of the United Nations. . . in the issues of global climate change, environment and development bears witness to the success of those scientists who have vied for ‘political visibility’ and ‘public recognition’ of the problems associated with the earth’s climate. The consideration of climate change has now reached the level where it is the concern of professional foreign-affairs negotiators and has therefore escaped the bounds of scientific knowledge (and uncertainty).

The negotiators, said Morel, had little use for further input from scientific agencies including the IPCC ‘and even less use for the complicated statements put forth by the scientific community’.

There was a growing gap between the politics/policies and the science:

The general feeling in the research community that the policy process had surged ahead of the science often had a different effect on those scientists engaged with the global warming issue through its expanded funding. For them, the situation was more as President Bush had intimated when promising more funding: the fact that ‘politics and opinion have outpaced the science’ brought the scientists under pressure ‘to bridge the gap’.

In fact, there was much scepticism of the modelling freely expressed in and around the Carbon Dioxide Program in these days before the climate treaty process began. Those who persisted with the search for validation got stuck on the problem of better identifying background natural variability.

The challenge of ‘detection and attribution’

Regarding Jim Hansen’s 1988 Congressional testimony:

An article in Science the following spring gives some insight into the furore. In ‘Hansen vs. the world on greenhouse threat’, the science journalist Richard Kerr explained that while ‘scientists like the attention the greenhouse effect is getting on Capitol Hill’, nonetheless they ‘shun the reputedly unscientific way their colleague James Hansen went about getting that attention’.

Clearly, the scientific opposition to any detection claims was strong in 1989 when the IPCC assessment got underway.

Detection and attribution of the anthropogenic climate signal was the key issue:

During the IPCC review process (for the First Assessment Report), Wigley was asked to answer the question: When is detection likely to be achieved? He responded with an addition to the IPCC chapter that explains that we would have to wait until the half-degree of warming that had occurred already during the 20th century is repeated. Only then are we likely to determine just how much of it is human-induced. If the carbon dioxide driven warming is at the high end of the predictions, then this would be early in the 21st century, but if the warming was slow then we may not know until 2050.

The IPCC First Assessment Report didn’t help the policy makers’ ‘cause.’ In the buildup to the Rio Earth Summit:

To support the discussions of the Framework Convention at the Rio Earth Summit, it was agreed that the IPCC would provide a supplementary assessment. This ‘Rio supplement’ explains:

. . . the climate system can respond to many forcings and it remains to be proven that the greenhouse signal is sufficiently distinguishable from other signals to be detected except as a gross increase in tropospheric temperature that is so large that other explanations are not likely.

Well, this supplementary assessment didn’t help either. The scientists, under the leadership of Bolin and Houghton, are to be commended for not bowing to pressure. But the IPCC was risking marginalization in the treaty process.

In the lead up to CoP1 in Berlin, the IPCC itself was badgering the negotiating committee to keep it involved in the political process, but tensions arose when it refused to compromise its own processes to meet the political need.

However, the momentum for action in the lead up to Rio remained sufficiently strong that these difficulties with the scientific justification could be ignored.  

Second Assessment Report

In the context of the treaty activities, the Second Assessment Report of the IPCC was regarded as very important for justifying implementation of the Kyoto Protocol.

In 1995, the IPCC was stuck between its science and its politics. The only way it could save itself from the real danger of political oblivion would be if its scientific diagnosis could shift in a positive direction and bring it into alignment with policy action.  

The key scientific issue at the time was detection and attribution:

The writing of Chapter 8 (the chapter concerned with detection and attribution) got off to a delayed start due to the late assignment of its coordinating lead author. It was not until April that someone agreed to take on the role. This was Ben Santer, a young climate modeller at Lawrence Livermore Laboratory.

The chapter that Santer began to draft was greatly influenced by a paper principally written by Tim Barnett, but it also listed Santer as an author. It was this paper that held, in a nutshell, all the troubles for the ‘detection’ quest. It was a new attempt to get beyond the old stumbling block of ‘first detection’ research: to properly establish the ‘yardstick’ of natural climate variability. The paper describes how this project failed to do so, and fabulously so.

The detection chapter that Santer drafted for the IPCC makes many references to this study. More than anything else cited in Chapter 8, it is the spoiler of all attribution claims, whether from pattern studies, or from the analysis of the global mean. It is the principal basis for  the Chapter 8 conclusion that. . .

. . .no study to date has both detected a significant climate change and positively attributed all or part of that change to anthropogenic causes.

For the second assessment, the final meeting of the 70-odd Working Group 1 lead authors . . . was set to finalise the draft Summary for Policymakers, ready for intergovernmental review. The draft Houghton had prepared for the meeting was not so sceptical on the detection science as the main text of the detection chapter drafted by Santer; indeed it contained a weak detection claim.

This detection claim appeared incongruous with the scepticism throughout the main text of the chapter and was in direct contradiction with its Concluding Summary. It represented a change of view that Santer had only arrived at recently due to a breakthrough in his own ‘fingerprinting’ investigations. These findings were so new that they were not yet published or otherwise available, and, indeed, Santer’s first opportunity to present them for broader scientific scrutiny was when Houghton asked him to give a special presentation to the meeting of lead authors.

However, the results were also challenged at this meeting: Santer’s fingerprint finding and the new detection claim were vigorously opposed by several experts in the field.

On the first day of the Madrid session of Working Group 1 in November 1995, Santer again gave an extended presentation of his new findings, this time to mostly non-expert delegates. When he finished, he explained that because of what he had found, the chapter was out of date and needed changing. After some debate John Houghton called for an ad-hoc side group to come to agreement on the detection issue in the light of these important new findings and to redraft the detection passage of the Summary for Policymakers so that it could be brought back to the full meeting for agreement. While this course of action met with general approval, it was vigorously opposed by a few delegations, especially when it became clear that Chapter 8 would require changing, and resistance to the changes went on to dominate the three-day meeting. After further debate, a final version of a ‘bottom line’ detection claim was decided:

The balance of evidence suggests a discernible human influence on global climate.

All of this triggered accusations of ‘deception’:

An opinion editorial written by Frederick Seitz, ‘Major deception on “global warming”’, appeared in the Wall Street Journal on 12 June 1996.

This IPCC report, like all others, is held in such high regard largely because it has been peer-reviewed. That is, it has been read, discussed, modified and approved by an international body of experts. These scientists have laid their reputations on the line. But this report is not what it appears to be—it is not the version that was approved by the contributing scientists listed on the title page. In my more than 60 years as a member of the American scientific community, including service as president of both the NAS and the American Physical Society, I have never witnessed a more disturbing corruption of the peer-review process than the events that led to this IPCC report.

When comparing the final draft of Chapter 8 with the version just published, he found that key statements sceptical of any human attribution finding had been changed or deleted. His examples of the deleted passages include:

  • ‘None of the studies cited above has shown clear evidence that we can attribute the observed [climate] changes to the specific cause of increases in greenhouse gases.’
  • ‘No study to date has positively attributed all or part [of the climate change observed to date] to anthropogenic [manmade] causes.’
  • ‘Any claims of positive detection of significant climate change are likely to remain controversial until uncertainties in the total natural variability of the climate system are reduced.’

On 4 July, Nature finally published Santer’s human fingerprint paper. In Science, Richard Kerr quoted Barnett saying that he is not entirely convinced that the greenhouse signal had been detected and that there remain ‘a number of nagging questions’. Later in the year a critique striking at the heart of Santer’s detection claim would be published in reply.

The IPCC’s manufactured consensus

What we can see from all this activity by scientists in the close vicinity of the second and third IPCC assessments is the existence of a significant body of opinion that is difficult to square with the IPCC’s message that the detection of the catastrophe signal provides the scientific basis for policy action.

The scientific debate on detection and attribution was effectively quelled by the IPCC Second Assessment Report:

Criticism would continue to be summarily dismissed as the politicisation of science by vested interests, while the panel’s powerful political supporters would ensure that its role as the scientific authority in the on-going climate treaty talks was never again seriously threatened.

And of course the ‘death knell’ to scientific arguments concerned about detection was dealt by the Third Assessment Report, in which the MBH Hockey Stick analysis of Northern Hemisphere paleoclimates effectively eliminated the existence of a hemispheric medieval warm period and Little Ice Age, ‘solving’ the detection conundrum.

JC reflections

Bernie Lewin’s book provides a very important and well-documented account of the context and early history of the IPCC.

I was discussing Lewin’s book with Garth Paltridge, who was involved in the IPCC during the early years; he emailed this comment:

I am a bit upset because I was in the game all through the seventies to early nineties, was at a fair number of the meetings Lewin talked about, spent a year in Geneva as one of the “staff” of the early WCRP, another year (1990) as one of the staff of the US National Program Office in the Washington DC, met most of the characters he (Lewin) talked about…… and I simply don’t remember understanding what was going on as far as the politics was concerned.  How naive can one be??  Partly I suspect it was because lots of people in my era were trained(??) to deliberately ignore, and/or laugh at, all the garbage that was tied to the political shenanigans of international politics in the scientific world. Obviously the arrogance of scientists can be quite extraordinary!

Scientific scepticism about AGW was alive and well prior to 1995; it took a nose-dive following publication of the Second Assessment Report, and was then dealt what was hoped to be a fatal blow by the Third Assessment Report and the promotion of the Hockey Stick.

A rather flimsy edifice for a convincing, highly-confident attribution of recent warming to humans.

I think Bernie Lewin is correct in identifying the 1995 meeting in Madrid as the turning point. It was John Houghton who inserted the attribution claim into the draft Summary for Policymakers, contrary to the findings in Chapter 8. Ben Santer typically gets ‘blamed’ for this, but it is clearly Houghton who wanted and enabled it, so that he and the IPCC could maintain a seat at the big policy table in the treaty process.

One might forgive the IPCC leaders for overplaying their hand in 1995, when they were dealing with new science and a very challenging political situation. However, it is the Third Assessment Report where Houghton’s shenanigans with the Hockey Stick really reveal what was going on (including the selection of recent Ph.D. recipient Michael Mann as lead author when he was not nominated by the U.S. delegation). The Hockey Stick got rid of that ‘pesky’ detection problem.

I assume that the rebuttal of the AGW ‘true believers’ to all this is that politics are messy, but look, the climate scientists were right all along, and the temperatures keep increasing. Recent research increases confidence in an attribution that we have ‘known’ for decades.

Well, increasing temperatures say nothing about the causes of climate change. Scientists are still debating the tropical upper troposphere ‘hot spot’, which was the ‘smoking gun’ identified by Santer in 1995 [link]. And there is growing evidence that natural variability on decadal to millennial time scales is much larger than previously thought (and larger than climate model simulations) [link].

I really need to do more blog posts on detection and attribution; I will do my best to carve out some time.

And finally, this whole history seems to violate the Mertonian norm of universalism:

universalism: scientific validity is independent of the sociopolitical status/personal attributes of its participants

Imagine how all this would have played out if Pierre Morel or John Zillman had been Chair of WG1, or if Tom Wigley or Tim Barnett or John Christy had been Coordinating Lead Author of Chapter 8, and what climate science would look like today.

I hope this history of manufacturing consensus gives rational people reason to pause before accepting arguments from consensus about climate change.

Storm Eleanor’s “100 MPH Winds” – Fake News From The Telegraph

By Paul Homewood | Not A Lot Of People Know That | January 3, 2018

Storm Eleanor has lashed the UK with violent storm-force winds of up to 100mph, leaving thousands of homes without power and hitting transport links.

Gusts of 100mph were recorded at Great Dun Fell in Cumbria at 1am.

Wow! Hurricane force winds, as has been reported elsewhere.

Only one slight problem though. Great Dun Fell is the second-highest mountain in England’s Pennines, and the weather station sits at the very top, at an altitude of 847m.

Even then, mean wind speeds only reached 75 mph.

At nearby Warcop, just seven miles away and at an altitude of 224m, wind speed never got above 29 mph, a “strong breeze” on the Beaufort Scale.

This all comes from a Press Association report, which in turn appears to have been fed by the Met Office.

Why the Met Office should decide to deliberately mislead the public is anybody’s guess.

The Telegraph goes on to mention that 77mph gusts were recorded in High Bradfield, South Yorkshire.

I live 5 miles away from High Bradfield, and it was no more than a bit windy. So it won’t come as any surprise that High Bradfield is also a high-altitude site, up in the Peak District at 395m.

The nearest site with up-to-date data, according to the Met Office, is Watnall, 32 miles away in Nottinghamshire.

There, wind speeds only reached 24 mph, a “Fresh Breeze” on the Beaufort Scale.

Even in Southern Scotland, the area worst affected in Britain, where the Met Office reported gusts of 72 mph high up on exposed cliffs above the Solway near Dundrennan, the mean wind speed peaked at 54 mph, still only a “Strong Gale”.
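
As a rough cross-check of the categories quoted above: the Beaufort scale classifies mean wind speed, not gusts. Below is a minimal Python sketch, assuming standard Beaufort bands converted to approximate mph thresholds (an assumed conversion, not a Met Office figure), that maps the mean speeds reported in this post to their descriptors.

    # Approximate Beaufort bands for MEAN wind speed; upper bounds in mph
    # (assumed conversions of the standard scale). Gusts are not classified this way.
    BEAUFORT_BANDS = [
        (1, "calm"), (4, "light air"), (8, "light breeze"), (13, "gentle breeze"),
        (19, "moderate breeze"), (25, "fresh breeze"), (32, "strong breeze"),
        (39, "near gale"), (47, "gale"), (55, "strong gale"), (64, "storm"),
        (73, "violent storm"),
    ]

    def beaufort(mean_mph: float) -> str:
        """Return the Beaufort descriptor for a mean wind speed in mph."""
        for upper, name in BEAUFORT_BANDS:
            if mean_mph < upper:
                return name
        return "hurricane force"

    # Mean speeds quoted in this post:
    for site, mph in [("Warcop", 29), ("Watnall", 24), ("Dundrennan", 54)]:
        print(f"{site}: {mph} mph -> {beaufort(mph)}")

On these assumed thresholds the quoted figures land where the post places them: 29 mph is a strong breeze, 24 mph a fresh breeze and 54 mph a strong gale, all well short of the violent storm band (roughly 64 to 72 mph mean) implied by the headline.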

The headline claim that Storm Eleanor has lashed the UK with violent storm-force winds of up to 100mph is quite fraudulent.

Word of the Day…

By Mark Doran | January 3, 2018

If there’s one thing that the West’s state-corporate media loves to report, it’s public protest within a non-compliant country — people demonstrating against a government that has refused to roll over in the face of US aggression and greed.

If you’re in the habit of examining these media reports, you’ll often find that there’s a particular word which gets used a lot.

Here are a few highly topical examples; see if you can work out which word it is:

Iranians protesting the country’s strained economy gathered in Tehran and another major city on Friday, for the second day of spontaneous, unsanctioned demonstrations […] (US, Associated Press, via Washington Post, 29 Dec 2017)

A wave of spontaneous protests over Iran’s weak economy swept into Tehran on Saturday, with college students and others chanting against the government… (UK, Associated Press, via Mail Online, 30 Dec 2017)

Unauthorized, spontaneous protests engulfed Iran’s major cities for a third straight day on Saturday as what started out as demonstrations over rising prices seem to have taken a decidedly anti-government tone. (Slate.com, 30 Dec 2017)

Pro-government Iranians rallied in Tehran Saturday following spontaneous angry protests in the capital and other major cities. (US, Fox News with Associated Press, 30 Dec 2017)

A relatively small protest on Thursday in Mashhad, Iran’s second largest city . . . unexpectedly gave impetus to a wave of spontaneous protests spreading across provinces. (UK, Guardian, 31 Dec 2017)

Protests seem to be spontaneous and lack a clear leader. (Australia, ABC Radio Australia, 1 Jan 2018)

Yes: the Word of the Day is spontaneous.

As far as our state-corporate media and its ubiquitous anti-journalism are concerned, this is one of the most fascinating adjectives we ever see. Let’s take a moment to examine its use…

For a start, how would anyone really know — and so quickly, too! — that these foreign protests, these far-away demonstrations were all ‘spontaneous’? Are thousands of protestors across Iran currently in touch with hundreds of Western journalists — and constantly insisting on the utter spontaneity of everything they do?

No, they aren’t. And even if they were, why would anyone with any sense believe they were telling the truth?

The reality is, of course, that ‘spontaneous’ is a propaganda word, purely manipulative. It’s there to achieve three different but related aims — every one of which serves the imperialist agendas of the Western elites.

First, it helps to create the encouraging impression of an Official Enemy in Deep Trouble. If the media unites in painting a given set of protests as ‘spontaneous‘, then the illusion can be manufactured that ‘the population as a whole‘ is ‘angrily turning against‘ the obstructive government that the West is so selfishly anxious to see removed. ‘Clearly, this vile regime is tottering! Stay focused, everyone! Our corporations will be gang-raping the place in no time!

Secondly, ‘spontaneous’ protests are by far the best kind when it comes to ‘justifying’ illegal and destructive ‘intervention’ in a non-compliant country. How ‘desperate‘ an oppressed population must be if it ‘takes to the streets’ in ‘spontaneous protests’! How ‘close to the edge‘ those people must feel to be ‘finally overcoming their fear‘ and ‘actually calling for change‘! Those people can’t take much more of this! For God’s sake, we have to do something! How about we try more economic warfare — plus humanitarian bombing? Agreed…?

Thirdly, it’s a word that’s designed to take the most important thought of all … and drive it far away from everyone’s mind. For what, ultimately, the word ‘spontaneous‘ says is: ‘Do not for a moment consider the probability that this is happening as part of a carefully co-ordinated and externally funded regime-change operation. Don’t even think about it! It’s all just SPONTANEOUS, d’you hear!

And if you won’t listen to me, pay attention to Nikki Haley, the Novelty Talking Insect currently doubling as the Trump Administration’s ‘Ambassador to the United Nations’…

See…?

For the rest — and just in case anyone still refuses to believe how indispensable a weapon is the word ‘spontaneous’ in the armoury of the modern journalist-impersonator — note how and when the imprimatur is withheld.

On the one hand, when a Western media trusty encounters what might be a public demonstration of support for an Official Enemy, ‘spontaneity’ will be specifically denied — sometimes even before you can say Nick Jack Robinson…

Then, on the other hand, there’s what happens when people in the Proudly Democratic West decide to protest about the actions or policies of their own governing elites. For, if a protest or demonstration is happening whose scale and importance cannot altogether be denied by our state-corporate media, the word ‘spontaneous’ simply won’t be in evidence: it would be too legitimating

A Tale of Two Americas: Where the Rich Get Richer and the Poor Go to Jail

By John W. Whitehead | The Rutherford Institute | January 3, 2018

“It is said that no one truly knows a nation until one has been inside its jails. A nation should not be judged by how it treats its highest citizens, but its lowest ones.” ― Nelson Mandela

This is the tale of two Americas, where the rich get richer and the poor go to jail.

Aided and abetted by the likes of Attorney General Jeff Sessions—a man who wouldn’t recognize the Constitution if it smacked him in the face—the American dream has become the American scheme: the rich are getting richer and more powerful, while anyone who doesn’t belong to the power elite gets poorer and more powerless to do anything about the nation’s steady slide towards fascism, authoritarianism and a profit-driven police state.

Not content to merely pander to law enforcement and add to its military largesse with weaponry and equipment designed for war, Sessions has made a concerted effort to expand the police state’s power to search, strip, seize, raid, steal from, arrest and jail Americans for any infraction, no matter how insignificant.

Now Sessions has given state courts the green light to resume their practice of jailing individuals who are unable to pay the hefty fines imposed by the American police state. In doing so, Sessions has once again shown himself to be not only a shill for the Deep State but an enemy of the people.

First, some background on debtors’ prisons, which jail people who cannot afford to pay the exorbitant fines imposed on them by courts and other government agencies.

Congress banned debtors’ prisons in 1833.

In 1983, the U.S. Supreme Court ruled the practice to be unconstitutional under the Fourteenth Amendment’s Equal Protection clause.

“Despite prior attempts on the federal level and across the country to prevent the profound injustice of locking people in cages because they are too poor to pay a debt,” concludes The Atlantic, “the practice persists every day.”

Where things began to change, according to The Marshall Project, was with the rise of “mass incarceration.” As attorney Alec Karakatsanis stated, “In the 1970s and 1980s, we started to imprison more people for lesser crimes. In the process, we were lowering our standards for what constituted an offense deserving of imprisonment, and, more broadly, we were losing our sense of how serious, how truly serious, it is to incarcerate. If we can imprison for possession of marijuana, why can’t we imprison for not paying back a loan?”

By the late 1980s and early 90s, “there was a dramatic increase in the number of statutes listing a prison term as a possible sentence for failure to repay criminal-justice debt.” During the 2000s, the courts started cashing in big-time “by using the threat of jail time – established in those statutes – to squeeze cash out of small-time debtors.”

Fast-forward to the present day which finds us saddled with not only profit-driven private prisons and a prison-industrial complex but also, as investigative reporter Eli Hager notes, “the birth of a new brand of ‘offender-funded’ justice [which] has created a market for private probation companies. Purporting to save taxpayer dollars, these outfits force the offenders themselves to foot the bill for parole, reentry, drug rehab, electronic monitoring, and other services (some of which are not even assigned by a judge). When the offenders can’t pay for all of this, they may be jailed – even if they have already served their time for the offense.”

Follow the money trail. It always points the way.

Whether you’re talking about the government’s war on terrorism, the war on drugs, or some other phantom danger dreamed up by enterprising bureaucrats, there is always a profit-incentive involved.

The same goes for the war on crime.

At one time, the American penal system operated under the idea that dangerous criminals needed to be put under lock and key in order to protect society. Today, the flawed yet retributive American “system of justice” is being replaced by an even more flawed and insidious form of mass punishment based upon profit and expediency.

Sessions’ latest gambit plays right into the hands of those who make a profit by jailing Americans.

Sharnalle Mitchell was one such victim of a system for whom the plight of the average American is measured in dollars and cents. As the Harvard Law Review recounts:

On January 26, 2014, Sharnalle Mitchell was with her children in Montgomery, Alabama when police showed up at her home to arrest her. Mitchell was not accused of a crime. Instead, the police came to her home because she had not fully paid a traffic ticket from 2010. The single mother was handcuffed in front of her children (aged one and four) and taken to jail. She was ordered to either pay $2,800 or sit her debt out in jail at a rate of fifty dollars a day for fifty-nine days. Unable to pay, Mitchell wrote out the numbers one to fifty-eight on the back of her court documents and began counting days.

This is not justice.

This is yet another example of how greed and profit-incentives have not only perverted policing in America but have corrupted the entire criminal justice system.

As the Harvard Law Review concludes:

[A]s policing becomes a way to generate revenue, police start to “see the people they’re supposed to be serving not as citizens with rights, but as potential sources of revenue, as lawbreakers to be caught.” This approach creates a fugitive underclass on the run from police not to hide illicit activity but to avoid arrest for debt or seizure of their purportedly suspicious assets… In turn, communities … begin to see police not as trusted partners but as an occupying army constantly harassing them to raise money to pay their salaries and buy new weapons. This needs to end.

Unfortunately, the criminal justice system has been operating as a for-profit enterprise for years now, covertly padding its pockets through penalty-riddled programs aimed at maximizing revenue rather than ensuring public safety.

All of those seemingly hard-working police officers and code-enforcement officers and truancy officers and traffic cops handing out ticket after ticket after ticket: they’re not working to make your communities safer—they’ve got quotas to fill.

Same goes for the courts, which have come to rely on fines, fees and exorbitant late penalties as a means of increased revenue. The power of these courts, magnified in recent years through the introduction of specialty courts beyond your run-of-the-mill traffic court (drug court, homeless court, veterans court, mental health court, criminal court, teen court, gambling court, prostitution court, community court, domestic violence court, truancy court), is “reshaping the American legal system—with little oversight,” concludes the Boston Globe.

And for those who can’t afford to pay the court fines heaped on top of the penalties ($302 for jaywalking, $531 for an overgrown yard, or $120 for arriving a few minutes late to court), there’s probation (managed by profit-run companies that tack on their own fees, which are often more than double the original fine) or jail time (run by profit-run companies that charge inmates for everything from food and housing to phone calls at outrageous markups), which only adds to the financial burdens of those already unable to navigate a costly carceral state.

“When bail is set unreasonably high, people are behind bars only because they are poor,” stated former Attorney General Loretta Lynch. “Not because they’re a danger or a flight risk — only because they are poor. They don’t have money to get out of jail, and they certainly don’t have money to flee anywhere. Other people who do have the means can avoid the system, setting inequality in place from the beginning.”

In “Policing and Profit,” the Harvard Law Review documents in chilling detail the criminal justice system’s efforts to turn a profit at the expense of those who can least afford to pay, thereby entrapping them in a cycle of debt that starts with one minor infraction:

In the late 1980s, Missouri became one of the first states to let private companies purchase the probation systems of local governments. In these arrangements, municipalities impose debt on individuals through criminal proceedings and then sell this debt to private businesses, which pad the debt with fees and interest. This debt can stem from fines for offenses as minor as rolling through a stop sign or failing to enroll in the right trash collection service. In Ferguson, residents who fall behind on fines and don’t appear in court after a warrant is issued for their arrest (or arrive in court after the courtroom doors close, which often happens just five minutes after the session is set to start for the day) are charged an additional $120 to $130 fine, along with a $50 fee for a new arrest warrant and 56 cents for each mile that police drive to serve it. Once arrested, everyone who can’t pay their fines or post bail (which is usually set to equal the amount of their total debt) is imprisoned until the next court session (which happens three days a month). Anyone who is imprisoned is charged $30 to $60 a night by the jail. If an arrestee owes fines in more than one of St. Louis County’s eighty-one municipal courts, they are passed from one jail to another to await hearings in each town.

Ask yourself this: at a time when crime rates across the country remain at historic lows (despite Sessions’ inaccurate claims to the contrary), why does the prison population continue to grow?

The prison population continues to grow because of a glut of laws that criminalize activities that should certainly not be outlawed, let alone result in jail time. Overcriminalization continues to plague the country because of legislators who work hand-in-hand with corporations to adopt laws that favor the corporate balance sheet. And when it comes to incarceration, the corporate balance sheet weighs heavily in favor of locking up more individuals in government-run and private prisons.

As Time reports, “The companies that build and run private prisons have a financial interest in the continued growth of mass incarceration. That is why the two major players in this game—the Corrections Corporation of America and the GEO Group—invest heavily in lobbying for punitive criminal justice policies and make hefty contributions to political campaigns that will increase reliance on prisons.”

It’s a vicious cycle that grows more vicious by the day.

According to The Atlantic, “America spends $80 billion a year incarcerating 2.4 million people.” But the costs don’t end there. “When someone goes to prison, nearly 65 percent of families are suddenly unable to pay for basic needs such as food and housing… About 70 percent of those families are caring for children under the age of 18.”

Then there are the marked-up costs levied against the inmate by private companies that provide services and products to government prisons. Cereal and soup for five times the market price. $15 for a short phone call.

The Center for Public Integrity found that “prison bankers collect tens of millions of dollars every year from inmates’ families in fees for basic financial services. To make payments, some forego medical care, skip utility bills and limit contact with their imprisoned relatives… Inmates earn as little as 12 cents per hour in many places, wages that have not increased for decades. The prices they pay for goods to meet their basic needs continue to increase.”

Worse, as human rights attorney Jessica Jackson points out, “the fines and fees system has turned local governments into the equivalent of predatory lenders.” For instance, Jackson cites:

Washington state charges a 12% interest rate on all its criminal debt. Florida adds a 40% fee that goes into the pockets of a private collections agency. In California, penalties can raise a $100 fine to $490, or $815 if the initial deadline is missed. A $500 traffic ticket can actually cost $1,953, even if it is paid on time. And so we are left with countless tales of lives ruined—people living paycheck to paycheck who cannot afford a minor fine, and so face ballooning penalties, increasing amounts owed, a suspended license, jail time, and being fired from their jobs or unable to find work.

This isn’t the American Dream I grew up believing in.

This certainly isn’t the American Dream my parents and grandparents and those before them worked and fought and sacrificed to achieve.

This is a cold, calculated system of profit and losses.

Now you can shrug all of this away as a consequence of committing a crime, but that just doesn’t cut it. Especially not when average Americans are being jailed for such so-called crimes as eating SpaghettiOs (police mistook them for methamphetamine), not wearing a seatbelt, littering, jaywalking, having homemade soap (police mistook the soap for cocaine), profanity, spitting on the ground, farting, loitering and twerking.

There is no room in the American police state for self-righteousness. Not when we are all guilty until proven innocent.

As I make clear in my book Battlefield America: The War on the American People, this is no longer a government “of the people, by the people, for the people.”

It is fast becoming a government “of the rich, by the elite, for the corporations,” and its rise to power is predicated on shackling the American taxpayer to a debtors’ prison guarded by a phalanx of politicians, bureaucrats and militarized police with no hope of parole and no chance for escape.

ABOUT JOHN W. WHITEHEAD

Constitutional attorney and author John W. Whitehead is founder and president of The Rutherford Institute. His new book Battlefield America: The War on the American People (SelectBooks, 2015) is available online at http://www.amazon.com. Whitehead can be contacted at johnw@rutherford.org.

As Israel prepares to annex most of the West Bank, Abbas bleats about ‘incitement’

By Ramona Wadi | MEMO | January 2, 2018

Palestinian Authority leader Mahmoud Abbas should be aware by now that he can no longer be taken seriously. As evidence of a shrinking Palestine continues to accumulate, Israel and the international community are well aware of this and are applying two complementary strategies. While Israel passes legislation to annex areas of the occupied West Bank colonised by illegal Jewish settlers, the international community has quietly departed from the scene of its recent condemnations over the US-approved Israeli appropriation of Jerusalem.

The legislation was described by Israel’s Public Security Minister Gilad Erdan as a “moral right and obligation towards our settler brothers,” indicating the dependency of the colonial entity upon the expansion of its settler-colonies for its own survival. This prompted Abbas to make one of his weakest statements ever: “We shall make important decisions during 2018, including regarding legal avenues, in order to hold Israel accountable for its grave and systematic violations of international law, and to revisit agreements signed with Israel.”

A leader with decolonisation in mind would not stoop so low. Decades have proven that Israel cannot be held accountable and the international community is adamant that it will not try to reverse this impunity. Indeed, UN votes to one side (they mean little), the world is equally assertive that Palestinians should remain tethered to Israeli hegemony until there is no further need to sustain that narrative and the ethnic cleansing of Palestine is complete.

Nothing that Abbas says or does in 2018 can be deemed to be an “important decision”. Giving a hint of what he has in mind, the PA leader called upon the international community “to look at the Israeli incitement against Palestinian rights that is particularly recurrent among members of the government coalition.”

There are many weaknesses in Abbas’s statement. First of all, the international community does not need reminders; it is, quite simply and willfully, ignoring Israel’s violations of international law. To crawl and beg continuously in front of a complicit international community has harmed the people of Palestine in the long run; while Israel is rapidly executing its expansionist plans (and executing Palestinians extrajudicially in the process), Abbas is misinterpreting the whole colonial project as a series of isolated violations which he loosely labels “incitement”.

This raises the question of whether it is still correct to speak of Israeli incitement. Given the impunity that Israel enjoys, it is imperative that incitement is analysed critically and specified, rather than used as a convenient term to describe actions taken by Israel which result in the loss of Palestinian rights, land and lives. The prelude to such loss is indeed incitement, but it was initiated decades ago. For Abbas to speak of it as if it were something new is a bonus for Israel. Palestinians are witnessing a denial of the current political violence by the leaders who claim to represent them.

Hence, Abbas would do well to refrain from redundant statements which entrench the politically inferior position imposed upon him by Israel, the PA and the international community. If Israel’s legislation paves the way for the physical end of the two-state compromise, why are Abbas and members of the international community still insisting upon what is clearly an obsolete paradigm as the “only” solution? The answer is clear to anyone with eyes and the will to see: by adhering to the concept, Palestinian leaders and the UN are taking several steps backwards and thus buying time for Israel to complete its obliteration of Palestine unhindered. This tactic has been obvious for decades. As Israel prepares to annex most of the West Bank, Abbas bleats about “incitement”; he really can’t be taken seriously anymore.

How the West is misusing an image to represent the Iranian protests

By Paul Antonopoulos – Fort Russ News – January 2, 2018

TEHRAN – The above image has become synonymous with the Iranian protests that have engulfed most regions of Iran for the past week. However, there remains one problem: the original image has nothing to do with the current protests at all.

The original photo shows a woman with a hijab on a stick in a defiant moment as she challenges the law that makes it compulsory for women to wear a hijab in Iran.

However, this photo was taken before the current protests even began. Although the Islamic Republic forces women to wear the hijab, enforcement has been slowly liberalizing, with Tehran announcing just days before the protests began that it would no longer enforce the law in this regard.

However, despite the current protests being about economic reform and a clampdown on corruption, Western war enablers, particularly so-called activists and the Western media, have been widely spreading this image as a symbol of a struggle against the regime that exists only in their own minds, and not in the general consensus of Iranians, nor among the majority of those protesting.

As Israeli geopolitical expert Michael A. Horowitz acknowledges, “The only thing this new ‘symbol’ [the image], largely imposed from the outside [the West], does represent is some form of ‘wishful thinking’ from outside observers on what they’d want the current protest movement to be.”

Western war hawks, activists and media alike are all trying to portray the Iranian protests as a push for regime change, but those seeking it remain only a small segment of the current protesters. However, no amount of Western “wishful thinking”, as Horowitz correctly asserts, will change the fact that the majority of women currently protesting come from conservative segments of Iran.

It has been found that Saudi Arabia has tweeted more about the Iranian protests than people within Iran have themselves, with around three-quarters of all tweets about the protests coming from outside the Islamic Republic.

Therefore, it can be seen that the great pushers of the protests are mostly coming from outside the country: another attempted colored revolution that will fail, just as the imperialists failed in Venezuela last year.

Iran’s regime faces moment of truth

By M K Bhadrakumar | Indian Punchline | January 3, 2018

An unexpected side effect of the ongoing unrest in Iran is that it will consolidate the Turkish-Iranian entente in regional politics. The Turkish leadership has openly reached out to President Hassan Rouhani. Following up on the contact between the two foreign ministers on Tuesday and the statement by the Turkish Foreign Ministry, President Recep Erdogan telephoned Rouhani today to express Turkey’s solidarity.

While talking to a group of editors in Ankara today, Foreign Minister Mevlut Cavusoglu gently ticked off the US and Israel saying, “There are only two [world] figures who support protestors: Trump and Netanyahu. We are against such foreign interventions.” Cavusoglu added, “I have not seen any other world leader making such supportive statements. You may not like the regime but Iran’s president and government, apart from the religious leader, can only be changed through elections. And there are no objections about the security of elections [in Iran].”

Turkey did not have to go this far but it senses an imperative need to intervene. Turkey’s main concern will be Iran’s stability. Having said that, although Turkey is voicing open support for Iran in the current difficult period, it cannot be oblivious of the strong undercurrents playing out in Iran’s political economy. Erdogan has shown empathy for Rouhani’s approach – allowing the protests to take place peacefully without any intimidation by the state security agencies but effectively curbing any violent incidents. Rouhani told Erdogan that he hoped that the protests would end “within a few days.”

Even so, this ought to be a moment of truth for the Iranian regime. For the first time, perhaps, the unrest is largely among the downtrodden people who are losing hope in a better future under the existing regime. There is widespread resentment among poor people that the resources of the country are siphoned off by the ruling elites. The draft budget that was presented to the Majlis last month itself flagged a shocking misallocation of resources – Al-Mustafa International University (which propagates Shi’ism worldwide) has a budget that exceeds the combined budgetary allocation for the Ministry of Roads and Urban Development, Ministry of Labour and Social Affairs and the National Organization for Food and Drug.

There were high expectations among the poor people that following the signing of the nuclear deal in July 2015, the economic conditions would improve. They were jubilant when Zarif returned home after the nuclear deal and spontaneously thronged the airport to receive him. But two years down the line, these hopes have been dashed. The bazaar gossip is that billions of dollars in blocked funds lying in western banks that were returned to post-sanctions Iran have either been squandered away in overseas enterprises (Syria, Iraq, Lebanon, Yemen, etc.) or simply misappropriated by the religious establishment.

The current unrest is doomed to fizzle out. The absence of the middle class (which has been in the vanguard of all revolutions in history) guarantees it. Again, the lack of leadership among protestors means that “fatigue” will set in sooner or later. The wretched of the earth do not have the luxury of protesting till eternity instead of eking out their daily livelihood to keep body and soul together.

What is the way forward? The people, clearly, want “change”. Arguably, they no longer have faith in the so-called “reformists”, either. On the other hand, Rouhani faces dogged opposition from entrenched interest groups who masquerade as “conservatives” or “principlists”. If the protests in 2009 (led by the middle class) were about political empowerment and had a narrow social base, this time around, the demand is ‘Where is my money?’ and the social base lies among the downtrodden sections of society. Shockingly enough, the cry “Allahu Akbar” (God is great) was conspicuous by its absence throughout this turmoil, although it has been the signal tune of Iranian street politics ever since the Islamic Revolution in 1979.

The regime may be right up to a point in alleging that there has been foreign interference. But, hopefully, it will not become the alibi for postponing reforms. Certainly, the regime is in no immediate danger. But a challenging period lies ahead. Iran has crucial choices to make. Iran’s foreign policies should become an extension of its national policies, attuned to its development agenda – economic growth, job creation and alleviation of poverty – and to the creation of a just society. Geopolitics is not the priority for Iran today – it is nation-building.

The WannaCry Cyberattack: What the Evidence Says and Why the Trump Administration Blames North Korea

By Gregory Elich | CounterPunch | January 3, 2018

On December 19, in a Wall Street Journal editorial that drew much attention, Homeland Security Advisor Tom Bossert asserted that North Korea was “directly responsible” for the WannaCry cyberattack that struck more than 300,000 computers worldwide. The virus encrypted files on infected computers and demanded payment in return for supposedly providing a decryption key to allow users to regain access to locked files. Bossert charged that North Korea was “using cyberattacks to fund its reckless behavior and cause disruption across the world.” [1]

At a press conference on the same day, Bossert announced that the attribution was made “with evidence,” and that WannaCry “was directed by the government of North Korea,” and carried out by “actors on their behalf, intermediaries.” The evidence that led the U.S. to that conclusion? Bossert was not saying, perhaps recalling the ridicule that greeted the FBI and Department of Homeland Security’s misbegotten report on the hacking of the Democratic National Committee. [2]

The centerpiece of the claim of North Korean culpability is the similarity in code between the Contopee malware, which opens backdoor access to an infected computer, and code in an early variant of WannaCry. [3]
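To make the phrase “similarity in code” concrete: analysts typically compare malware samples by extracting overlapping byte sequences (or disassembled functions) from each binary and measuring how much they share, which is roughly what fuzzy-hashing and binary-diffing tools automate. The sketch below is a minimal, hypothetical illustration of that idea using byte n-gram overlap; the sample data are placeholders, and this is not the actual analysis performed on the WannaCry and Contopee binaries.

```python
# Minimal sketch of code-similarity scoring via byte n-gram overlap.
# An illustration of the general technique, not the actual comparison
# performed on the WannaCry and Contopee samples.

def ngrams(data: bytes, n: int = 8) -> set:
    """Return the set of all overlapping n-byte sequences in a blob."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def similarity(sample_a: bytes, sample_b: bytes, n: int = 8) -> float:
    """Jaccard similarity of the two samples' n-gram sets (0.0 to 1.0)."""
    a, b = ngrams(sample_a, n), ngrams(sample_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

if __name__ == "__main__":
    # Stand-in byte strings; a real comparison would use code sections
    # extracted from the two binaries under analysis.
    sample_a = b"\x55\x8b\xec\x83\xec\x40" * 50 + b"\x90" * 20
    sample_b = b"\x55\x8b\xec\x83\xec\x40" * 50 + b"\xcc" * 20
    print(f"shared-code score: {similarity(sample_a, sample_b):.2%}")
```

Even a high score from a comparison like this shows only that code was shared, which, as the analysts quoted below point out, is not the same thing as showing who deployed it.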

Contopee has been linked to the Lazarus group, a cybercrime organization that some believe launched the Sony hack, based on the software tools used in that attack. Since North Korea is widely considered to be behind the cyberattack on Sony, at first glance that would appear to seal the argument.

It is a logical argument, but is it founded on valid premises? Little is known about Lazarus, aside from the operations that are attributed to it. The link between Lazarus and North Korea is a hypothesis based on limited evidence. It may or may not be true, but the apparent linkage is far weaker than mainstream media’s conviction would have one believe. Lazarus appears to be an independent organization, possibly based in China, which North Korea may or may not have contracted to perform certain operations. That does not necessarily mean that every action Lazarus performs – or indeed any action at all – is taken at North Korea’s behest.

In Bossert’s mind as well as that of media reporters, Lazarus – the intermediaries Bossert refers to – and North Korea are synonymous when it comes to cyber operations. North Korea gives the orders and Lazarus carries them out. James Scott, a senior fellow at the Institute for Critical Infrastructure Technology, notes that “speculation concerning WannaCry attributes the malware to the Lazarus Group, not to North Korea, and even those connections are premature and not wholly convincing. Lazarus itself has never been definitively proven to be a North Korean state-sponsored advanced persistent threat (APT); in fact, an abundance of evidence suggests that the Lazarus group may be a sophisticated, well-resourced, and expansive cyber-criminal and occasional cyber-mercenary collective.” Furthermore, Scott adds, the evidence used to tie Lazarus to North Korea, “such as an IP hop or some language indicators, are circumstantial and could even be intentional false flags” to misdirect investigators. [4]

Whether an association exists or not between Lazarus and North Korea has little meaning regarding a specific attack. Joseph Carson of Thycotic emphasizes “that it is important to be clear that [Lazarus] is a group and motives can change depending on who is paying. I have found when researching hacking groups they can one day be working for one government under one alias and another using a different alias. This means that association in cyberspace means nothing.” [5]

It is considered a particularly damning piece of evidence that some of the tools used in an early variant of WannaCry share characteristics with those deployed in the cyberattack on Sony. [6] However, there is ample cause for doubting North Korea’s role in the Sony hack, as I have written about before. [7] Following the Sony breach, IT businessman John McAfee revealed that he had contact with the group that attacked Sony. “It has to do with a group of hackers” motivated by dislike of the movie industry’s “controlling the content of art,” he said, and the FBI was wrong in attributing the attack to North Korea. [8]

If attribution of the Sony hack to North Korea does not hold up, then linkage based on tool usage falls apart.

Once malware is deployed, it often appears for sale on the Dark Web, where it can be purchased by cybercriminals. The reuse of code is a time-saving measure in building new threats. Indeed, malware can find its way onto the market quite rapidly, and almost as soon as WannaCry was wreaking havoc back in May, it was reported that “researchers are already finding variants” of WannaCry “in the wild.” [9]

According to Peter Stephenson of SC Media, “The most prevailing [theory] uses blocks of code that were part of known Korean hacks appearing in the WannaCry code as justification for pinning the attacks on NK. That’s really not enough. These blocks of code are readily available in the underground and get reused regularly.” [10]

Commonality of tool usage means less than we are led to believe. “While malware may initially be developed and used by a single actor,” Digital Shadows explains, “this does not mean that it will permanently remain unique to that actor. Malware samples might be accidentally or intentionally leaked, stolen, sold, or used in independent operations by individual members of the group.” [11]

“Shared code is not the same as attribution. Code can be rewritten and erased by anyone, and shared code is often reused,” observes Patrick Howell O’Neill of Cyberscoop. “The same technique could potentially be used to frame another group as responsible for a hack but, despite a lot of recent speculation, there is no definitive proof.” [12]

None of the shared code was present in WannaCry’s widespread attack on May 12. Although it is more likely than not that the same actor was behind the early variants of WannaCry and the May version, it is not certain. Alan Woodward, cybersecurity advisor to Europol, points out, “It is quite possible for even a relatively inexperienced group to obtain the malicious WannaCry payload and to have repackaged this. Hence, the only thing actually tying the May attacks to the earlier WannaCry attacks is the payload, which criminals often copy.” [13]

The most devastating component WannaCry utilized in its May 12 attack was EternalBlue, an exploit of a Windows vulnerability that was developed by the National Security Agency and leaked by the Shadow Brokers group. The NSA informed Microsoft of the vulnerability only after it learned of the software’s theft. According to Bossert, the NSA informs software manufacturers about 90 percent of the time when it discovers a vulnerability in operating software. It keeps quiet about the remaining ten percent so that it can “use those vulnerabilities to develop exploits for the purpose of national security for the classified work we do.” [14] Plainly put, the NSA intentionally leaves individuals and organizations worldwide exposed to potential security breaches so that it can conduct its own cyber operations. This is less than reassuring.

The May variant of WannaCry also implemented DoublePulsar, which is a backdoor implant developed by the NSA that allows an attacker to gain full control over a system and load executable malware.

The two NSA-developed components are what allowed WannaCry to turn virulent last May. Once loaded, the malware uses EternalBlue to infect every other vulnerable computer on the same network. At the same time, it generates many thousands of random external IP addresses and launches 128 threads at two-second intervals, probing each generated address for vulnerable computers it can exploit. [15]
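As a rough, harmless illustration of the propagation pattern just described, the following sketch simulates the reported cadence: batches of 128 worker threads probing randomly generated external IP addresses every two seconds. The vulnerability check is a stub that never touches the network; this is a simulation of the described behavior, not a reconstruction of the worm’s actual code.

```python
# Simulation of the scanning cadence described in the post-outbreak
# analyses: random external IPs probed by batches of worker threads on
# a timer. The probe is a stub -- nothing is scanned or exploited.
import random
import threading
import time

THREADS_PER_BATCH = 128   # thread count cited in the analysis above
BATCH_INTERVAL = 2.0      # seconds between batches, as described above

def random_public_ip() -> str:
    return ".".join(str(random.randint(1, 254)) for _ in range(4))

def probe(ip: str) -> None:
    # Stand-in for the real worm's SMB vulnerability check.
    vulnerable = False  # stub: never reports anything as vulnerable
    if vulnerable:
        print(f"would attempt infection of {ip}")

def run_batches(batches: int) -> None:
    for _ in range(batches):
        workers = [threading.Thread(target=probe, args=(random_public_ip(),))
                   for _ in range(THREADS_PER_BATCH)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()
        time.sleep(BATCH_INTERVAL)

if __name__ == "__main__":
    run_batches(3)  # a few batches are enough to show the pattern
```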

China and Russia were among the nations that were most negatively impacted by the malware. [16] WannaCry initially targeted Russian systems, which would seem an odd thing for North Korea to do, given that Russia and China are the closest things it has to allies. [17]

Digital Shadows reports that “the malware appeared to spread virtually indiscriminately with no control by its operators,” and a more targeted approach “would have been more consistent with the activities of a sophisticated criminal outfit or a technically-competent nation-state actor.” [18]

Flashpoint analyzed the ransom note that appeared on infected computers. There were two Chinese versions and an English version. The Chinese texts were written by someone who is fluent, and the English by someone with a strong but imperfect command of the language. Ransom notes in other languages were apparently translated from the English version using Google Translate. [19] It has been pointed out that this fact does not disprove the U.S. attribution to North Korea, as that nation could have hired Chinese cybercriminals. True enough, but then North Korea has no unique ability to do so. If so inclined, anyone could contract Chinese malware developers, or cybercriminals could act on their own.

Lazarus and North Korean cyber actors have a reputation for developing sophisticated code. The hallmark of WannaCry, however, is its sheer sloppiness, necessitating the release of a series of new versions in fairly quick succession. Alan Woodward believes that WannaCry’s poorly designed code reveals that it had been written by “a less than experienced malware developer.” [20]

Important aspects of the code were so badly bungled that it is difficult to imagine how any serious organization could be responsible.

IT security specialists use virtual machines, or sandboxes, to safely test and analyze malware code. A well-designed piece of malware includes logic to detect the kind of environment it is executing in and to alter its behavior in a virtual machine (VM) environment so that it appears benign. WannaCry was notably lacking in that regard. “The authors did not appear to be concerned with thwarting analysis, as the samples analyzed have contained little if any obfuscation, anti-debugging, or VM-aware code,” notes LogRhythm Labs. [21]
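For contrast, the sketch below shows the kind of rudimentary VM-awareness check that even modestly careful malware commonly includes and that analysts found missing from WannaCry. The artifacts it looks for (a hypervisor CPU flag, well-known virtual network adapter address prefixes) are generic examples chosen for illustration, not a description of any particular family’s logic.

```python
# Generic illustration of simple sandbox/VM detection -- the kind of
# check LogRhythm noted was absent from the WannaCry samples.
import uuid

# MAC address prefixes (OUIs) commonly assigned to virtual NICs.
VM_MAC_PREFIXES = ("00:05:69", "00:0c:29", "00:50:56",  # VMware
                   "08:00:27")                          # VirtualBox

def mac_looks_virtual() -> bool:
    mac = "{:012x}".format(uuid.getnode())
    mac = ":".join(mac[i:i + 2] for i in range(0, 12, 2))
    return mac.startswith(VM_MAC_PREFIXES)

def cpu_reports_hypervisor() -> bool:
    try:
        with open("/proc/cpuinfo") as f:   # Linux-only heuristic
            return "hypervisor" in f.read()
    except OSError:
        return False

if __name__ == "__main__":
    if mac_looks_virtual() or cpu_reports_hypervisor():
        print("Probable VM/sandbox: well-written malware would go quiet here.")
    else:
        print("No obvious virtualization artifacts found.")
```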

James Scott argues that “every WannaCry attack has lacked the stealth, sophistication, and resources characteristic of [Lazarus sub-group] Bluenoroff itself or Lazarus as a whole. If either were behind WannaCry, the attacks likely would have been more targeted, had more of an impact, would have been persistent, would have been more sophisticated, and would have garnered significantly greater profits.” The EternalBlue exploit was too valuable to waste “on a prolific and unprofitable campaign” like the May 12 WannaCry attack. By contrast, Bluenoroff “prefers to silently integrate into processes, extort them, and invisibly disappear after stealing massive fiscal gains.” [22] Bogdan Botezatu of Bitdefender agrees. “The attack wasn’t targeted and there was no clear gain for them. It’s doubtful they would use such a powerful exploit for anything else but espionage.” [23]

WannaCry included a “kill switch,” apparently intended as a poorly thought-out anti-VM feature. “For the life of me,” comments Peter Stephenson, “I can’t see why they might think that would work.” [24] When the software executes, it first attempts to connect to a hard-coded hostname that was unregistered, and it proceeds to run only if that connection fails. A cybersecurity researcher managed to disable WannaCry simply by registering the domain through NameCheap.com, easily shutting down its ability to infect any further computers. [25]
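The kill-switch logic described above amounts to a single connectivity test, sketched below with a placeholder domain (the actual hard-coded hostname is not reproduced here). The inverted check is the whole story: the malware keeps going only while the lookup fails, so registering the domain flipped the outcome for every subsequent infection.

```python
# Sketch of the kill-switch check described above. The domain is a
# placeholder, not the actual hard-coded hostname used by WannaCry.
import socket

KILL_SWITCH_DOMAIN = "example-unregistered-killswitch.invalid"  # placeholder

def kill_switch_active() -> bool:
    """Return True if the kill-switch domain resolves and accepts a connection."""
    try:
        with socket.create_connection((KILL_SWITCH_DOMAIN, 80), timeout=5):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    if kill_switch_active():
        print("Domain reachable: the malware would exit without doing anything.")
    else:
        print("Domain unreachable: the malware would proceed to run.")
```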

Once WannaCry infected a computer, it demanded a ransom of $300 in bitcoin to release the files it had encrypted. After three days, the price doubled. The whole point of WannaCry was to generate income, and it is here that the code was most inept.

Ideally, ransomware like WannaCry would use a new bitcoin address for each infected computer, to better preserve anonymity. Instead, WannaCry hard-coded just three addresses, which basically told authorities which wallets to monitor. [26] It is an astonishing botch.

Incredibly, WannaCry lacked the capability to automatically identify which victims had paid the ransom. That meant that determining the source of each payment required manual effort, a daunting task given the number of infected computers. [27] Inevitably, decryption keys were not sent to paying victims, and once word got out, there was no motivation for anyone else to pay.
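The design failure is easiest to see side by side. A competent ransomware campaign typically generates a fresh payment address for each victim, so an incoming payment identifies who paid and a decryption key can be released automatically; with only three shared, hard-coded addresses, WannaCry’s operators could do neither. The sketch below contrasts the two designs using fabricated placeholder addresses.

```python
# Contrast between per-victim payment addresses and WannaCry's
# hard-coded shared addresses. All addresses here are fabricated
# placeholders for illustration only.
import secrets

# WannaCry-style: three shared addresses baked into the binary.
HARDCODED_ADDRESSES = ["1ExampleAddrAAAA", "1ExampleAddrBBBB", "1ExampleAddrCCCC"]

def wannacry_style_address(victim_id: str) -> str:
    # Every victim is told to pay into one of the same three wallets, so
    # a payment cannot be matched to a victim without manual digging,
    # and investigators know exactly which wallets to watch.
    return HARDCODED_ADDRESSES[hash(victim_id) % len(HARDCODED_ADDRESSES)]

# Conventional design: one unique address per victim.
_victim_addresses = {}

def per_victim_address(victim_id: str) -> str:
    # A real campaign would derive a fresh wallet address per victim;
    # a random token stands in for that here.
    return _victim_addresses.setdefault(victim_id, "1Fake" + secrets.token_hex(12))

if __name__ == "__main__":
    for victim in ("host-a", "host-b"):
        print(victim, wannacry_style_address(victim), per_victim_address(victim))
```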

In James Scott’s assessment, “The WannaCry attack attracted very high publicity and very high law-enforcement visibility while inflicting arguably the least amount of damage a similar campaign that size could cause and garnering profits lower than even the most rudimentary script kiddie attacks.” Scott was incredulous over claims that WannaCry was a Lazarus operation. “There is no logical rationale defending the theory that the methodical [Lazarus], known for targeted attacks with tailored software, would suddenly launch a global campaign dependent on barely functional ransomware.” [28]

One would never know it from news reports, but cybersecurity attribution is rarely absolute. Hal Berghel, of the Department of Computer Science at the University of Nevada, comments on the “absence of detailed strategies to provide justifiable, evidence-based cyberattribution. There’s a reason for that: there is none. The most we have is informed opinion.”  The certainty with which government officials and media assign blame in high-profile cyberattacks to perceived enemies should at least raise questions. “So whenever a politician, pundit, or executive tries to attribute something to one group or another, our first inclination should always be to look for signs of attribution bias, cognitive bias, cultural bias, cognitive dissonance, and so forth. Our first principle should be cui bono: What agendas are hidden? Whose interests are being represented or defended? What’s the motivation behind the statement? Where are the incentives behind the leak or reportage? How many of the claims have been substantiated by independent investigators?” [29]

IT security specialist Graham Cluley raises an important question. “I think in the current hostile climate between USA and North Korea it’s not unhelpful to retain some skepticism about why this claim might have been made, and what may have motivated the claim to be made at the present time.” [30]

To all appearances, WannaCry was the work of amateurish developers who got hold of NSA software that allowed the malware to spread like wildfire, but their own code was so poorly written that it failed to monetize the effort to any meaningful degree.

WannaCry has its uses, though. The Trump administration’s public attribution is “more about the administration’s message that North Korea is a dangerous actor than it is about cybersecurity,” says Ross Rustici, head of Intelligence Research at Cybereason. “They’re trying to lay the groundwork for people to feel like North Korea is a threat to the homeland.” [31] It is part of a campaign by the administration to stampede the public into supporting harsh measures or possibly even military action against North Korea.

[1] Thomas P. Bossert, “It’s Official: North Korea is Behind WannaCry,” Wall Street Journal, December 19, 2017.

[2] “Press Briefing on the Attribution of the WannaCry Malware Attack to North Korea,” Whitehouse.gov, December 19, 2017.

[3] “WannaCry and Lazarus Group – the Missing Link?” SecureList, May 15, 2017.

[4] James Scott, “There’s Proof That North Korea Launched the WannaCry Attack? Not So Fast! – A Warning Against Premature, Inconclusive, and Distracting Attribution,” Institute for Critical Infrastructure Technology, May 23, 2017.

[5] Eduard Kovacs, “Industry Reactions to U.S. Blaming North Korea for WannaCry,” Security Week, December 22, 2017.

[6] “WannaCry: Ransomware Attacks Show Strong Links to Lazarus Group,” Symantec Official Blog, May 22, 2017.

[7] Gregory Elich, “Who Was Behind the Cyberattack on Sony?” Counterpunch, December 30, 2014.

[8] David Gilbert, Gareth Platt, “John McAfee: ‘I Know Who Hacked Sony Pictures – and it Wasn’t North Korea’,” International Business Times, January 19, 2015.

[9] Amanda Rousseau, “WCry/WanaCry Ransomware Technical Analysis,” Endgame, May 14, 2017.

[10] Peter Stephenson, “WannaCry Attribution: I’m Not Convinced Kim Dunnit, but a Russian…”, SC Media, May 21, 2017.

[11] Digital Shadows Analyst Team, “WannaCry: An Analysis of Competing Hypotheses,” Digital Shadows, May 18, 2017.

[12] Patrick Howell O’Neill, “Researchers: WannaCry Ransomware Shares Code with North Korean Malware,” Cyberscoop, May 15, 2017.

[13] Alan Woodward, “Attribution is Difficult – Consider All the Evidence,” Cyber Matters, May 24, 2017.

[14] Thomas P. Bossert, “It’s Official: North Korea is Behind WannaCry,” Wall Street Journal, December 19, 2017.

[15] Luke Somerville, Abel Toro, “WannaCry Post-Outbreak Analysis,” Forcepoint, May 16, 2017.

Sarah Maloney, “WannaCry / WCry /WannaCrypt Attack Profile,” Cybereason, May 16, 2017.

Rohit Langde, “WannaCry Ransomware: A Detailed Analysis of the Attack,” Techspective, September 26, 2017.

[16] Eduard Kovacs, “WannaCry Does Not Fit North Korea’s Style, Interests: Experts,” Security Week, May 19, 2017.

[17] “A Technical Analysis of WannaCry Ransomware,” LogRhythm, May 16, 2017.

[18] Digital Shadows Analyst Team, “WannaCry: An Analysis of Competing Hypotheses,” Digital Shadows, May 18, 2017.

[19] Jon Condra, John Costello, Sherman Chu, “Linguistic Analysis of WannaCry Ransomware Messages Suggests Chinese-Speaking Authors,” Flashpoint, May 25, 2017.

[20] Alan Woodward, “Attribution is Difficult – Consider All the Evidence,” Cyber Matters, May 24, 2017.

[21] Erika Noerenberg, Andrew Costis, Nathanial Quist, “A Technical Analysis of WannaCry Ransomware,” LogRhythm, May 16, 2017.

[22] James Scott, “There’s Proof That North Korea Launched the WannaCry Attack? Not So Fast! – A Warning Against Premature, Inconclusive, and Distracting Attribution,” Institute for Critical Infrastructure Technology, May 23, 2017.

[23] Eduard Kovacs, “WannaCry Does Not Fit North Korea’s Style, Interests: Experts,” Security Week, May 19, 2017.

[24] Peter Stephenson, “WannaCry Attribution: I’m Not Convinced Kim Dunnit, but a Russian…”, SC Media, May 21, 2017.

[25] Rohit Langde, “WannaCry Ransomware: A Detailed Analysis of the Attack,” Techspective, September 26, 2017.

[26] Jesse Dunietz, “The Imperfect Crime: How the WannaCry Hackers Could Get Nabbed,” Scientific American, August 16, 2017.

[27] Andy Greenberg, “The WannaCry Ransomware Hackers Made Some Major Mistakes,” Wired, May 15, 2017.

[28] James Scott, “WannaCry Ransomware & the Perils of Shoddy Attribution: It’s the Russians! No Wait, it’s the North Koreans!” Institute for Critical Infrastructure Technology, May 18, 2017.

[29] Hal Berghel, “On the Problem of (Cyber) Attribution,” Computer — IEEE Computer Society, March 2017.

[30] Scott Carey, “Should We Believe the White House When it Says North Korea is Behind WannaCry?” Computer World, December 20, 2017.

[31] John P. Mello Jr., “US Fingers North Korea for WannaCry Epidemic,” Tech News World, December 20, 2017.

Gregory Elich is on the Board of Directors of the Jasenovac Research Institute and the Advisory Board of the Korea Policy Institute. He is a member of the Solidarity Committee for Democracy and Peace in Korea, a columnist for Voice of the People, and one of the co-authors of Killing Democracy: CIA and Pentagon Operations in the Post-Soviet Period, published in the Russian language. He is also a member of the Task Force to Stop THAAD in Korea and Militarism in Asia and the Pacific. His website is https://gregoryelich.org Follow him on Twitter at @GregoryElich

January 3, 2018 Posted by | Deception, Fake News, Mainstream Media, Warmongering, Timeless or most popular | , , , | Leave a comment

Kim Jong-un Opens Contact Channel Between Two Koreas

Sputnik – 03.01.2018

North Korean leader Kim Jong-un has ordered the reopening of a contact channel between Pyongyang and Seoul to discuss issues related to the upcoming Olympic games in Pyeongchang, Ri Son Gwon, the head of North Korea’s agency handling inter-Korean affairs, said as quoted by the Yonhap news agency.

“I was instructed to open the Panmunjom [the shared border village] contact channel between North and South at 15:00 [3:30 p.m. in Seoul, 6:30 GMT] in order to settle issues related to hosting the PyeongChang Olympic Games, including sending [North Korea’s] delegation to the Games,” Ri said.

The official added that Pyongyang would closely work with Seoul on practical issues related to sending the country’s delegation to the upcoming sports event, based upon the leadership’s stance, and expressed the hope that the Olympics would be successful.

“We sincerely wish once again that the PyeongChang Olympic Games will be held successfully,” Ri said.

Cheong Wa Dae, South Korea’s presidential office, welcomed the announcement.

“I believe it signals a move toward an environment where communication will be possible at all times,” Yoon Young-chan, the chief presidential spokesman, told reporters, as quoted by the news agency.

Pyongyang’s statement came a day after Seoul proposed high-level discussions with the DPRK following Kim’s New Year’s address in which he had stressed he was willing to launch a dialogue with South Korea.

Meanwhile, US Envoy to the United Nations Nikki Haley warned in a statement on Tuesday that Washington would not recognize any talks between the two Koreas unless the nuclear issue is resolved and all of their nuclear weapons banned.

“North Korea can talk to anyone they want, but the United States is not going to recognize it or acknowledge it until they agree to ban the nuclear weapons that they have,” Haley said.

North Korean leader Kim Jong-un said in his New Year’s Day address that Pyongyang was ready to send its athletes to the 2018 Winter Olympics in Pyeongchang, South Korea, and expressed readiness to start talks with Seoul on the issue.

Seoul, in turn, has proposed holding high-level talks on January 9 in Panmunjom Village in the demilitarized zone between North Korea and South Korea.

The 2018 Winter Olympic Games will take place in Pyeongchang and two nearby cities, Gangneung and Jeongseon, in South Korea from February 9 to February 25. The South Korean resort city is located just 80 kilometers (50 miles) from the border with North Korea.

January 3, 2018 Posted by | Aletho News | , , | Leave a comment