No Matter How Beautiful the Model…
A serf’s musings on climate science.
All Nature faithfully.
So if it can’t or won’t agree with observation, if it’s GIGO – garbage in, garbage out – then no matter how beautiful the model, it’s wro-ong.
There’s a verse by Nietzsche that describes the art of painting and aptly applies to climate science if you substitute the word ‘models’ for ‘paints.’
‘All Nature faithfully’ – But by what feint
Can Nature be subdued to art’s constraint?
Her smallest fragment is still infinite!
And so he paints but what he likes in it.
What does he like? He likes, what he can paint.
And so, herewith, via writings by Professors Richard Lindzen, Ross McKitrick, W.J.R. Alexander and Judith Curry identifying problems with climate models, a serf’s musings on why those climate models, however beautiful they may seem, are fatally flawed.
If it can’t, or won’t, describe all Nature faithfully …
A little history of how science changed. Richard Lindzen, in his paper ‘Climate Science: is it currently designed to answer questions?’ (2008), looks at reasons why climate science won’t, and does not, seek to describe Nature faithfully … namely, as a consequence of 20th century politicization of science generally, and of climate science modeling specifically. Enter Government, stage left, with customary dead-hand effect.
Richard Lindzen traces the ways in which science has changed from its traditional practice involving ‘the creative opposition of theory and observation wherein each tests the other in such a manner as to converge on a better understanding of the world.’ (P2) He identifies, in the aftermath of WW2, a shifting paradigm: ‘gratitude’ for the achievements of science during the War and in the ensuing two decades gave way, as new discoveries lessened, to a new paradigm for the science community in the late 1960s, a paradigm of ‘fear’ – fear of the Soviet Union, fear of cancer, and so on. Lindzen observes that ‘fear,’ as an incentive structure for big government spending in science and expansion of bureaucratic structures for stakeholders, is more compelling than gratitude.
Some consequences of fear as a basis of support.
With the end of the Cold War, there arose a need to look for other fear incentives, which soon put the focus on the environment. Enter, also stage left, Anthropogenic Global Warming … Climate Change Science, a small and immature field of science depending on fear-based support, which makes it particularly vulnerable to fear-based corruption.
Richard Lindzen points to ways this is actually taking place in climate science. One consequence of the big spending paradigm in science appears to be that less emphasis is given to theory, because of its intrinsic difficulty and small scale, and more emphasis, instead, to model simulation (which calls for large capital investment in computation) and to the adoption of large programs unconstrained by specific goals. There is more to be gained by perpetuating problems than by solving them.
‘In brief, we have the new paradigm where simulation and programs have replaced theory and observation, where government largely determines the nature of scientific activity, and where the primary role of professional societies is the lobbying of the government for special advantage.’ (P4)
‘Perhaps,’ says Lindzen, ‘the most impressive exploitation of climate science for political purposes has been the creation of the Intergovernmental Panel on Climate Change (IPCC) by two UN agencies, UNEP (United Nations Environmental Program) and WMO (World Meteorological Organization) and the agreement of all major countries at the 1992 Rio Conference to accept the IPCC as authoritative. Formally, the IPCC summarizes the peer reviewed literature on climate every five years. The charge to the IPCC is not simply to summarize, but rather to provide the science with which to support the negotiating process whose aim is to control greenhouse gas levels. This is a political rather than a scientific charge… That said, the participating scientists have some leeway in which to reasonably describe matters, since the primary document that the public associates with the IPCC is not the extensive report prepared by the scientists, but rather the Summary for Policymakers which is written by an assemblage of representatives from governments and NGO’s, with only a small scientific representation.’
Who controls the message?
This politicization process, exploiting public alarm, necessitates political corruption of scientific institutions, as it requires political spokespersons on message within the expanding academic, government and research organizations that support science.
Richard Lindzen gives examples of how the leading spokespersons of these institutions’ hierarchical structures are no longer scientists but political appointees. For example, Anthony Socci, spokesman for the American Meteorological Society in Washington, is neither an elected official of the AMS nor a contributor to climate science but a former staffer to Al Gore. Then there’s John Holdren, another Al Gore spokesman and a professor in Harvard’s Government Department, whose primary affiliation is the pseudo-scientific Woods Hole Research Center, an environmental advocacy centre whose name is designed to confuse it with the Woods Hole Oceanographic Institution, which actually is a research centre. And there’s America’s National Academy of Sciences, which has allowed a back door for the election of candidates to membership and to positions on the executive council, by-passing the conventional vetting procedure. Ralph Cicerone, Paul Ehrlich, James Hansen, Stephen Schneider, John Holdren and Susan Solomon were elected via this route. (P8)
Given the above, you’d hardly be surprised if working scientists made special efforts to support the global warming hypothesis. And there is ample evidence that they do. Remember that crucial opposition between theory and test, test meaning observation? Well, in climate science the desired direction is to bring the data into agreement with the models, and not vice versa. As Lindzen illustrates by several examples, many scientists act as though it is the role of science to vindicate the greenhouse paradigm for climate change and the credibility of the models. ‘Comparisons of models with data are, for example, referred to as model validation studies rather than model tests.’ (P10)
Bringing data into agreement with the models.
In his paper, Richard Lindzen presents seven examples of scientists doing just that. Here’s just one, the most famous, maybe ‘infamous,’ example: the effort to eliminate the Medieval Warm Period by Michael Mann et al. (MBH98, 1998–1999). Quoting Lindzen directly:
‘In the first IPCC assessment (IPCC, 1990), the traditional picture of the climate of the past 1100 years was presented. In this picture, there was a medieval warm period that was somewhat warmer than the present, as well as the little ice age that was cooler. The presence of a period warmer than the present in the absence of any anthropogenic greenhouse gases was deemed an embarrassment for those holding that present warming could only be accounted for by the activities of man. Not surprisingly, efforts were made to get rid of the medieval warm period. (According to Deming, 2005, in 1995, “A major person working in the area of climate change and global warming sent me an astonishing email that said ‘We have to get rid of the Medieval Warm Period.’”) The most infamous effort was that due to Mann et al (1998, 1999) which used primarily a few handfuls of tree ring records to obtain a reconstruction of Northern Hemisphere temperature going back eventually a thousand years that no longer showed a medieval warm period. Indeed, it showed a slight cooling for almost a thousand years culminating in a sharp warming beginning in the nineteenth century. The curve came to be known as the hockey stick, and featured prominently in the next IPCC report, where it was then suggested that the present warming was unprecedented in the past 1000 years. The study immediately encountered severe questions concerning both the proxy data and its statistical analysis (interestingly, the most penetrating critiques came from outside the field: McIntyre and McKitrick, 2003, 2005). This led to two independent assessments of the hockey stick (Wegman, 2006; North, 2006), both of which found the statistics inadequate for the claims. The story is given in detail in Holland (2007).
Since the existence of a medieval warm period is amply documented in historical accounts for the North Atlantic region (Soon et al, 2003), Mann et al countered that the warming had to be regional but not characteristic of the whole northern hemisphere. Given that an underlying assumption of their analysis was that the geographic pattern of warming had to have remained constant, this would have invalidated the analysis ab initio without reference to the specifics of the statistics. Indeed, the 4th IPCC (2007) assessment no longer featured the hockey stick, but the claim that current warming is unprecedented remains, and Mann et al’s reconstruction is still shown in Chapter 6 of the 4th IPCC assessment, buried among other reconstructions.’ (PP10/12)
There’s lots more in Lindzen’s paper on the pressures to inhibit enquiry and problem solving, and on the need for model validation, that arise when an issue becomes a vital part of a political agenda, as is the case with climate, where a politically designed position becomes a goal rather than a consequence.
And more on that hockey stick here, in a paper by Professor Ross McKitrick.
Ross McKitrick’s paper, ‘A Brief Retrospective on the Hockey Stick’ (2014), is a concise summary of the controversial MBH98 paper and the methodological problems that McKitrick and Steve McIntyre identified in Michael Mann’s creation of that iconic hockey stick. Worth reading in the original, linked above: its analytical explanation is only six pages long and clearly formatted in six sections.
1: Core Issues: The Proxy Data Set. Those suspect tree ring records, namely bristlecone pine cores from high mountains in the US Southwest. These long-lived trees grow in highly contorted shapes as bark dies back to a single twisted strip. The scientists who published the data specifically warned that it should not be used for temperature reconstruction. Mann’s method exaggerated the significance of the bristlecones so as to make that chronology out to be the dominant global pattern rather than a minor and regional one. Mann understated the uncertainties of the final reconstruction, leading to the claim of 1998 as the warmest year of the last millennium. And Mann put obstacles in place for subsequent researchers wanting to obtain his data and replicate his methodologies; these were only overcome six years later through the interventions of US Congressional investigators and the editors of Nature magazine.
2: Critique of the method: One quote:
‘Mann’s PC step was programmed incorrectly and created two weird effects in how it handled data. First, if the underlying data set was mostly random noise, but there was one hockey stick-shaped series in the group, the flawed PC step would isolate it out, generate a hockey stick composite and call it the dominant pattern, even if it was just a minor background fluctuation. Second, if the underlying data consisted of a particular type of randomness called “red noise”—basically randomness operating on a slow, cyclical scale—then the PC step would rearrange the red noise into a hockey stick-shaped composite. Either way, the resulting composites would have a hockey stick shape for the LS step to glom onto and produce the famous final result.’ (P2)
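The red-noise effect McKitrick describes can be reproduced in a few lines. Below is a minimal Python sketch of that kind of experiment (the series length, series count and the 80-year ‘calibration’ window are invented for illustration, not taken from MBH98): feed pure AR(1) red noise into a PC step, once with conventional full-record centering and once with ‘short’ centering on the final calibration interval only, and measure how strongly the leading component departs in its final decades.

```python
import numpy as np

rng = np.random.default_rng(0)

def red_noise(n_years, n_series, phi=0.9):
    """AR(1) 'red noise': randomness with slow, persistent fluctuations."""
    x = np.zeros((n_years, n_series))
    for t in range(1, n_years):
        x[t] = phi * x[t - 1] + rng.standard_normal(n_series)
    return x

def first_pc(data, center_rows):
    """First principal component after centering each series on `center_rows`.
    Conventional PCA centers on the full record; the flaw at issue was
    centering on the 20th-century calibration interval only."""
    centered = data - data[center_rows].mean(axis=0)
    u, s, _ = np.linalg.svd(centered, full_matrices=False)
    return u[:, 0] * s[0]

def hockey_stick_index(pc, blade=80):
    """Departure of the last `blade` years from the rest, in shaft std devs."""
    shaft = pc[:-blade]
    return abs(pc[-blade:].mean() - shaft.mean()) / shaft.std()

proxies = red_noise(600, 70)  # pure noise: no climate signal at all
hs_short = hockey_stick_index(first_pc(proxies, slice(-80, None)))  # short centering
hs_full = hockey_stick_index(first_pc(proxies, slice(None)))        # full centering
```

With short centering, the leading component tends to show a pronounced ‘blade’ even though the input contains no signal; with conventional full centering it does not — which is the heart of the McIntyre and McKitrick critique.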
3: Stickhandling: Here’s a Mannian response to whether he used a well understood statistic which McKitrick tells us is found in every statistics textbook and is the workhorse of model testing. In 2005, following an article in the Wall St Journal, Mann was sent a list of questions by the Energy and Commerce Committee of the US Congress, one of which was whether he had computed this standard verification statistic. His answer:
‘My colleagues and I did not rely on this statistic in our assessments of “skill” (i.e., the reliability of a statistical model, based on the ability of a statistical model to match data not used in constructing the model) because, in our view, and in the view of other reputable scientists in the field, it is not an adequate measure of “skill.” The statistic used by Mann et al. 1998, the reduction of error, or “RE” statistic, is generally favored by scientists in the field.’ (P3)
McKitrick argues that this is classic misdirection. Mann was not asked whether he relied on the statistic when assessing his results; had he relied on it, he would never have claimed his results were significant:
‘He only claimed significance by ignoring it. The question specifically was whether he computed it. Tellingly, in his reply he changed the subject. But it hardly matters. Either he did not compute it, in which case he was lying in the paper by saying he had, or he did, in which case his failure to disclose it was misleading to his readers.’ (P3)
4: The National Academy of Sciences (NAS) Report: McKitrick describes how Gerald North et al came up with elliptical ways to say that the Hockey Stick was unreliable … reconstructions can be assessed in a variety of tests … if the Coefficient of Efficiency (CE) score is near zero or negative, your model is junk … the Wahl and Ammann paper, in which they use Mann’s data and code and compute the test scores that he didn’t report: the CE scores range from near zero to negative … telling us that Mann’s results are … well, you know!
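The two verification statistics in this dispute are simple to state. Here is a minimal Python sketch (the series and the calibration/verification split are invented for illustration): RE benchmarks a reconstruction against simply predicting the calibration-period mean, while CE benchmarks it against the verification-period mean. A ‘model’ that does nothing but carry a calibration-era mean shift forward can score RE above zero while CE sits at or below zero — which is why reporting RE alone flatters the result.

```python
import numpy as np

def re_score(obs, pred, calib_mean):
    """Reduction of Error: 1 minus SSE relative to predicting the
    CALIBRATION-period mean throughout the verification period."""
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - calib_mean) ** 2)

def ce_score(obs, pred):
    """Coefficient of Efficiency: same idea, but benchmarked against the
    VERIFICATION-period mean -- the tougher test discussed in the NAS report."""
    return 1 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(42)
calib_mean = 0.0                                # mean level in the calibration era
obs = 1.0 + 0.3 * rng.standard_normal(100)      # verification-era observations
pred = np.full(100, 1.0)                        # 'model' that only carries the mean shift

re = re_score(obs, pred, calib_mean)            # positive: beats the calibration mean
ce = ce_score(obs, pred)                        # at or below zero: no real skill
```

The gap between the two scores is exactly the gap McKitrick describes: RE rewards capturing a level shift; CE asks whether the reconstruction tracks anything beyond it.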
5: The Censored Folder: ‘Hey, when we removed the rings the graph collapsed.’
‘Mann also published an online review article (Mann et al. 2000) that assured readers in categorical terms that their results were “robust” to non-climatic bias in tree ring data and even to the complete removal of tree rings from their data set, though they illustrated that point only for the post-1760 interval. In the course of our analysis, Steve found some directories at Mann’s FTP site (the “CENSORED” directories), which, through detective work, were found to contain assessments of the impact from dropping the bristlecones from the underlying data. In light of the claim in Mann et al. (2000), this should not have made any difference, but it did. In our NAS presentation we showed graphs of the data in Mann’s “CENSORED” results, in which the hockey stick shape completely disappears. That is, even applying Mann’s biased methods, after dropping the few bristlecone pine series there is no remaining hockey stick shape. The claim in Mann et al about robustness to the exclusion of the tree ring data was obviously misleading.’ (P5)
6: Conclusion: Seems to Professor McKitrick that this whole Hockey Stick episode has ‘social significance as an indicator of a rather defective aspect of early 21st century scientific culture…’ (P6)
Biblical prophecies and modern predictions, questions, evidence ‘n tests…
Behold there came seven years of great plenty throughout the land of Egypt – and there shall arise after them seven years of famine. (Genesis 41.)
W.J.R. Alexander, Professor Emeritus, Department of Civil and Bio-systems Engineering at the University of Pretoria, published a report, ‘A Critical Assessment of Current Climate Change Science’ (April 2006), which I downloaded for my files in 2011. The report is an impressive and fascinating study of sunspot observations and their correlation with flood data of the Vaal River, and it is also a critique of climate science and bodies like the IPCC for their reluctance to make use of the extensive hydrological and historical observations available to them in the South African data base.
‘Tis unfortunate that Google no longer provides a connection to this paper, or to several other publications by Professor Alexander (one wonders why), as Professor Alexander brought a wealth of experience to water research and management. So here is a link to a shorter publication by Alexander, which I located at R. Pielke Senior’s blog, and which includes in its Tables 9-10 (on pp 23/24 of the original paper) observations important in the much longer report.
Regarding that wealth of experience: in his early career Professor Alexander spent years in the field building dams, canals, pipelines and the Orange-Fish Tunnel, at the time the longest water supply tunnel in the world. Later, as Chief of the Division of Hydrology, he was responsible for the collection and publication of hydrological data, for conducting the research necessary for water resource management in a water-scarce country, and for designing structures exposed to flood damage. A major challenge for Professor Alexander was the search for multi-year river flow prediction capabilities, and solving this problem was the motivation for his continuing research after he was appointed Professor Emeritus at Pretoria University in 1985. This research reads like a detective story.
Not jest an academic enquiry but a real need.
Lots of prior knowledge of the multiyear characteristics of rainfall and river flow in the historical record. In the 1900s there’s the scientist and civil engineer R.E. Hutchins, who served in the British Colonial Office in India during the severe drought of 1876, at the time searching for predictable links between droughts and sunspot numbers. When Hutchins migrated to South Africa he continued his research, finding a correspondence between the average price of food grain and sunspot numbers and showing that the linkage between drought-breaking floods and sunspot numbers was greatest in the temperate zones, not the tropics or northern America or Europe.
Following Hutchins, in 1950, the civil engineer H.E. Hurst, analysing 1080 years of data recorded on the Nile River in Egypt in order to determine the required storage capacity of the proposed Aswan High Dam, found an unexplained anomaly in the data. Using graphical methods he recognized the same phenomenon throughout the long record; it became known as Hurst’s Ghost and was confirmed by Mandelbrot and Wallis in their paper ‘Some Long-Run Properties of Geophysical Records’ (1969), who found the anomaly present in their own extensive research on varve deposits, river flow and meander, earthquake frequencies and sunspots.
In the mid 1970’s hydrologists in the South African Department of Water Affairs also encountered Hurst’s anomaly, and perceiving that the reservoir capacity–yield model was deficient, began looking further. Graphical analysis revealed that there was a clear 21 year periodicity in the data that was the cause of the difficulty. These graphs showed a clear pattern in the accumulated departures from the record mean values that were approximately synchronous with sunspot activity.
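The two diagnostic devices in this story, accumulated departures from the record mean and the correlogram, are easy to sketch. Here is a minimal Python illustration on synthetic data (the 21-year cycle, amplitudes and record length are invented for the purpose; the real analysis used the Vaal River record, which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic annual flows: a 21-year cycle buried in noise (illustration only).
years = np.arange(1820, 2020)
flows = 100 + 20 * np.sin(2 * np.pi * years / 21) + 10 * rng.standard_normal(years.size)

# Accumulated departures from the record mean -- the graphical device that
# revealed the periodicity to the Department of Water Affairs hydrologists.
departures = np.cumsum(flows - flows.mean())

def autocorr(x, lag):
    """Autocorrelation at a given lag (one point of the correlogram)."""
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

acf = [autocorr(flows, k) for k in range(1, 40)]
peak_lag = 11 + int(np.argmax(acf[10:]))   # search lags 11..39 for the cycle
```

The accumulated-departure curve swings above and below zero in step with the cycle, and the correlogram peaks near lag 21 — the same kind of signature the hydrologists found in the flow data.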
Over the following years, Professor Alexander continued studying Vaal River periodicity and in 1995 published his paper ‘Floods, droughts and Climate Change’, in which he successfully predicted the next breaking of the drought.
In November 2005 Alexander issued another flood alert, which was again a successful prediction. From his examination of sunspot cycles available from the World Data Centre for the Sunspot Index, Professor Alexander observed a pattern not apparent in the conventional graphs, a pattern that did not appear in the 10.5 year sunspot cycle but in alternating pairs of cycles, where there is a meaningful difference in sunspot activity between the alternating cycles of the pair. What is important is not annual sunspot density but the rate of change in the densities. Herewith his graphs that illustrate the process.
Figure 9, below, is very important, demonstrating the unequivocal synchronous relationship between annual sunspot numbers and the annual flows in the Vaal River, South Africa’s major river. Note the alternating above (rising) and below (falling) flow sequences and their synchronous relationship with sunspot numbers; also the statistically significant (95%), 21-year periodicity in the flow data synchronous with the double sunspot cycle.
Notice the absence of 11-year periodicity in the correlogram of the Vaal River. It is no wonder that climate change scientists have been unable to detect synchronous relationships with the 11-year sunspot cycle. It does not exist! This is because the properties of the alternating solar cycles are fundamentally different to the extent that the climatic responses are also very different.
Figure 9. Comparisons of the characteristics of annual sunspot numbers with corresponding characteristics of the annual flows in the Vaal River.

Another frequent error associated with the sunspot cycle is the assumption that the maximum effect is associated with the sunspot maxima. This is altogether wrong. The maxima occur immediately after the solar minima.
Table 10 illustrates this.
Look away, look away, climate man…
In his paper Alexander notes serious deficiencies in climate science. He cites and comments on the following passage from IPCC Report (2001) technical summary of Working Group 1:
This section bridges to the climate change of the future by describing the only tool that provides quantitative estimates of future climate changes, namely numerical models…
The complexity of the processes in the climate system prevents the use of extrapolation of past trends or statistical and other purely empirical techniques for projections…
The degree to which the model can simulate the responses of the climate system hinges to a very large degree on the level of understanding of the physical, geophysical, chemical and biological processes that govern the climate system. (A. P13.)
‘Unfortunately,’ says Professor Alexander, ‘this process is fundamentally flawed. The interest is in climate change. Climate in turn does not refer to an instant in time but to a period of time. For example, agricultural and water supply droughts have durations measured in years. The interest is therefore in the properties of future multi-year time series, not in changes in mean conditions. Global climate models are inherently incapable of producing information in this format.
It is clear from the above extracts that the climate researchers did not appreciate the fundamental difference between process theory, which they applied, and observation theory, which is the foundation of the applied sciences. A simple example is the biblical reference to Joseph’s prediction of plenty followed by famine.’ (P13.)
Contra the IPCC statement that complexity prevents extrapolation of past trends and statistical or purely empirical techniques, ‘more than 3000 years ago, administrators in the ancient Egyptian civilizations were aware of the anomalous grouping of wet and dry sequences of seasons and the ability to predict future conditions,’ which the IPCC denied. And there’s that record from the water level gauging structure on the Nile near Cairo, the longest hydrological record in the world, which the IPCC ignored. Following Nature? No way, what has Nature to do with us?
Can’t agree with Nature faithfully…
In a post at Climate Etc., November 2016, Professor Judith Curry presents a critical analysis of climate models, with some insightful follow-up commentary by readers. Read the full post, linked above, which addresses the following four questions: (1) What is a Global Climate Model? (2) What is the reliability of climate models? (3) What are the failings of climate models? (4) Are Global Climate Models a reliable tool for predicting climate change?
Here’s a summary of Judith Curry’s analysis paper, ‘Climate models for lawyers,’ as a response to each of those questions.
Question 1: Judith Curry describes a Global Climate Model as a simulation of the Earth’s climate system, with modules that simulate the atmosphere, ocean, land surface, sea ice and glaciers. The atmospheric module simulates evolution of the winds, temperatures, humidity and atmospheric pressure using complex mathematical equations that can only be solved using computers. These equations attempt to incorporate fundamental physical principles such as Newton’s Laws of Motion and the First Law of Thermodynamics. Global Climate Models, (GCMs) also use mathematical equations to describe some complex dynamics… three dimensional ocean circulation, how it transports heat, and how the ocean exchanges heat and moisture with the atmosphere, land surface modeling to describe how vegetation, soil, snow and ice exchange energy and moisture with the atmosphere.
Trying to solve these equations on a computer, GCMs divide the atmosphere, oceans and land into a three-dimensional grid system. The equations are then calculated for each cell in the grid, repeatedly, for successive time steps marching forward throughout the simulation. The necessary coarseness of the model resolution is driven by the computing resources available and by tradeoffs between model resolution, model complexity, and the length and number of simulations to be conducted. Because models’ spatial resolutions are relatively coarse, smaller-scale, sub-grid processes, like clouds and rainfall, are represented as parameters or simple formulas which are ‘calibrated,’ or ‘tuned,’ so that models perform adequately when compared with historical observations. This calibration is needed because the real processes are either poorly understood or too complex to incorporate into the models.
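The grid-and-time-step idea can be shown in miniature. The sketch below is a deliberately tiny toy, not a GCM: one equation (heat diffusion) on a 1-D periodic grid of 50 cells, marched forward through successive time steps, with every constant chosen for illustration. Real models solve coupled 3-D equations for winds, temperature, humidity and pressure on vastly larger grids.

```python
import numpy as np

nx, nt = 50, 500
dx, dt, kappa = 1.0, 0.2, 1.0        # grid spacing, time step, diffusivity
r = kappa * dt / dx**2               # must stay below 0.5 for this scheme's stability

T = np.zeros(nx)
T[nx // 2] = 100.0                   # initial hot spot in one grid cell

for _ in range(nt):                  # march forward through successive time steps
    # each cell is updated from its immediate neighbours (periodic boundaries)
    laplacian = np.roll(T, 1) - 2 * T + np.roll(T, -1)
    T = T + r * laplacian
```

Even this toy shows the tradeoff Curry describes: halving the grid spacing forces a much smaller time step, so finer resolution multiplies the computation — which is why sub-grid processes end up parameterized instead of resolved.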
Mesdames et messieurs, faites vos jeux
As Judith Curry observes:
‘There are literally thousands of different choices made in the construction of a climate model (e.g. resolution, complexity of the sub-models, parameterizations). Each different set of choices produces a different model having different sensitivities. Further, different modeling groups have different focal interests e.g. long paleoclimate simulations, details of ocean circulations, nuances of the interactions between aerosol particles and clouds, the carbon cycle. These different interests focus computational resources on a particular aspect of simulating the climate system at the expense of others.’ (P3)
Question 2, concerning the reliability of GCMs: Problems arise from uncertainties in model structure, model parameterizations and initial conditions, and from ad hoc modeling to compensate for neglected factors. Continual ad hoc adjustment of models (calibration) masks underlying deficiencies in model structural form. (P5) Therefore model calibration to match 20th century historic temperatures is no metric of a model’s accuracy, nor does agreement of a model’s forecasts and hindcasts imply that the model gives a correct answer for the right reason. For example, the various coupled climate models used in the IPCC Fourth Assessment Report each reproduce the temperature time series for the 20th century, but with different feedbacks and sensitivities producing different simulations. (P5)
Question 3: A significant failing of climate models is their failure to pin down the causes of global warming. Models’ estimates of human-caused warming rely not only on the amount of greenhouse gases in the atmosphere but also on how ‘sensitive’ the climate is to these increases. The equilibrium climate sensitivity (ECS), defined as the change in global mean surface temperature at equilibrium that is caused by a doubling of atmospheric CO2 concentration, was estimated by the Intergovernmental Panel on Climate Change (IPCC) in 2007 to be in the range 2 to 4.5 degrees. Since then the uncertainty of the range has been increasing, the bottom of the range has been lowered from 2 to 1.5 degrees, and no best estimate is now given, as a consequence of the substantial discrepancy between the lower, observation-based estimates of ECS and the higher estimates from climate models.
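The ECS definition above lends itself to back-of-envelope arithmetic. A minimal Python sketch, assuming the standard logarithmic forcing rule (each CO2 doubling adds ECS degrees of equilibrium warming) and the conventional 280 ppm pre-industrial baseline:

```python
import math

def equilibrium_warming(co2_ppm, ecs, co2_ref=280.0):
    """Equilibrium warming for a given CO2 level: since CO2 forcing is roughly
    logarithmic, each doubling over the baseline adds `ecs` degrees."""
    return ecs * math.log2(co2_ppm / co2_ref)

# A doubling (280 -> 560 ppm) maps the sensitivity range straight onto warming:
low = equilibrium_warming(560, 1.5)    # the lowered bottom of the range
mid = equilibrium_warming(560, 3.0)
high = equilibrium_warming(560, 4.5)   # the top of the 2007 range
```

The spread between `low` and `high` is the point of Curry’s argument: a factor-of-three uncertainty in ECS translates directly into a factor-of-three uncertainty in projected warming for the same emissions.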
And ‘arguably the most fundamental challenge for climate models,’ says Judith Curry, ‘lies in the coupling of two chaotic fluids, the ocean and the atmosphere.’
‘Coupling a non-linear, chaotic atmospheric model to a non-linear, chaotic ocean model gives rise to something much more complex than the deterministic chaos of the weather model, particularly under conditions of transient forcing (such as increasing CO2). Coupled atmospheric/ocean modes of internal variability arise on timescales of weeks, years, decades, centuries and millennia. These coupled modes give rise to bifurcation, instability and chaos. How to characterize such phenomena arising from transient forcing of the coupled atmosphere/ocean system defies classification by current theories of nonlinear dynamical systems, particularly in situations involving transient changes of parameter values. Stainforth et al (2007) refer to this situation as “Pandemonium.”’ (P9)
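The sensitive dependence at the heart of this passage can be demonstrated with a toy. The sketch below couples a fast ‘atmosphere’ Lorenz system to a slow ‘ocean’ one; the coupling form, timescale ratio and all constants are illustrative assumptions of mine, not anything from Curry’s paper. Two runs differing by one part in a hundred million in a single initial value end up on entirely different trajectories.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classic Lorenz system: the textbook model of deterministic chaos."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def step(atmos, ocean, dt=0.005, c=0.1, tau=0.1):
    """One Euler step: a fast 'atmosphere' weakly coupled (strength c) to a
    slow 'ocean' running at a tenth the pace (tau). Toy construction only."""
    da = lorenz(atmos) + c * np.array([ocean[0] - atmos[0], 0.0, 0.0])
    do = tau * lorenz(ocean) + c * np.array([atmos[0] - ocean[0], 0.0, 0.0])
    return atmos + dt * da, ocean + dt * do

def run(a0, n=10000):
    a, o = np.array(a0, float), np.array([5.0, 5.0, 25.0])
    out = np.empty((n, 6))
    for i in range(n):
        a, o = step(a, o)
        out[i] = np.concatenate([a, o])
    return out

t1 = run([1.0, 1.0, 1.0])
t2 = run([1.0 + 1e-8, 1.0, 1.0])       # perturbation in the eighth decimal place
sep = np.linalg.norm(t1 - t2, axis=1)  # separation grows to attractor size
```

This is only the weather-model layer of the problem; Curry’s point is that the coupled, transiently forced case is harder still, beyond what current nonlinear dynamics can classify.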
Question 4: Regarding fitness for purpose, Judith Curry concludes that given the above and given the failure of climate models to explain the observed early 20th century warming and the mid-century cooling, the climate models are not fit for the purpose of simulating and predicting the evolution of Earth’s climate.
So there it is. Given the above analysis by Professors Curry, Lindzen, McKitrick and Alexander … alas, you modelers in cloud towers, whiling away the tenured hours, it looks like those climate models, beautiful as they may appear, just – don’t – match – the – observations and so … well, you know!