Review of Thornton et al. 2017

Author: Dr. Michael Chase, 3rd July 2017


This post documents a review of a recent paper published in Environmental Research Letters about GB wind power and electricity demand:

The relationship between wind power, electricity demand and winter weather patterns in Great Britain

Hazel E Thornton, Adam A Scaife, Brian J Hoskins and David J Brayshaw

Published 16 June 2017 © 2017 Crown copyright
Environmental Research Letters, Volume 12, Number 6

http://iopscience.iop.org/article/10.1088/1748-9326/aa69c6/meta

I was alerted to the publication of this paper by a post about it at the “Energy Matters” blog by Roger Andrews: http://euanmearns.com/peak-demand-and-the-winter-wind/

The paper has generated some hype and fake news, such as this from “energylivenews”:

“Wind turbines produce more power on the coldest days than the average winter day.”

This post attempts to provide a more accurate description of what the paper says, and what it does not (but should) say.

The authors of the paper are all meteorologists or climatologists. The meteorological aspects of the paper are excellent, especially the insights provided into the particular weather patterns that lead to most cold spells and associated high demands for electricity. The absence of electrical engineering input is apparent in the incomplete analysis of the contribution of wind power to meeting future peak demands.

Incomplete Analysis

The paper quantifies how well current wind power deals with an old problem (high demand during cold spells on a system without wind power) but fails to quantify how additional wind power would contribute to solving current problems. Current GB wind power already has more than sufficient capacity to deal with the relatively small excess demands that appear to occur during some windy cold spells, so windy cold spells are no longer a problem. In fact, the current nameplate capacity of wind power, around 15 GW (metered plus embedded generators), is so large that it has shifted the current problem (the peak demands placed on the rapidly diminishing conventional sources of supply) to times of such low wind that additional wind capacity will have a negligible effect on the capacity problem for the foreseeable future.

The following figure shows how the analysis could be improved to draw appropriate conclusions about additional wind power:

Thornton_Fig6_mod

The figure above has been copied from the paper, with the addition by me of the red line, which gives a rough estimate of where the current problem lies: the peak demands that conventional sources might be expected to have to meet when cold spells fall on working days. The slope of the red line follows from the current total nameplate capacity of GB wind power. I have assumed that the conventional sources can supply 1040 GWh per day, so the red line starts at that level; as wind power increases, higher total demands can be met thanks to the wind contribution. The current problem is the events below the red line, several of which had very low wind power. Those very low wind power events had a BIT less demand than the highest but a LOT less wind power than the average; that is the current peak capacity problem, and additional wind power will not solve it.

The figure above can be used to see the outcome of several what-ifs. If demands increase (such as via increased electrification of heating and transport) then many more events will move to the right into the danger region. If more conventional supply is lost then the red line will move to the left, bringing many more events into the danger region.

What if more wind power capacity is added? Suppose that an extra 1.5 GW (nameplate) is added in the next few years: will that improve the security of the GB electricity system? The answer can be seen from the effect on the red line, whose slope will merely decrease by about 10% (since total nameplate capacity has risen by 10%), making very little difference to the problem area below the red line.
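To make the red-line test concrete, here is a minimal sketch in Python. The numbers are the rough assumptions from above (1040 GWh per day of conventional supply, 15 GW of wind nameplate), not figures taken from the paper:

```python
# Sketch of the red-line test: an event falls in the danger region if
# conventional supply plus the day's wind energy cannot cover demand.
CONV_GWH_PER_DAY = 1040.0   # assumed deliverable conventional energy per day
WIND_NAMEPLATE_GW = 15.0    # rough current GB nameplate (metered plus embedded)

def below_red_line(demand_gwh, wind_load_factor, nameplate_gw=WIND_NAMEPLATE_GW):
    """True if demand exceeds conventional supply plus the day's wind energy."""
    wind_gwh = wind_load_factor * nameplate_gw * 24.0
    return demand_gwh > CONV_GWH_PER_DAY + wind_gwh

# A low-wind cold-spell day: high demand, 5% wind load factor
print(below_red_line(1100.0, 0.05))         # True: in the danger region
# The what-if above: an extra 1.5 GW of nameplate barely helps in low wind
print(below_red_line(1100.0, 0.05, 16.5))   # still True
```

On low-wind days the extra nameplate contributes almost nothing, which is why the change in the slope of the red line makes so little difference to the events that matter.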

Wind power enthusiasts may be tempted to argue that there are very few events below the red line, so there is not much of a capacity problem, especially when more wind power is added. There are two problems with that argument. Firstly, the temporal resolution (daily wind averages) used in the paper underestimates the number of events below the red line (more on that below). Secondly, even if that issue is minor, the capacity problem includes the large number of events that are poised to enter the danger region via a rise in demand and/or a fall in conventional supply. Wind power has changed the statistics of the supply/demand balance, but that change has now all but stopped, and somehow the rapidly falling conventional supply has to be reconciled with the expected rapid rise in demand.

Modelling Issues

The reanalysis data used, which starts in 1979, includes long periods of relatively mild (and presumably windy) winters in the UK, and this is likely to have biased the statistics in an over-optimistic direction. The following figures show HadCET data for daily winter maximum and minimum temperatures from 1878, with exceptionally cold days shown with blue markers.

hadcet_all

hadcetmin_all

Finally, the paper uses daily average wind power, when it should have made an attempt to estimate wind power at the critical early-evening period, when peak demands occur. Critical events with wind power lulls in the early evening will have been biased towards higher apparent wind power by the use of daily averages. There would be many more dots below the red line in Figure 6 of the paper (shown above) if the analysis had been done at a finer resolution; Roger Andrews shows example data at 5-minute resolution from a particular cold spell in his blog post cited above.
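A toy illustration (my own numbers, not data from the paper or from Roger Andrews) of how a daily average can hide an early-evening lull:

```python
import numpy as np

# Hypothetical hourly wind load factors for one day, with a lull
# at the early-evening demand peak (hours 17 to 19)
hourly_lf = np.full(24, 0.35)
hourly_lf[17:20] = 0.05

print(hourly_lf.mean())          # about 0.31: the daily average looks comfortable
print(hourly_lf[17:20].mean())   # 0.05: what the system actually has at the peak
```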


Climate Distortion from Homogenisation

Author: Dr. Michael Chase

“When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down…”

Source: http://berkeleyearth.org/understanding-adjustments-temperature-data/

Many people suspect that there are inaccuracies in the major homogenisations of instrumental temperature records. This article asserts that there are substantial errors resulting from the homogenisation procedures commonly employed, and provides a general explanation for them. In short, there are many transient perturbations in temperature records, and the homogenisation procedure over-corrects for many of them. I am currently quantifying this over-correction in the ACORN-SAT version of Australian surface air temperatures, and hope that this article will inspire others to help, or to look at data from other countries. The article is based mainly on knowledge of ACORN-SAT, but there is no reason to suppose that the conclusions do not apply more generally.

First of all, the following figure illustrates why raw data has to be adjusted to reveal the true background temperature variations. The objective is to obtain the temperatures that would have been recorded in the past if the weather station had been at its current location, and with its current equipment. The figure below shows a typical history of the difference in effective temperature calibration between the past and the present:

HOMOGENO_DISTORTION_01

Station moves and equipment changes are the typical causes of sudden and persistent changes in temperature relative to neighbours, events that computer algorithms are good at detecting and correcting. If that were the whole story then everything would be fine, but things go rapidly downhill from this point on.

The main problem for large-scale homogenisations is that there are many “transient” (rather than persistent) perturbations of temperature. The computer detection algorithms still work to some extent with transient perturbations (though it would be better if they failed to work), but they cannot do the correction part of the procedure without adult supervision. The outcome is illustrated in the following figure, showing two typical transient perturbations, and the erroneous corrections that they generate:

HOMOGENO_DISTORTION_02

The problem arises when only one end of a transient perturbation gets detected: the procedure assumes that the transient is persistent, and therefore over-corrects the data before its onset.
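A minimal sketch (my own illustration, with illustrative dates and a flat background climate) of the mechanism: only the end of a transient warm perturbation is detected, the step is assumed persistent, and the whole earlier record is falsely cooled:

```python
import numpy as np

years = np.arange(1910, 2011)
raw = np.zeros(years.size)                     # true background taken as flat
raw[(years >= 1943) & (years < 1957)] += 1.0   # transient warm influence

# Only the step down in 1957 is detected, and it is assumed persistent:
step = raw[years == 1956][0] - raw[years == 1957][0]   # +1.0
homogenised = raw.copy()
homogenised[years < 1957] -= step

print(homogenised[years < 1943].mean())   # -1.0: pre-transient data falsely cooled
```

The correct treatment would adjust only the transient years (or leave the data alone); instead every year before the detected step is shifted down.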

It is likely that the most common transient perturbations of temperature involve sudden cooling, for example from the following mechanisms:

  • Removal of thermometers from an urban-warmed location
  • Replacement of damaged or degraded screens
  • Onset of a rainy period after a drought

There can be sudden warmings, for example when thermometers are removed from a shaded location, when there is sudden screen damage, or when a building is erected nearby, but the other ends of those transients are probably more likely to be detected than in the case of sudden coolings. It seems likely that poorly corrected transient perturbations give a bias towards cooling of the early parts of temperature records.

There is always scope for improving computer algorithms, but I think that the problem lies with the functional design of the homogenisation procedure, which needs more involvement of expert analysts and less blind faith in what the computer says. The analysts need to examine rainfall data to remove false detections from that source, and they need to look over long periods of time to find both ends of transient perturbations. As the main interest in long temperature records is the end-to-end variation of temperature it may be OK to leave transient perturbations in place, but note that mid-20th century urban warming can convert cyclic variations of temperature into hockey sticks.

I am continuing a review of ACORN-SAT data, trying to separate its step change detections into two groups, those resulting from persistent changes (which need correction), and those resulting from transient perturbations, which don’t need correction. I hope that this article will encourage people to examine data from other regions, to determine the extent of climate distortion introduced by the homogenisation process.


Climate Distortion in ACORN-SAT, Part 3

Author: Dr. Michael Chase

Kerang_1930

Photo above: Kerang, Victoria, circa 1930

ACORN-SAT is the outcome of a “system” for detection and correction of non-climatic influences on surface air temperature data recorded in Australia. Previous posts have dealt with errors in the correction part of the process, and with false detections. This post deals with failure to detect what should be detected.

Many non-climatic influences on temperature measurements are transient in nature, so if an attempt is made to detect and correct, then both ends of the transient influence must be found. That fact alone makes the process rather risky, and this post shows examples of the risk being realised, with one end of transient perturbations not being detected, resulting in invalid correction of the data before the onset of the transient influence.

The following figure shows a transient warming influence on daily maximum temperatures at Kerang in Victoria. To make the transient warming easier to see, the data from a nearby reference station (Echuca Aerodrome) has been subtracted from the Kerang data (black curve), removing most of the natural background variation in temperature. The figure also shows matching results for nearby Deniliquin (red) and Rutherglen (blue), for which transient perturbations (detected via ACORN-SAT, and verified visually by me) have been corrected.

Kerang_screen_transient

ACORN-SAT has a detection for Kerang in 1957, at the end of the transient influence, but no detection for its start, so it falsely corrects the data all the way back to 1910. The right answer is to correct the data only back to 1943, the onset of the transient, or to make no correction at all.

This post will be updated with any further detection failures that are found.

NOTE: Missing months of data have been infilled by interpolation, following the temperature variations of neighbouring stations, and partial quality control adjustments have been made for anomalous spikes and dips, in particular at Rutherglen in 1925.


Climate Distortion in ACORN-SAT, Part 2

Author: Dr. Michael Chase

outback-town-welcome-boulia-channel-country-w-queensland-australia-aahmw2

A recent post dealt with a flaw in ACORN-SAT: the erroneous assumption that all step changes in temperature arise from persistent non-climatic influences. This post illustrates a potential “false alarm” problem with the detection of step changes, which often occur when there is heavier than average rainfall at a weather station. It looks like the algorithms are responding to a transient (several years) cooling of daily maximum temperatures associated with the rainfall and its aftermath, and the analysts are not removing those false detections. Correcting the data for those false alarms cools all years before the event, but the correct thing to do is to make no corrections at all.

The following figure shows rainfall and Tmax data from interior Queensland, an area which responds strongly to rainfall (and clouds), with drops in temperature when rainfall is higher than average:

ACORN_Rain_01

The stations shown in the figure above are Richmond (red), Camooweal (cyan), Boulia (blue) and Longreach (black). Note how drops in temperature are associated with higher than average rainfall. ACORN-SAT gives a “statistical” step change for Richmond (red) in 1950, exactly when it has a peak in rainfall.

This post will be expanded later to show other examples of ACORN-SAT steps being linked to peaks in rainfall.

A feature of the algorithms that may be contributing to false detections is the use of 10 neighbouring stations to decide on the size of the step. Requiring 10 stations means that a majority of them may be in areas not affected by the local rise in rainfall. The median of the neighbour changes, used to estimate the size of the step, is then inconsistent with the temperature change at the station being examined; the discrepancy crosses the detection threshold and triggers a false detection.
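A sketch of that inconsistency, with hypothetical numbers: the candidate station cools during a local rainy spell, but most of the 10 reference stations lie outside the affected area, so the median neighbour change is near zero and the full local cooling appears as a step change:

```python
import numpy as np

candidate_step = -1.2   # hypothetical rainfall-driven local cooling (deg C)
neighbour_steps = np.array([-1.1, -0.9, -0.2, -0.1, 0.0,
                             0.0,  0.1,  0.1,  0.2, 0.3])

median_step = np.median(neighbour_steps)   # 0.0: most neighbours unaffected
print(candidate_step - median_step)        # -1.2: crosses a detection threshold
```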


Sheds and Weather at Kalgoorlie

KalgoorliePanoramaSep1930_WEFretwellCollection

Photo above: Kalgoorlie, Western Australia, circa 1930

Author: Dr. Michael Chase

This post looks at monthly averages of daily maximum temperatures recorded at Kalgoorlie, and surrounding areas, in the inter-war years of 1920-40, during which ACORN-SAT makes two adjustments. The validity of the adjustments is discussed, as well as their wider significance for other Australian weather station data.

First of all, here is a clear example of a temperature inhomogeneity at Kalgoorlie Post Office (black curve), which suddenly changed its temperature relative to nearby Menzies (mauve) in 1936:

KALG_Tmax_02

The ACORN-SAT adjustment summary gives “Move” as the explanation for its temperature adjustments (decreases) at Kalgoorlie, for all years prior to 1936. It is the “all years before 1936” part of the adjustments that is in serious doubt, because the appendix to Simon Torok’s PhD thesis gives the reason for the move:

KALG_Torok

The thermometers and screen were moved a mere 100 yards in 1936 because of sheds, and it seems likely that the resulting drop in temperature was caused by the removal of the thermometers from the heat trap created by the sheds, rather than by any intrinsic difference in temperature between the new and old locations. Temperature changes due to location are persistent and justify the adjustment of all temperatures prior to the move, but a temperature change due to sheds is probably transient in nature, and the correction should only be applied to (say) 1910 data if there is evidence that the shed heat trap was in place then, which seems unlikely.

The station history summary shown above gives a flavour of the problems involved in measuring temperatures in the inter-war years (such as readings taken by girls!), worth bearing in mind in the discussion below of the possible inhomogeneity in 1930, which I find doubtful. Note that central organisation probably improved accuracy overall, but it tended to make changes happen at about the same time; for example, all three stations mentioned above had some sort of change in 1935/36, which makes it difficult to deal with inhomogeneities around that time.

The following figure shows the data at Kalgoorlie Post Office around 1930, together with eye-ball estimates of averages before and after:

KALG_Tmax_01

ACORN-SAT says that there was a non-climatic drop in Tmax temperatures at Kalgoorlie in 1930, but I find the evidence for it unconvincing. Firstly, there is no mention of any change in the station history summary at that time, and secondly, the local (well-inland only) neighbours don’t show anything unusual at Kalgoorlie at that time:

KALG_Tmax_03

The figure above shows a cooling trend amongst all the stations (the well-inland ones only) in the area, but bear in mind potential screen problems, which may have distorted the trend if there were more sunny years before 1930 than after.

ACORN-SAT calculates the size of its temperature adjustments from how temperatures changed at neighbouring stations, but most of the neighbours used are closer to the sea than Kalgoorlie, hence have a greater maritime moderating effect on any temperature trend. There are hints in the data that around 1930 Kalgoorlie was much more in an inland weather pattern than in that of the stations with strong maritime influence.

I think that there is currently a respectable hypothesis that around 1930 Kalgoorlie was cooling in Tmax; maybe the especially cool year of 1931 (and its aftermath) triggered the ACORN-SAT detectors to examine it, and they produced an erroneous decision because most of the neighbours are moderated by the sea to the west and south.

Further work is needed at Kalgoorlie to sort out the uncertainties and produce a more accurate set of temperature adjustments than is provided by ACORN-SAT.


Temperature Homogenisation Errors

Author: Dr. Michael Chase

Figure_4.1_Adelaide_screens

“When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint.”

Extract above from: http://berkeleyearth.org/understanding-adjustments-temperature-data/

Temperature measurements have two classes of non-climatic influence:

  • Transient influences, with no impact on the start and end of the record
  • Persistent shifts, with an impact on the start of the record relative to the end

An example of a persistent shift is the switch from the use of a Glaisher stand or thermometer shed in the early years, to the use of a Stevenson screen at some time within the record. Another example of a persistent shift is a station move within the record.

Examples of transient influences are urban growth near a weather station before it is relocated, and deterioration and replacement of the thermometer screen.

This article asserts that the major temperature homogenisations are failing to make a distinction between transient and persistent influences, treating all perturbations as being persistent. The consequence of that poor assumption is that the transient influences, which are predominantly warming, are leading to an over-cooling of the early periods of many records, with this artifact increasing in severity as the detection of step changes in temperature becomes more sensitive.

Suppose a weather station recorded accurate temperatures around 1900, but now those temperatures are being changed by homogenisation algorithms, as a result of everything that happened at, and to, the weather station between 1900 and the present, ending at its current location at (say) an airport. Typically, over that period, the recorded temperatures will have been influenced by several non-climatic effects, such as varying amounts of urban warming, various screen and thermometer deteriorations and replacements, and observer errors.

Suppose a village or town was built around the weather station, maybe consisting of just a few nearby buildings, and the weather station was relocated to an airport or rural site. Suppose the screen deteriorated, allowing sunlight to shine on the thermometers, until the screen was replaced. Would a computer be able to use the temperature record, compared to those of neighbours, to properly correct the temperatures all the way back to 1900 so that they reflected only the background climate?

In order to change temperature data back to 1900, to give what would have been measured in the past, but at the current location of the weather station, with its current equipment, you must have a COMPLETE history of the non-climatic influences on the temperature data being adjusted. But such a complete history is usually lacking: the homogenisation algorithms only quantify step changes in influences, and only when the steps are large enough to detect, or when there is information to suggest that a change is expected.

The algorithms for step detection and quantification are already quite sophisticated and are being further developed, but there is a barely mentioned elephant in the room: the drastically simple (and often bad) assumption that the history of non-climatic influence consists solely of its step changes, which is only true if the non-climatic influences don’t change with time between the steps.

A significant part of the total non-climatic influence on recorded temperature can be regarded as “thermal degradation”, typically urban growth near a weather station, and deterioration of the (usually wooden) thermometer screen. Such degradation is both warming and time-varying, often growing slowly then ending suddenly with a step change down in temperature when the weather station is moved to a better location, or an old or broken screen is replaced with a shiny new one. It is of course correct to adjust temperatures down in the years leading up to such sudden cooling events, but the assumption of constant influence will often over-correct earlier years, in particular the years before the influence in question began.
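A sketch (my own numbers, with a flat background climate) of the degradation case: a warming influence grows from 1950 until a site move in 1970, and correcting the detected step as if it were constant over-cools every year before the influence began:

```python
import numpy as np

years = np.arange(1910, 2011)
# Urban warming grows linearly from 1950 until a site move in 1970
influence = np.where((years >= 1950) & (years < 1970), (years - 1950) / 20.0, 0.0)
raw = influence.copy()                       # background climate taken as zero

step = raw[years == 1969][0] - raw[years == 1970][0]   # 0.95, near the full influence
corrected = raw - np.where(years < 1970, step, 0.0)

print(corrected[years < 1950].mean())        # about -0.95: pre-1950 years over-cooled
```

The years just before the move are corrected roughly correctly, but 1910 to 1949, when the influence did not yet exist, are cooled by almost a full degree.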

Currently, in temperature reconstructions produced by homogenisation algorithms, early temperature measurements are being reduced incorrectly as a result of the detection of time-varying warming influences. As the detection algorithms become more sensitive, the invalid cooling of the past increases.

As a systems engineer I find it useful to regard temperature homogenisation as a system: the inputs are raw data, and the outputs are meant to be better representations of the background temperature history. Treating all detected step changes as persistent is a functional design error. The step change detection algorithms themselves are blameless; the problem lies with an invalid assumption made at the functional design stage.

To avoid over-correcting the past, expert meteorological data analysts should make the final decision about whether or not, and how, each detected step change is removed from the data. Station history, and the temperature data itself, must be examined to classify each detected step change into two groups, persistent changes (such as station moves or equipment changes) or time-varying changes, the latter requiring special analysis to either measure (from the data) or model how the influence varies with time.


Climate Distortion in ACORN-SAT

Author: Dr. Michael Chase

acorn_means_sat-on

Source of the figure above: http://www.bom.gov.au/climate/change/#tabs=Tracker&tracker=timeseries

Summary

There is considerable climate distortion in the ACORN-SAT version of surface air temperatures of Australia from 1910 to present, with most of the distortion in the first half of the 20th century. The main problem lies in the assumption made that all non-climatic influences, detected via anomalous step changes in temperature, do not vary with time. Many non-climatic influences, such as urban heating and screen degradation, do vary with time, so whilst the ACORN-SAT correction process does a good job for some years before step changes occur, it over-corrects at earlier times, often giving an invalid cooling of the early decades of the 20th century.

This problem with ACORN-SAT is only at the final stage of processing, when corrections are applied. The step change information itself is highly valuable and it should be possible to produce a more accurate version of the temperature history of Australia, if possible by following how non-climatic influences vary with time in the data, or by modelling.

Introduction

Reconstructing the actual surface temperatures of Australia back to 1910 is a difficult job which has to contend with several non-climatic influences on what is sometimes incomplete, erroneous and poorly documented temperature data. The ACORN-SAT reconstruction detects non-climatic influences when they change, either sudden onsets or removals. Data from neighbouring stations is used to detect and estimate the size of sudden and persistent changes in relative temperature, deemed to be non-climatic in origin. The final stage of processing is to correct the temperature data so as to reveal the true background climate variations.

The following figure illustrates where things go wrong for any time-varying influence, shown as the red curve:

acorn_correction_error_v2

The red curve in the figure above applies to common non-climatic influences such as urban growth around weather stations located in towns, or the gradual degradation of the thermometer screen. When a weather station moves out of town, or to a better location within the town, or a screen is replaced, the temperatures recorded suddenly drop (relative to those of neighbours), an event detected by the ACORN-SAT algorithms. The erroneous assumption (shown as the blue curve) is then made that the non-climatic influence is constant in time, resulting in over-correction throughout the period before the influence reached its full effect.

Examples

morris_denil_photo

Source of the picture above: http://www.hashemifamily.com/Kevan/Climate/Heat_Island.pdf

When the weather station at Deniliquin moved to a better location in the town in 1971, the minimum temperatures recorded fell by around 1 degree C on average, probably mostly due to the removal of the heating effect of the buildings and paving stones. ACORN-SAT assumes that the urban heating in 1971 was constant all the way back to 1910.

Another example is given in the ACORN-SAT documentation itself, for a site move in Inverell in 1967:

inverell_acorn

Source of the extract above: http://cawcr.gov.au/technical-reports/CTR_049.pdf

Again, ACORN-SAT assumes that the urban heating of the very built-up post office site applied all the way back to 1910, though in this case there were also earlier step changes. Integrating the step changes of time-varying influences does not in general get you anywhere near the right answer for the early decades.

There are many examples in the ACORN-SAT station adjustment summary of step changes due to equipment replacement, many likely to be due to a recent degradation being removed by the provision of new equipment. ACORN-SAT assumes that the degraded equipment was in place all the way back to 1910.

Previous Commentary

I must apologise at this stage for being unfamiliar with most of the literature on temperature homogenisation, so I am probably not giving due credit to previous authors. The following two examples are relevant, firstly from Hansen et al. (2001):

hansen_uhi

Source of the figure above: https://pubs.giss.nasa.gov/abs/ha02300a.html

The figure above shows a time-varying urban heating being correctly removed around the time of a station move, but being over-corrected at earlier times. There are several examples in ACORN-SAT of town data being merged with that from an airport, with an unknown amount of historical urban heating being turned into erroneous cooling of early data.

The second example is from Stockwell and Stewart (2012), correctly identifying a major reason why the precursor to ACORN-SAT gives more apparent warming than is found in the raw data:

stockwell_stewart_drift_error

Source of the figure above: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.362.9661&rep=rep1&type=pdf

The Way Forward

ACORN-SAT only falls at the final hurdle: the temperature step changes that it identifies can be used to redeem it. Here is an example of a validation of one of its discontinuities:

cr06_lav_1955

The figure above shows an anomalous step-up in average maximum temperatures at Laverton RAAF (blue curve), as well as possible indications of anomalous warming in the 1960s relative to neighbours: Melbourne Regional (-0.8C), Essendon Airport (-0.1C), Black Rock, and Tooradin.

One way to deal with urban warming, most of which is historical rather than current, is to construct composite temperature records, in the example above following Laverton up to its step change, then jumping to a suitably scaled average of its more rural neighbours.
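A minimal sketch of that composite idea (toy arrays, not the actual Laverton data): follow the urban station up to its detected step change, then splice onto the neighbour average, offset so that the two segments join smoothly over an overlap window just before the step:

```python
import numpy as np

def composite(urban, neigh_mean, step_index, overlap=10):
    """Urban record up to the step, then the offset neighbour average after it."""
    window = slice(step_index - overlap, step_index)
    offset = urban[window].mean() - neigh_mean[window].mean()
    return np.concatenate([urban[:step_index], neigh_mean[step_index:] + offset])

# Toy test: the urban station steps up at index 50, the neighbours stay flat
urban = np.concatenate([np.zeros(50), np.ones(50)])
neigh = np.zeros(100)
print(composite(urban, neigh, step_index=50)[48:52])   # [0. 0. 0. 0.]: step removed
```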

For large temperature step changes, such as those shown above for Deniliquin and Inverell, it should be possible to follow, and thereby properly correct, the time variation of the urban warming down to a certain minimum level. For smaller temperature changes, many of which occur in regions without reliable near neighbours, modelling of the urban heating can be used, guided by any available documentation of the station history.

This post ends here. Later articles will show examples of the size of ACORN-SAT errors, but the focus will shift to the question of what the right answer is for Australian temperature histories.
