Averaging Multiple Temperature Records

Author: Dr. Michael Chase

Introduction

This is the first in a series of posts with the general theme of “Do it yourself temperature homogenisation”. The full series of posts will outline a simple but effective procedure for turning raw instrumental temperature data (aided by any available station history and data on rainfall) into reconstructions of the background temperature history for any area with a “sufficient” number of weather stations, the sufficient number depending mostly on the quality and extent of the data.

The overall procedure involves visual detection of inhomogeneities, followed by averaging and integration of interannual temperature differences from which the inhomogeneous data are omitted. In effect, large inhomogeneities are detected visually and removed from the data, while small ones are suppressed by the averaging process used to obtain regional histories.

Averaging Multiple Records

This first post in the series deals with the final step of the procedure, the method by which multiple temperature records are combined to give a regional average temperature history. The following figure gives a schematic picture of what typical temperature data looks like:

TEMPAV_A

The figure above illustrates the following:

  • Black data: Historical urban warming, followed by a station move (typically to an airport or other out-of-town site), followed by a switch to automatic sensors
  • Red data: A good rural station, but with some missing data
  • Blue data: A station with a transient perturbation, possibly of non-climatic origin, or possibly due to a period of localised heavy cloud/rainfall

It is assumed for the purposes of this post that the previous stage of the overall procedure has identified the inhomogeneities, marked all periods of “transition” within a computer program, and that the analyst has corrected the data for any large residual inhomogeneities at transition boundaries and at all ends (more on that last issue below and in subsequent posts). The computer program, under analyst control, then does the following:

  • Either infills missing raw data or leaves gaps if there is doubt about the station history within a gap (infilling can also be done manually)
  • Computes all valid differences in temperature, separately for each month, between years N and (N-1); valid differences are those that do not cross, or lie within, transitions
  • Extrapolates temperature differences for all stations with missing years, using the average of the valid temperature differences (a sketch of these steps follows this list)
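To make the procedure concrete, here is a minimal sketch of these steps in Python. The data layout, function names and use of NaN for missing data are illustrative choices for this post, not the actual program:

```python
import numpy as np

def interannual_differences(temps, transitions):
    """temps: (n_stations, n_years) array of temperatures for one calendar
    month, with NaN marking missing data.
    transitions: one set of transition year-indices per station.
    Returns an (n_stations, n_years - 1) array of year-on-year differences,
    with invalid entries infilled by the cross-station average for that year."""
    n_stations, n_years = temps.shape
    diffs = np.full((n_stations, n_years - 1), np.nan)
    for s in range(n_stations):
        for y in range(1, n_years):
            # A difference is valid only if neither endpoint is missing and
            # neither year lies within (or crosses) a marked transition.
            if (y in transitions[s]) or ((y - 1) in transitions[s]):
                continue
            if np.isnan(temps[s, y]) or np.isnan(temps[s, y - 1]):
                continue
            diffs[s, y - 1] = temps[s, y] - temps[s, y - 1]
    # Extrapolation step: each invalid difference is replaced by the average
    # of the valid differences of the other stations in the same year.
    year_means = np.nanmean(diffs, axis=0)
    missing = np.isnan(diffs)
    diffs[missing] = np.broadcast_to(year_means, diffs.shape)[missing]
    return diffs
```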

The following figure illustrates the resulting full coverage of temperature differences:

TEMPAV_B

The temperature differences can now be averaged across stations, with the important feature that each station has a constant weight in the averaging process (1/3 for each station in the example shown in the figure above). Finally, the average temperature differences can be integrated forwards and backwards in time from any desired reference year/temperature to obtain average temperature histories for each month.
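Continuing the sketch above, the constant-weight averaging and two-way integration could look like this (equal weights and an arbitrary reference year, both illustrative):

```python
import numpy as np

def regional_history(diffs, ref_year, ref_temp):
    """diffs: (n_stations, n_years - 1) full-coverage differences from above.
    Each station keeps a constant weight (here equal, i.e. 1/n_stations)."""
    avg_diff = diffs.mean(axis=0)   # avg_diff[k] estimates T[k+1] - T[k]
    n_years = avg_diff.size + 1
    temps = np.empty(n_years)
    temps[ref_year] = ref_temp
    for y in range(ref_year + 1, n_years):   # integrate forwards...
        temps[y] = temps[y - 1] + avg_diff[y - 1]
    for y in range(ref_year - 1, -1, -1):    # ...and backwards
        temps[y] = temps[y + 1] - avg_diff[y]
    return temps
```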

Error Analysis

The reason for extrapolating temperature differences for all stations can be seen from the example of the blue data in the figure above. In some cases the dip in the blue data will be deemed to be associated with a period of heavy rainfall, i.e. a genuine climatic effect. This genuine climatic effect will only have the correct impact on the regional average (i.e. an impact confined to the period of rainfall) if the weighting of the blue data is constant in time, which is what the extrapolation of temperature differences for the red data achieves.

Using interannual temperature differences avoids the need to estimate and correct most inhomogeneities, but there is a small price to pay for that substantial benefit: the problem of residual inhomogeneities at boundaries, illustrated in the following figure:

TEMPAV_C

The figure above shows an example of a temperature record with perturbations from the regional average. The perturbation within the record is not really a problem, because the downward shift in temperature is matched by the exact reverse in later data, with the constant weighting of each record ensuring that the perturbation will not influence the end-to-end shift in average temperature.

The problem illustrated in the figure above arises from the inhomogeneous ends of the record (more generally from any boundary, including ones created by defining “transitions”). These can distort the end-to-end variation of average temperature if the inhomogeneities are not detected and corrected; the correction can be done manually (see a later post).

 


Review of Thornton et al 2017

Author: Dr. Michael Chase, 3rd July 2017

metoffice_kendon_dec_2010

This post documents a review of a recent paper published in Environmental Research Letters about GB wind power and electricity demand:

The relationship between wind power, electricity demand and winter weather patterns in Great Britain

Hazel E Thornton, Adam A Scaife, Brian J Hoskins and David J Brayshaw

Published 16 June 2017 © 2017 Crown copyright
Environmental Research Letters, Volume 12, Number 6

http://iopscience.iop.org/article/10.1088/1748-9326/aa69c6/meta

I was alerted to the publication of this paper by a post about it at the “Energy Matters” blog by Roger Andrews: http://euanmearns.com/peak-demand-and-the-winter-wind/

The paper has generated some hype and fake news, such as this from “energylivenews”:

“Wind turbines produce more power on the coldest days than the average winter day.”

This post attempts to provide a more accurate description of what the paper says, and what it does not (but should) say.

The authors of the paper are all meteorologists or climatologists. The meteorological aspects of the paper are excellent, especially the insights provided into the particular weather patterns that lead to most cold spells and associated high demands for electricity. The absence of electrical engineering input is apparent in the incomplete analysis of the contribution of wind power to meeting future peak demands.

Incomplete Analysis

The paper quantifies how well current wind power deals with an old problem (high demand during cold spells on a system without wind power) but fails to quantify how additional wind power would contribute to solving current problems. Current GB wind power already has more than sufficient capacity to deal with the relatively small excess demands that appear to occur during some windy cold spells, so windy cold spells are no longer a problem. In fact, the current nameplate capacity of wind power, around 15 GW (metered plus embedded generators), is so large that it has shifted the current problem (the peak demands placed on the rapidly diminishing conventional sources of supply) to times of such low wind that additional wind power capacity will have a negligible effect on the capacity problem for the foreseeable future.

The following figure shows how the analysis could be improved to draw appropriate conclusions about additional wind power:

Thornton_Fig6_mod

The figure above has been copied and pasted from the paper, with the addition by me of the red line, which gives a rough estimate of where the current problem lies: the peak demands that conventional sources might be expected to meet when cold spells fall on working days. The slope of the red line follows from the current total nameplate capacity of GB wind power. I have assumed that the conventional sources can supply 1040 GWh per day, so the red line starts at that level; as total wind power increases, higher total demands can be met thanks to the wind contribution. The current problem is the events below the red line, several of which had very low wind power. Those events had a BIT less demand than the highest, but a LOT less wind power than the average; that is the current peak capacity problem, and additional wind power will not solve it.

The figure above can be used to see the outcome of several what-ifs. If demands increase (such as via increased electrification of heating and transport) then many more events will move to the right into the danger region. If more conventional supply is lost then the red line will move to the left, bringing many more events into the danger region.

What if more wind power capacity is added? Suppose that an extra 1.5 GW (nameplate) is added in the next few years: will that improve the security of the GB electricity system? The answer can be seen from the effect on the red line, whose slope will merely decrease by about 10% (since total nameplate capacity rises by 10%), making very little difference to the problem area below the red line.
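A rough sketch of the arithmetic behind the red line is given below, assuming (as in my reading of the figure) an x-axis of daily demand in GWh and a y-axis of wind capacity factor; the 1040 GWh/day and 15 GW figures are the ones quoted above, and the 1.5 GW addition is the what-if just discussed:

```python
CONVENTIONAL_GWH_PER_DAY = 1040.0   # assumed deliverable conventional supply
NAMEPLATE_GW = 15.0                 # current metered + embedded wind capacity

def required_capacity_factor(demand_gwh, nameplate_gw=NAMEPLATE_GW):
    """Capacity factor that wind must achieve for supply to meet demand.
    Days plotting below this line are days when conventional + wind falls short."""
    wind_needed_gwh = demand_gwh - CONVENTIONAL_GWH_PER_DAY
    return max(0.0, wind_needed_gwh / (nameplate_gw * 24.0))

# What-if: add 1.5 GW of nameplate capacity (a 10% increase).
for demand in (1040, 1100, 1200):
    now = required_capacity_factor(demand)
    later = required_capacity_factor(demand, NAMEPLATE_GW + 1.5)
    print(f"demand {demand} GWh/day: cf needed {now:.3f} -> {later:.3f}")
# The boundary's slope falls only by the factor 15/16.5 (about 10%), so days
# of very low wind stay below the line: extra capacity barely moves it.
```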

Wind power enthusiasts may be tempted to argue that there is very little in the way of events below the red line, so there is not much of a capacity problem, especially when more wind power is added. There are two problems with that argument. Firstly, the temporal resolution (daily wind averages) used in the paper underestimates the number of events below the red line (more on that below). Secondly, even if that issue is minor, the capacity problem includes the large number of events that are poised to enter the danger region via a rise in demand and/or a fall in conventional supply. Wind power has changed the statistics of the supply/demand balance, but that change in statistics has now all but stopped, and somehow the rapidly falling conventional supply has to be reconciled with the expected rapid rise in demand.

Modelling Issues

The reanalysis data used, which start in 1979, include long periods of relatively mild (and presumably windy) winters in the UK, and this is likely to have biased the statistics in an over-optimistic direction. The following figures show HadCET data for daily winter maximum and minimum temperatures from 1878, with exceptionally cold days shown with blue markers.

hadcet_all

hadcetmin_all

Finally, the paper uses daily average wind power, when it should have attempted to estimate wind power in the critical early-evening period, when peak demands occur. Critical events with early-evening wind lulls will have been biased towards higher apparent wind power by the use of daily averages. There would be many more dots below the red line in Figure 6 of the paper (shown above) if the analysis had been done at a finer resolution; Roger Andrews shows example data at 5-minute resolution from a particular cold spell in the blog post cited above.
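A toy illustration of the resolution issue, using synthetic (not real) wind output: a day that is windy on average but has an early-evening lull looks comfortable in daily-mean data.

```python
import numpy as np

hours = np.arange(24)
cf = np.full(24, 0.5)        # synthetic: breezy for most of the day
cf[17:20] = 0.05             # ...but a lull across the evening demand peak

print(f"daily average cf: {cf.mean():.2f}")   # ~0.44, looks comfortable
print(f"cf at 18:00:      {cf[18]:.2f}")      # the hour that actually matters
```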


Climate Distortion from Homogenisation

Author: Dr. Michael Chase

“When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down…”

Source: http://berkeleyearth.org/understanding-adjustments-temperature-data/

Many people suspect that there are inaccuracies in the major homogenisations of instrumental temperature records. This article asserts that there are substantial errors resulting from the homogenisation procedures commonly employed, and provides a general explanation for them. In short, there are many transient perturbations in temperature records, and the homogenisation procedure over-corrects for many of them. I am currently quantifying this over-correction in the ACORN-SAT version of Australian surface air temperatures, and hope that this article will inspire others to help, or to look at data from other countries. The article is based mainly on knowledge of ACORN-SAT, but there is no reason to suppose that the conclusions do not apply generally.

First of all, the following figure illustrates why raw data has to be adjusted to reveal the true background temperature variations. The objective is to obtain the temperatures that would have been recorded in the past if the weather station had been at its current location, and with its current equipment. The figure below shows a typical history of the difference in effective temperature calibration between the past and the present:

HOMOGENO_DISTORTION_01

Station moves and equipment changes are the typical causes of sudden and persistent changes in temperature relative to neighbours, events that computer algorithms are good at detecting and correcting. If that were the whole story then everything would be fine, but things go rapidly downhill from this point on.

The main problem for large-scale homogenisations is that there are many “transient” (rather than persistent) perturbations of temperature. The computer detection algorithms still work to some extent with transient perturbations (though it would be better if they failed to work), but they cannot do the correction part of the procedure without adult supervision. The outcome is illustrated in the following figure, showing two typical transient perturbations, and the erroneous corrections that they generate:

HOMOGENO_DISTORTION_02

The problem arises when only one end of a transient perturbation gets detected: the procedure assumes the transient is persistent, and therefore over-corrects all the data before it.
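The over-correction is easy to reproduce with synthetic data: a transient dip whose recovery goes undetected is treated as a persistent step, and the whole of the earlier record is shifted.

```python
import numpy as np

years = np.arange(1900, 2000)
temps = np.zeros(years.size)                    # flat background, for clarity
temps[(years >= 1940) & (years < 1950)] -= 1.0  # transient dip, 1940-1949

# The algorithm detects the step DOWN at the 1940 onset but misses the
# recovery in 1950; assuming persistence, it shifts the entire record
# before the breakpoint down to remove the apparent step.
adjusted = temps.copy()
adjusted[years < 1940] -= 1.0

print("true pre-1940 mean:    ", temps[years < 1940].mean())     # 0.0
print("adjusted pre-1940 mean:", adjusted[years < 1940].mean())  # -1.0, spurious cooling
```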

It is likely that the most common transient perturbations of temperature involve sudden cooling, for example from the following mechanisms:

  • Removal of thermometers from an urban-warmed location
  • Replacement of damaged or degraded screens
  • Onset of a rainy period after a drought

There can be sudden warmings, for example when thermometers are removed from a shaded location, there is sudden screen damage, or a building is erected nearby, but the other end of such transients is probably more likely to be detected than in the case of sudden coolings. It seems likely that poorly corrected transient perturbations give a bias towards cooling of the early part of temperature records.

There is always scope for improving computer algorithms, but I think that the problem lies with the functional design of the homogenisation procedure, which needs more involvement of expert analysts and less blind faith in what the computer says. The analysts need to examine rainfall data to remove false detections from that source, and they need to look over long periods of time to find both ends of transient perturbations. As the main interest in long temperature records is the end-to-end variation of temperature it may be OK to leave transient perturbations in place, but note that mid-20th century urban warming can convert cyclic variations of temperature into hockey sticks.

I am continuing a review of ACORN-SAT data, trying to separate its step change detections into two groups, those resulting from persistent changes (which need correction), and those resulting from transient perturbations, which don’t need correction. I hope that this article will encourage people to examine data from other regions, to determine the extent of climate distortion introduced by the homogenisation process.


Climate Distortion in ACORN-SAT, Part 3

Author: Dr. Michael Chase

Kerang_1930

Photo above: Kerang, Victoria, circa 1930

ACORN-SAT is the outcome of a “system” for detection and correction of non-climatic influences on surface air temperature data recorded in Australia. Previous posts have dealt with errors in the correction part of the process, and with false detections. This post deals with failure to detect what should be detected.

Many non-climatic influences on temperature measurements are transient in nature, so if an attempt is made to detect and correct them, then both ends of the transient influence must be found. That requirement alone makes the process rather risky, and this post shows examples of the risk being realised: one end of a transient perturbation goes undetected, resulting in invalid correction of the data before the onset of the transient influence.

The following figure shows a transient warming influence on daily maximum temperatures at Kerang in Victoria. To make the transient warming easier to see, the data from a nearby reference station (Echuca Aerodrome) has been subtracted from the Kerang data (black curve), removing most of the natural background variation in temperature. The figure also shows matching results for nearby Deniliquin (red) and Rutherglen (blue), for which transient perturbations (detected by ACORN-SAT, and verified visually by me) have been corrected.

Kerang_screen_transient

ACORN-SAT has a detection for Kerang in 1957, at the end of the transient influence, but no detection for the start; it therefore falsely corrects the data all the way back to 1910. The right answer is to correct the data only back to 1943, the onset of the transient, or to make no correction at all.
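For anyone wanting to repeat this kind of comparison, the differencing itself is trivial. The sketch below uses synthetic series, with a transient placed at 1943-1956 to mimic the Kerang case; the real exercise starts from BoM station data (loading and alignment not shown):

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
years = pd.Index(range(1910, 1980), name="year")
background = pd.Series(rng.normal(0.0, 0.5, years.size), index=years)

# Two stations share the regional background; "kerang" also carries a
# synthetic transient warm period.
echuca = 22.0 + background + rng.normal(0.0, 0.1, years.size)
kerang = 22.5 + background + rng.normal(0.0, 0.1, years.size)
kerang[(years >= 1943) & (years <= 1956)] += 0.8   # synthetic transient

diff = kerang - echuca   # background largely cancels; the transient stands out
print(diff.loc[1940:1960].round(2))
```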

This post will be updated with any further detection failures that are found.

NOTE: Missing months of data have been infilled by interpolation, following the temperature variations of neighbouring stations, and partial quality control adjustments have been made for anomalous spikes and dips, in particular at Rutherglen in 1925.
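A minimal sketch of neighbour-guided infilling of the kind described in the note, with illustrative series; real use would need careful choice of reference station and overlap period:

```python
import numpy as np
import pandas as pd

def infill_from_neighbour(target, neighbour):
    """Fill gaps in `target` with neighbour values plus the mean offset
    estimated from months where both stations have data."""
    offset = (target - neighbour).mean()     # pandas skips NaNs by default
    return target.fillna(neighbour + offset)

# Illustrative monthly Tmax series with a gap in the target record.
idx = pd.period_range("1924-01", "1925-12", freq="M")
neighbour = pd.Series(np.linspace(30, 20, len(idx)), index=idx)
target = neighbour + 1.5
target.iloc[5:8] = np.nan                    # missing months
print(infill_from_neighbour(target, neighbour).iloc[4:9])
```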


Historical UHI at Deniliquin

Author: Dr. Michael Chase

UPDATE (18th March 2017): This post is of dubious quality and may be modified or withdrawn, pending a review of the data at ECHUCA AERODROME, which may well not be a good enough representation of the background temperature variations.

australia-deniliquin-16-06-13

Deniliquin in NSW is on the map in both the world of abattoirs and the world of climatology, and there is a connection between those worlds. The rural nature of the town, and the current weather station location at the airport, rule out current Urban Heating as a problem, but the weather station was once in the town, and even small rural towns have walls, sheds and paved areas.

Deniliquin has a very long temperature record, extending well back into the 19th century, but unfortunately it suffered a bout of Urban Heating in the 20th century, and this must be dealt with before the background climatology can be obtained. The simplest way to deal with UHI is to ignore it, but this is only possible if it occurs as a “transient” perturbation within the record; otherwise it will distort the end-to-end variation of temperature. This post presents evidence that suggests that most of the UHI did indeed occur as transient heating within the 20th-century part of the record.

The following photo provides some ground truth for the UHI effect at Deniliquin:

morris_denil_photo

Source: http://ww.hashemifamily.com/Kevan/Climate/Heat_Island.pdf

When the weather station moved to a better location in the town in 1971, the average daily minimum temperature dropped by around 1 °C relative to neighbouring stations (source: ACORN-SAT station adjustment summary, verified visually by me). The key question is how the UHI varied in the decades leading up to 1971. That is a difficult question to answer with any degree of certainty, not helped by a general lack of consistency and completeness in the temperature records of neighbouring stations, but a comparison with a single near neighbour provides compelling evidence that significant UHI at Deniliquin only began in the 1920s. The following figure shows a comparison between annual average Tmin at Deniliquin and Echuca:

DENIL_UHI_01

The events annotated on the figure above come from extracts (provided at the end of this post) from the PhD thesis of Simon Torok, which provides a summary of station histories. The onset of UHI at Deniliquin appears to have been in the early 1920s, with variations up and down, presumably as new buildings were added, probably with small moves of the thermometers, until dropping in 1971. There is still some uncertainty about how much residual UHI was present when the station moved to the airport in 1984, but that will be discussed in a later post.

Because the UHI both begins and ends within the record, it can be ignored, as it will have no effect on the end-to-end variations of temperature, which will be presented in later posts.

I don’t have any photos of Deniliquin Post Office, but here is one of the “nearby” Mildura Post Office from 1912:

Mildura_PO_1912

Appendix: Station History Summaries (source: PhD thesis of Simon Torok)


Climate Distortion in ACORN-SAT, Part 2

Author: Dr. Michael Chase

outback-town-welcome-boulia-channel-country-w-queensland-australia-aahmw2

A recent post dealt with a flaw in ACORN-SAT: the erroneous assumption that all step changes in temperature arise from persistent non-climatic influences. This post illustrates a potential “false alarm” problem with the detection of step changes, which often occurs when there is heavier than average rainfall at a weather station. It looks like the algorithms are responding to a transient (several years) cooling of daily maximum temperatures associated with the rainfall and its aftermath, and the analysts are not removing those false detections. Correcting the data for those false alarms cools all years before the event, when the correct thing to do is to make no correction at all.

The following figure shows rainfall and Tmax data from interior Queensland, an area which responds strongly to rainfall (and clouds), with drops in temperature when rainfall is higher than average:

ACORN_Rain_01

The stations shown in the figure above are Richmond (red), Camooweal (cyan), Boulia (blue) and Longreach (black). Note how drops in temperature are associated with higher than average rainfall. ACORN-SAT gives a “statistical” step change for Richmond (red) in 1950, exactly when it has a peak in rainfall.

This post will be expanded later to show other examples of ACORN-SAT steps being linked to peaks in rainfall.

A feature of the algorithms that may be contributing to false detections is the use of 10 neighbouring stations to decide on the size of the step. Requiring 10 stations means that a majority of them may be in areas not affected by the local rise in rainfall. The median across those neighbours then disagrees with the temperature change at the station being examined; the discrepancy crosses the detection threshold and triggers a false detection.
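The inconsistency is easy to reproduce with toy numbers; the 0.5 °C threshold below is purely illustrative, not the actual ACORN-SAT value:

```python
import numpy as np

# Year-on-year Tmax change at the candidate station during a local wet spell,
# and at its 10 nearest neighbours, most of which missed the rainfall.
station_change = -0.9                     # transient cooling from rain/cloud
neighbour_changes = np.array([-0.8, -0.7, -0.1, 0.0, 0.1,
                              -0.1, 0.2, 0.0, 0.1, -0.2])

expected = np.median(neighbour_changes)   # ~ -0.05: neighbours see no event
discrepancy = station_change - expected
THRESHOLD = 0.5                           # illustrative detection threshold
print(f"discrepancy {discrepancy:+.2f} C -> flagged: {abs(discrepancy) > THRESHOLD}")
```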


Sheds and Weather at Kalgoorlie

KalgoorliePanoramaSep1930_WEFretwellCollection

Photo above: Kalgoorlie, Western Australia, circa 1930

Author: Dr. Michael Chase

This post looks at monthly averages of daily maximum temperatures recorded at Kalgoorlie, and surrounding areas, in the inter-war years of 1920-40, during which ACORN-SAT makes two adjustments. The validity of the adjustments is discussed, as well as their wider significance for other Australian weather station data.

First of all, here is a clear example of a temperature inhomogeneity at Kalgoorlie Post Office (black curve), suddenly changing its temperature relative to nearby Menzies (mauve) in 1936:

KALG_Tmax_02

The ACORN-SAT adjustment summary gives “Move” as the explanation for its temperature adjustments (decreases) at Kalgoorlie, for all years prior to 1936. It is the “all years before 1936” part of the adjustments that is in serious doubt, because the appendix to Simon Torok’s PhD thesis gives the reason for the move:

KALG_Torok

The thermometers and screen were moved a mere 100 yards in 1936 because of sheds, and it seems likely that the resulting drop in temperature was caused by the removal of the thermometers from the heat trap created by the sheds, rather than by any intrinsic difference in temperature between the new and old locations. Temperature changes due to location are persistent and justify the adjustment of all temperatures prior to the move, but temperature changes due to sheds are probably transient in nature, and correction should only be applied to (say) 1910 data if there is evidence that the shed heat trap was in place then, which seems unlikely.

The station history summary shown above gives a flavour of the problems involved in measuring temperatures in the inter-war years (such as readings taken by girls!), worth bearing in mind in the discussion below of the possible inhomogeneity in 1930, which I think is in doubt. Note that central organisation probably improved accuracy overall, but it tended to make changes happen at about the same time; for example, all three stations mentioned above had some sort of change in 1935/36, which makes it difficult to deal with inhomogeneities around that time.

The following figure shows the data at Kalgoorlie Post Office around 1930, together with eye-ball estimates of averages before and after:

KALG_Tmax_01

ACORN-SAT says that there was a non-climatic drop in Tmax temperatures at Kalgoorlie in 1930, but I find the evidence for it unconvincing. Firstly, there is no mention of any change in the station history summary at that time, and secondly, the local (well-inland) neighbours don’t show anything unusual at that time:

KALG_Tmax_03

The figure above shows a cooling trend amongst all the stations (the well-inland ones only) in the area, but bear in mind potential screen problems, which may have distorted the trend if there were more sunny years before 1930 than after.

ACORN-SAT calculates the size of its temperature adjustments from how temperatures changed at neighbouring stations, but most of the neighbours used are closer to the sea than Kalgoorlie, hence have a greater maritime moderating effect on any temperature trend. There are hints in the data that around 1930 Kalgoorlie was much more in an inland weather pattern than in that of the stations with strong maritime influence.

I think that there is currently a respectable hypothesis that around 1930 Kalgoorlie was cooling in Tmax, and that the especially cool year of 1931 (and its aftermath) triggered the ACORN-SAT detectors to examine it; they then produced an erroneous decision because most of the neighbours used are more moderated by the sea to the west and south.

Further work is needed at Kalgoorlie to sort out the uncertainties and produce a more accurate set of temperature adjustments than is provided by ACORN-SAT.
