Climate Distortion in ACORN-SAT, Part 2

Author: Dr. Michael Chase

Photo: outback town welcome sign, Boulia, Channel Country, western Queensland

A recent post dealt with the flaw in ACORN-SAT whereby it makes the erroneous assumption that all step changes in temperature arise from persistent non-climatic influences. This post illustrates a potential “false alarm” problem in the detection of step changes, one that often arises when a weather station has heavier than average rainfall. It looks as if the algorithms are responding to a transient (several-year) cooling of daily maximum temperatures associated with the rainfall and its aftermath, and the analysts are not removing those false detections. Correcting the data for those false alarms cools all years before the event, whereas the correct thing to do is to make no correction at all.

The following figure shows rainfall and Tmax data from interior Queensland, an area which responds strongly to rainfall (and clouds), with drops in temperature when rainfall is higher than average:

ACORN_Rain_01

The stations shown in the figure above are Richmond (red), Camooweal (cyan), Boulia (blue) and Longreach (black). Note how drops in temperature are associated with higher than average rainfall. ACORN-SAT gives a “statistical” step change for Richmond (red) in 1950, exactly when it has a peak in rainfall.

This post will be expanded later to show other examples of ACORN-SAT steps being linked to peaks in rainfall.

A feature of the algorithms that may be contributing to false detections is the use of 10 neighbouring stations to decide on the size of the step. Requiring 10 stations means that a majority of them may be in areas not affected by the local rise in rainfall. The size of the step is then set by the median of the neighbour comparisons, which can differ enough from the temperature change at the station being examined to cross the detection threshold and trigger a false detection.
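To make the mechanism concrete, here is a minimal toy sketch of how a median-of-neighbours comparison could flag a transient, rainfall-driven cooling as a step change. This is not the ACORN-SAT algorithm: the before/after windowed-mean test, the station counts, the dip size and the 0.3 °C threshold are all illustrative assumptions of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1940, 1961)

# Target station: flat background climate plus a transient three-year cooling,
# representing wet years 1950-1952 (about -1.0 C in annual Tmax).
target = 0.1 * rng.standard_normal(years.size)
wet = (years >= 1950) & (years <= 1952)
target[wet] -= 1.0

# Ten neighbours, of which only two share the local rainfall-driven dip.
neighbours = 0.1 * rng.standard_normal((10, years.size))
neighbours[:2, wet] -= 1.0

# Difference series: candidate station minus the median of its neighbours.
diff = target - np.median(neighbours, axis=0)

# Crude breakpoint test at 1950: compare mean differences over the five years either side.
before = diff[(years >= 1945) & (years <= 1949)].mean()
after = diff[(years >= 1950) & (years <= 1954)].mean()
shift = after - before

THRESHOLD = 0.3  # illustrative detection threshold in degrees C, not the ACORN-SAT value
print(f"apparent step at 1950: {shift:+.2f} C, detected: {abs(shift) > THRESHOLD}")
# A transient dip gets reported as a permanent step; adjusting all pre-1950 data
# by this amount would cool the early record for no physical reason.
```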


Sheds and Weather at Kalgoorlie

KalgoorliePanoramaSep1930_WEFretwellCollection

Photo above: Kalgoorlie, Western Australia, circa 1930

Author: Dr. Michael Chase

This post looks at monthly averages of daily maximum temperatures recorded at Kalgoorlie, and surrounding areas, in the inter-war years of 1920-40, during which ACORN-SAT makes two adjustments. The validity of the adjustments is discussed, as well as their wider significance for other Australian weather station data.

First of all, here is a clear example of a temperature inhomogeneity at Kalgoorlie Post Office (black curve), which suddenly changes its temperature relative to nearby Menzies (mauve) in 1936:

KALG_Tmax_02

The ACORN-SAT adjustment summary gives “Move” as the explanation for its temperature adjustments (decreases) at Kalgoorlie, for all years prior to 1936. It is the “all years before 1936” part of the adjustments that is in serious doubt, because the appendix to Simon Torok’s PhD thesis gives the reason for the move:

KALG_Torok

The thermometers and screen were moved a mere 100 yards in 1936 because of sheds, and it seems likely that the resulting drop in temperature was caused by the removal of the thermometers from the heat trap created by the sheds, rather than by any intrinsic difference in temperature between the new and old locations. Temperature changes due to location are persistent and justify the adjustment of all temperatures prior to the move, but temperature changes due to sheds are probably transient in nature, and a correction should only be applied to (say) 1910 data if there is evidence that the shed heat trap was in place then, which seems unlikely.

The station history summary shown above gives a flavour of the problems involved in measuring temperatures in the inter-war years (such as readings taken by girls!), which is worth bearing in mind in the discussion below of the claimed inhomogeneity in 1930, which I think is in doubt. Note that central organisation probably improved accuracy overall, but it tended to make changes happen at about the same time everywhere; for example, all three stations mentioned above had some sort of change in 1935/36, which makes it difficult to deal with inhomogeneities around that time.

The following figure shows the data at Kalgoorlie Post Office around 1930, together with eye-ball estimates of averages before and after:

KALG_Tmax_01

ACORN-SAT says that there was a non-climatic drop in Tmax temperatures at Kalgoorlie in 1930, but I find the evidence for it unconvincing. Firstly, there is no mention of any change in the station history summary at that time, and secondly, relative to its local (well-inland) neighbours, Kalgoorlie shows nothing unusual at that time:

KALG_Tmax_03

The figure above shows a cooling trend amongst all the stations (the well-inland ones only) in the area, but bear in mind potential screen problems, which may have distorted the trend if there were more sunny years before 1930 than after.

ACORN-SAT calculates the size of its temperature adjustments from how temperatures changed at neighbouring stations, but most of the neighbours used are closer to the sea than Kalgoorlie, hence have a greater maritime moderating effect on any temperature trend. There are hints in the data that around 1930 Kalgoorlie was much more in an inland weather pattern than in that of the stations with strong maritime influence.

I think there is currently a respectable hypothesis that around 1930 Kalgoorlie really was cooling in Tmax, and that the especially cool year of 1931 (and its aftermath) triggered the ACORN-SAT detectors to examine it, with an erroneous decision resulting because most of the neighbours used are moderated by the sea to the west and south.

Further work is needed at Kalgoorlie to sort out the uncertainties and produce a more accurate set of temperature adjustments than is provided by ACORN-SAT.


Temperature Homogenisation Errors

Author: Dr. Michael Chase

Figure_4.1_Adelaide_screens

“When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down depending on the size and direction of the breakpoint.”

Extract above from: http://berkeleyearth.org/understanding-adjustments-temperature-data/

Temperature measurements have two classes of non-climatic influence:

  • Transient influences, with no impact on the start and end of the record
  • Persistent shifts, with an impact on the start of the record relative to the end

An example of a persistent shift is the switch from the use of a Glaisher stand or thermometer shed in the early years, to the use of a Stevenson screen at some time within the record. Another example of a persistent shift is a station move within the record.

Examples of transient influences are urban growth near a weather station before it is relocated, and deterioration and replacement of the thermometer screen.

This article asserts that the major temperature homogenisation schemes fail to make a distinction between transient and persistent influences, treating all perturbations as persistent. The consequence of that poor assumption is that the transient influences, which are predominantly warming, lead to an over-cooling of the early periods of many records, an artifact that increases in severity as the detection of step changes in temperature becomes more sensitive.

Suppose a weather station recorded accurate temperatures around 1900, but now those temperatures are being changed by homogenisation algorithms as a result of all the things that happened at, and to, the weather station between 1900 and the present, ending at its current location at (say) an airport. Typically, between 1900 and now, the temperatures recorded will have been influenced by several non-climatic effects, such as varying amounts of urban warming, various screen and thermometer deteriorations and replacements, and observer errors.

Suppose a village or town was built around the weather station, maybe consisting of just a few nearby buildings, and the weather station was relocated to an airport or rural site. Suppose the screen deteriorated, allowing sunlight to shine on the thermometers, until the screen was replaced. Would a computer be able to use the temperature record, compared to those of neighbours, to properly correct the temperatures all the way back to 1900 so that they reflected only the background climate?

In order to change temperature data back to 1900, to give what would have been measured in the past at the current location of the weather station, with its current equipment, you must have a COMPLETE history of the non-climatic influences on the temperature data being adjusted. But such a complete history is usually lacking: the homogenisation algorithms only quantify step changes in influences, and only when the steps are large enough to detect, or when there is information to suggest that a change is expected.

The algorithms for step detection and quantification are already quite sophisticated and are being further developed, but there is a barely mentioned elephant in the room: the drastically simple (and often bad) assumption that the history of non-climatic influence consists solely of its step changes, which is only true if the non-climatic influences don’t change with time between the steps.

A significant part of the total non-climatic influence on recorded temperature can be regarded as “thermal degradation”, typically urban growth near a weather station, and deterioration of the (usually wooden) thermometer screen. Such degradation is both warming and time-varying, often growing slowly then ending suddenly with a step change down in temperature when the weather station is moved to a better location, or an old or broken screen is replaced with a shiny new one. It is of course correct to adjust temperatures down in the years leading up to such sudden cooling events, but the assumption of constant influence will often over-correct earlier years, in particular the years before the influence in question began.
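To make the over-correction concrete, here is a small numerical sketch. It assumes a stylised station whose only non-climatic influence is an urban warming that ramps up from 1950 and is removed by a relocation in 1980; the "step-only" correction it applies is my illustration of the assumption criticised here, not code from any homogenisation package.

```python
import numpy as np

years = np.arange(1900, 2001)
true_climate = np.zeros(years.size)  # flat background climate, to keep the example clean

# Non-climatic influence: urban warming ramping from 0 to about +0.8 C over 1950-1979,
# then vanishing when the station is relocated in 1980 (a sudden step down).
influence = np.where(years < 1950, 0.0,
             np.where(years < 1980, 0.8 * (years - 1950) / 30.0, 0.0))
raw = true_climate + influence

# Step-only "homogenisation": measure the 1980 step and subtract it from ALL earlier years.
step = raw[years == 1980][0] - raw[years == 1979][0]   # about -0.77 C
adjusted = raw.copy()
adjusted[years < 1980] += step                          # assumes the influence was constant

print("error introduced at 1900:",
      round(float(adjusted[years == 1900][0] - true_climate[0]), 2), "C")
# -> about -0.77 C: the early record is cooled even though it was never affected.
```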

Currently, in temperature reconstructions produced by homogenisation algorithms, early temperature measurements are being reduced incorrectly as a result of the detection of time varying warming influences. As the detection algorithms become more sensitive the invalid cooling of the past increases.

As a systems engineer I find it useful to regard temperature homogenisation as a system: the inputs are raw data, and the outputs are meant to be better representations of the background temperature history. Treating all detected step changes as persistent is a functional design error. The step change detection algorithms themselves are blameless; the problem lies with an invalid assumption made at the functional design stage.

To avoid over-correcting the past, expert meteorological data analysts should make the final decision about whether, and how, each detected step change is removed from the data. Station history, and the temperature data itself, must be examined to classify each detected step change into one of two groups: persistent changes (such as station moves or equipment changes) or time-varying changes, the latter requiring special analysis to either measure (from the data) or model how the influence varies with time.


Climate Distortion in ACORN-SAT

Author: Dr. Michael Chase

acorn_means_sat-on

Source of the figure above: http://www.bom.gov.au/climate/change/#tabs=Tracker&tracker=timeseries

Summary

There is considerable climate distortion in the ACORN-SAT version of surface air temperatures of Australia from 1910 to present, with most of the distortion in the first half of the 20th century. The main problem lies in the assumption made that all non-climatic influences, detected via anomalous step changes in temperature, do not vary with time. Many non-climatic influences, such as urban heating and screen degradation, do vary with time, so whilst the ACORN-SAT correction process does a good job for some years before step changes occur, it over-corrects at earlier times, often giving an invalid cooling of the early decades of the 20th century.

This problem with ACORN-SAT arises only at the final stage of processing, when corrections are applied. The step change information itself is highly valuable, and it should be possible to produce a more accurate version of the temperature history of Australia, ideally by following how the non-climatic influences vary with time in the data, otherwise by modelling them.

Introduction

Reconstructing the actual surface temperatures of Australia back to 1910 is a difficult job which has to contend with several non-climatic influences on what is sometimes incomplete, erroneous and poorly documented temperature data. The ACORN-SAT reconstruction detects non-climatic influences only at the points where they change suddenly, either at their onset or at their removal. Data from neighbouring stations is used to detect and estimate the size of sudden and persistent changes in relative temperature, which are deemed to be non-climatic in origin. The final stage of processing is to correct the temperature data so as to reveal the true background climate variations.

The following figure illustrates where things go wrong for any time-varying influence, shown as the red curve:

acorn_correction_error_v2

The red curve in the figure above applies to common non-climatic influences such as urban growth around weather stations located in towns, or the gradual degradation of the thermometer screen. When a weather station moves out of town, or to a better location within the town, or a screen is replaced, the temperatures recorded suddenly drop (relative to those of neighbours), an event detected by the ACORN-SAT algorithms. The erroneous assumption (shown as the blue curve) is then made that the non-climatic influence was constant in time, resulting in over-correction of all the years before the influence reached its full size.

Examples

morris_denil_photo

Source of the picture above: http://www.hashemifamily.com/Kevan/Climate/Heat_Island.pdf

When the weather station at Deniliquin moved to a better location in the town in 1971, the minimum temperatures recorded fell by around 1 degree C on average, probably mostly due to the removal of the heating effect of the buildings and paving stones. ACORN-SAT assumes that the urban heating in 1971 was constant all the way back to 1910.

Another example is given in the ACORN-SAT documentation itself, for a site move in Inverell in 1967:

inverell_acorn

Source of the extract above: http://cawcr.gov.au/technical-reports/CTR_049.pdf

Again, ACORN-SAT assumes that the urban heating of the very built-up post office site applied all the way back to 1910, though in this case there were also earlier step changes. Integrating the step changes of time varying influences does not in general get you anywhere near the right answer for early decades.

There are many examples in the ACORN-SAT station adjustment summary of step changes due to equipment replacement, many likely to be due to a recent degradation being removed by the provision of new equipment. ACORN-SAT assumes that the degraded equipment was in place all the way back to 1910.

Previous Commentary

I must apologise at this stage for being unfamiliar with most of the literature on temperature homogenisation, so I am probably not giving due credit to previous authors. The following two examples are relevant, the first from Hansen et al. (2001):

hansen_uhi

Source of the figure above: https://pubs.giss.nasa.gov/abs/ha02300a.html

The figure above shows a time-varying urban heating being correctly removed around the time of a station move, but being over-corrected at earlier times. There are several examples in ACORN-SAT of town data being merged with that from an airport, with an unknown amount of historical urban heating being turned into erroneous cooling of early data.

The second example is from Stockwell and Stewart (2012), correctly identifying a major reason why the precursor to ACORN-SAT gives more apparent warming than is found in the raw data:

stockwell_stewart_drift_error

Source of the figure above: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.362.9661&rep=rep1&type=pdf

The Way Forward

ACORN-SAT only falls at the final hurdle: the temperature step changes that it identifies can be used to redeem it. Here is an example of a validation of one of its discontinuities:

cr06_lav_1955

The figure above shows an anomalous step-up in average maximum temperatures at Laverton RAAF (blue curve), as well as possible indications of anomalous warming in the 1960s relative to neighbours: Melbourne Regional (-0.8C), Essendon Airport (-0.1C), Black Rock, and Tooradin.

One way to deal with urban warming, most of which is historical rather than current, is to construct composite temperature records, in the example above following Laverton up to its step change, then jumping to a suitably scaled average of its more rural neighbours.
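Here is a minimal sketch of that composite construction, assuming annual series already on a common set of years. Aligning the neighbour average by a simple offset over a pre-break overlap window is my simplification of the "suitably scaled average"; the function name, window length and break year are illustrative only.

```python
import numpy as np

def composite_record(station, neighbours, years, break_year, overlap=10):
    """Follow `station` up to `break_year`, then switch to the average of the
    (more rural) `neighbours`, offset so the two agree over the `overlap`
    years immediately before the break."""
    neigh_avg = np.mean(neighbours, axis=0)
    w = (years >= break_year - overlap) & (years < break_year)
    offset = np.mean(station[w] - neigh_avg[w])   # align the neighbour average to the station
    return np.where(years < break_year, station, neigh_avg + offset)

# Hypothetical usage for the Laverton example (arrays not shown, break year a placeholder):
# comp = composite_record(laverton_tmax, np.vstack(rural_neighbour_series), years, break_year=1955)
```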

For large temperature step changes, such as those shown above for Deniliquin and Inverell, it should be possible to follow, and thereby properly correct, the time variation of the urban warming down to a certain minimum level. For smaller temperature changes, many of which occur in regions without reliable near neighbours, modelling of the urban heating can be used, guided by any available documentation of the station history.

This post ends here; examples of the size of ACORN-SAT errors will be shown in later articles, but the focus will shift to the question of what the right answer is for Australian temperature histories.


Historical Temperatures at Alice Springs

Paul Matthews has recently reported on the highly unstable versions of temperatures at Alice Springs (and elsewhere) in GHCN:

Instability of GHCN adjustment algorithm

Several years ago Roger Andrews raised doubts about GHCN temperature homogenisation at Alice Springs:

https://tallbloke.wordpress.com/2012/10/11/roger-andrews-chunder-down-under-how-ghcn-v3-2-manufactures-warming-in-the-outback/

These blog posts and ones related to them (apologies for not mentioning everyone) have inspired me to fire up once again my own temperature analysis tools.

Besides the question of GHCN algorithm stability, there is also the question of what the right answer is, and whether or not one of the GHCN versions has come close. Step one is always to plot the raw data, and the data for closest neighbours, as shown in the following figure for annual averages of daily maximum temperature (Tmax), downloaded from BoM Climate Data Online:

alice_a

The figure above shows the temperatures as they are (not anomalies), with Alice Springs Post Office in black, and data from nearest neighbours in various colours. Several tentative conclusions can be drawn just by eye-balling the data as follows:

  • Alice Springs is cooler than its neighbours (possibly due to higher elevation and differences in vegetation and cloud cover), which may create an obstacle to easy homogenisation
  • The neighbours share a great deal of consistency in their temperature fluctuations, and in their gentle cooling trend to around 1960, seen elsewhere in Eastern Australia. It should be relatively easy to detect inhomogeneities amongst the neighbours, and correct them at the level of annual averages, but only back to around 1905, the date from which there may be enough neighbours to provide reasonable confidence.
  • The elevated temperatures at Alice before around 1900 (marked A on the figure above) suggest that non-standard or damaged exposures were used in that period. Documentary evidence for a Stevenson screen in use at a certain date only provides evidence for its use from that date, not before.
  • Alice is missing a rise in temperature in 1931 (marked B on the figure above), which would trigger some algorithms to shift its temperatures at that time, especially as the site moved in 1932 from the Telegraph Office to the Post Office. But, the overall temperature trend of Alice is already consistent with the neighbours, and a major shift in its temperatures would create an inconsistent warming trend. See below for documentary evidence that may explain the 1931 discrepancy, a change to either a new or a replacement Stevenson screen.
  • It is possible that Alice at the Post Office (after 1932) had urban heating; if so, that would make accurate homogenisation difficult. The problem of urban heating at a Post Office (or similar urban) site, prior to a shift to an airport site, is not obviously dealt with properly by government homogenisations such as ACORN-SAT. Simply splicing together airport and urban-warmed Post Office data gives an exaggerated warming trend, even if there is no urban warming at the airport.
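Returning to step one above (plotting the raw annual data for the station and its closest neighbours), here is a minimal sketch using pandas and matplotlib. The file names and the two-column layout (Year, Tmax) are assumptions about how the Climate Data Online downloads have been pre-processed into annual means, not the BoM's native format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical pre-processed CSVs, one per station, columns "Year" and "Tmax" (annual mean, deg C).
stations = {
    "Alice Springs PO": "alice_springs_po_annual_tmax.csv",
    "Neighbour 1": "neighbour_1_annual_tmax.csv",
    "Neighbour 2": "neighbour_2_annual_tmax.csv",
}

fig, ax = plt.subplots()
for name, path in stations.items():
    df = pd.read_csv(path)
    ax.plot(df["Year"], df["Tmax"], label=name)   # plot actual temperatures, not anomalies

ax.set_xlabel("Year")
ax.set_ylabel("Annual mean Tmax (°C)")
ax.legend()
plt.show()
```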

Here is some documentary evidence of station history, firstly an extract from the PhD of Simon Torok, an invaluable source of site history information, especially for stations that are not part of the ACORN-SAT network:

alice_springs_torok

A link to the PhD thesis of Simon Torok is given as ref. 5 in the CLIMATE HISTORY OF SE AUS page of this blog (the appendix contains the station history summary). The extract above indicates a Stevenson screen supplied in November 1931, but it is unclear whether it is the first one used, or a replacement for a degraded one.

Information via ACORN-SAT adds to the above, with details of the area in use:

alice_springs_acorn

Finally, Alice Springs was used as a case study by Blair Trewin in describing the techniques used in the development of ACORN-SAT, see page 86 of his report here:

http://cawcr.gov.au/technical-reports/CTR_049.pdf

The ACORN-SAT report mentions considerable local temperature variations due to heavy rainfall.

I am developing another line of attack on Alice Springs, approaching from North West NSW, and will update this post when sufficient surrounding climate history is available to give confidence in any answer for Alice Springs.


Issues with AEMO Forecasting, Part 3

The AEMO models the future adequacy of the NEM (National Electricity Market) electricity system, one of the key outputs being an annual “Electricity Statement of Opportunities” (ESOO), whose stated purpose is to alert industry to potential opportunities for new generators. This post asks whether the ESOO is achieving its stated purpose, the conclusion being that it largely fails because it focuses on a single number (average unserved energy) as a measure of system adequacy. This single number is usually zero, which conveys no information about how close it is to being non-zero, and when it is non-zero it conveys no information about how bad the energy shortfall might be in a severe heatwave summer.

The key quantity modelled by AEMO is unserved energy (USE), the expected average number of MWh that the system will fail to provide (in the absence of special measures being taken), as a percentage of total annual consumption. The Reliability Standard for USE is 0.002% (around 250 MWh in South Australia), and the value of USE is quoted if the Reliability Standard is exceeded. To gain some insight into USE I have calculated actual sample values for it, as the amount of non-wind supply varies, for years 2009/10 to 2014/15, the ones for which AEMO provides wind trace data at 2016/17 levels.
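For readers who want to reproduce that kind of sample calculation, here is a minimal sketch. It assumes half-hourly demand and wind traces (in MW) for one year and treats all non-wind supply as a single flat capacity; the function and variable names are mine, and this is a simplification of what AEMO actually models.

```python
import numpy as np

def sample_use_percent(demand_mw, wind_mw, non_wind_capacity_mw):
    """Unserved energy for one historical year, as a percentage of annual consumption.
    demand_mw and wind_mw are half-hourly traces in MW of equal length."""
    shortfall_mw = np.maximum(demand_mw - (wind_mw + non_wind_capacity_mw), 0.0)
    unserved_mwh = shortfall_mw.sum() * 0.5        # half-hour intervals -> MWh
    consumption_mwh = demand_mw.sum() * 0.5
    return 100.0 * unserved_mwh / consumption_mwh

# Sweeping the non-wind capacity reproduces curves like those shown below; comparing the
# capacities at which USE crosses 0.002% with and without wind gives an equivalent firm capacity:
# for cap in range(2000, 3600, 100):
#     print(cap, sample_use_percent(demand, wind, cap))
```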

The following figure shows how USE varies with non-wind supply in South Australia for 2013/14 weather and demand, for a system with no wind power (black), 2016/17 level wind power (red) and 150% of 2016/17 wind power (blue):

sa_use_a1

The figure above indicates what might happen in the future if non-wind supply drops below 3000 MW, but it obscures the situation at the very low USE percentages around the Reliability Standard.

The following figure shows the same data as above, but focused on the very low percentages around the Reliability Standard:

sa_use_a2

The figure above reveals that the effect of 1500 MW of 2016/17 wind power on un-served energy would be equivalent to a few hundred MW of firm capacity, but 50% more wind power would not provide 50% more equivalent firm capacity.

The following figure shows the same data as above, but for all six years used by AEMO in its assessment of future USE values for South Australia. Note that for display purposes the value of unserved energy is capped at 0.004%, double the Reliability Standard:

sa_use_a

The figure above shows a high degree of consistency for the effect of 2016/17 level wind power, and for 50% more wind power. The main difference between the years is simply the varying amount of demand, which is due in large part to the varying severity of heatwaves and to whether or not the severest ones culminated on working days.

Discussion

A prediction of future average USE figures involves modelling plant availability, including unexpected outages, and future demands, and this is what the AEMO does. Leaving aside issues of the validity of the various ingredients in the calculation, this post is suggesting that ESOO reports would benefit from much more information being provided, for example giving information for each separate year (2009/10 to 2014/15) that goes into the future year averages. It is likely that the future averages are dominated by one or two severe heatwave years in the past, whose effect gets considerably diluted by averaging over all six years examined.

Reliability Standard breaches come from a few severe heatwave days with low wind power, so it would be easy to do a poor job of modelling unexpected outages on those days, for example via a limited number of Monte Carlo runs in which there is a good chance that no outage falls on a heatwave day.
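A back-of-envelope illustration of that sampling risk, with made-up numbers: assume a major unit has a 2% chance of being on forced outage on any given day, that five days a year are critical heatwave days, and that only ten Monte Carlo runs are made.

```python
# Chance that a forced outage never coincides with a heatwave day, per run and across all runs.
p_outage_per_day = 0.02   # assumed daily forced-outage probability of one major unit
heatwave_days = 5         # assumed number of critical heatwave days per year
runs = 10                 # assumed (small) number of Monte Carlo runs

p_clear_one_run = (1 - p_outage_per_day) ** heatwave_days   # about 0.90
p_clear_all_runs = p_clear_one_run ** runs                  # about 0.36
print(f"P(no outage on a heatwave day, single run): {p_clear_one_run:.2f}")
print(f"P(no run ever has an outage on a heatwave day): {p_clear_all_runs:.2f}")
# With so few runs there is a sizeable chance the average USE misses these events entirely.
```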

The Monte Carlo approach, and AEMO’s use of averages, may not be appropriate for this problem. An alternative approach would be to focus on credible worst-case scenarios, such as the highest demand seen in the last six years coinciding with an outage at a major generator or interconnector.

There are many obstacles to getting conventional generators built, and one such obstacle may be the limited amount of (and possibly misleading) information provided in ESOO reports. Builders of wind farms do not have a problem with the quantification of system adequacy: the only relevant information for them is their capacity figure, which they can estimate themselves from information available elsewhere. Maybe ESOO reports are poor because nobody reads them, because nobody is currently contemplating building conventional generators.


Issues with AEMO Forecasting, Part 2

The previous post looked at a demand-side issue with AEMO electricity forecasting; this post deals with an issue concerning the contribution of wind power to meeting peak demands. In summary, it appears that AEMO decouples historical wind power from historical demand in its Monte Carlo modelling of future system adequacy. If so, this significantly over-estimates the contribution of wind power to meeting peak demands, and hence over-estimates system adequacy. There is a strong association between heatwaves and wind power lulls in South Australia, and this association is partly lost by the decoupling that AEMO appears to have applied to the demand and wind data.

The basis of AEMO’s Monte Carlo modelling is a set of historical data on demand and wind power, at 30-minute resolution, which are available to the public at this link:

https://www.aemo.com.au/Electricity/National-Electricity-Market-NEM/Planning-and-forecasting/NEM-Electricity-Statement-of-Opportunities

The AEMO’s approach to the modelling is excellent, being based on actual demand and wind power data, but an examination of the data used reveals a potential problem. The problem can be seen in the following two example CSV files, which can be opened with a spreadsheet:

  • LKBONNY1-REF-2014.csv
  • 2016 SA Neutral 10POE REF-YR 2013-14.csv

The LKBONNY file contains wind power data as 30-minute averages from July 2013 to June 2014, which I have verified against NEMWEB archive data at 5-minute resolution in the following figure:

sa_windtraces_a

Source of the 5-minute data in the figure above: http://nemweb.com.au/Reports/ARCHIVE/Next_Day_Actual_Gen/

The strange thing about the 30-minute wind trace data files is that they contain around 25 identical copies of the actual 2013/14 data, one copy for each year to 2040 (for leap years the data for 29th February is an exact copy of the data for the 28th). Thus, when the future is modelled, it appears that the wind power data retains its original dates, but the same thing does not happen with the demand data.

The demand data files for future years shift the actual demand dates so as to preserve their days of the week. This can be seen clearly by looking at the demand spikes of heatwaves, in particular the very severe one of Monday 13th to Friday 17th January 2014; the 5 consecutive days of very high demands can be seen moving in date from year to year so as to preserve them always as Monday to Friday.

Thus, it appears that there is a good chance that AEMO has decoupled wind power data from demand data, partly diluting the key correlation between very high demand and very low wind power, thereby over-estimating system adequacy for dealing with heatwaves. The dilution would be only partial, rather than total, because the demand data oscillates around correct alignment with the wind power data.
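The effect of that partial decoupling can be illustrated with a small synthetic example: a year of daily demand and wind in which a five-day heatwave coincides with a wind lull, compared with the same demand shifted three days relative to the wind, as a day-of-week-preserving alignment can do. All the numbers are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
days = 365
demand = 1500 + 100 * rng.standard_normal(days)   # MW, daily peak demand (illustrative)
wind = 600 + 200 * rng.standard_normal(days)      # MW, daily wind output (illustrative)
wind = np.clip(wind, 0.0, None)                   # keep the toy wind trace non-negative

# A severe five-day heatwave during which the wind collapses.
hw = slice(190, 195)
demand[hw] += 1500
wind[hw] = 50.0

firm = 2500.0  # MW of non-wind capacity

# Demand and wind paired on their true dates.
shortfall_true = np.maximum(demand - (firm + wind), 0).sum()

# Demand shifted three days to preserve days of the week, wind left on its original dates.
demand_shifted = np.roll(demand, 3)
shortfall_shifted = np.maximum(demand_shifted - (firm + wind), 0).sum()

print(f"shortfall, true alignment       : {shortfall_true:7.0f} MW-days")
print(f"shortfall, demand shifted 3 days: {shortfall_shifted:7.0f} MW-days")
# Part of the heatwave demand now lines up with ordinary wind, so the modelled
# shortfall shrinks and the heatwave risk is understated.
```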

 
