Heathrow Heatwave Data

UPDATE: AUGUST 2022

The UK Met Office now provides much of its historical weather data in a freely accessible database called MIDAS-OPEN. You have to register to gain access, but this can be done by anyone.

The MIDAS-OPEN data reveal that two observation intervals are in use: 12-hour and 24-hour periods for which Tmax and Tmin are recorded. The 12-hour periods run from 21:00 to 09:00 and from 09:00 to 21:00, the latter providing the daily Tmax figure. The 21:00 to 09:00 period provides additional information, which may shed significant light on why Heathrow has so often featured in record temperature statistics: during a heatwave its Tmax for that period is very likely to be the temperature at 09:00, except when a heatwave ends overnight.
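As a minimal sketch, the two 12-hour periods can be separated as follows; the column names (ob_end_time, ob_hour_count, max_air_temp) are my assumptions about the MIDAS-OPEN daily temperature file layout, and should be checked against the actual download:

```python
# Minimal sketch: split MIDAS-OPEN 12-hour observations into the two periods.
# Column names (ob_end_time, ob_hour_count, max_air_temp) are assumptions
# based on the MIDAS-OPEN daily temperature files; adjust to the real headers.
import pandas as pd

df = pd.read_csv("midas_open_daily_temp.csv", parse_dates=["ob_end_time"])
twelve_hr = df[df["ob_hour_count"] == 12].copy()

# The period ending 21:00 (09:00-21:00) carries the daily Tmax; the period
# ending 09:00 (21:00-09:00) is, during a heatwave, usually the 09:00 temperature.
daytime = twelve_hr[twelve_hr["ob_end_time"].dt.hour == 21]
overnight = twelve_hr[twelve_hr["ob_end_time"].dt.hour == 9]

daily_tmax = daytime.set_index(daytime["ob_end_time"].dt.date)["max_air_temp"]
```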

The following figure shows Tmax data for both 12-hour periods for Heathrow and some near neighbours (Northolt, Wisley, St James Park and Farnborough) during the 2015 heatwave. The figure suggests that whatever caused the high temperature on the afternoon of 1st July was also operating at around 09:00 that day, when the Heathrow temperature was at its most exceptional for the period shown.

The first part of this post shows snapshots of daily Tmax data for Heathrow Airport for some recent summer heatwaves, together with the data for nearby (6 miles away) RAF Northolt, a small military and civilian airport. The data plotted are from the GSOD database, obtained via OGIMET, which gets the data from NOAA.

For future heatwave analysis, the following website provides access to “recent” sub-daily data (hourly, half-hourly or quarter-hourly, up to 56 days old) and indicates nearby stations. The link below is for a station at Heathrow:

https://weather.gladstonefamily.net/site/03772

Heathrow often features in reports of record daily maximum temperatures, so it is of interest to check the validity of its data. Questions have been raised about the influence of urban heating (tarmac and concrete) and of aircraft (hot air from engine exhausts). Apologies to those who have raised and analysed these issues: this post does not (yet?) describe or link to that previous work.

Why does Heathrow often feature in reports of record daily maximum temperature? One reason must be its relatively low summer rainfall, with associated high sunshine and low evaporative cooling. It is also nearly as far from the cooling influence of the sea as it is possible to get in Southern England. The following figures (via the UK Met Office website) show summer rainfall and sunshine levels in 2003, one of the heatwave years examined:

The second part of this post shows a comparison of Heathrow monthly average Tmax data with the Berkeley-Earth regional average, showing some anomalous warming at Heathrow during the 20th century.

Daily maximum temperature snapshots, and the differences in temperature, are shown below for heatwaves in the summers of 1990, 2003, 2006, 2015 and 2020. These temperature plots reveal the following:

  • There is very good consistency between the two temperature records, suggesting an absence of equipment failures.
  • Maximum temperatures recorded at Heathrow (blue curves) are generally, but not always, a little higher than those recorded at Northolt (red curves).
  • The differences in recorded temperatures are generally smaller on the very hot days.
  • In one year (2020) the recorded temperatures were identical for four consecutive days.

The following figure shows a scatter plot of all the Heathrow/Northolt temperature differences, versus Heathrow daily Tmax, for the 5 time periods shown above:
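For anyone wishing to reproduce such a plot, here is a minimal sketch; the values below are illustrative placeholders only, not the real observations:

```python
# Sketch of the Heathrow-minus-Northolt scatter plot. The numbers below are
# illustrative placeholders; in practice each series would hold daily Tmax
# (deg C) parsed from the GSOD/OGIMET downloads for the 5 periods.
import pandas as pd
import matplotlib.pyplot as plt

dates = pd.date_range("2015-06-28", periods=7)
heathrow = pd.Series([25.1, 27.3, 30.2, 33.5, 36.7, 30.0, 27.2], index=dates)
northolt = pd.Series([24.8, 26.9, 29.6, 33.0, 35.7, 29.8, 27.1], index=dates)

diff = heathrow - northolt
plt.scatter(heathrow, diff, s=12)
plt.axhline(0.0, color="grey", linewidth=0.5)
plt.xlabel("Heathrow daily Tmax (deg C)")
plt.ylabel("Heathrow minus Northolt (deg C)")
plt.show()
```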

SPECULATION

The similarity between the recorded temperatures at the two sites on the hottest days may reflect a common source of air: hot air from the European mainland dominating the recorded temperature, with little influence from local heating. The exception, 1st July 2015, one degree C hotter at Heathrow than at Northolt, may have been due partly or wholly to a local heating influence at Heathrow.

ANOMALOUS WARMING AT HEATHROW

The following figure shows:

  • UK Met Office monthly average Tmax data for Heathrow, Cambridge NIAB and Waddington, relative to the Berkeley-Earth monthly regional average Tmax for 52.24N, 0.0W

The following figure shows a comparison between:

  • UK Met Office monthly (Tmax + Tmin)/2 from its historical data webpage, for Heathrow
  • The diymetanalysis estimate of the regional average Tavg variations for Central England

The figures above reveal some anomalous warming at Heathrow in the 20th century, to be expected given the conversion, starting in 1948, of a largely rural site into one dominated today by tarmac and concrete. The Tmax data show a step change around 1969, likely due to a relocation caused by developments.

REFERENCES

A recent (2021) paper on atmospheric blocking and its involvement in extreme weather conditions: Atmospheric Blocking and Weather Extremes over the Euro-Atlantic Sector – A Review.

END OF POST


CET Issues

Central_UK_map_02

Figure above: The Central England Temperature area (with apologies to the Welsh)

This post covers accuracy issues related to the Central England Temperature (CET) series, created initially by Gordon Manley in 1953, and maintained in recent decades by the UK Met Office Hadley Centre. The HadCET data itself is plotted on the HadCET page, which also contains download links.

Definition

CET is meant to be a series of absolute average temperatures, presumably at some location roughly in the “middle” of the area containing the source station data.

It might have been better to have made CET an average of temperature variations, which would have made it easier to change the station composition.

My Version of Central England Temperatures

A reconstruction of monthly average mean temperature variations back to 1760, from around 30 long temperature records, is given at diymetanalysis:

Example 05: CENTRAL ENGLAND Tavg

The analysis cited above reveals a small inhomogeneity in monthly CET Tavg, shown in the following two figures:

CENG_Fig12

CENG_Fig15

The small anomalous step down in temperatures occurs in 2004, which corresponds to a date on which there was a change in the station composition; see the following reference for details of that and other changes:

ParkerHorton_CET_IJOC_2005.pdf

Philip Eden version

An alternative version was maintained by Philip Eden (a distinguished British meteorologist, now deceased). The following figure shows both HadCET (the official version) and the Eden version, together with their difference, all as 12-month moving averages:

HadCET_vs_Eden

Note that the Philip Eden version (in blue) is a bit warmer than HadCET from around 2005, removing some of the 2004 inhomogeneity shown above.

Sources

VERSION ISSUES

The following figure shows that there are small but potentially significant differences between different sources of HadCET data, in this case monthly Tmax derived from GHCND versus the Met Office website version. This single example should NOT be taken to show that there are no other differences between versions.
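Such a cross-check is easy to automate. The sketch below assumes the two versions have been saved as CSV files with hypothetical date and tmax columns:

```python
# Sketch of a version cross-check: difference two sources of the same monthly
# series. The file names and column names here are assumptions, not real files.
import pandas as pd

met = pd.read_csv("hadcet_tmax_metoffice.csv", index_col="date", parse_dates=True)["tmax"]
ghcnd = pd.read_csv("hadcet_tmax_ghcnd.csv", index_col="date", parse_dates=True)["tmax"]

diff = (met - ghcnd).dropna()
print(diff.describe())            # summary statistics of the version differences
print(diff[diff.abs() > 0.05])    # list months differing by more than 0.05 C
```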

END OF POST


GHCNM Spatial Sampling

SCOPE

This post is about the current spatial sampling of the stations providing monthly average rainfall data to GHCNM version 2, and temperature data to GHCNM versions 3 and 4, the source data for many “official” reconstructions of global land rainfall and surface air temperature history.

Preliminary results are included for GHCNM version 4, which has not yet been released officially.

A figure of around 8000 stations is often quoted for GHCNM version 3, which might appear to be an adequate spatial sampling. However, most of those 8000 stations are no longer providing updates, and there are questions about the adequacy of the current spatial coverage in two distinct areas:

  • Spatial sampling of the varying climate around the globe
  • Detection and correction of inhomogeneities in the currently reporting stations

These two questions will be discussed on a per-country basis, starting with Australia.

AUSTRALIA

The Australian Bureau of Meteorology has 112 stations in ACORN-SAT(2012), intended to describe the varying temperature histories around the country:

ACORN-SAT-network-map

The currently reporting stations in GHCNMv3 (unadjusted) for monthly TAVG (average temperature) data are shown in the following list for Australia, generated by retaining only those stations with data in 2018:

GHCNM_TAVG_JAN2018_AUS

These 62 stations in Australia that are currently reporting monthly TAVG to GHCNMv3 are possibly adequate to represent the varying temperatures around the country, but only if those stations remain unchanged in equipment, environment and procedures, and are free of errors.

ACORN-SAT was constructed by homogenisation involving many hundreds of other stations. This is no longer possible in GHCNMv3, which only has current data for 62 stations in Australia.

To confirm that all of the many other Australian stations in GHCNMv3 are currently non-reporting, here is a table of all Australian TAVG data (qcu: “unadjusted”) for 2017:

GHCNM_TAVG_2017_AUS

The 4-digit numbers in the table above, with separate columns for Jan to Dec, are average temperatures in hundredths of a degree C; -9999 indicates missing data.
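For readers who want to repeat the station counts, here is a sketch of a reader for these records, based on my understanding of the fixed-width layout in the GHCNM v3 readme; the layout details, the file name and the '501' country prefix for Australia should all be verified against the actual release:

```python
# Hedged sketch of a GHCNM v3 "qcu" record reader: 11-char station ID, 4-char
# year, 4-char element, then 12 monthly groups of a 5-char value plus 3 flag
# characters. Values are hundredths of a degree C; -9999 means missing.
def parse_ghcnm_line(line):
    station = line[0:11]
    year = int(line[11:15])
    element = line[15:19]           # e.g. "TAVG"
    values = []
    for m in range(12):             # 12 monthly groups of value + 3 flags
        raw = int(line[19 + 8 * m : 24 + 8 * m])
        values.append(None if raw == -9999 else raw / 100.0)  # to deg C
    return station, year, element, values

# Count Australian stations ('501' prefix assumed) with any TAVG data in 2018:
reporting = set()
with open("ghcnm.tavg.v3.qcu.dat") as f:
    for line in f:
        station, year, element, values = parse_ghcnm_line(line)
        if station.startswith("501") and year == 2018 and any(v is not None for v in values):
            reporting.add(station)
print(len(reporting), "Australian stations reporting TAVG in 2018")
```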

GHCNM version 4 (preliminary) results are as follows. A total of 101 Australian stations contributed monthly average TAVG (unadjusted) data for January 2018, shown below in two parts:

GHCNMv4_TAVG_JAN2018_AUS_part1

GHCNMv4_TAVG_JAN2018_AUS_part2

An increase to 101 stations would be a significant improvement on 62, but still falls short of the 300 that is my guesstimate for the minimum number required to have a good chance of detecting and correcting inhomogeneities, and of infilling missing data.

GHCNM version 2 (now containing only monthly rainfall totals)

Australian stations with rainfall data in 2018 are shown in the following list:

Australia_2018_GHCNMv2_PRCP

The above list includes the rainfall totals for January 2018, in tenths of a mm.

The 29 stations in the list are insufficient to allow analysis of contemporary rainfall, and detection/correction/infilling of anomalous/missing data.

The entirety of the 2017 rainfall data for Australia is given in the following table, with missing data (-9999) shown in red:

Australia_2017_GHCNMv2_PRCP

GHCNMv2 is no longer fit-for-purpose, and the Australian BoM has questions to answer about why so much data is missing from these stations, many/most of which are at Meteorological Offices.

More to follow later, about other countries …


DIYMETANALYSIS

Author: Dr Michael Chase

You are invited to visit a new website that describes a relatively simple but effective method of reconstructing the regional average history of monthly average surface air temperature variations from weather station data, aided by any available metadata:

https://diymetanalysis.wordpress.com/

The new website will be kept small and focused on the methodology, to help navigation.

This blog will continue to cover results from the method, and comparisons with “official” temperature reconstructions.


The Rumble at Rutherglen

Author: Dr. Michael Chase

rutherglen-fig2b

Photo above: A recent picture of the weather station at Rutherglen, Australia, from the BoM webpage cited below. Other photos are shown at the end of the post.

Post Summary and Conclusions

This post documents some analysis of changes in minimum temperatures (Tmin) at Rutherglen, a rural weather station in South East Australia. It is found that:

  • Early 20th century Tmin measurements are around 1.0C higher (annual average) than those that would have been measured if the recording system/location/environment of today had been in place then. There is some month-to-month variation within this annual average.
  • The annual average ACORN-SAT(2012) correction of 1.7C for early data is therefore substantially too high
  • The daily ACORN-SAT(2012) corrections for 1920/21/22 (the only years examined) show a nonphysical discontinuity between the end of November and the start of December

Background

Rutherglen (BoM id 82039) is a rural weather station at a research farm, with no nearby man-made structures, at least from 1975, as revealed by photos and descriptions from the BoM webpage given below:

http://www.bom.gov.au/climate/change/acorn-sat/rutherglen/rutherglen-station.shtml

The RAW Tmin data from Rutherglen, and from many nearby stations, show a net cooling over the last 100 years, as revealed in the following figure:

RUTHER_01

Questions have been asked about why the raw temperature trend of net cooling has been adjusted in ACORN-SAT to a net warming trend, and the BoM have responded with the webpage cited above.

ACORN-SAT Corrections

The dates and sizes (annual average) of Tmin corrections applied by ACORN-SAT(2012) are given in the following extract from its adjustment summary document:

RUTHERG_AS_ADJ_Mins

A later (September 2014) summary from the BoM about Rutherglen does not mention the 1928 Tmin correction:

station-adjustment-summary-Rutherglen.pdf

but it is unclear if that correction has been disowned (without saying so) or simply not mentioned. The original 2012 documentation is taken to be definitive, as it matches daily temperature data available in October 2017.

Data prior to the last-listed correction in 1928 is reduced, on average, by 1.7C, the sum of all corrections. The following figure shows the daily corrections for 1920/21/22 (the only years examined):

ACORN_DAILY_RUTHERG_TMIN_01

The corrections appear to change in jumps from month to month, in particular with a very large jump (marked A in the figure above) from November to December, surely an undesirable and erroneous artifact rather than a genuine weather phenomenon.

My Analysis

I have estimated the monthly average corrections that would be needed to be applied to raw Rutherglen Tmin data to remove non-climatic influences relative to those present in recent years. The methodology is being documented in a separate blog:

https://diymetanalysis.wordpress.com

The following figure shows the annual average correction needed for periods of data deemed to be stable (the bold blue lines, which are moving averages), tracking the regional average (in red) reasonably closely:

RUMBLE_01

The required correction is the temperature difference between the bold blue and dashed red lines, which are respectively the 15-year moving average of raw Rutherglen Tmin data, and the 15-year moving average of the regional average temperature variations. The figure also shows the 12-month moving average of weather-corrected raw Tmin data at Rutherglen.
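The essence of that calculation is sketched below on synthetic data, in which a 1.0C station offset is deliberately built in; the real analysis uses the actual Rutherglen and regional Tmin series:

```python
# Toy sketch of the correction estimate: the offset between 15-year moving
# averages of a station series and of the regional average. Synthetic data
# stand in for the real Rutherglen and regional Tmin series.
import numpy as np
import pandas as pd

idx = pd.date_range("1905-01-01", "1944-12-01", freq="MS")
rng = np.random.default_rng(0)
regional = pd.Series(rng.normal(10.0, 1.0, len(idx)), index=idx)
station = regional + 1.0 + rng.normal(0.0, 0.3, len(idx))   # reads ~1 C high

def moving_average(series, years):
    w = 12 * years
    return series.rolling(window=w, center=True, min_periods=w).mean()

offset = moving_average(station, 15) - moving_average(regional, 15)
print(round(offset["1914":"1926"].mean(), 2))   # correction needed: about 1.0 C
```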

The key features of the data shown in the figure above are as follows:

  • 1914 to 1926: The average correction needed for Tmin data in this early period of stability is around 1.0C, the ACORN-SAT(2012) correction of 1.7C is too much
  • 1914: There was a step change in temperatures, probably associated with the station move in January 1914 (source: Torok thesis 1997), a move that fails to get a mention or a correction in ACORN-SAT(2012)
  • 1928: There was a step change in temperatures around 1928, but they recovered around 1936. ACORN-SAT (2012) has the step down in 1928, but not the recovery in 1936, an example of errors caused in ACORN-SAT by transient perturbations.
  • 1966: There was a large drop in temperatures
  • 1974: There was another drop in temperatures, but note that this was the date of some heavy rainfall (see below), and the temperature drop looks a bit like the sharp edge of a sawtooth perturbation
  • 1984: This marked the start of a long period of stable temperatures with a trend matching that of the regional average
  • 1998 (29th January): This was the date of a switch to an AWS system, which does not appear to have had a significant impact on measured temperatures
  • 2012: There was a drop in temperatures at that date, possibly associated with a period of heavy rainfall, more on that below

The regional moving average temperature history was derived by averaging periods of stable temperature (such as the ones shown above in bold for Rutherglen) across stations in the region.

Monthly Corrections

The following set of figures shows eyeball-estimated corrections for each month, namely the average temperature difference between the raw data (in black, with its average in red) and the regional average (in blue/mauve):

RUMBLE_02

RUMBLE_03

RUMBLE_04

RUMBLE_05

The figures shown above confirm that the periods 1914-1966 and 1984-2012 were roughly stable in terms of non-climatic influence, justifying the use of these periods in obtaining the regional average temperature history. If a corrected (“homogenised”) version of Rutherglen Tmin data is required, then early data (before 1966) must be reduced by around 1.0C, with some monthly variation in that figure.

Regional Average

The following figure shows more of the periods of data used to form the regional average temperature history:

RUTHER_02

The complete set of the data periods used in regional averaging at Rutherglen is shown here:

Example 01: RUTHERGLEN Tmin

Finally, the following figure shows a summary of the regional average Tmin and rainfall history back to 1885, indicating the heavy rain that may explain some of the anomalous changes in temperature around 1974 and 2012:

RUTHER_Fig1

Conclusions: See the start of this post.

Photos

The following photo of the Rutherglen station is from the ACORN-SAT station catalogue:

Ruther_photo_statcat

Further photos of the Rutherglen site are available on the BoM webpage cited above.

The Late 20th Century Climate Shift in SE Australia

Author: Dr. Michael Chase

Introduction

Monthly average surface air temperature data in South-East Australia (and probably in other regions) show a relatively sudden increase in maximum temperatures at the end of the 20th century. Unfortunately, this was also the time when the BoM introduced Automatic Weather Stations (AWS) at many of its sites. This post presents some data on temperature and rainfall changes around this “climate shift”, and shows graphically that the calibration of the AWS systems in the area examined closely matched that of the systems they replaced, at least at the level of monthly Tmax averages. The seasonal differences in temperature and rainfall variations may provide clues to the cause(s) of the climate shift.

Regional Average Temperatures and Rainfall

The figure below shows the climate shift in a region of NSW/VIC bounded by lines joining Mildura, Hillston, Wagga Wagga, Rutherglen, Echuca, Nhill and back to Mildura:

DIY_P1_Tmax_6mthav

The data shown in the figure above represent estimates of the regional average temperature history, in this case for 6-monthly Tmax data. Details of how to estimate regional averages, including the detection and correction of inhomogeneities, will be given in later posts.

I have examined the regional average temperature history for each month separately, and find that each month from September to February shows a similar upward shift in Tmax near the end of the 20th century, so I have averaged over this 6-month period to illustrate the phenomenon (red curves above). The other months show nothing special happening around that time (blue curves for their 6-month average).

There is normally a close association between Tmax fluctuations and rainfall levels, but the following figure shows that there was no particular trend in rainfall around the time of the climate shift:

DIY_P1_Rain_6mthav

Are AWS Systems Involved?

Many stations in the region had AWS systems installed in the late 20th century, for example becoming the primary sensors in November 1996 at both Mildura Airport and Wagga Wagga AMO. Fortunately, many nearby stations retained their manual systems, and I have checked their temperature histories against those that switched to AWS.

The following figure shows the temperature history (12-month and 15-year moving averages, after subtraction of regional average temperature fluctuations) for 3 stations that switched to AWS, together with the regional average temperature history (black curves):

DIY_P1_AWS

Note that there are no substantial deviations from the regional average when the AWS systems became the primary sensors. For comparison, the following two figures show the same data for 6 stations that did not get converted to AWS:

DIY_P1_nonAWS1

DIY_P1_nonAWS2
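The check behind these figures can be sketched as follows, assuming the station and regional series are available as monthly pandas Series and that the changeover date is known; the names and the example date are illustrative:

```python
# Sketch of the AWS changeover check: the step, if any, in a station's
# deviation from the regional average around the date its AWS became the
# primary sensor. Series names and the example date are illustrative.
import pandas as pd

def aws_step_estimate(station, regional, switch, window_years=5):
    """Mean deviation after the switch minus mean deviation before it."""
    dev = station - regional
    before = dev[switch - pd.DateOffset(years=window_years) : switch].mean()
    after = dev[switch : switch + pd.DateOffset(years=window_years)].mean()
    return after - before   # near zero if the AWS calibration matches

# e.g. aws_step_estimate(mildura_tmax, regional_tmax, pd.Timestamp("1996-11-01"))
```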

Conclusions

There may be calibration differences of a tenth or two of a degree C in monthly Tmax averages between the AWS and manual systems employed in the region, but not more than that. This conclusion is consistent with the near-absence of corrections for AWS installations in ACORN-SAT, the early installation at Cape Otway being the only one with a correction.

Later posts will look at how the climate shift varied around Australia, which may shed some light on cause(s).

Averaging Multiple Temperature Records

Author: Dr. Michael Chase

Introduction

This is the first in a series of posts with the general theme of “Do it yourself temperature homogenisation”. The full series of posts will outline a simple but effective procedure for turning raw instrumental temperature data (aided by any available station history and data on rainfall) into reconstructions of the background temperature history for any area with a “sufficient” number of weather stations, the sufficient number depending mostly on the quality and extent of the data.

The overall procedure involves visual detection of inhomogeneities, followed by averaging and integration of interannual temperature differences from which the inhomogeneous data are omitted. In effect, large inhomogeneities are detected visually and removed from the data, while small ones are suppressed by the averaging process used to obtain regional histories.

Averaging Multiple Records

This first post in the series deals with the final step of the procedure, the method by which multiple temperature records are combined to give a regional average temperature history. The following figure gives a schematic picture of what typical temperature data looks like:

TEMPAV_A

The figure above illustrates the following:

  • Black data: Historical urban warming, followed by a station move (typically to an airport or other out-of-town site), followed by a switch to automatic sensors
  • Red Data: A good rural station, but with some missing data
  • Blue data: A station with a transient perturbation, possibly of non-climatic origin, or possibly due to a period of localised heavy cloud/rainfall

It is assumed for the purposes of this post that the previous stage of the overall procedure has identified the inhomogeneities, that all periods of “transition” have been marked within a computer program, and that the analyst has corrected the data for any large residual inhomogeneities at transition boundaries and at record ends (more on that last issue below and in subsequent posts). The computer program, under analyst control, then does the following:

  • Either infills missing raw data or leaves gaps if there is doubt about the station history within a gap (infilling can also be done manually).
  • Computes all valid differences in temperature, separately for each month, between years N and N-1; valid differences are those that do not cross, or lie within, transitions.
  • Extrapolates temperature differences for all stations with missing years, using the average of the valid temperature differences.

The following figure illustrates the resulting full coverage of temperature differences:

TEMPAV_B

The temperature differences can now be averaged across stations, with the important feature that each station has a constant weight in the averaging process (1/3 for each station in the example shown in the figure above). Finally, the average temperature differences can be integrated forwards and backwards in time from any desired reference year/temperature to obtain an average temperature history for each month.
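A condensed sketch of these steps for one calendar month is given below; the inputs temps (a years-by-stations table of that month's average temperatures) and valid (a matching mask, False wherever a difference would cross or lie within a transition) are assumed to come from the earlier detection stage:

```python
# Sketch of the averaging/integration step for a single calendar month.
# 'temps' (years x stations) and 'valid' (same shape, boolean) are assumed
# to come from the earlier detection/marking stage described above.
import pandas as pd

def regional_history(temps, valid, ref_year):
    d = temps.diff()                       # interannual differences: year N minus N-1
    d = d.where(valid)                     # discard differences involving transitions
    row_mean = d.mean(axis=1)              # average of the valid differences per year
    d = d.apply(lambda col: col.fillna(row_mean))  # extrapolate: keeps weights constant
    avg_diff = d.mean(axis=1)              # constant 1/N weight for each station
    history = avg_diff.cumsum()            # integrate differences through time
    return history - history.loc[ref_year]  # anchor to the chosen reference year
```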

Error Analysis

The reason for extrapolating temperature differences for all stations can be seen from the example of the blue data in the figure above. In some cases the dip in the blue data will be deemed to be associated with a period of heavy rainfall, i.e. a genuine climatic effect. This genuine climatic effect will only produce the correct impact on the regional average (i.e. an impact confined to the period of rainfall) if the weighting of the blue data is constant in time, which is what results from extrapolating the temperature differences of the red data.

Using interannual temperature differences avoids the need to estimate and correct most inhomogeneities, but there is a small price to pay for that substantial benefit: the problem of residual inhomogeneities at boundaries, illustrated in the following figure:

TEMPAV_C

The figure above shows an example of a temperature record with perturbations from the regional average. The perturbation within the record is not really a problem, because the downward shift in temperature is matched by the exact reverse in later data, with the constant weighting of each record ensuring that the perturbation will not influence the end-to-end shift in average temperature.

The problem illustrated in the figure above arises from the inhomogeneous ends of the record (more generally, from any boundary, including ones created by defining “transitions”), which can distort the end-to-end variation of average temperature if the inhomogeneities are not detected and corrected; the correction can be done manually (see a later post).

 


Review of Thornton et al 2017

Author: Dr. Michael Chase, 3rd July 2017

metoffice_kendon_dec_2010

This post documents a review of a recent paper published in Environmental Research Letters about GB wind power and electricity demand:

The relationship between wind power, electricity demand and winter weather patterns in Great Britain

Hazel E Thornton, Adam A Scaife, Brian J Hoskins and David J Brayshaw

Published 16 June 2017 • © 2017 Crown copyright
Environmental Research Letters, Volume 12, Number 6

http://iopscience.iop.org/article/10.1088/1748-9326/aa69c6/meta

I was alerted to the publication of this paper by a post about it at the “Energy Matters” blog by Roger Andrews: http://euanmearns.com/peak-demand-and-the-winter-wind/

The paper has generated some hype and fake news, such as this from “energylivenews”:

“Wind turbines produce more power on the coldest days than the average winter day.”

This post attempts to provide a more accurate description of what the paper says, and what it does not (but should) say.

The authors of the paper are all meteorologists or climatologists. The meteorological aspects of the paper are excellent, especially the insights provided into the particular weather patterns that lead to most cold spells and associated high demands for electricity. The absence of electrical engineering input is apparent in the incomplete analysis of the contribution of wind power to meeting future peak demands.

Incomplete Analysis

The paper quantifies how well current wind power deals with an old problem (high demand during cold spells on a system without wind power) but fails to quantify how additional wind power would contribute to solving current problems. Current GB wind power already has more than sufficient capacity to deal with the relatively small excess demands that appear to occur during some windy cold spells, so windy cold spells are no longer a problem. In fact, the current nameplate capacity of wind power, around 15GW (metered plus embedded generators), is so large that it has shifted the current problem (the peak demands placed on the rapidly diminishing conventional sources of supply) to times of such low wind that additional wind power capacity will have a negligible effect on the current capacity problem for the foreseeable future.

The following figure shows how the analysis could be improved to draw appropriate conclusions about additional wind power:

Thornton_Fig6_mod

The figure above has been copied from the paper, with the red line added by me. The red line gives a rough estimate of where the current problem lies: the peak demands that conventional sources might be expected to meet when cold spells fall on working days. The slope of the red line follows from the current total nameplate capacity of GB wind power, and I have assumed that the conventional sources can supply 1040 GWh per day, so the red line starts at that level. As wind power output increases, higher total demands can be met thanks to the wind contribution. The current problem is the events below the red line, several of which had very low wind power. Those very low wind power events had a BIT less demand than the highest, but a LOT less wind power than the average; that is the current peak capacity problem, and additional wind power will not solve it.

The figure above can be used to see the outcome of several what-ifs. If demands increase (such as via increased electrification of heating and transport) then many more events will move to the right into the danger region. If more conventional supply is lost then the red line will move to the left, bringing many more events into the danger region.

What if more wind power capacity is added? Suppose that an extra 1.5 GW (nameplate) is added in the next few years; will that improve the security of the GB electricity system? The answer can be seen from the effect on the red line, whose slope will merely decrease by 10% (since total nameplate capacity rises by 10%), making very little difference to the problem area below the red line.
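The red-line arithmetic can be sketched as follows, using the assumptions stated above (1040 GWh per day of conventional supply, roughly 15 GW of nameplate wind); this is my own simplification of Figure 6, not the authors' calculation:

```python
# Rough sketch of the red line: the wind capacity factor needed on a given
# day for conventional supply plus wind output to cover demand. The 1040
# GWh/day and 15 GW figures are the assumptions stated in the text above.
def min_capacity_factor(demand_gwh, nameplate_gw=15.0, conventional_gwh=1040.0):
    """Capacity factor at which wind output covers the demand shortfall."""
    return (demand_gwh - conventional_gwh) / (24.0 * nameplate_gw)

print(round(min_capacity_factor(1100.0), 3))                     # 0.167 with 15 GW
print(round(min_capacity_factor(1100.0, nameplate_gw=16.5), 3))  # 0.152 with +1.5 GW
```

The extra 1.5 GW barely lowers the capacity factor needed on a high-demand day, which is the point made above.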

Wind power enthusiasts may be tempted to argue that there are very few events below the red line, so there is not much of a capacity problem, especially when more wind power is added. There are two problems with that argument. Firstly, the temporal resolution (daily wind averages) used in the paper underestimates the number of events below the red line (more on that below). Secondly, even if that issue is minor, the capacity problem includes the large number of events that are poised to enter the danger region via a rise in demand and/or a fall in conventional supply. Wind power has changed the statistics of the supply/demand balance, but that change has now all but stopped, and somehow the rapidly falling conventional supply has to be reconciled with the expected rapid rise in demand.

Modelling Issues

The reanalysis data used, which start in 1979, include long periods of relatively mild (and presumably windy) winters in the UK, and this is likely to have biased the statistics in an over-optimistic direction. The following figures show HadCET data for daily winter maximum and minimum temperatures from 1878, with exceptionally cold days shown with blue markers.

hadcet_all

hadcetmin_all

Finally, the paper uses daily average wind power, when it should have attempted to estimate wind power during the critical early evening period, when peak demands occur. Critical events with wind power lulls in the early evening will have been biased towards higher apparent wind power by the use of daily averages. There would be many more dots below the red line in Figure 6 of the paper (shown above) if the analysis had been done at a finer resolution; Roger Andrews shows example data at 5-minute resolution from a particular cold spell in his blog post cited above.


Climate Distortion from Homogenisation

Author: Dr. Michael Chase

“When breakpoints are removed, the entire record prior to the breakpoint is adjusted up or down…”

Source: http://berkeleyearth.org/understanding-adjustments-temperature-data/

Many people suspect that there are inaccuracies in the major homogenisations of instrumental temperature records. This article asserts that there are substantial errors resulting from the homogenisation procedures commonly employed, and provides a general explanation for them. In short, there are many transient perturbations in temperature records, and the homogenisation procedure over-corrects for many of them. I am currently quantifying this over-correction in the ACORN-SAT version of Australian surface air temperatures, and hope that this article will inspire others to help, or to look at data from other countries. The article is based on knowledge mainly of ACORN-SAT, but there is no reason to suppose that the conclusions do not apply generally.

First of all, the following figure illustrates why raw data have to be adjusted to reveal the true background temperature variations. The objective is to obtain the temperatures that would have been recorded in the past if the weather station had been at its current location and with its current equipment. The figure below shows a typical history of the difference in effective temperature calibration between the past and the present:

HOMOGENO_DISTORTION_01

Station moves and equipment changes are the typical causes of sudden and persistent changes in temperature relative to neighbours, events that computer algorithms are good at detecting and correcting. If that were the whole story then everything would be fine, but things go rapidly downhill from this point on.

The main problem for large-scale homogenisations is that there are many “transient” (rather than persistent) perturbations of temperature. The computer detection algorithms still work to some extent with transient perturbations (though it would be better if they failed to work), but they cannot do the correction part of the procedure without adult supervision. The outcome is illustrated in the following figure, showing two typical transient perturbations, and the erroneous corrections that they generate:

HOMOGENO_DISTORTION_02

The problem arises when only one end of a transient perturbation gets detected: the procedure assumes that the transient is persistent, and therefore over-corrects the data before the transient.
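The over-correction is easy to demonstrate with a toy series; the numbers below are invented purely for illustration:

```python
# Toy illustration: a record with a purely transient 1 C cooling (1950-1960)
# on an otherwise flat climate. A naive algorithm detects the onset step,
# assumes it is persistent, and shifts the entire earlier record down.
import numpy as np

years = np.arange(1900, 2001)
observed = np.zeros(years.size)                      # flat true climate
observed[(years >= 1950) & (years < 1960)] -= 1.0    # transient cooling only

corrected = observed.copy()
corrected[years < 1950] -= 1.0    # adjust all pre-breakpoint data by the step

print(corrected[years < 1950].mean())   # early record now 1.0 C too cold
print(corrected[-1] - corrected[0])     # spurious +1.0 C end-to-end warming
```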

It is likely that the most common transient perturbations of temperature involve sudden cooling, for example from the following mechanisms:

  • Removal of thermometers from an urban-warmed location
  • Replacement of damaged or degraded screens
  • Onset of a rainy period after a drought

There can be sudden warmings, for example when thermometers are removed from a shaded location, there is sudden screen damage, or a building is erected nearby, but the other end of those transients is probably more likely to be detected than in the case of sudden coolings. It seems likely that poorly corrected transient perturbations produce a bias towards cooling of the early parts of temperature records.

There is always scope for improving computer algorithms, but I think that the problem lies with the functional design of the homogenisation procedure, which needs more involvement of expert analysts and less blind faith in what the computer says. The analysts need to examine rainfall data to remove false detections from that source, and they need to look over long periods of time to find both ends of transient perturbations. As the main interest in long temperature records is the end-to-end variation of temperature, it may be acceptable to leave transient perturbations in place, but note that mid-20th century urban warming can convert cyclic variations of temperature into hockey sticks.

I am continuing a review of ACORN-SAT data, trying to separate its step change detections into two groups, those resulting from persistent changes (which need correction), and those resulting from transient perturbations, which don’t need correction. I hope that this article will encourage people to examine data from other regions, to determine the extent of climate distortion introduced by the homogenisation process.


Climate Distortion in ACORN-SAT, Part 3

Author: Dr. Michael Chase

Kerang_1930

Photo above: Kerang, Victoria, circa 1930

ACORN-SAT is the outcome of a “system” for detection and correction of non-climatic influences on surface air temperature data recorded in Australia. Previous posts have dealt with errors in the correction part of the process, and with false detections. This post deals with failure to detect what should be detected.

Many non-climatic influences on temperature measurements are transient in nature, so if an attempt is made to detect and correct them, both ends of the transient influence must be found. That requirement alone makes the process rather risky, and this post shows examples of the risk being realised: one end of a transient perturbation goes undetected, resulting in invalid correction of the data before the onset of the transient influence.

The following figure shows a transient warming influence on daily maximum temperatures at Kerang in Victoria. To make the transient warming easier to see, the data from a nearby reference station (Echuca Aerodrome) has been subtracted from the Kerang data (black curve), removing most of the natural background variation in temperature. The figure also shows matching results for nearby Deniliquin (red) and Rutherglen (blue), for which transient perturbations (detected via ACORN-SAT, and verified visually by me) have been corrected.

Kerang_screen_transient
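The differencing used in the figure above can be sketched as follows; the series names are illustrative, the real inputs being monthly Tmax read from BoM raw data files:

```python
# Sketch of reference-station differencing: subtracting a nearby neighbour
# cancels most of the shared weather signal, leaving station-specific
# artefacts such as transients visible. Series names are illustrative.
def station_minus_reference(station, reference, smooth_months=12):
    """Smoothed difference of two monthly pandas Series, for spotting transients."""
    diff = station - reference                 # shared weather largely cancels
    return diff.rolling(window=smooth_months, center=True).mean()

# e.g. transient_view = station_minus_reference(kerang_tmax, echuca_tmax)
```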

ACORN-SAT has a detection for Kerang in 1957, at the end of the transient influence, but no detection for its start; it therefore falsely corrects the data all the way back to 1910. The right answer is to correct the data only back to 1943, the onset of the transient, or to make no correction at all.

This post will be updated with any further detection failures that are found.

NOTE: Missing months of data have been infilled by interpolation, following the temperature variations of neighbouring stations, and partial quality control adjustments have been made for anomalous spikes and dips, in particular at Rutherglen in 1925.
