Foundations of climate reconstruction from instrumental data
If you regard the measurement of surface air temperatures as a scientific experiment, then it is one in which every rule in the book has been broken repeatedly. Systems, locations, environments, procedures and personnel have all been changed multiple times, many errors have been made in temperature readings, and some data and documentation (metadata) have been poorly recorded or lost.
However, from the point of view of reconstructing regional climates, the situation is not as bad as one might think, thanks to data redundancy. In many places, temperatures have been considerably over-sampled spatially relative to what climate reconstruction requires. By comparing records from “nearby” stations one can detect non-climatic changes. One can then either correct the data or set it aside as unsuitable for further use.
The fundamental principle that allows the past climate to be reconstructed from land-based temperature stations that have undergone non-climatic changes is that there should be an “approximately” constant (or very slowly varying) DIFFERENCE in temperature between “nearby” locations, when averaged over “suitable” time periods.
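This principle can be illustrated with a small simulation. In the hypothetical sketch below (all station names, offsets and noise levels are invented for illustration), two nearby stations share the same regional climate signal, and each adds a fixed local offset plus small measurement noise. The raw series swing strongly with the seasons, but their difference stays approximately constant:

```python
import random
import statistics

random.seed(42)

# Hypothetical regional climate signal: a seasonal cycle plus weather noise,
# sampled monthly over 10 years.
months = 120
regional = [10.0 + 8.0 * ((m % 12) / 11.0) + random.gauss(0, 1.0)
            for m in range(months)]

# Two "nearby" stations see the same regional signal, each with its own
# fixed local offset (e.g. altitude) and small instrumental noise.
station_a = [t + 0.0 + random.gauss(0, 0.3) for t in regional]  # valley site
station_b = [t - 1.5 + random.gauss(0, 0.3) for t in regional]  # cooler hilltop site

# The DIFFERENCE cancels the shared climate signal, leaving an
# approximately constant offset with a much smaller spread.
diff = [a - b for a, b in zip(station_a, station_b)]
print("mean difference:", round(statistics.mean(diff), 2))
print("spread of difference:", round(statistics.stdev(diff), 2))
print("spread of raw series:", round(statistics.stdev(station_a), 2))
```

Because the shared climate signal cancels in the difference, any sizeable change in that difference must be non-climatic, which is what makes the comparison useful for quality control.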
I both detect and correct non-climatic changes in temperature data by examining the differences in “suitable” temperature averages between different stations. The averaging period depends on the size of the deviation from the expected difference: large deviations (typically greater than 1 °C) can be detected and corrected with monthly averages, while progressively smaller deviations require bi-monthly, seasonal, 6-monthly, annual or multi-year averages.
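A minimal sketch of the large-deviation case, under invented assumptions (break month, step size and noise level are all hypothetical, and the scan below is a deliberately simple change-point search, not the author's actual procedure or a production homogenisation test): scan a monthly difference series for the split point with the largest jump in mean, then remove that step from the later segment.

```python
import random
import statistics

random.seed(0)

# Hypothetical monthly difference series between a candidate station and a
# nearby reference: roughly constant, except a station change at month 150
# introduces a step of 1.8 C (well above the ~1 C monthly detection threshold).
n, break_at, shift = 240, 150, 1.8
diff = [0.5 + random.gauss(0, 0.3) + (shift if m >= break_at else 0.0)
        for m in range(n)]

def find_step(series, min_seg=12):
    """Return (split index, signed jump) maximising the change in mean
    between the two segments; a minimal change-point sketch."""
    best_k, best_jump = None, 0.0
    for k in range(min_seg, len(series) - min_seg):
        jump = statistics.mean(series[k:]) - statistics.mean(series[:k])
        if abs(jump) > abs(best_jump):
            best_k, best_jump = k, jump
    return best_k, best_jump

k, jump = find_step(diff)

# Correct by removing the estimated step from the later segment, restoring
# an approximately constant difference series.
corrected = diff[:k] + [d - jump for d in diff[k:]]
print("estimated break month:", k, " estimated step:", round(jump, 2))
```

With a large step relative to the month-to-month noise, even this crude scan locates the break and its size closely; smaller steps would be swamped by monthly noise, which is why longer averaging periods are needed for them.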