The Temperature Of The UK Over The Last 100 Years (part 5)
A butcher's at daily maxima and minima over the last 100 years. Are things getting worse, and what does ‘worse’ mean?
Right then, it’s 7:40am and I’ve cleared away the rotting apples, gathered the decent ones and put the recycling bins out moments before the truck rolls into our sleepy close. I’ve a hot lemon tea to my right that is smacking my taste buds good and proper, and I’ve a spreadsheet template open ready to crunch tmin (mean daily minimum temperature) for the 34 stations in my sample, data for which may be obtained here.
The tmin normals are relatively easy to calculate. I recommend subscribers go right back to this newsletter, this newsletter, this newsletter and this newsletter to better understand what temperature anomalies are, how they get calculated, how to interpret them correctly, and why I’m using the 1991-2020 climatological normal for my sample of stations rather than the WMO’s 1961-1990 normal.
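For those who would rather see the recipe in code than in a spreadsheet, here’s a minimal sketch in Python, assuming a station’s monthly tmin series has already been loaded into a pandas DataFrame with ‘year’, ‘month’ and ‘tmin’ columns (the column names and layout are mine, not the Met Office’s):

```python
import pandas as pd

def add_anomalies(df: pd.DataFrame) -> pd.DataFrame:
    """Attach a 1991-2020 normal and an anomaly column to a monthly tmin series.

    Assumes integer 'year' and 'month' columns plus 'tmin' in degrees Celsius.
    """
    # Climatological normal: the mean tmin for each calendar month over 1991-2020
    base = df[df["year"].between(1991, 2020)]
    normals = base.groupby("month")["tmin"].mean()

    # Anomaly: observed monthly tmin minus that calendar month's normal
    out = df.copy()
    out["normal_1991_2020"] = out["month"].map(normals)
    out["anomaly"] = out["tmin"] - out["normal_1991_2020"]
    return out
```

Swap the 1991-2020 window for 1961-1990 and you get the older baseline: each calendar month’s anomalies shift by a constant offset, but the shape of the series stays the same.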
Tmin & Sharks
Just in case folk are confused as to what tmin actually means, here’s the definition provided by the Met Office:
I’ll flesh this out by saying that the daily minimum temperatures recorded at each station are averaged over each month to provide the mean daily minimum temperature for that month. This makes for a robust estimate of how cold it generally gets at each station because the impact of outliers is averaged out. An outlier might represent an exceptionally cold spell, but it might also represent issues with Stevenson screen siting and maintenance, as well as probe calibration.
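To make the definition concrete, here’s the same operation sketched in Python, assuming the daily minima sit in a pandas DataFrame with a DatetimeIndex and a ‘tmin’ column (again, my layout rather than anything the Met Office prescribes):

```python
import pandas as pd

def monthly_mean_tmin(daily: pd.DataFrame) -> pd.Series:
    """Collapse daily minimum temperatures into a mean daily minimum per month."""
    # Group the daily minima by calendar month and average them: this is
    # precisely the "mean daily minimum temperature" described above.
    return daily["tmin"].resample("MS").mean()
```

A single rogue day still nudges the monthly mean, but it can no longer define it.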
Here’s a good example of the weather station at Ross-On-Wye getting off to a dodgy start back in 1930:
Point 78 is an outlier if ever I saw one! The reason is that, although Ross-On-Wye got going in 1930, it only recorded temperature data during December of that year: an annual figure resting on a single month carries all of that month’s noise with it instead of having it averaged away over twelve, which makes for an exceptionally wacko anomaly and a fine candidate for exclusion.
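A simple coverage check would have caught this before it ever reached a chart. Here’s a sketch, assuming the monthly data for all stations sit in one DataFrame with ‘station’, ‘year’, ‘month’ and ‘tmin’ columns (a hypothetical layout on my part):

```python
import pandas as pd

def incomplete_years(monthly: pd.DataFrame, min_months: int = 12) -> pd.DataFrame:
    """List station-years with patchy coverage, like Ross-On-Wye in 1930."""
    counts = (
        monthly.dropna(subset=["tmin"])          # ignore missing readings
        .groupby(["station", "year"])["month"]
        .nunique()                               # distinct months with data
        .rename("months_present")
        .reset_index()
    )
    # Anything short of a full twelve months is a candidate for exclusion
    return counts[counts["months_present"] < min_months]
```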
With just 34 stations in my sample I can patiently pick over each and every data point to ensure it passes muster, but if I were sat on several thousand stations like NOAA, NASA, Berkeley, Hadley Centre/CRU et al, the best I could do would be to rely on automated quality control procedures to iron out the nonsense. The trouble is, there’s no way of telling whether those procedures are doing a fine job unless somebody sits down and undertakes regular and extensive audits, which is both expensive and time-consuming. Trusting big data and relying on AI is like trusting a shark not to attack whilst you enjoy a splash around. Been there, done that, got the scars.
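For what it’s worth, here’s the flavour of rule an automated QC pass might apply, sketched as a robust z-score built from the median absolute deviation. The 3.5 cut-off is a common rule of thumb, not something I’ve lifted from NOAA, NASA or the Hadley Centre’s documentation:

```python
import pandas as pd

def flag_suspect(anomalies: pd.Series, threshold: float = 3.5) -> pd.Series:
    """Crude automated screen: flag anomalies far from the station's median.

    Uses a robust z-score based on the median absolute deviation (MAD) so
    that the rogue points themselves can't drag the yardstick around.
    Returns a boolean Series: True means 'suspicious, go and have a look'.
    """
    med = anomalies.median()
    mad = (anomalies - med).abs().median()
    if mad == 0:
        return pd.Series(False, index=anomalies.index)
    robust_z = 0.6745 * (anomalies - med) / mad
    return robust_z.abs() > threshold
```

The catch is that a rule like this can tell you a point is odd but not why it’s odd: a genuinely brutal cold snap and a December-only station-year look much the same to it, which is exactly why somebody still needs to sit down and do the audits.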