All You’ve Ever Wanted To Know About Anomalies But Were Afraid To Ask
A lightweight look at how we go about deriving temperature anomalies and what they actually mean in plain English
This is one of those ideas that came to me whilst wrestling with the duvet at 3am. This does not purport to be an exhaustive list of tricks, nor does it go into any kind of depth, but it does summarise what I have seen with mine eyes over the years. I shall use plain, unbuttered English where possible…
We shall start straight in with a famous person (Professor Brian Cox) holding up a graph during a question-and-answer session on the high-profile Australian TV panel show Q&A. The graph was used to crush senator-elect Malcolm Roberts:
Here’s the very latest incarnation of that same graph:
You can get the data for this graph from here.
What this graph shows is the global temperature anomaly from 1850 to 2022 as estimated by a joint effort between two giants of climate science – the Met Office Hadley Centre and the Climatic Research Unit at the University of East Anglia (hence Had-CRU-T). It sure looks like things are getting really hot since we started using fossil fuels in earnest, but let’s try and understand what this graph is actually showing.
Anomaly Ain’t Temperature
First up, we need to realise that a temperature anomaly series is not a temperature series, which would look quite different. We shall see just how different in a moment. An anomaly series is designed to reveal departures of the annual mean global temperature from its average over a fixed reference period known as a climate normal. The reference period for the HadCRUT 5 series is 1961-1990. If you change the reference period then the anomaly values also change, and I’ll provide an example below.
What we should note is that the Earth cooled after WWII, and got especially cold during the ‘60s when we Brits had to dig our way through deep snow in the bitter winter of 1962-63. Having an exceptionally cold episode in your reference period lowers the overall reference mean, and this inflates subsequent anomaly estimates, as we shall see. In this regard we must note the Hadley Centre have chosen not to adopt the more recent reference period of 1981-2010 (or even 1991-2020), but then again they are only following recommendations made by the WMO.
Global Ain’t Global
I think everybody will understand what we mean by the annual mean (a.k.a. average) but we ought to clear up what is meant by ‘global’. Global means land surface temperature and sea surface temperature combined. This relies on a weighted average of land and sea anomalies, the weighting depending on the grid cell, with the surface of the Earth being sliced into 2,592 five-degree chunks (36 latitude bands x 72 longitude bands).
This is all rather splendid but the question we need to ask is how many of those 2,592 chunks of the global surface contain an active land-based weather station, buoy array or set of shipping logs sufficient to cover every day since 1850 (a total of 63,473 days and counting). The answer is not many, for we are looking at a database that would require a minimum of 2,592 x 63,473 = 164,522,016 data records to be considered vaguely representative.
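To make the weighting concrete, here is a minimal Python sketch of how an area-weighted ‘global’ mean might be formed from a 5° x 5° gridded anomaly field. The grid, the cos(latitude) weights and the toy anomaly values are my own illustrative assumptions rather than the Hadley Centre’s actual code, and the thing to notice is that cells with no data simply drop out of the average.

```python
import numpy as np

# Illustrative sketch only: a 5-degree grid (36 x 72 = 2,592 cells) with a
# cos(latitude) area weight for each cell, and a mostly-empty anomaly field.
lat_centres = np.arange(-87.5, 90, 5)                # 36 latitude band centres
weights = np.cos(np.deg2rad(lat_centres))[:, None] * np.ones((36, 72))

anom = np.full((36, 72), np.nan)                     # NaN = no station, buoy or ship
anom[10:25, 30:50] = 0.4                             # hypothetical patch of coverage

# The 'global' mean is an area-weighted average over the cells we actually have.
mask = ~np.isnan(anom)
global_anomaly = np.sum(anom[mask] * weights[mask]) / np.sum(weights[mask])
print(f"Global mean anomaly: {global_anomaly:+.2f} degC from {mask.sum()} of 2,592 cells")
```

When most of the 2,592 cells are empty, that ‘global’ figure is really an average over wherever the data happens to be, which is rather the point.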
In terms of land stations you can get a feel for just how many holes there are by considering this slide I knocked up back in Jan 2020 that reveals the distribution of GHCNd weather stations with at least 120 years’ worth of temperature data:
There are only 800 of these and most of them are in North America, with a small cluster in Germany and Austria. So how can you determine the historic temperature trend for South America, by way of example? The answer is you can’t. What the big players like NASA, NOAA and the Hadley Centre/CRU do is fill in the many holes by extrapolation and statistical modelling. This isn’t exactly great and wonderful (though they have no choice), and when we consider they’re extrapolating from existing land stations that are subject to the urban heat island effect (check out this juicy paper) it becomes less than great and wonderful. We are looking at a dog’s breakfast, and a global record that isn’t truly global but is given the appearance of being so.
Reverse Engineering The HadCRUT 5 Anomaly
To get the ball rolling and the pie cooking we need to convert those HadCRUT 5 anomalies back into actual temperatures. NASA reckon the average temperature of the Earth was 14°C for the period 1951-1980 – so there’s our 30-year datum right there! What we do next is re-jig the HadCRUT 5 anomalies based on the reference period of 1961-1990 into a set of equivalent anomalies based on the earlier reference period of 1951-1980, this being a doddle to do with a spreadsheet. We can then add that 14°C to the revised anomaly series to arrive at a set of actual planetary temperatures, as shown in this slide:
We’ve now gone from looking at the HadCRUT 5 anomaly series to looking at the global mean surface temperature equivalent that will give rise to these very same anomalies. Magic!
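For the spreadsheet-averse, here is a minimal Python sketch of that two-step conversion. The anomaly series below is a crude straight-line stand-in rather than the actual HadCRUT 5 download, and the 14°C figure is NASA’s published estimate for 1951-1980 quoted above.

```python
import numpy as np

# Crude straight-line stand-in for the HadCRUT 5 annual anomalies on the
# 1961-1990 baseline; swap in the real download in practice.
years = np.arange(1850, 2023)
anom_6190 = np.linspace(-0.4, 0.9, years.size)

# Step 1: re-base onto 1951-1980 by subtracting the mean anomaly over that period.
ref = (years >= 1951) & (years <= 1980)
anom_5180 = anom_6190 - anom_6190[ref].mean()

# Step 2: add NASA's 14 degC absolute estimate for 1951-1980 to recover temperatures.
gmst = anom_5180 + 14.0
```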
Now that we’ve got a temperature series let me start by changing the y-axis scaling from the narrow range of 13.4-15.2°C to a scale that embraces zero degrees. Back when I was a kid in school the maths teacher always insisted we label our axes and ensure they started at the origin, but climate scientists seem to overlook this old-fashioned principle in favour of alarmist graphics. You’ll see what I mean by ‘alarmist graphics’ when you clock this graph of exactly the same data plotted with an old-school origin:
That’s what all the fuss is about.
Folk understand temperatures and not temperature anomalies, and all I have done here is present a construct designed for use by climate scientists in terms that are more readily understood by the public. I think you can see why the MSM and those with vested interests prefer using anomalies for illustrating articles. The fact of the matter is that you can use a y-axis scaling that either builds the drama or hides details you don’t want folk to see. We may call this the y-axis fiddle, of which there are several variants.
A ‘normal’ Normal
We can also convert the HadCRUT 5 anomaly series that is based on a reference period of 1961-1990 to one that is based on a reference period of 1981-2010:
Instead of an anomaly series that is pushing +1.0°C above normal we now have an anomaly series that is pushing +0.6°C above normal, and I am going to suggest, in a somewhat cheeky fashion, that changing the climate normal reference period to 1981-2010 would be a quicker, cheaper and easier way to avoid that +1.5°C target of the Paris Agreement. Job done!
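The conversion is the same one-line operation as before: subtract the mean of the existing anomalies over the new reference period, so every year shifts down by the same constant. A minimal sketch, reusing the placeholder arrays from the sketch above:

```python
# Re-base the 1961-1990 anomalies onto a 1981-2010 climate normal.
ref = (years >= 1981) & (years <= 2010)
offset = anom_6190[ref].mean()     # a warmer reference period gives a positive offset
anom_8110 = anom_6190 - offset     # with the real data the shift is roughly 0.4 degC
```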
What I like about this anomaly series (raw or adjusted) is that it clearly shows global cooling from 1945 to 1965 at a time when CO2 emissions were rising in a quasi-exponential manner, though I suspect this subtlety goes over the heads of those who prefer to glue themselves to things and waste good soup.
Worked Example
Herewith a modest example for use with trusty hand-held calculators, though you can always fiddle with the spreadsheet that can be found here:
All we are doing is deriving an overall period mean (in this instance for 2005-2014, which gives 14.67°C) and then subtracting this value from the raw mean global surface temperature series (MGST) to arrive at our anomalies.
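And here is the same arithmetic for those who would rather not open the spreadsheet. The MGST values are placeholders I have invented for illustration, so the period mean comes out near, but not exactly at, the 14.67°C quoted above:

```python
# Hand-calculator version: average the MGST over 2005-2014, then subtract it.
mgst = {2005: 14.63, 2006: 14.60, 2007: 14.64, 2008: 14.56, 2009: 14.68,
        2010: 14.72, 2011: 14.61, 2012: 14.65, 2013: 14.70, 2014: 14.74}

period_mean = sum(mgst.values()) / len(mgst)   # plays the role of the 14.67 degC datum
anomalies = {year: round(t - period_mean, 2) for year, t in mgst.items()}
print(period_mean, anomalies)
```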
Cobblers
I’ve enjoyed cobbling this bake together and may continue the series with a look at other tricks of the trade, including the x-axis fiddle, the ‘since records began’ fiddle, the reverse axis fiddle, the annual mean fiddle, the logarithmic fiddle and, of course, the cherry-picking fiddle, in which cherry-picking alarmists berate climate realists for cherry-picking. We must remember that the loudest sound in the jungle is giraffes eating cherries.
Kettle On!
"the loudest sound in the jungle is giraffes eating cherries" - brilliant!!
Almost had me snorting my coffee out of my nose...
Speaking of cherry-picking (and further massaging the data), have you seen this tweak to the CET data in 2022? They claim it is 'within the margin of error', but clearly if you lean on temps in the medium past and raise them in near past you can a non-marginal systematic bias: https://community.netweather.tv/topic/97122-updated-cet-data-base-v20-posted-9-may-2022-an-ongoing-analysis-of-changes-made-to-what-they-now-call-the-legacy-data-base/page/4/
Boy that’s a cheap trick by Cox, who should know better.
I enjoyed this recent lecture by Prof William Happer
https://youtu.be/v2nhssPW77I?si=L7-q8JYZwLRFfIZM