Holman Jenkins reminds us that “Two measurements separated by less than the margin of error are the same.”
Yet when statistical discipline is observed, 2015 and 2016, the two El Niño years, are tied for warmest. And the years 1998, 2003, 2005, 2006, 2007, 2009, 2010, 2012, 2013 and 2014 are all tied for second warmest.
In other words, whatever the cause of the warming in the 1980s and 1990s, no statistically certain trend is observable since then.
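For the statistically minded, here is a minimal sketch of how overlapping margins of error produce such ties. The anomaly values and the ±0.09°C uncertainty below are illustrative assumptions, not NOAA's official figures.

```python
# A minimal sketch of the "statistical tie" logic: two annual estimates are
# indistinguishable when they differ by less than their combined margin of
# error. All numbers are invented for illustration, not NOAA's figures.

MARGIN = 0.09  # assumed 95% uncertainty on each annual anomaly, in deg C

anomalies = {  # hypothetical global anomalies relative to a baseline, deg C
    1998: 0.63, 2005: 0.66, 2010: 0.70, 2014: 0.74, 2015: 0.90, 2016: 0.94,
}

def tied(a, b, margin=MARGIN):
    """Treat two estimates as a statistical tie when their difference is
    smaller than the quadrature sum of their individual margins."""
    return abs(a - b) < (margin**2 + margin**2) ** 0.5

ranked = sorted(anomalies, key=anomalies.get, reverse=True)
top = ranked[0]
print(f"warmest on paper: {top} ({anomalies[top]:+.2f} C)")
for year in ranked[1:]:
    status = ("statistical tie" if tied(anomalies[year], anomalies[top])
              else "distinguishably cooler")
    print(f"{year}: {anomalies[year]:+.2f} C -> {status}")
```

With these invented numbers, 2015 and 2016 come out tied for warmest while the other years fall outside the combined margin; the same overlap test, applied pairwise, is what groups a decade's worth of years into a tie for second.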
Shall we posit a theory about all this? U.S. government agencies stopped mentioning uncertainty ranges because they wanted to engender a steady succession of headlines pronouncing the latest year unambiguously the hottest when it wasn’t necessarily so.
This doesn’t mean you should stop being concerned about a potential human impact on climate. But when government scientists deliberately seek to mislead, it’s a warning to raise your guard.
For instance, NOAA states its annual temperature estimate as an “anomaly” relative to the 20th-century average. Do you really believe government scientists can reconstruct a global average temperature for years in the first half of the 20th century with sufficient accuracy to allow comparisons at the level of hundredths of a degree?
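To see why hundredth-of-a-degree comparisons strain credulity, consider the anomaly arithmetic itself. In the hedged sketch below, the annual and baseline figures and their uncertainties are assumed for illustration only; the point is how independent errors combine.

```python
# Sketch of the anomaly arithmetic: an annual anomaly is the year's estimated
# global mean minus a baseline (e.g., the 20th-century average). Both terms
# carry uncertainty, and independent errors add in quadrature, so the anomaly
# can be no more precise than its least precise ingredient. All numbers here
# are assumed for illustration.

year_mean, year_sigma = 14.84, 0.05          # hypothetical annual estimate, deg C
baseline_mean, baseline_sigma = 13.90, 0.10  # hypothetical 20th-c. average, deg C

anomaly = year_mean - baseline_mean
anomaly_sigma = (year_sigma**2 + baseline_sigma**2) ** 0.5

print(f"anomaly = {anomaly:+.2f} C, uncertainty ~ +/-{anomaly_sigma:.2f} C")
# -> anomaly = +0.94 C, uncertainty ~ +/-0.11 C
# A hundredth of a degree is an order of magnitude smaller than the
# combined uncertainty on the anomaly itself.
```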
You start to notice other things. The numbers keep changing. The years 2005 and 2010 were at one point reported as exactly tied, but now 2010 is slightly warmer, just enough to impart an upward slope to any graph that ignores statistical uncertainty.
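The same arithmetic shows how small retroactive adjustments can manufacture a slope. In the sketch below, the figures are invented to mirror the behavior described: a shift of a few hundredths breaks a reported tie while staying well inside the margin of error.

```python
# Invented figures mirroring the described behavior: a retroactive adjustment
# of a few hundredths of a degree breaks a reported tie, yet both the old and
# new values sit well inside the stated margin of error.

MARGIN = 0.09  # assumed 95% uncertainty per annual anomaly, deg C

old = {2005: 0.62, 2010: 0.62}  # hypothetical: exactly tied as first reported
new = {2005: 0.61, 2010: 0.64}  # hypothetical: after later adjustments

for year in (2005, 2010):
    shift = new[year] - old[year]
    print(f"{year}: {old[year]:+.2f} -> {new[year]:+.2f} C "
          f"(shift {shift:+.2f}, margin +/-{MARGIN:.2f})")

# The point estimates now slope upward, but each shift is a fraction of the
# margin of error, so the two years remain a statistical tie.
```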
Government scientists are undoubtedly ready with justifications for each of the countless retroactive adjustments they impose on the data, but are you quite sure they can be trusted?
Climate science is not a hoax. The U.S. government spends impressive sums to take the increasingly rigorous readings from which a global average temperature is distilled. Other countries, such as the U.K. and Japan, also conduct sophisticated monitoring and arrive at findings roughly similar to those of the U.S. agencies, yet they don’t feel the need to lie about it.