Originally posted on May 16, 2013 at Bishop Hill

Nic Lewis has left a comment under Dana Nuccitelli’s astonishing article in the Guardian.

In his piece, Dana Nuccitelli links to his earlier article “Climate Sensitivity Single Study Syndrome, Nic Lewis Edition” at a climate change/global warming blog he is associated with. As the author of the paper “An objective Bayesian, improved approach for applying optimal fingerprint techniques to estimate climate sensitivity” (Journal of Climate, in press) that Dana Nuccitelli’s earlier article is about, I would like to take this opportunity to put on record my rebuttals of a number of misrepresentations he made of my paper, to avoid any Guardian readers who follow the link being misled. I apologise in advance for the length of this comment.

1. Nuccitelli stated that my paper was an outlier. If it were, as his title suggested, the only study showing a low climate sensitivity – one below the bottom of the IPCC 4th assessment report (AR4) 2–4.5°C ‘likely’ (2/3rds probability) range – then that would be a fair point. But it seems increasingly clear that warming over the instrumental period (from the mid/late nineteenth century to date) indicates a lower ‘likely’ range for climate sensitivity than 2–4.5°C. As well as the Skeie et al. Norwegian study to which he referred, three recent peer-reviewed studies (Ring et al., 2012, Atmospheric and Climate Sciences; Aldrin et al., 2012, Environmetrics; and Masters, 2013, Climate Dynamics) all point to a considerably lower ‘likely’ range for climate sensitivity than 2–4.5°C.

2. Nuccitelli stated that the Bayesian approach I employed involves “making use of prior knowledge of climate changes to establish a probability distribution function for climate sensitivity”. In fact, the purpose of my using an objective Bayesian approach was precisely to avoid making use of prior knowledge or assumptions about the likely values of the climate system parameters being estimated. Typically, Bayesian climate sensitivity studies have inappropriately used a uniform prior distribution for climate sensitivity (and sometimes for another key parameter), and thereby greatly exaggerated the risk of climate sensitivity being high.
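[Editorial illustration: the effect of the prior can be seen with a minimal numerical sketch in Python. It assumes, purely for illustration, that the observations constrain the climate feedback parameter λ = F2x/S with roughly Gaussian uncertainty; the numbers are placeholders and are not taken from the paper. The sketch compares the posterior for S under a uniform-in-S prior with that under a Jeffreys-type objective prior, which for this toy model is proportional to F2x/S².]

```python
import numpy as np

# Toy model: observations constrain the feedback parameter lam = F2x / S
# with roughly Gaussian error. All numbers are illustrative only.
F2x = 3.7                    # forcing from doubled CO2, W/m^2
lam_hat, lam_sd = 1.6, 0.5   # assumed Gaussian estimate of lam, W/m^2/K

S = np.linspace(0.5, 10.0, 2000)          # sensitivity grid, K
lam = F2x / S
likelihood = np.exp(-0.5 * ((lam - lam_hat) / lam_sd) ** 2)

# Uniform-in-S prior: the posterior is just the normalised likelihood.
post_uniform = likelihood / np.trapz(likelihood, S)

# Jeffreys-type prior for this model, proportional to |d lam / d S| = F2x / S^2
# (i.e. uniform in lam); it down-weights the high-S region where the data say little.
post_objective = likelihood * F2x / S**2
post_objective /= np.trapz(post_objective, S)

def quantile(pdf, q):
    cdf = np.cumsum(pdf) * (S[1] - S[0])
    return S[np.searchsorted(cdf, q)]

for name, pdf in (("uniform-in-S", post_uniform), ("objective", post_objective)):
    print(f"{name:13s} mode = {S[pdf.argmax()]:.2f} K, 95th percentile = {quantile(pdf, 0.95):.2f} K")
```

[The two posteriors have fairly similar modes, but the uniform-in-S prior produces a far heavier upper tail – and one that depends on where the grid is truncated – which is the sense in which such priors exaggerate the risk of high sensitivity.]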

3. Nuccitelli raised “The Climate Variability Question Mark in Lewis’ Approach”. Referring to the 2013 study by Olson et al., he stated that they investigate “three main sources of what they call ‘unresolved climate noise’: (i) climate model error; (ii) unresolved internal climate variability; and (iii) observational error”. In fact, they focus only on item (ii). Their findings have limited relevance to my study, which (a) makes due allowance for internal climate variability and the uncertainty arising therefrom; (b) does not attempt (as Olson et al. did) to estimate aerosol forcing from purely global temperature measurements; and (c) avoids the uniform priors they use.

4. Nuccitelli suggested that my study, while stating that it estimates “equilibrium climate sensitivity”, in fact estimates “effective climate sensitivity, which is a somewhat different parameter”. It does in fact estimate equilibrium climate sensitivity. I would anyway question whether there is any significant difference between the two parameters. The IPCC AR4 report uses the two terms virtually synonymously. The x-axis of its figure showing estimated probability density functions (PDFs) from studies based on 20th-century warming, including the study whose data I reanalysed, is labelled “Equilibrium Climate Sensitivity”, notwithstanding that, strictly speaking, a good proportion of the featured studies estimated effective climate sensitivity.
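[Editorial illustration: for readers unfamiliar with the distinction, effective climate sensitivity is diagnosed from the energy budget over the analysis period, while equilibrium sensitivity is the warming once the ocean has fully adjusted; the two coincide if feedback strength stays constant on the way to equilibrium. A toy calculation, with placeholder numbers that are not taken from the paper or from AR4, shows how an effective-sensitivity estimate is formed.]

```python
# Toy energy-budget illustration of effective climate sensitivity.
# All numbers are illustrative placeholders, not values from any study.
F2x = 3.7   # forcing from a doubling of CO2, W/m^2
dT  = 0.8   # warming over the analysis period, K
dF  = 2.0   # change in total radiative forcing, W/m^2
dN  = 0.5   # change in planetary heat uptake (TOA imbalance), W/m^2

# Effective sensitivity scales the observed warming to doubled-CO2 forcing,
# with (dF - dN) being the portion of the forcing realised as surface warming.
S_eff = F2x * dT / (dF - dN)
print(f"effective climate sensitivity ~ {S_eff:.1f} K")

# Equilibrium sensitivity is the warming once dN has decayed to zero; if
# feedbacks do not change as the system equilibrates, the two are equal,
# which is why AR4 treats the terms near-synonymously.
```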

5. Perhaps most seriously, Nuccitelli claimed that my study misrepresents Aldrin et al. (2012). In it, I stated that the 1.6°C mode I obtained for climate sensitivity was identical to the mode of the main results in Aldrin et al. (2012). The truth of my statement is easily verified by inspecting Figure 6.a) of that paper. The mode of a climate sensitivity PDF is the location of its peak value, and was referred to in the IPCC AR4 report as being the best estimate. An alternative measure is the median – the value with equal probability (area under the PDF) above and below it. However, it would have been difficult to be certain of the accuracy of a median estimate measured from Figure 6.a), and the mode has the advantage of being less affected than the median by the choice of prior distribution. I do not consider the mean, quoted by Aldrin et al., to be a suitable central measure for climate sensitivity PDFs, because the PDFs are skewed. Consistent with my view, the relevant chapter of IPCC AR4 quotes modes and medians for climate sensitivity estimates, but not means. For completeness, I also gave the 5–95% climate sensitivity range for the main Aldrin et al. (2012) results, of 1.2–3.5°C.
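[Editorial illustration: the point about skewness can be seen with a toy right-skewed distribution – a hypothetical lognormal, not the Aldrin et al. posterior – for which the mean sits well above the peak and so overstates the most likely value.]

```python
import numpy as np
from scipy import stats

# Toy right-skewed PDF standing in for a climate sensitivity posterior.
# Hypothetical lognormal parameters, not taken from Aldrin et al. (2012).
dist = stats.lognorm(s=0.45, scale=2.0)

S = np.linspace(0.1, 10.0, 5000)
pdf = dist.pdf(S)

mode = S[pdf.argmax()]   # location of the peak of the PDF
median = dist.median()   # equal probability above and below
mean = dist.mean()       # pulled upwards by the long right tail
print(f"mode = {mode:.2f}, median = {median:.2f}, mean = {mean:.2f}")
```

[For any such right-skewed PDF the ordering is mode < median < mean, so quoting the mean as a ‘best estimate’ shifts it towards the upper tail.]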