Global Average Temperature and the Propagation of Uncertainty


bdgwx

Because the Frank 2010 paper is still being promoted by the author and making the rounds in the blogosphere, I thought I would dedicate an entire post to it. The publication claims that the lower bound of the annual global average temperature uncertainty is ±0.46 C (1σ) or ±0.92 C (2σ). This result is then used in the blogosphere to conclude that we do not know whether or not the planet has warmed.

Before I explain what I think is wrong with the Frank 2010 publication, I'll first refer readers to the rigorous uncertainty analyses provided by Berkeley Earth via Rohde et al. 2013, GISTEMP via Lenssen et al. 2019, and HadCRUT via Morice et al. 2020. Each of these datasets unequivocally shows that the planet has been warming at a rate of about +0.19 C/decade since 1979. And each uncertainty analysis confirms that the true uncertainty on the monthly values is about ±0.03 C (1σ) despite using wildly different techniques: Berkeley Earth uses jackknife resampling, GISTEMP uses a bottom-up type B evaluation, and HadCRUT uses ensembling not unlike a Monte Carlo simulation.

The entirety of the conclusion from the Frank 2010 publication boils down to this series of calculations.

(1a) σ = 0.2 "standard error" from Folland 2001

(1b) sqrt(N * 0.2^2 / (N-1)) = 0.200 where N is the number of observations (2 for daily, 60 for monthly, etc.)

(1c) sqrt(0.200^2 + 0.200^2) = 0.283

(2a) σ = 0.254 gaussian fit based on Hubbard 2002

(2b) sqrt(N * 0.254^2 / (N-1)) = 0.254 where N is the number of observations (2 for daily, 60 for monthly, etc.)

(2c) sqrt(0.254^2 + 0.254^2) = 0.359

(3) sqrt(0.283^2 + 0.359^2) = 0.46

Explanation of the above: (1b) and (2b) are an attempt to propagate the Tmax and Tmin uncertainties into the 30yr average used as the anomaly baseline and into an annual average. (1c) and (2c) are the propagation of uncertainty for an annual anomaly value. (3) is the combined uncertainty of the Folland and Hubbard components after propagating into anomaly values.

Here are the 3 mistakes I believe the author made, in order of increasing egregiousness. These are based on my direct conversations with the author. Even the first mistake is egregious enough that it would get the paper rejected by a reputable peer-reviewed journal. The other 2 are so egregious it defies credulity that the paper even made it into Energy & Environment, which is really more of a social and policy journal than a science journal and has a history of publishing research known for being wrong.

Mistake 1. The uncertainties provided by Folland 2001 and Hubbard 2002 are for daily Tmax and Tmin observations. The author combines these in calculation (3) via the well known root-sum-square (summation in quadrature) rule under the assumption that Folland and Hubbard are describing two different kinds of uncertainty that must be combined. The problem is that Folland is terse on details, so it is impossible to say exactly what that 0.2 figure actually means. But based on context clues I personally infer that Folland is describing the same thing as Hubbard; they just came up with slightly different estimates of the uncertainty. If that is right, the two components should not be added in quadrature at all.

Mistake 2. The formula used in steps (1b) and (2b) is σ_avg = sqrt[N * σ^2 / (N-1)] where N is the number of observations included in a time average and σ is the daily Tmax or Tmin uncertainty. For example, for a monthly average N would be ~60, and for a 30yr average N would be ~21916. As you can see, for large N the formula reduces to σ_avg = σ, implying that the uncertainty on monthly, annual, and 30yr averages is no different than the uncertainty on daily observations. The problem is that this formula is nonsense. All texts on the propagation of uncertainty, including Bevington 2003 which the author cites for this formula, clearly say that the formula is σ_avg = σ / sqrt(N). This can be confirmed via the Guide to the Expression of Uncertainty in Measurement (GUM) 2008 or by using the NIST uncertainty machine, which will do the general partial-derivative propagation described in Bevington and the GUM for you with an accompanying Monte Carlo simulation.
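To make the difference between the two formulas concrete, here is a minimal Monte Carlo sketch in Python (my own illustration, not from any of the cited papers) assuming independent, normally distributed measurement errors with σ = 0.254 C. The simulated spread of the N-observation average tracks σ/sqrt(N) and shrinks with N, while the paper's formula stays pinned near σ.

```python
import numpy as np

rng = np.random.default_rng(42)
sigma = 0.254      # the daily Tmax/Tmin uncertainty value used above
trials = 20_000    # number of simulated averages per N

def simulated_spread(N, batch=500):
    """Empirical standard deviation of the mean of N observations whose errors are N(0, sigma)."""
    means = []
    for start in range(0, trials, batch):
        errors = rng.normal(0.0, sigma, size=(batch, N))
        means.append(errors.mean(axis=1))
    return np.concatenate(means).std(ddof=1)

for N in (2, 60, 21916):                      # daily pair, ~monthly, ~30yr of daily observations
    paper = np.sqrt(N * sigma**2 / (N - 1))   # the formula used in Frank 2010
    gum   = sigma / np.sqrt(N)                # standard propagation (Bevington, GUM)
    print(f"N={N:6d}  simulated={simulated_spread(N):.4f}  sigma/sqrt(N)={gum:.4f}  paper={paper:.4f}")
```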

Mistake 3. The ±0.46 C figure is advertised as the annual global average temperature uncertainty. The problem is that it is a calculation only of the uncertainty of a station anomaly value (annual absolute mean minus 30yr average). Nowhere in the publication does the author propagate the station anomaly uncertainty through the gridding, infilling, and spatial averaging steps that all datasets require to compute a global average. Because the uncertainty of an average is lower than the uncertainty of the individual measurements that go into it, the global average temperature uncertainty will be considerably lower than the individual Tmax and Tmin uncertainties. There are actually 3 steps in which an average is taken: a) averaging station data into a monthly or annual value, b) averaging multiple stations into a grid cell, and c) averaging all of the cells in a grid mesh to get the global average. Not only does the author not calculate a) correctly (see mistake #2 above), he does not even perform the propagation through steps b) and c). The point is this: that ±0.46 C figure is not the uncertainty of the global average as the author and the blogosphere claim.

Dr. Frank, if you stumble upon this post I would be interested in your responses to my concerns and the other concerns of those who came before me.

In a future post under this thread I'll present my own type A evaluation of the monthly global average temperature uncertainty. Will it be consistent with the more rigorous analyses I mentioned above? I'll also try to periodically update the AmericanWx audience with various statistics and publications relevant to this topic. Comments (especially criticisms) are definitely welcome. I am by no means an expert in uncertainty propagation or the methods used to measure the global average temperature. We can all learn together.


I have downloaded global average temperature products from the following datasets: 4 surface, 2 satellite, 1 radiosonde, and 1 reanalysis.

UAHv6 - Satellite - Spencer et al. 2016 Data: https://www.nsstc.uah.edu/data/msu/v6.0/tlt/tltglhmam_6.0.txt

RSSv4 - Satellite - Mears & Wentz 2017 Data: https://data.remss.com/msu/monthly_time_series/RSS_Monthly_MSU_AMSU_Channel_TLT_Anomalies_Land_and_Ocean_v04_0.txt

RATPAC 850-300mb - Radiosonde - Free et al. 2005 Data: https://www.ncei.noaa.gov/pub/data/ratpac/ratpac-a/

NOAAGlobalTemp v5 - Surface - Huang et al. 2020 Data: https://www.ncei.noaa.gov/data/noaa-global-surface-temperature/v5/access/timeseries/

GISTEMP v4 - Surface - Lenssen et al. 2019 Data: https://data.giss.nasa.gov/gistemp/graphs_v4/graph_data/Monthly_Mean_Global_Surface_Temperature/graph.txt

BEST - Surface - Rohde et al. 2013 Data: http://berkeleyearth.lbl.gov/auto/Global/Land_and_Ocean_complete.txt

HadCRUTv5 - Surface - Morice et al. 2020 Data: https://www.metoffice.gov.uk/hadobs/hadcrut5/data/current/download.html

ERA5 (Copernicus) - Reanalysis - Hersbach et al. 2020 Data: https://climate.copernicus.eu/surface-air-temperature-maps

All datasets have been placed on a common baseline: the full-period 1979-2021 average. This is done so that they can be compared with each other.

It should be noted that it is my understanding that of the surface datasets NOAAGlobalTemp v5 is still only a partial sphere estimate. It appears research is underway to make it full sphere. See Vose et al. 2021 for details. I do not believe this is publicly available yet.

One criticism I often see is that the global average temperature warming trend is being overestimated. To test the validity of this claim I compared each of the above datasets. I also formed an equal-weighted composite of the datasets for comparison. It is important to note that the datasets do not all measure the same thing. UAH and RSS measure the lower troposphere and do so at different average heights. RATPAC is the average from 850mb to 300mb, which I selected to be representative of the UAH and RSS depths although neither UAH nor RSS is equally weighted in the 850-300mb layer. And as mentioned above, NOAAGlobalTemp is a partial sphere estimate while GISTEMP, BEST, HadCRUT, and ERA are full sphere. In that context it could be argued that forming a composite is inappropriate. However, I do so here because all of these datasets are or have been used as proxies for the "global average temperature", whether it was appropriate to do so or not.

In the graph below I have plotted each dataset along with the 13-month centered average of the composite. The formula used is Σ[Σ[T_dm, 1, m], 1, d] / (d·m), where d is the number of datasets (8) and m is the number of months (13). The composite 2σ timeseries represents an implied uncertainty based on the standard deviation of the 8 datasets about the 13-month centered average. The formula used is sqrt(Σ[(T_dm - T_avg)^2, 1, d·m] / (d·m - 1)) * 2, which is the standard deviation multiplied by 2. It is primarily confined to a range of 0.05 to 0.10 C. However, it is important to note that UAH and RSS add a considerable amount of variance to the composite. A 13-month average would be expected to have a lower uncertainty than a 1-month average. That is true in this case as well, though it may be hard for the astute reader to see since typical monthly uncertainties for HadCRUT, GISTEMP, and BEST are generally on the order of 0.05 C.
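For anyone who wants to reproduce the composite, here is a rough sketch of the 13-month centered average and the implied 2σ, assuming the 8 monthly anomaly series have already been loaded into a pandas DataFrame on the common 1979-2021 baseline (download and parsing steps omitted; the exact windowing is one plausible reading of the formulas above).

```python
import numpy as np
import pandas as pd

def composite_and_spread(anoms: pd.DataFrame, window: int = 13):
    """anoms: (n_months x 8) DataFrame of anomalies on the common 1979-2021 baseline."""
    composite = anoms.mean(axis=1)                            # equal-weighted composite
    centered = composite.rolling(window, center=True).mean()  # 13-month centered average
    half = window // 2
    spread = pd.Series(np.nan, index=anoms.index)
    for i in range(half, len(anoms) - half):
        block = anoms.iloc[i - half:i + half + 1].to_numpy()  # 13 months x 8 datasets
        dev = block - centered.iloc[i]                        # deviations from the centered average
        spread.iloc[i] = 2 * dev.ravel().std(ddof=1)          # implied 2-sigma over the d*m values
    return centered, spread
```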

One thing that is immediately obvious is that UAH is, by far, the low outlier with a warming trend of only +0.135 C/decade. This compares with the composite of +0.187 C/decade. Also note the large difference between UAH and RATPAC and the small difference between RSS and RATPAC. It is often claimed that UAH is a better match to the radiosonde data. This could not be further from the truth, at least when comparing with RATPAC, which contains the homogenization adjustments that make it valid for climatic trend analysis.

What I find most remarkable about this graph is the broad agreement, both in terms of long-term warming and short-term variation, even though these datasets use wildly different methodologies and subsets of the available data to measure the global average temperature.

[Figure: all 8 datasets with the 13-month centered average of the composite and its implied 2σ]


For the first statistical test of the uncertainty I focused only on HadCRUT, GISTEMP, BEST, and ERA since these are all surface datasets with full sphere coverage. In other words, they are all measuring almost exactly the same thing.

This test will determine the difference between any two measurements. The quantity we are calculating is D = Ta - Tb where D is the difference and both Ta and Tb are temperature anomalies for the same month from randomly selected datasets. The period of evaluation is 1979/01 to 2021/12 which covers 516 months. With 4 datasets there are 6 combinations. This gives us a total of 3096 comparisons or values for Ta, Tb, and D. What we want to evaluate first is the uncertainty in the difference u(D). This is pretty simple since for type A evaluations it is the standard deviation of all values of D. In the graphic below you can see the histogram of the difference.  The distribution is pretty close to normal and has a standard deviation of 0.053 C. So we set u(D) = 0.053 C.

We're not done yet though. We know u(D) = 0.053 C, but what we really want to know is u(T). We can easily do this via the well known root sum square or summation in quadrature rule which says u(z) = sqrt[u(x)^2 + u(y)^2] for a function f in the form f(x, y) = x + y. The more fundamental concept that applies to any arbitrarily complex function f is the partial differential method, but there is no need to apply the more complex general form since our function f(x, y) = x + y is simple and is already known to propagate uncertainty via u(z) = sqrt[u(x)^2 + u(y)^2]. Assuming u(Ta) and u(Tb) are the same and can be represented with just u(T) the formula becomes u(D) = sqrt[2 * u(T)^2]. Solving this equation for u(T) we get u(T) = sqrt[u(D)^2 / 2]. So if u(D) = 0.053 then u(T) = 0.038 C. And our 2σ expanded uncertainty is 2*u(T) = 2 * 0.038 = 0.075 C.

That's the result. The expanded 2σ uncertainty for monthly global average temperature anomalies, using a type A evaluation in which we compare each dataset to the others, is 0.075 C. Note that this is only one of multiple ways a type A evaluation can be performed.
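Here is a minimal sketch of this first type A evaluation, assuming the four monthly anomaly series have been loaded into a pandas DataFrame covering 1979/01 to 2021/12 on a common baseline (the column names and loading steps are left out and are my own assumptions).

```python
import itertools
import numpy as np
import pandas as pd

def type_a_pairwise(anoms: pd.DataFrame):
    """Type A uncertainty from all pairwise monthly differences (6 pairs x 516 months = 3096 values)."""
    diffs = [anoms[a] - anoms[b] for a, b in itertools.combinations(anoms.columns, 2)]
    D = pd.concat(diffs).to_numpy()
    u_D = D.std(ddof=1)        # standard deviation of the differences, u(D)
    u_T = u_D / np.sqrt(2)     # u(D)^2 = u(Ta)^2 + u(Tb)^2 = 2*u(T)^2
    return u_D, u_T, 2 * u_T   # u(D), u(T), and the expanded 2-sigma uncertainty
```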

 

[Figure: histogram of the 3096 monthly differences D between dataset pairs]


  • 1 month later...

For the second statistical test I will again focus on HadCRUT, GISTEMP, BEST, and ERA since these are all surface datasets with full sphere coverage.

This test will compare the difference between the monthly measurements of each dataset and the mean of all of them. The mean is considered to be the best expectation of the true value. The quantity we are calculating is Dx = Tx - Tavg, where Dx is the difference between the temperature Tx of dataset x and the average temperature Tavg. The period of evaluation is 1979/01 to 2021/12, which covers 516 months. That means each dataset x has 516 measurements that can be compared to the average. We will determine the uncertainty of Dx as 2*u(Dx) by calculating the standard deviation of Dx. Since it is common practice to report uncertainty at 95% confidence we will multiply by 2 for the 2σ range. Again, this is a type A evaluation of uncertainty. For HadCRUT this value is 0.051 C, for Berkeley Earth it is 0.060 C, for GISTEMP it is 0.057 C, and for ERA it is 0.086 C. The average implied uncertainty is 0.065 C.

[Figure: distributions of Dx for HadCRUT, GISTEMP, BEST, and ERA]

I am a little surprised by ERA. It has the highest uncertainty with respect to the mean of the datasets analyzed. I'm surprised because ERA is considered to be among the best reanalysis datasets and incorporates not only orders of magnitude more observations than the other datasets but many different kinds of observations including surface, satellite, radiosonde, etc. It also has much longer tails on its distribution.

So we've calculated the average implied uncertainty 2u(D) as 0.065 C. But that is only the uncertainty of the difference with respect to the average. The average itself will have an uncertainty given by u(avg) = u(x) / sqrt(N). So u(Tavg) = u(D) / sqrt(N) = 0.0325 / sqrt(4) = 0.0163 C. That gives us u(D) = 0.033 and u(Tavg) = 0.0163. We will apply the root sum square rule to find the final uncertainty u(T): u(T) = sqrt(u(D)^2 + u(Tavg)^2) = sqrt(0.033^2 + 0.0163^2) = 0.0368 C. And using the 2σ convention we have 2u(T) = 0.0368 * 2 = 0.074 C.
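A sketch of this second evaluation, using the same hypothetical DataFrame of four monthly anomaly series as in the first test:

```python
import numpy as np
import pandas as pd

def type_a_vs_mean(anoms: pd.DataFrame):
    """Type A uncertainty from each dataset's difference with the multi-dataset monthly mean."""
    t_avg = anoms.mean(axis=1)                 # per-month mean of the datasets
    d = anoms.sub(t_avg, axis=0)               # Dx = Tx - Tavg for each dataset
    u_D = d.std(ddof=1).mean()                 # average implied 1-sigma uncertainty, u(D)
    u_avg = u_D / np.sqrt(anoms.shape[1])      # uncertainty of the mean itself
    u_T = np.sqrt(u_D**2 + u_avg**2)           # combine via root sum square
    return 2 * d.std(ddof=1), 2 * u_T          # per-dataset 2u(Dx) and the final 2u(T)
```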

That's our final answer. Method 1 above yields u = 0.075 C while method 2 here yields u = 0.074 C. It isn't much of a surprise that both methods give essentially the same result since they are both calculated from the same data.

 


  • 2 weeks later...

I often hear that UAH is the most trustworthy and honest global average temperature dataset because they do not adjust the data. I thought it might be good to dedicate a post to the topic and debunk that myth right now. Fortunately I was able to track down a lot of the information from the US Climate Change Science Program's Temperature Trends in the Lower Atmosphere, Chapter 2 (Karl et al. 2006), for which Dr. Christy was the lead author, at least for that chapter.

Adjustment / Year / Version / Effect / Description / Citation

1 / 1992 / A / unknown effect / simple bias correction / Spencer & Christy 1992

2 / 1994 / B / -0.03 C/decade / linear diurnal drift / Christy et al. 1995

3 / 1997 / C / +0.03 C/decade / removal of residual annual cycle related to hot target variations / Christy et al. 1998

4 / 1998 / D / +0.10 C/decade / orbital decay / Christy et al. 2000

5 / 1998 / D / -0.07 C/decade / removal of dependence on time variations of hot target temperature / Christy et al. 2000

6 / 2003 / 5.0 / +0.008 C/decade / non-linear diurnal drift / Christy et al. 2003

7 / 2004 / 5.1 / -0.004 C/decade / data criteria acceptance / Karl et al. 2006

8 / 2005 / 5.2 / +0.035 C/decade / diurnal drift / Spencer et al. 2006

9 / 2017 / 6.0 / -0.03 C/decade / new method / Spencer et al. 2017 [open]

That is 0.307 C/decade worth of adjustments with a net of +0.039 C/decade.
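A quick check of that arithmetic (adjustment 1 has an unknown effect, so only the 8 quantified adjustments are summed):

```python
# Trend adjustments in C/decade from the table above (signs preserved).
adjustments = [-0.03, +0.03, +0.10, -0.07, +0.008, -0.004, +0.035, -0.03]
gross = sum(abs(a) for a in adjustments)   # total magnitude of adjustments
net = sum(adjustments)                     # net effect on the trend
print(f"gross = {gross:.3f} C/decade, net = {net:+.3f} C/decade")   # 0.307 and +0.039
```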

 


  • 2 weeks later...

Just a quick post here. Using the same procedure as above I calculated the type A uncertainty on UAH and RSS satellite monthly anomalies at ±0.16 C. This is consistent with the type B evaluation from Christy et al. 2003 of ±0.20 C and the Monte Carlo evaluation by Mears et al. 2011 of ±0.2 C.

This compares to the surface station uncertainty of ±0.07 C.

It might also be interesting to point out that Spencer & Christy 1992 first assessed the monthly uncertainty as ±0.01 C and later reevaluated it as ±0.20 C. Anyway, the point is that the uncertainty on global average temperatures from satellites is significantly higher than that from the surface station datasets.


  • 7 months later...

The 2022 data is in, so here is an update. I wanted to present the data in a way that highlights the divergence of the trends of the various datasets. To do this, an ordinary linear regression is performed and the y-intercept is then used as the anchor point for each dataset. In other words, each dataset is normalized (or re-anomalized) to its starting point. This puts each dataset on equal footing and allows us to see how the differences in trends affect the values later in the period.
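A minimal sketch of the anchoring step, assuming the monthly anomaly series are in a pandas DataFrame with one column per dataset and no missing months (loading not shown):

```python
import numpy as np
import pandas as pd

def anchor_to_intercept(anoms: pd.DataFrame) -> pd.DataFrame:
    """Re-anomalize each dataset so its OLS fit starts at zero at the beginning of the period."""
    t = np.arange(len(anoms))                      # months since the start of the period
    anchored = {}
    for col in anoms.columns:
        slope, intercept = np.polyfit(t, anoms[col].to_numpy(), 1)   # OLS fit
        anchored[col] = anoms[col] - intercept     # shift by the fitted y-intercept
    return pd.DataFrame(anchored)
```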

I have included an equal-weighted composite of the datasets which serves as an ensemble mean and the best estimate of the true global average temperature change. Notice that RATPAC and RSS show the most warming while UAH shows the least. The composite trend has a 2σ uncertainty of ±0.058 C/decade. I'll dedicate an entire post to how I computed that uncertainty when I get time.

I have also included the CMIP5 RCP4.5 prediction for comparison. Notice that CMIP5 predicted a higher trend than actually occurred. However, with an uncertainty of ±0.06 C/decade on the composite we cannot say that CMIP5 is inconsistent with the observations.

[Figure: datasets anchored to their regression intercepts, with the composite and the CMIP5 RCP4.5 prediction]


17 hours ago, HailMan06 said:

Our current multi-year La Niña is probably affecting the global temperature trend post-2019. Just eyeballing the chart it makes the trend appear lower than it otherwise would be.

Correct. I'll post more information about this when I get time. Another thing that is affecting the composite trend is the fact that only HadCRUT, GISTEMP, BEST, and ERA are full sphere measurements. This is pulling the observed trend down because the highest rates of warming are occurring in the Arctic. Another possible (and probable IMHO) explanation is that UAH is an outlier not because it holds a monopoly on truth, but because it may have one or more significant defects. If UAH showed the same warming rate as the next lowest dataset (NOAA) then the composite trend would jump up to +0.190 C/decade.


In the chart above the composite trend is labeled as +0.186 ± 0.058 C/decade (2σ). The question might be: why is the uncertainty on that trend so high? The reason is that the global average temperature exhibits high autocorrelation. Autocorrelation is the correlation of a timeseries with a time-lagged copy of itself. What this means is that the value 1 month (or 2 months, etc.) ago partially determines (acts as a predictor or estimator of) the value of the current month. In other words, if the value is high/low it tends to stay high/low. This causes the trend to nudge up/down significantly when the monthly values stay high/low for extended periods of time, so the trend is variable as new monthly values are added to the timeseries. This creates uncertainty on top of the already existing linear regression trend standard uncertainty.

The way I dealt with this is to use the Foster & Rahmstorf 2011 method. In this method you calculate the ordinary linear regression standard uncertainty via the canonical formula σ_w = sqrt[ Σ[(y_actual - y_predicted)^2] / ((n-2) * Σ[(x_actual - x_mean)^2]) ]. That results in σ_w = 0.0048 C/decade. We then multiply that by the square root of the correction value v such that σ_c = σ_w * sqrt(v). Our correction value v is defined as v = (1+p) / (1-p) and can be interpreted as the number of data points per degree of freedom. The p parameter is the autoregressive coefficient from an AR(1) model of the timeseries. For the composite timeseries in the graph above our AR(1) coefficient is p = 0.948. This makes our correction factor v = (1+0.948) / (1-0.948) = 37.5, so sqrt[v] = 6.15. Thus the expanded/corrected standard uncertainty of the trend is σ_c = 0.0048 * 6.15 = 0.03 C/decade and, of course, 2σ_c = 0.06 C/decade. There are some rounding errors here; the full computation gives 0.058 C/decade. Anyway, that is how we get ±0.058 C/decade for the trend over the period 1979-2022. Note that F&R say that the AR(1) correction still underestimates the trend uncertainty. They recommend applying an ARMA correction as well, which factors in the n-month lagged autoregressive decay rate. Based on the calculations I've seen, that additional correction is insignificant when analyzing long duration trends like 1979 to 2022, so I'll ignore it for now.
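Here is a sketch of that calculation, assuming a gap-free monthly composite series. Note that I compute the AR(1) coefficient from the detrended residuals, which is my reading of the F&R approach.

```python
import numpy as np

def ar1_corrected_trend(y):
    """OLS trend (C/decade) and its AR(1)-corrected 2-sigma uncertainty a la Foster & Rahmstorf 2011.
    y: 1-D array of monthly anomalies with no gaps."""
    n = len(y)
    x = np.arange(n)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    # canonical OLS standard error of the slope
    se_w = np.sqrt(np.sum(resid**2) / (n - 2) / np.sum((x - x.mean())**2))
    # lag-1 autocorrelation of the detrended series -> correction v = (1+p)/(1-p)
    p = np.corrcoef(resid[:-1], resid[1:])[0, 1]
    se_c = se_w * np.sqrt((1 + p) / (1 - p))
    return slope * 120, 2 * se_c * 120    # per-month -> per-decade (120 months)
```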

To demonstrate just how wildly the trend can behave over short periods of time, I plotted the trend for all starting months from 1979 to 2022, each ending in 2022/12, with the confidence intervals computed from the method above. For example, the trend starting at 2014/09 is -0.005 C/decade. It happens to be the oldest starting month with a trend <= 0 C/decade. But using the F&R method above we see that the uncertainty on it is a massive ±0.305 C/decade. The AR(1) corrected trend uncertainties get smaller and stabilize as you go back further in time toward 1979.

The point: the global average temperature trend is highly variable over short periods of analysis. This is why the infamous Monckton Pause, which starts in 2014/09, has an uncertainty of ±0.305 C/decade and so is statistically insignificant. You simply can't look at short recent trends and draw any meaningful conclusion. The same is true when selecting 2010/12 as the starting month and reporting the very high warming rate of +0.316 C/decade, since the uncertainty there is ±0.273 C/decade. What we can definitively conclude is that all trends starting earlier than 2011/08 show statistically significant warming.

[Figure: trend ending 2022/12 as a function of starting month, with AR(1)-corrected confidence intervals]

 


I want to address @HailMan06's observation that the current La Nina may be pulling the trend down. This will be a two part topic. The first part will focus on determining how we can model the global average temperature (GAT). We will take the result from this post and answer the question directly in the next post.

The goal is to find a model that explains and predicts the GAT and minimizes the root mean squared error (RMSE). We will use the following rules.

- Components of the model must be based on physical laws and known to directly force changes in atmospheric temperature. This rule eliminates things like population growth which may be correlated with atmospheric temperature but does not directly force it.

- Autocorrelation will not be considered. While autocorrelation is incredibly powerful in predicting how a phenomenon evolves in time it does not help in explaining why it evolved or more precisely why it persisted in the first place.

- The components should be as independent as possible. For example, since MEI and ONI are different metrics of the same phenomenon we should not use both.

- ENSO must be considered. This is necessary because in part 2 we will use the information from the model regarding ENSO to see how it affects the GAT trend.

- It must use a simple linear form so that it is easy to compute and interpret. 

With these rules in mind here is a list of components that I felt were easy to obtain and would adequately model the GAT.

- Oceanic Nino Index (ONI) https://www.cpc.ncep.noaa.gov/data/indices/oni.ascii.txt

- Atlantic Multi-Decadal Oscillation (AMO) https://psl.noaa.gov/data/correlation/amon.us.data

- Volcanic Aerosols (AOD) https://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt

- Total Solar Irradiance (TSI) https://lasp.colorado.edu/lisird/data/nrl2_tsi_P1M/

- Atmospheric Carbon Dioxide Concentration (CO2) https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_mlo.txt

The model will be in the form T = Σ[λ_i * f_i(D_i), 1, n] where each λ_i serves as both a tuning parameter and a unit translation factor to convert the units of the component into degrees K (or C), D_i is the component data, and f_i is a simple function that acts upon the component data. In most cases the function f_i just returns the raw value of the dataset. The model is trained with multiple passes where each pass focuses on tuning both the λ parameter and the lag period for a single component, one by one. After the 1st pass initializes the λ parameter with a 1-month lag for each component, a 2nd pass perturbs the parameter and lag period up/down to see if the model's skill improves. This continues for as many iterations as needed until the model skill stops improving. It is important to understand that the λ parameters and lag periods are not chosen or selected; they emerge organically from the training.

Without further ado here is the model.

GAT = 0.09 + [2.3 * log2(CO2_lag1 / CO2_initial)] + [0.11 * ONI_lag4] + [0.20 * AMO_lag3] + [-2.0 * AOD_lag3] + [0.05 * (TSI_lag1 - TSI_avg)]

This model has an RMSE of 0.091 C. Considering that the composite mean GAT has an uncertainty of around σ = 0.05 C that is an incredible match to the GAT observations leaving maybe 0.04 C of skill on the table.
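For reference, here is how the fitted model can be evaluated in code, assuming the five component series have been downloaded from the links above into a monthly DataFrame with columns 'CO2', 'ONI', 'AMO', 'AOD', and 'TSI' (those column names and the exact definitions of CO2_initial and TSI_avg are my assumptions).

```python
import numpy as np
import pandas as pd

def model_gat(df: pd.DataFrame) -> pd.Series:
    """Evaluate the trained linear model on a monthly DataFrame of the component data."""
    co2_initial = df['CO2'].iloc[0]     # CO2 concentration at the start of the period
    tsi_avg = df['TSI'].mean()          # long-term mean TSI
    return (0.09
            + 2.3 * np.log2(df['CO2'].shift(1) / co2_initial)  # CO2, 1-month lag
            + 0.11 * df['ONI'].shift(4)                        # ENSO, 4-month lag
            + 0.20 * df['AMO'].shift(3)                        # AMO, 3-month lag
            - 2.0 * df['AOD'].shift(3)                         # volcanic aerosols, 3-month lag
            + 0.05 * (df['TSI'].shift(1) - tsi_avg))           # TSI, 1-month lag

# RMSE against an observed composite series gat_obs on the same index:
# rmse = np.sqrt(((model_gat(df) - gat_obs) ** 2).mean())
```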

The astute reader will notice that the λ parameter on the CO2 component is λ = 2.3 C per log2(PPM). Care needs to be taken when interpreting this value. It is neither the equilibrium climate sensitivity (ECS) nor the transient climate sensitivity (TCS). However, it is most closely related to the TCS, which would imply an ECS of about 3.0 C per 2xCO2. And again, I did not pick this value. I did not manipulate the training of the model so that it would appear, and I had no idea that the machine learning algorithm would home in on it. It just appears organically. Anyway, that is neither here nor there. The important point is that we now have an estimate for how ENSO modulates the GAT, which can be used later to see how the current La Nina may have affected the trend.

[Figure: modeled GAT compared with the observed composite]

 

 


This post is part 2 regarding the question of how ENSO and specifically the current La Nina is affecting the global average temperature (GAT) trend. 

Using the model we created and trained above, we can see that the ENSO modulation term is [0.11 * ONI_lag4]. We can visualize the ENSO effect by plotting the ENSO residuals from our model against the detrended GAT. As can be seen, ENSO explains a lot of the GAT variability with an R^2 = 0.32. One quite noticeable problem is the Pinatubo 1991 eruption.

[Figure: modeled ENSO component vs the detrended GAT]

If we then apply the ENSO residuals to the GAT, we get an ENSO-adjusted GAT timeseries. Each dataset has been ENSO-adjusted and presented with new trend (C/decade) figures. The graph below uses the exact same format as the one above so it can be easily compared. Notice that applying the ENSO residual adjustment reduces the monthly variability and causes all of the trends to increase by about 0.01 C/decade, including the composite trend, which is +0.195 ± 0.049 C/decade vs +0.186 ± 0.058 C/decade without the adjustment. Because we are removing a significant autocorrelation component in the process, our AR(1) corrected uncertainty drops as well. So that is the answer: ENSO, and specifically the current La Nina, has reduced the trend by about 0.01 C/decade.
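A sketch of the adjustment, reusing the hypothetical component DataFrame df from the model sketch and a DataFrame anoms holding the dataset anomaly series; whether and how to re-center the ENSO term is a judgment call, so this is just one plausible way to do it.

```python
# The model's ENSO contribution: 0.11 * ONI lagged 4 months (a pandas Series).
enso_term = 0.11 * df['ONI'].shift(4)
# Subtract the ENSO contribution (re-centered so the anomaly baseline is unchanged)
# from every dataset column to get the ENSO-adjusted series.
enso_adjusted = anoms.sub(enso_term - enso_term.mean(), axis=0)
```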

Another thing that may jump out at you is the Pinatubo 1991 eruption era. The discrepancy between CMIP5 and observation may be due to the handling of the eruption and its effects. It looks to me like CMIP5 underestimates the effect and as a result does not cool the planet enough in the succeeding months. Had CMIP5 pulled the temperature down another 0.2 C, it looks like the prediction would have been a much better match to observations later in the period.

[Figure: ENSO-adjusted datasets with updated trends]

 


Nice set of posts. Shows how predictable the global mean temperature is and how strong the signal to noise ratio is for greenhouse gas warming.


I was just made aware today that NOAAGlobalTemp was upgraded last week from 5.0 to 5.1. The trend from 1979/01 to 2022/12 increased from 0.172 C/decade to 0.180 C/decade.

https://www.ncei.noaa.gov/products/land-based-station/noaa-global-temp

The big change is the implementation of full spatial coverage.

See Vose et al. 2021 for details on the changes.

Here is the updated graphic based on preliminary data I had to download manually. 

[Figure: updated dataset comparison including NOAAGlobalTemp v5.1]


  • 4 weeks later...

A few days ago I had a contrarian tell me that I should be assessing model skill based on older model runs. So by request I have swapped CMIP5 for CMIP3. I also added JRA and MERRA to the mix. This graphic now contains 10 global average temperature datasets (1 radiosonde, 2 satellite, 3 reanalysis, and 4 surface). Anyway, notice that CMIP3 predicted +0.21 C/decade and the multi-dataset composite is +0.20 ± 0.06 C/decade. That's pretty good.

[Figure: 10-dataset comparison vs the CMIP3 prediction]


On 3/21/2023 at 9:34 PM, csnavywx said:

you wanna see a scary good prediction -- try hansen '81 scen B compared to actual surface Ts

To me it is interesting that in the thesis it is written, "...This temperature increase is consistent with the calculated effect due to measured increases of atmospheric carbon dioxide..."

You know, this goes back to the Keeling curve (1958+) analysis - later, superimposing the temperature rise over the CO2 increases demonstrates a pretty dead match.

It's like these Hansen et al findings are really just adding to a consilience. The science of climate change keeps formulating new approaches that just add to a correlative pool.  

Of course, 1981 is a long while ago. Any skepticism about the CO2 correlation by this point in history is either contrived by faux science or all but immoral. But it is interesting that the history/evolution of the present day understanding really should be more institutional at this point -

Personal furthering notion... the problem in that latter sense (I suspect) is human limitation related, a limitation that is there by way of all evolution of life at a biological level; constructing truth has to happen by means of "corporeal-based reality," meaning what is directly observable via the 5 senses.

You tell a person to beware of something, they'll bear it in mind. Only if they hear, see, smell, taste or touch, sample that something directly, will they actually react. This specter of CC, prior to perhaps 10 or 15 years ago, had no corporeal advocate. Simply put, it was invisible. Combine that with the sheer grandeur of the whole scale of the planet not being mentally tenable to most (let's get real), and acceptance --> recourse was going to be racing against time with a broken leg.

It really is only beginning to manifest in such a way that it can be seen or heard or felt, etc., and even these cataclysms are almost shelved because they are media spectacles and drama someplace else, while for the vast majority technology keeps civilization blinded.

Eventually, though, the industrial bubble does have a fragility. And until CC, either directly or through the indirection of a multitude of factors, finally pops that protection, humanity will unfortunately continue to be shielded from the one thing it (apparently) must have in order to at last react and learn -

pain.

All the while, the wrong momentum gathers.... 


NOAA STAR just released v5 of their satellite dataset. STAR has been around for a while but until now has not provided a global average temperature timeseries. For that reason I never bothered looking at it... until now. In the past I have been critical of UAH because I have felt it is an outlier. I'm going to temper my criticism based on what I'm seeing from STAR.

One thing I've done in this thread is to compare global average temperature datasets. I should point out that the comparisons I have made above aren't exactly apples-to-apples comparisons since the radiosonde and satellite datasets are not measuring the same thing as the surface and reanalysis datasets. For example, RATPAC is the mean of 850-300mb while RSS and UAH each have their own TLT weightings providing bulk means focused on different atmospheric layers. Not only does this make interpretation of the comparisons above a bit nebulous, it even makes comparing satellite-only datasets nebulous.

What I'm going to do here is apply the Fu et al. 2004 method of deriving a "corrected" TMT product for each satellite dataset. This allows me to make a like-for-like comparison of them. The corrected TMT product is simple: FuTMT = 1.156*TMT - 0.153*TLS. Interestingly, Zou et al. 2023 produce a total tropospheric temperature (TTT) product computed as TTT = 1.15*TMT - 0.15*TLS, nearly identical to the Fu et al. 2004 method. Both methods are designed to remove the stratospheric cooling contamination from the TMT product. Anyway, since all 3 datasets (UAH, RSS, and STAR) provide TMT and TLS products from 1979 to present it is pretty easy to make the comparison.
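The correction itself is a one-liner; here is a sketch assuming monthly global-mean TMT and TLS anomaly series from the same dataset, aligned on a common index and baseline.

```python
import pandas as pd

def corrected_tmt(tmt: pd.Series, tls: pd.Series) -> pd.Series:
    """Remove stratospheric cooling contamination from TMT using the Fu et al. 2004 weights."""
    # Zou et al. 2023 TTT uses nearly identical weights: 1.15*TMT - 0.15*TLS
    return 1.156 * tmt - 0.153 * tls
```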

Notice that UAH and STAR largely agree, at least according to the linear regression trend. Based on this I have no choice but to consider UAH in a new light. Maybe it isn't an outlier after all. I'll reserve judgement until the experts weigh in on the STAR methodology. It is important to point out that this is a double edged sword. While STAR agrees more with UAH than RSS in terms of the linear trend, it does not agree so well with either in terms of the trend after 2002. As Zou et al. 2023 note, STAR shows a significant acceleration in the warming, even beyond that of UAH and RSS.

References

Zou et al. 2023

Fu et al. 2004 (paywalled)

[Figure: Fu-corrected TMT comparison for UAH, RSS, and STAR]


In addition to the Hansen et al. 1981 prediction, the IPCC FAR prediction from 1990 is notable as well. This prediction is summarized in the SPM via figure 5 (pg. xix) for the emissions scenarios and figure 9 (pg. xxiii) for the predictions. I think we could debate the details of which pathway humans chose, but we could probably all at least agree that it was closer to scenario B than scenario A. I land more in the camp that we likely chose something just slightly under scenario B based on the CH4 and CFC11 emissions. Given that, it is more likely IMHO that the IPCC, like Hansen et al. 1981, underestimated the warming, since HadCRUTv5 shows 0.66 C of warming, putting us just slightly above the scenario B prediction. That's pretty good for a 30 year prediction. But the main point here is that contrarian statements that the IPCC didn't get the global average temperature prediction right are farcical.

[Figure: IPCC FAR SPM figure 5 (emissions scenarios)]

[Figure: IPCC FAR SPM figure 9 (temperature predictions)]


12 hours ago, Typhoon Tip said:

All the while, the wrong momentum gathers.... 

Speaking of momentum:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0237672

[Figure 4 from the linked PLOS ONE paper]

 

Probably one of the most interesting papers I've read in some time.

Presents the issue as one of a giant momentum trap that's not easy to maneuver out of. Adding RE (renewable energy) simply grows the entire system of civilizational networks, growing the entire pie (including the fossil part) as you would a snowflake if you added water vapor to the environment around the branches (the facet competes with them but nevertheless still benefits from an expansion of the branches). Efficiency gains do nothing to decarbonize on a global scale because they are used for growth. Direct replacement possible but not easy because of offsetting network effects. Slowing its growth via constriction of energy consumption (like '09 GFC or '20 Covid) possible temporarily but causes big disruption and RE replacement efforts are often sidelined in favor of quick recovery. This growing "superorganism" does not like to be starved and will hurt everybody until it is properly fed again, so to speak.

As for global temperature, that kind of inertia argues for a continued expansion of EEI (earth energy imbalance) as we warm -- and that's exactly what we've seen. EEI continues to rip higher even after the last big ENSO cycle.


11 hours ago, csnavywx said:

Speaking of momentum:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0237672


While for the vaster majority, technology blinds civilization.

These synergistic deadly heat waves, or freak 30" snow falls, California being bombed by 10 years of precipitation inside a single winter... or methane hydrate blowouts in Siberia... sea level rise...  oceanic deoxygenation..., humanity is just too ill-conditioned to really quantize those troubling omens for being pacified in perpetuity. It's really a failure about our nature:  too much hesitation unless directly harmed, right now. 

There's an interesting rub about that ...  how technology shelters inside the industrial bubble.  Technology is a bit of a metaphor for a virus on this planet - perhaps your "superorganism."  Just like a virus, it fools the natural "immuno-response," the ability to stop injury by way of blocking response mechanisms - in this sense, feeling the negative impact of our actions. Such that the pernicious result of our actions are largely unknowable by soothing the senses while technology consumes.

By the way... I have nothing against tech.  In fact, we are already too committed to it to survive a set back ( which is likely coming ). It's a race. We created tech and sold our evolutionary souls to it, and now we are inextricably dependent upon it . For example, should the grid truly fail, 1/3 the population is dead in 1 month.  Of the remainder,  70-90% estimated gone in 1 or 2 years ( this is not true for those human pockets that don't live in the socio-technological dependency, btw).  Whether reality in such a d-day scenario bears itself out, for the sake  of discussion we all carry about in an assumption most don't know they're making.  

What needs to happen is that technologies need to be invented to compensate for the injury of the technology that has already been invented. That's a tricky race... probably one that will fail, because evolution never gets it right in one try. Everything we see in the natural order that presently survives is successful after millions of years of trial-and-error, where the "errors" did not result in blowing themselves to kingdom come, metaphorically speaking... 500 years of industrialization of the planet hardly seems like it's withstood the crucible of time.

This techno-evolutionary leap in humanity is iteration #1


11 hours ago, csnavywx said:

Speaking of momentum:

https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0237672


Meh. In the long-term CO2 concentrations are going to turn around and head back to 280 ppm no matter what our cumulative economic production is.

 


35 minutes ago, Typhoon Tip said:

What needs to happen is technologies need to be invented to compensate for the injury of the technology that has already been invented.   That's a tricky race... Probably one that will fail - because evolution never gets it right in one try.  Everything we see in the natural order that presently survives is successful after millions of years of trial-and-error, where the "errors" did not result in blowing themselves to kingdom come - metaphorically speak...   500 years of industrialization of the planet hardly seems like it's withstood the crucible of time.

This techno-evolutionary leap in humanity is iteration #1

I'm not sure this is going to work; at some point you come up against a ceiling of what is possible on a single planet in terms of population, amount of technology, and the resources available. I'm sure you are aware of Overshoot Day, the day each year when we run out of the amount of resources the Earth can produce in that year. It's simple math really: unless we start offloading people from the planet, nature is going to do all of this for us.

https://www.overshootday.org/

Earth's carrying capacity is 10 billion, and if anyone thinks that any kind of technology will save us against the simple math of limited resources, a finite size, and exponential population growth, they are just kidding themselves -- sometimes literally!

 

 


https://www.theworldcounts.com/populations/world/10-billion-people

 

People don't like to hear it, but either they change their way of life... or else...

We are already over-exploiting the Earth's biocapacity by 75 percent. Put differently, humanity uses the equivalent of 1.75 Earths to provide the natural resources for our consumption and absorb our waste. And the world population is growing by more than 200,000 people a day.


How many people can the Earth sustain?

3.7 billion?

Our current way of life caused humanity to hit the Earth’s limit already around 1970 when the world population was 3.7 billion - less than half the population of today. Since the 1970s, we have been living in so-called “ecological overshoot” with an annual demand on resources exceeding what Earth can regenerate each year. In other words, we are taking an ecological “loan” and asking future generations to pay it back.

7.7 billion?

Is 7.7 billion people the sustainable limit?

A meta-analysis of 70 studies estimates the sustainable limit to the world population at 7.7 billion people. World population as of 2020: 7.8 billion...

10 billion?

Can Earth sustain 10 billion?

According to an article in Live Science, many scientists think that Earth has a maximum capacity to sustain 9-10 billion people. One of these scientists is the Harvard sociobiologist Edward O. Wilson. He believes the Earth can sustain 10 billion - but it requires changes:

"If everyone agreed to become vegetarian, leaving little or nothing for livestock, the present 1.4 billion hectares of arable land (3.5 billion acres) would support about 10 billion people" 

- Edward O. Wilson, Harvard sociobiologist.
We need to change

It’s clear that whatever the maximum number of people the Earth can sustain, we need to change! If we continue our current consumption patterns we will slowly but steadily use up the planet’s resources. If nothing changes we will need two planets by 2030.

1970: 3.7 billion people = 1 planet Earth

2030: 8.5 billion people = 2 planet Earths

2057: 10 billion people = ??? planet Earths

According to the global Footprint Network, in 2030 - when the global population has reached an estimated 8.5 billion people - we will need 2 planets to support the human population. Imagine how many we’ll need when we reach 10 billion people...

But of course, change is possible!

The UN paper: “How Many People? A Review of Earth’s Carrying Capacity” presents three different routes of change:

  1. The “bigger pie” scenario: Technical evolutions in green energy and materials efficiency and reuse mean that we can get more out of the resources Earth has.
  2. The “fewer forks” scenario: Meaning simply fewer people.
  3. The “better manners” scenario: Humanity (as in every single one of us) reduces our impact on the planet and makes decisions based on the full impact on Earth and ecosystems (complete internalization of costs in economic terms). Examples are the use of renewable energy and reuse of materials instead of throwing them out.

The paper concludes that a combination of all three will surely be needed.

