About bdgwx
- Posts: 1,216
- Birthday: 10/19/1977
- Four Letter Airport Code For Weather Obs (Such as KDCA): KSTL
- Gender: Male
- Location: St. Louis
-
With the onset of El Nino and the possibility that 2023 could set a new record in some datasets, I thought it might be nice to start a thread dedicated to 2023. As of this posting the Kalshi prediction market for a new record according to GISTEMP is trading at $0.31. https://kalshi.com/markets/gtemp/global-average-temperature-deviation The Brown & Caldeira 2020 method is showing about a 50% probability of a new record for 2023 and a 76% chance for 2024. https://www.weatherclimatehumansystems.org/global-temperature-forecast My own machine learning model is saying there is about a 50% chance as well, so this could be close.
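For anyone unfamiliar with reading prediction-market quotes, here is a toy sketch (my own illustration, not Kalshi's methodology; fees and bid-ask spread ignored) of treating a contract price as an implied probability: a Yes contract that settles at $1.00 and trades at $0.31 implies roughly a 31% market-assigned chance.

```python
# Toy sketch: reading a binary prediction-market price as an implied
# probability. Fees and bid-ask spread are ignored; `payout` is the
# contract's settlement value (assumed to be $1.00 here).
def implied_probability(price: float, payout: float = 1.00) -> float:
    return price / payout

print(f"{implied_probability(0.31):.0%}")  # 31%
```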
-
So far the dynamic models are beating the statistical models in terms of skill. As I understand it, that is typical through the spring predictability barrier, but once June comes around the statistical models start exhibiting skill similar to their dynamic counterparts. I wonder if we are going to see a jump in the statistical average on the June IRI update?
-
Does anyone know of actively updated ENSO forecasts from any of the AI/ML models (e.g. Taylor & Feng 2022 or Ham et al. 2019)? Apparently they have skill superior to both statistical and dynamic modeling per Chen et al. 2023. I wonder what these models are saying about the current ENSO cycle.
-
Maybe I misinterpreted, but I read that as saying it will only be a near-normal season despite the warm Atlantic because the expected robust El Nino should suppress activity. In other words, had the expectation been for a weak El Nino or a neutral ENSO phase, the CSU forecast might have been for an above-average season. Or had the Atlantic been cooler with a robust El Nino, the CSU forecast might have been for a below-average season. I read that as an acknowledgement of what is happening in both oceans. Did you read it differently?
-
As documented clearly on the GISS website, GISTEMP did not begin making corrections for the time-of-observation change bias, instrument package/shelter change bias, etc. until 1999. The graph on the left contains the biases while the one on the right is bias corrected. See Hansen et al. 1999. Also read Vose et al. 2003, Hubbard & Lin 2006, Menne & Williams 2009, Williams et al. 2012, and Hausfather et al. 2016. Beware those who don't acknowledge and attempt to correct for biases in historical data in any experiment...
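To illustrate why those corrections matter, here is a minimal sketch (my own toy example, not the actual NOAA/GISS pairwise homogenization algorithm) of detecting and removing a step change, such as a shelter change, by differencing a station against a nearby neighbor. The shared climate signal cancels in the difference series, leaving the inhomogeneity exposed.

```python
import numpy as np

# Toy homogenization sketch: a -0.3 C step from a hypothetical shelter
# change in 1985 is estimated from the target-minus-neighbor difference
# series (the shared climate trend cancels) and then removed.
rng = np.random.default_rng(0)
years = np.arange(1950, 2021)
climate = 0.01 * (years - 1950)                    # shared regional signal
target = climate + rng.normal(0, 0.05, years.size)
target[years >= 1985] -= 0.3                       # artificial inhomogeneity
neighbor = climate + rng.normal(0, 0.05, years.size)

diff = target - neighbor                           # trend cancels, step remains
step = diff[years >= 1985].mean() - diff[years < 1985].mean()
corrected = target.copy()
corrected[years >= 1985] -= step                   # remove the estimated step
print(round(step, 2))                              # close to -0.3
```

Real homogenization algorithms of course do much more (breakpoint detection without a known change date, multiple neighbors, significance testing), but the cancellation idea is the same.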
-
To put this in perspective: if (and that is a big IF right now) 2023 is record breaking and ENSO follows the IRI ensemble forecast, then a new record will have been achieved with an average 4-month lagged ONI of -0.15.
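For anyone curious about the bookkeeping, the 4-month-lagged annual ONI pairs each month of 2023 with the ONI value from four months earlier. A minimal sketch with made-up ONI values (not the actual IRI forecast, which is where the -0.15 figure comes from):

```python
# Hypothetical ONI values keyed by (year, month) -- for illustration only.
oni = {
    (2022, 9): -1.0, (2022, 10): -0.9, (2022, 11): -0.8, (2022, 12): -0.7,
    (2023, 1): -0.6, (2023, 2): -0.4, (2023, 3): -0.1, (2023, 4): 0.2,
    (2023, 5): 0.5, (2023, 6): 0.8, (2023, 7): 1.0, (2023, 8): 1.2,
}

def lag4(year, month):
    """Return the (year, month) four months before the given month."""
    m = month - 4
    return (year - 1, m + 12) if m < 1 else (year, m)

# Annual mean ONI lagged 4 months: Jan-Dec 2023 maps to Sep 2022 - Aug 2023.
lagged = [oni[lag4(2023, m)] for m in range(1, 13)]
print(round(sum(lagged) / 12, 2))
```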
-
This is getting interesting. My own model, using the latest IRI ENSO forecast, is now predicting that 2023 will come in 0.05 C above 2016 in the GISS record. I was not expecting a new record in 2023, so this does come as a bit of a surprise. Edit: Note that 2016 and 2020 are tied in the GISTEMP dataset.
-
It has certainly warmed since 1936-1965. However, more recently it has actually cooled. Note that I'm analyzing 1979/01 to 2023/04. Over that period the Nino 3.4 trend via ERSSTv5 is -0.05 C/decade. From 2011/01 to 2023/04 (not included in the chart above) the trend is -0.35 C/decade. The values I plotted above are the official ONI values using the published method, with the centered 30-year baseline updated every 5 years. Here is the spatial distribution of SST changes since 1993 provided by Copernicus. Notice the cooling that has occurred over much of the ENSO region over the last 30 years.
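For reference, my reading of the published ONI base-period rule (30-year climatologies, updated every 5 years, centered on each 5-year window, with recent years capped at the most recent fully available 30-year period) can be sketched as follows. This is an illustrative interpretation, not official CPC code, and the earliest years are a special case.

```python
# Sketch (my reading of the CPC ONI method, not official code): each
# 5-year window of ONI values uses a 30-year base period centered on it,
# capped at the most recent fully available 30-year climatology.
def oni_base_period(year: int, latest_base_start: int = 1991) -> tuple:
    window_start = 5 * ((year - 1) // 5) + 1   # windows: 1951-1955, 1956-1960, ...
    start = min(window_start - 15, latest_base_start)
    return start, start + 29

print(oni_base_period(1951))   # (1936, 1965)
print(oni_base_period(2023))   # (1991, 2020)
```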
-
Maybe. I created the following chart from ERSSTv5 using the published ONI method. The official ONI is plotted in blue. I then removed the global trend and plotted the result in orange. For those who may not be aware, the Nino 3.4 region has actually cooled slightly while the globe as a whole has warmed significantly. This causes negative ENSO phases to be accentuated and positive phases to be attenuated when viewed against the backdrop of the global average. I have no idea what effect this has on ENSO forcing, though. ERSSTv5 data for the Nino 3.4 region is available here. The global data can be downloaded via the WRIT website here.
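A minimal sketch of that detrending step (synthetic numbers, not ERSSTv5): subtract the fitted global-mean trend from the Nino 3.4 series, so ENSO phases are measured against the warming background rather than a fixed climatology.

```python
import numpy as np

# Synthetic illustration: the globe warms ~0.18 C/decade while Nino 3.4
# cools slightly, so removing the global trend shifts the Nino 3.4 trend
# further negative -- deepening negative phases relative to the background.
rng = np.random.default_rng(1)
years = np.arange(1979, 2024)
global_anom = 0.018 * (years - 1979)                      # C per year
nino34 = -0.005 * (years - 1979) + rng.normal(0, 0.5, years.size)

global_slope = np.polyfit(years, global_anom, 1)[0]
relative = nino34 - global_slope * (years - years[0])     # detrended vs globe

nino_trend = np.polyfit(years, nino34, 1)[0] * 10         # C/decade
rel_trend = np.polyfit(years, relative, 1)[0] * 10
print(round(nino_trend - rel_trend, 2))                   # 0.18, the removed global trend
```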
-
Speaking of lags, Nino 3.4 seems to be lagging warm water volume more than usual right now. I have to be honest: I'm still a novice when it comes to ENSO forecasts. What do you guys make of this?
-
According to Berkeley Earth... In Feb 2016 the monthly anomaly was 1.32 C. That is the highest in their period of record. In Mar 2023 the monthly anomaly was 1.24 C. Using a typical 4-month lag for the ENSO response, the 2016/02 value matches up with an ONI of 2.4 and the 2023/03 value matches up with -0.9. We were only 0.08 C shy of eclipsing the old record, which occurred during a super El Nino, while in the midst of a triple-dip La Nina. Yikes!
-
Here is the probability density function (PDF) corrected forecast. It is about 0.5 lower than the non-bias-corrected forecast.
-
It looks like the global SST excursion peaked at 4.7σ on April 23rd.
-
Global Average Temperature and the Propagation of Uncertainty
bdgwx replied to bdgwx's topic in Climate Change
I had a pretty insightful conversation with Dr. Frank recently, and I now know more about his methodology.

First, he revealed that he used Bevington 4.22 for steps (1b) and (2b). This is a problem for two reasons: 1) it is not the uncertainty of anything; it is only the weighted average variance of the data, and 2) it is an intermediate step intended to be used in conjunction with 4.23. Furthermore, Dr. Frank told me that 4.22 is used for systematic error while 4.23 is used for random error. That is patently false. Bevington says no such thing, and in fact Bevington seems to say the opposite; section 4, like most of the work, is for random error. Never mind that 4.22 does not even compute an uncertainty in the first place. Steps (1b) and (2b) are completely wrong.

Second, he is using the summation-in-quadrature rule sqrt[a^2 + b^2] for steps (1c) and (2c). It is important to note that this rule is only valid for random, uncorrelated error. The problem here is that he is trying to assess the uncertainty of the anomaly, which is a subtraction of the baseline from the observation, both of which share the same systematic error. That means there is correlation between the two values. For example, if the observation is biased by +0.2 C then the baseline will also be biased by +0.2 C, and when you do the subtraction the bias cancels since 0.2 - 0.2 = 0 C. To assess uncertainty when there is correlation you must use the law of propagation of uncertainty, which Bevington describes in section 3 and presents as equation 3.13. Note that this also appears as equation 16 in JCGM 100:2008.

There are other problems with his methodology; I was only focused on the equations he used in my most recent conversation with him. Anyway, I hope to have more content related to the global average temperature and our level of confidence in its estimation. The release of STARv5 is interesting, so I hope to be able to post about it and UAH/RSS in the not too distant future. I've also been able to acquire more datasets for study, and I've been experimenting with predictive models using machine learning to estimate what these various datasets are going to report before they publish.
-
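The bias-cancellation argument above can be checked numerically. A minimal Monte Carlo sketch (toy numbers, not Dr. Frank's data): an observation and its baseline share a systematic error, so naive quadrature overstates the anomaly's uncertainty, while the law of propagation of uncertainty with the covariance term (Bevington eq. 3.13; JCGM 100:2008 eq. 16) matches the simulated spread.

```python
import numpy as np

# Monte Carlo check: obs and baseline share the SAME systematic error
# (sigma = 0.2 C) plus independent random errors (sigma = 0.1 C each).
rng = np.random.default_rng(42)
n = 200_000
systematic = rng.normal(0, 0.2, n)               # identical for both values
obs = 15.0 + systematic + rng.normal(0, 0.1, n)
base = 14.5 + systematic + rng.normal(0, 0.1, n)
anom = obs - base                                # the shared bias cancels here

u_quad = np.sqrt(obs.std()**2 + base.std()**2)   # quadrature: assumes no correlation
cov = np.cov(obs, base)[0, 1]
u_lpu = np.sqrt(obs.std()**2 + base.std()**2 - 2 * cov)  # with covariance term

# Quadrature gives ~0.32 C; the covariance-aware result and the actual
# spread of the anomalies both come out near 0.14 C.
print(round(u_quad, 2), round(u_lpu, 2), round(anom.std(), 2))
```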
I'm not sure what the question is here. Are you asking how I knew it is done? Are you asking how it is done? Are you asking if it is different before vs after 1978? Are you asking how we know bias mitigation is necessary? Now I'm confused. Over here you posted the UAH satellite data timeseries and you seemed to accept it. And when you say you will take direct temperature measurements do you really mean you will only take direct temperature measurements as long as they are contaminated with the time-of-observation change bias, instrument package change bias, station relocation bias, etc? I ask because that is the message we're all receiving right now.