Posts posted by bdgwx

  1. On 6/15/2023 at 4:52 AM, chubbs said:

    Per this chart, Hunga Tonga had a net cooling effect but it is fading rapidly. The cooling effect was a bigger perturbation than marine SOx at its peak.

    https://github.com/ClimateIndicator/forcing-timeseries/tree/main/plots

    This is great. I've been looking for an updated volcanic aerosol dataset for a while now. I had no idea this existed. They even have it in an easy CSV file format and it goes through 2022.

    Anyway, it looks like H2O adds about 0.1 W/m2 to the imbalance. Like you said, the AOD portion is fading rapidly, so if the H2O portion is long term as scientists are expecting then we should expect a net positive, albeit small, effect from Hunga Tonga soon.

    Somewhat interesting...my machine learning model said a 5 month lag with GISTEMP was optimal for this volcanic aerosol dataset. It shows a -0.05 C adder to start the year that wanes to -0.02 C by the end after I extrapolate out the AOD decay based on what happened with Pinatubo.
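    For anyone curious, the lag selection is nothing fancy; it's basically a scan over candidate lags keeping whichever gives the best linear fit against GISTEMP. A minimal sketch of the idea (file and column names are hypothetical placeholders, not my actual model code):

```python
# Minimal sketch: pick the lag (in months) at which a forcing series best
# predicts GISTEMP anomalies via a simple linear fit. File/column names are
# hypothetical; the real model uses more predictors than this.
import numpy as np
import pandas as pd

def best_lag(forcing: pd.Series, temperature: pd.Series, max_lag: int = 12):
    """Return (best_lag, {lag: r_squared}) for a simple one-predictor fit."""
    scores = {}
    for lag in range(max_lag + 1):
        shifted = forcing.shift(lag)   # forcing leads temperature by `lag` months
        df = pd.concat([shifted, temperature], axis=1, keys=["x", "y"]).dropna()
        r = np.corrcoef(df["x"], df["y"])[0, 1]
        scores[lag] = r ** 2
    return max(scores, key=scores.get), scores

# aod = pd.read_csv("volcanic_aod_monthly.csv", index_col=0, parse_dates=True)["aod"]
# gistemp = pd.read_csv("gistemp_monthly.csv", index_col=0, parse_dates=True)["anomaly"]
# lag, scores = best_lag(aod, gistemp)   # a 5 month lag scored best for me
```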

  2. ERA has a pretty high correlation with the traditional datasets. It correlates with GISTEMP at R^2 = 0.89.

    It's the same story with JRA. The previous record via JRA was +0.34 (1991-2020 baseline) in 2019. So far the 2023 June value is +0.61. JRA's correlation with GISTEMP is R^2 = 0.89.

    https://climatlas.com/temperature/jra55_temperature.php

    Similarly with the GFS. The previous record via GFS was +0.55 (1981-2010 baseline) in 2019. So far the 2023 June value is +0.68. GFS's correlation with GISTEMP is R^2 = 0.78.

    http://www.karstenhaustein.com/climate.php
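    For clarity, the R^2 values I'm quoting are just the squared Pearson correlation of the overlapping monthly anomalies; something like this (data loading is hypothetical):

```python
# Sketch: squared Pearson correlation between two monthly anomaly series
# (e.g. ERA5 or JRA vs GISTEMP). Pearson correlation is unaffected by the
# choice of anomaly baseline, so differing baselines don't matter here.
import numpy as np
import pandas as pd

def r_squared(a: pd.Series, b: pd.Series) -> float:
    """Squared Pearson correlation over the months the two series share."""
    df = pd.concat([a, b], axis=1, keys=["a", "b"]).dropna()
    return float(np.corrcoef(df["a"], df["b"])[0, 1] ** 2)

# era = pd.read_csv("era5_monthly.csv", index_col=0, parse_dates=True)["anomaly"]
# giss = pd.read_csv("gistemp_monthly.csv", index_col=0, parse_dates=True)["anomaly"]
# print(r_squared(era, giss))
```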

     

     

    The May GISTEMP value came in at 0.94 C. This was a massive deviation and miss from my 1.05 ± 0.08 C expectation. A miss of that magnitude does not happen very often. And because May came in so much lower than my expectation, my current June expectation drops to 1.06 ± 0.15 C. However, the Jan, Feb, and Mar values were each revised 0.01 C higher, so instead of dropping 0.11 from the yearly sum we only dropped 0.08, and the impact on the yearly average isn't as much as one might naively think. My current expectation for the full year average is now 1.05 ± 0.09, resulting in a probability of a new record (>= 1.03 C) of 67%.

    I should note that Nick Stokes' TempLS dataset has a very high correlation with GISTEMP (R^2 = 0.97) and gets released several days prior to GISTEMP. His dataset was suggesting the May anomaly would come in at 0.96 ± 0.06 C. As of the time of my 1.05 ± 0.08 C expectation I was not exploiting Nick's data. I will do so going forward.
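    By "exploiting Nick's data" I mean something as simple as regressing GISTEMP on TempLS over their overlap and applying the fit to the newest TempLS value. A rough sketch of that idea (not Nick's code and not my production model):

```python
# Sketch: predict the upcoming GISTEMP anomaly from the already-published
# TempLS anomaly using an ordinary least-squares fit over the overlap period.
import numpy as np

def fit_and_predict(templs_hist, gistemp_hist, templs_new):
    """Fit gistemp ~ a*templs + b, return (prediction, rough 2-sigma band)."""
    x = np.asarray(templs_hist, dtype=float)
    y = np.asarray(gistemp_hist, dtype=float)
    a, b = np.polyfit(x, y, 1)
    resid = y - (a * x + b)
    sigma = resid.std(ddof=2)        # residual spread with 2 fitted parameters
    return a * templs_new + b, 2.0 * sigma

# pred, band = fit_and_predict(templs_series, gistemp_series, 0.96)
```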

    I'm skeptical of the Hunga Tonga effect myself. I've been tracking stratospheric temperature anomalies since the eruption, and while there were very noticeable effects for about 8 months, I can no longer see much of an effect. That's not to say that I don't think it will cause some warming, but I think it will be low enough that it will be indistinguishable from the noise.

    I do think there is merit to the marine aerosol reduction though. Even a tenth of a watt change in the Earth energy imbalance (EEI) would be significant. Many of the marine emission rules went into effect in 2020, so it's quite possible that at least some of the record setting temperatures can be traced back to the high EEI. If you look at @chubbs post above you'll see the CERES EEI is now +1.5 W/m2. CERES EEI calculations are known to have high uncertainty [Loeb et al. 2021], but +1.5 W/m2 is still high enough to raise eyebrows.
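    To put those EEI numbers in perspective, here is the back-of-the-envelope conversion from W/m2 to total heat accumulation per year (the only inputs are Earth's surface area and the number of seconds in a year):

```python
# Back-of-the-envelope: total heat accumulation implied by an Earth energy
# imbalance, expressed in zettajoules per year.
EARTH_SURFACE_AREA_M2 = 5.1e14     # ~5.1 x 10^14 m^2
SECONDS_PER_YEAR = 3.156e7

def heat_uptake_zj_per_year(eei_w_per_m2: float) -> float:
    """Convert an energy imbalance (W/m^2) into ZJ accumulated per year."""
    return eei_w_per_m2 * EARTH_SURFACE_AREA_M2 * SECONDS_PER_YEAR / 1e21

print(heat_uptake_zj_per_year(1.5))   # ~24 ZJ/yr
print(heat_uptake_zj_per_year(0.1))   # ~1.6 ZJ/yr ... even a tenth of a watt is a lot
```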

    There's no doubt that the main contributor to the higher 2023 temperatures is GHGs though. I think what is catching some off guard (like myself) is that the temperatures are higher than our expectations given that the 4-month-lagged ONI corresponding to May was -0.7...well into La Nina territory. In other words, we're still under the La Nina influence, albeit transitioning out now. Is it transient variation or is the warming really accelerating?

    One other significant contributing factor: the North Atlantic SSTs are believed to be high partly due to very low Saharan dust levels.

     

  5. 6 hours ago, TheClimateChanger said:

    This is why I always used to say I'm a climate skeptic. Skeptical that it wasn't much worse than the mainstream view, not that it wasn't happening. They say it's warmed 1C since preindustrial, but in fact it's warmed nearly 1C just since the 1981-2010 average. This should be a smoking gun that they've been downplaying this, but I doubt that's how many will report it.

    I hear what you're saying. I take more of a conservative position myself, but I will say that Hansen et al. 2022 indict the scientific community, and especially the IPCC, for "gradualism", so you've got support from others regarding your position. And we're talking about big names like Hansen, Schuckmann, Loeb, etc. who made this indictment.

  6. The current data is interesting to say the least. I've been tracking both the record setting daily SST and daily 2mT. I'm not sure what to think of this. These are such extreme anomalies that they are statistically unlikely to persist even in a warming world so it makes me think they are transient and there will be a reversion to the trend soon. 

    I have my model updated to better predict near term GISTEMP values. My May GISTEMP expectation is still 1.05 C, but with a much reduced uncertainty of ±0.08 C. And even though the May update isn't even published yet, I'm already starting to see a big jump up in the June expectation given the current data. I'm going to go with 1.10 ± 0.16 C for June. And that could be low if temperatures don't come down from the first 1/3 of the month. I'm not going to post the monthly breakdown until the May GISTEMP and June IRI ENSO forecast are published, but a sneak peek at the final 2023 expectation does get bumped up to 1.07 ± 0.09, which puts the odds of a new record at 78%.

     

  7. 2 hours ago, Typhoon Tip said:

    Not sure I'm jiving with any distinction of ENSO warm biases, to date, when the whole "oceanasphere" is still recovering from a +1.5C spring spike ...

    Yeah, that spike is running nearly 4σ above the 1981-2010 average and over 1σ above the previous record from all the way back in 2022 (via OISST). I'm not sure how this will affect the evolution of the current ENSO cycle or how it will affect the global scale circulation patterns. I do think a lot of this spike is occurring outside the tropical region, but at a cursory glance it appears there is enough contribution in the tropical region that it may attenuate the RONI values more than it might have otherwise.
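    For reference, the "nearly 4σ" figure is just the daily global-mean SST expressed as a z-score against its day-of-year statistics over 1981-2010. A rough sketch of that calculation (data loading is hypothetical; I'm assuming OISST daily global means):

```python
# Sketch: express a daily global-mean SST value as a z-score relative to the
# 1981-2010 statistics for that day of year. Data loading is hypothetical.
import pandas as pd

def sst_zscore(daily_sst: pd.Series, value: float, day_of_year: int) -> float:
    """Standard deviations above the 1981-2010 mean for the given day of year."""
    base = daily_sst.loc["1981":"2010"]
    same_day = base[base.index.dayofyear == day_of_year]
    return (value - same_day.mean()) / same_day.std()

# daily = pd.read_csv("oisst_global_daily.csv", index_col=0, parse_dates=True)["sst"]
# print(sst_zscore(daily, daily.iloc[-1], daily.index[-1].dayofyear))
```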

  8. On 6/8/2023 at 2:07 PM, chubbs said:

    I made an initial bet at Kalshi this week. Looks like a big enso swing this year and per chart below the satellite measured energy imbalance is at record levels.

    Yep. I just set up my account today. I'll probably engage next week. Obviously I think the market is undervalued.

    I just updated my model with the latest data a few minutes ago. Below are my current expectations with 2σ confidence intervals. This averages out to 1.05 C. I'm estimating the uncertainty on the average at about ±0.09 C right now. That puts the probability of >= 1.03 at about 67%. My hunch tells me that is high. But past experience also tells me that hunches are terrible predictors so I don't know.

    Interestingly, I don't actually have my model set up to optimally predict GISTEMP right now. I have it set up in a non-autocorrelated mode because I use it primarily to help explain the long term trend and short term variability, not to make informed near term predictions. I'll see if I can get it better tuned for 6 month lead time predictions. I also need to get the GISTEMP source code running on my machine again. I was actually running it a few years ago when I participated in another prediction market. I can use it to drive down the current month prediction uncertainty if I can get the daily ERSST and GHCN-M files incorporated into it.

    Note #1. The June uncertainty is still high because only 1 week of data is in. The uncertainty improves throughout the month.

    Note #2. My model has been low biased the first 4 months of the year so far.

    Note #3. I usually list the last few reported months as ±0.01-0.02 C because some observations are delayed by a few months getting into the repositories. Older months can change as well; it's just not as likely as for the most recent few.

    Jan: 0.86 ± 0.00 C

    Feb: 0.97 ± 0.01 C

    Mar: 1.20 ± 0.01 C

    Apr: 1.00 ± 0.02 C

    May: 1.05 ± 0.18 C

    Jun: 1.03 ± 0.22 C

    Jul: 1.02 ± 0.23 C

    Aug: 1.04 ± 0.24 C

    Sep: 1.07 ± 0.26 C

    Oct: 1.10 ± 0.26 C

    Nov: 1.12 ± 0.26 C

    Dec: 1.13 ± 0.26 C

    2023 Average: 1.05 ± 0.09 with 67% chance of a new record (>= 1.03)
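    For those wondering where the 67% comes from: I treat the annual average as normally distributed with the ±0.09 interpreted as a 2σ interval and compute the chance of meeting or beating 1.03 C. The arithmetic looks like this:

```python
# Probability of a new annual record given a mean and a 2-sigma uncertainty
# on the 2023 average, assuming a normal distribution.
from statistics import NormalDist

def record_probability(mean: float, two_sigma: float, record: float) -> float:
    """P(annual anomaly >= record) under N(mean, (two_sigma/2)^2)."""
    return 1.0 - NormalDist(mean, two_sigma / 2.0).cdf(record)

print(record_probability(1.05, 0.09, 1.03))   # ~0.67
```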

  9. With the onset of El Nino and the possibility that 2023 could be a new record in some datasets I thought it might be nice to start a thread dedicated to the 2023 global average temperature.

    As of this posting the Kalshi prediction market for a new record according to GISTEMP is trading at $0.31. 

    https://kalshi.com/markets/gtemp/global-average-temperature-deviation

    The Brown & Caldeira 2020 method is showing about a 50% probability of a new record for 2023 and a 76% chance for 2024.

    https://www.weatherclimatehumansystems.org/global-temperature-forecast

    [chart: Brown & Caldeira 2020 global temperature forecast]

    My own machine learning model is saying there is about a 50% chance as well so this could be close.

    [chart: my machine learning model's 2023 forecast]

     

    So far the dynamic models are beating out the statistical models in terms of skill. As I understand it, that is typical through the spring forecast barrier, but once June comes around the statistical models start exhibiting similar skill to their dynamic counterparts. I wonder if we are going to see a jump up in the statistical average on the June IRI update?

  11. 22 minutes ago, etudiant said:

    Have to say that this is disappointing.

    It recognizes the warmth in the Atlantic and then says  it does not count because the Pacific is having an unusual El nino.

    Basically says only global models have value, basin wide analysis is inadequate.

    Guess I'm not cut out to be a meteorologist, I've no grasp of global models..

    Maybe I misinterpreted, but I read that as saying it will only be a near normal season despite the warm Atlantic, due to the robust El Nino expectation mitigating activity. In other words, had the expectation been for a weak El Nino or neutral ENSO phase then the CSU forecast might have been for an above average season. Or had the Atlantic been cooler with a robust El Nino then the CSU forecast might have been for a below average season. I read that as an acknowledgement of what is happening in both oceans. Did you read it differently?

  12. 2 hours ago, ChescoWx said:

    Beware those who change historical data in any experiment....we have rewritten 1930 and 1999 weather history

    [two images: temperature history charts, uncorrected (left) and bias corrected (right)]

    As documented clearly on the GISS website, GISTEMP did not begin making the corrections for the time-of-observation change bias, instrument package/shelter change bias, etc. until 1999. The graph on the left contains the biases while the one on the right is bias corrected. See Hansen et al. 1999. Also read Vose et al. 2003, Hubbard & Lin 2006, Menne & Williams 2009, Williams et al. 2012, and Hausfather et al. 2016.

    Beware those who don't acknowledge and make an attempt to correct for biases in historical data in any experiment...
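    For anyone wondering what a correction like that actually does mechanically, here is a toy illustration of removing a documented step change (e.g. a time-of-observation switch or instrument change) by comparing a station to a reference series before and after the known breakpoint. This is only a cartoon of the idea; NOAA's actual pairwise homogenization algorithm (Menne & Williams 2009) is far more involved.

```python
# Toy illustration only: remove a documented step bias from a station series
# using its offset from a reference series before vs after a known breakpoint.
# This is NOT the NOAA pairwise homogenization algorithm.
import numpy as np

def adjust_step(station: np.ndarray, reference: np.ndarray, breakpoint: int) -> np.ndarray:
    """Shift the pre-breakpoint segment so its offset from the reference
    matches the post-breakpoint offset."""
    diff = station - reference
    offset = diff[:breakpoint].mean() - diff[breakpoint:].mean()
    adjusted = station.copy()
    adjusted[:breakpoint] -= offset
    return adjusted

# Synthetic example: a +0.5 C bias before an instrument change at index 60.
# rng = np.random.default_rng(0)
# truth = rng.normal(size=100)
# biased = truth + np.where(np.arange(100) < 60, 0.5, 0.0)
# fixed = adjust_step(biased, truth, 60)
```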

     

  13. This is getting interesting. My own model using the latest IRI ENSO forecast is now predicting that 2023 will come in 0.05 C above 2016 in the GISS record. I was not expecting a new record in 2023 so this does come as a bit of a surprise. 

    Edit: Note that 2016 and 2020 in the GISTEMP dataset are tied.

  14. 1 hour ago, chubbs said:

    ENSO region 3.4 has warmed along with the rest of the globe. As described in the ENSO blog linked below, NOAA uses a 30-year ONI baseline which is updated every 5 years. Note also that baselines are centered, so recent years don't have their final ONI values yet. 2023 won't be finalized until the 2006-2035 baseline is available.

    https://www.climate.gov/news-features/understanding-climate/watching-el-niño-and-la-niña-noaa-adapts-global-warming

     

     

     

    [chart: Niño 3.4 climatology]

    It has certainly warmed since 1936-1965. However, more recently it has actually cooled. Note that I'm analyzing 1979/01 to 2023/04. Over that period the ENSO 3.4 trend via ERSSTv5 is -0.05 C/decade. From 2011/01 to 2023/04 (not included in the chart above) the trend is -0.35 C/decade. The values I plotted above are the official ONI values using the published method with the centered 30 year baseline updated every 5 years.
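    The trend figures are just ordinary least-squares slopes converted to C/decade, and the ONI values follow the published recipe (monthly Niño 3.4 anomalies against centered 30-year baselines updated every 5 years, then a 3-month running mean). A rough sketch of both, with hypothetical data loading and a simplified baseline table (recent blocks fall back to 1991-2020 until their centered period is complete):

```python
# Sketch: linear trend in C/decade and ONI-style anomalies from a monthly
# Nino 3.4 SST series. Data loading is hypothetical; see CPC's documentation
# for the official base-period table.
import numpy as np
import pandas as pd

def trend_c_per_decade(series: pd.Series) -> float:
    """OLS slope of a monthly series expressed in C per decade."""
    x = np.arange(len(series)) / 120.0          # months -> decades
    slope, _intercept = np.polyfit(x, series.values, 1)
    return slope

def oni_like(nino34: pd.Series) -> pd.Series:
    """Anomalies for each 5-year block against the 30-year climatology
    centered on that block (capped at 1991-2020 for recent years), then a
    centered 3-month running mean."""
    anoms = pd.Series(index=nino34.index, dtype=float)
    for year in nino34.index.year.unique():
        block_start = 1986 + 5 * ((year - 1986) // 5)   # e.g. 2006 -> 2006-2010 block
        base_start = min(block_start - 15, 1991)        # 2006-2010 block -> 1991-2020
        base = nino34.loc[str(base_start):str(base_start + 29)]
        clim = base.groupby(base.index.month).mean()
        mask = nino34.index.year == year
        anoms[mask] = nino34[mask].values - clim.loc[nino34.index.month[mask]].values
    return anoms.rolling(3, center=True).mean()

# nino34 = pd.read_csv("ersstv5_nino34_monthly.csv", index_col=0, parse_dates=True)["sst"]
# print(trend_c_per_decade(nino34.loc["1979-01":"2023-04"]))
```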

    Here is the spatial distribution of SST changes since 1993 provided by Copernicus. Notice the cooling that has occurred over much of the ENSO region over the last 30 years.

    [map: spatial distribution of SST trends since 1993, from Copernicus]

  15. 5 hours ago, 40/70 Benchmark said:

    I wonder if cool ENSO forcing is accentuated and warm ENSO forcing attenuated under the warming global canvas....

    Maybe.

    I created the following chart from ERSSTv5 using the published ONI method. The official ONI is plotted in blue. I then removed the global trend and plotted the result in orange.

    For those that may not be aware, the ENSO 3.4 region has actually cooled slightly while the globe as a whole has warmed significantly. This causes negative ENSO phases to be accentuated and positive phases to be attenuated when viewed against the backdrop of the global average. I have no idea what effect this has on ENSO forcing though.

    ERSSTv5 data for the ENSO 3.4 region is available here. The global data can be downloaded via the WRIT website here.
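    Roughly, the "remove the global trend" step amounts to fitting a least-squares line to the global mean series over the same months and subtracting that trend from the ONI values. A sketch of the idea (not my exact script; data loading omitted):

```python
# Sketch: subtract the global-mean SST linear trend from the ONI series so
# ENSO phases are measured against the warming background. This is one
# plausible way to do it, not necessarily the only way.
import numpy as np
import pandas as pd

def remove_global_trend(oni: pd.Series, global_sst: pd.Series) -> pd.Series:
    """Return ONI with the linear trend of the global series subtracted."""
    df = pd.concat([oni, global_sst], axis=1, keys=["oni", "glob"]).dropna()
    x = np.arange(len(df), dtype=float)
    slope, intercept = np.polyfit(x, df["glob"].values, 1)
    trend = slope * x + intercept
    return df["oni"] - (trend - trend.mean())    # de-mean so the ONI level is preserved
```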

    [chart: official ONI (blue) vs. ONI with the global trend removed (orange)]

     

  16. According to Berkeley Earth...

    In Feb 2016 the monthly anomaly was 1.32 C. This is the highest in their period of record.

    In Mar 2023 the monthly anomaly was 1.24 C. 

    Using a typical 4 month lag for the ENSO response, the 2016/02 value matches up with an ONI of 2.4 and the 2023/03 value matches up with -0.9.

    We were only 0.09 C shy of eclipsing the old record, which occurred during a super El Nino, while in the midst of a triple dip La Nina. Yikes!
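    The lag matching above is just pairing each monthly anomaly with the ONI from 4 months earlier; in code it's a one-line shift (data loading is hypothetical):

```python
# Sketch: pair monthly temperature anomalies with the ONI value from 4 months
# earlier, a typical lag for the global temperature response to ENSO.
import pandas as pd

def pair_with_lagged_oni(anomaly: pd.Series, oni: pd.Series, lag: int = 4) -> pd.DataFrame:
    """Align each monthly anomaly with the ONI from `lag` months earlier."""
    return pd.concat([anomaly, oni.shift(lag)], axis=1,
                     keys=["anomaly", "oni_lagged"]).dropna()

# paired = pair_with_lagged_oni(berkeley_monthly, oni_monthly)
# paired.loc["2016-02"]   # ~1.32 C against an ONI of ~2.4
# paired.loc["2023-03"]   # ~1.24 C against an ONI of ~-0.9
```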

  17. On 2/24/2022 at 9:21 PM, bdgwx said:

    The entirety of the conclusion from the Frank 2010 publication boils down to this series of calculations.

    (1a) σ = 0.2 "standard error" from Folland 2001

    (1b) sqrt(N * 0.2^2 / (N-1)) = 0.200 where N is the number of observations (2 for daily, 60 for monthly, etc.)

    (1c) sqrt(0.200^2 + 0.200^2) = 0.283

    (2a) σ = 0.254 gaussian fit based on Hubbard 2002

    (2b) sqrt(N * 0.254^2 / (N-1)) = 0.254 where N is the number of observations (2 for daily, 60 for monthly, etc.)

    (2c) sqrt(0.254^2 + 0.254^2) = 0.359

    (3) sqrt(0.283^2 + 0.359^2) = 0.46

    I had a pretty insightful conversation with Dr. Frank recently. I now know more about his methodology.

    First, he revealed that he used Bevington 4.22 for steps (1b) and (2b). This is a problem for two reasons: 1) it is not the uncertainty of anything and is only the weighted average variance of the data, and 2) it is an intermediate step intended to be used in conjunction with 4.23. Furthermore, Dr. Frank told me that 4.22 is used for systematic error while 4.23 is used for random error. That is just patently false. Bevington says no such thing. And, in fact, Bevington seems to say the opposite. Section 4, like most of the work, is for random error. Never mind that 4.22 does not even compute uncertainty in the first place. Steps (1b) and (2b) are completely wrong.

    Second, he is using the summation in quadrature rule sqrt[a^2 + b^2] for steps (1c) and (2c). It is important to note that this rule is only valid for random, uncorrelated error. The problem here is that he is trying to assess the uncertainty of the anomaly, which is a subtraction of the baseline from the observation, both of which share the same systematic error. That means there is correlation between the two values. For example, if the observation is biased by +0.2 C then the baseline will also be biased by +0.2 C, so when you do the subtraction the bias cancels since 0.2 - 0.2 = 0 C. To assess uncertainty when there is correlation you must use the law of propagation of uncertainty, which Bevington describes in section 3 and presents as equation 3.13. Note that this also appears as equation 16 in JCGM 100:2008.
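    To make the correlation point concrete, here is a small numerical illustration of the propagation formula for the difference y = observation - baseline. With no correlation you recover the quadrature result, and with a fully shared (systematic) bias the uncertainty of the anomaly goes to zero because the bias cancels:

```python
# Numerical illustration: uncertainty of y = obs - baseline from the law of
# propagation of uncertainty (Bevington eq. 3.13; JCGM 100:2008 eq. 16),
# sigma_y^2 = sigma_obs^2 + sigma_base^2 - 2*r*sigma_obs*sigma_base.
import math

def anomaly_uncertainty(sigma_obs: float, sigma_base: float, r: float) -> float:
    """Uncertainty of obs - baseline given correlation r between the errors."""
    var = sigma_obs**2 + sigma_base**2 - 2.0 * r * sigma_obs * sigma_base
    return math.sqrt(max(var, 0.0))

print(anomaly_uncertainty(0.2, 0.2, 0.0))   # 0.283 ... quadrature, uncorrelated case
print(anomaly_uncertainty(0.2, 0.2, 1.0))   # 0.0   ... a shared +0.2 C bias cancels:
                                            # (obs + 0.2) - (baseline + 0.2) = obs - baseline
```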

    There are other problems with his methodology. I was only focused on the equations he used in my most recent conversation with him.

    Anyway, I hope to have more content related to the global average temperature and our level of confidence in its estimation. The release of STARv5 is interesting so I hope to be able to post about it and UAH/RSS in the not too distant future. I've also been able to acquire more datasets for study. I've also been experimenting with predictive models using machine learning to estimate what these various datasets are going to report before they publish. 

  18. 20 hours ago, ChescoWx said:

     

    How can you prove bias mitigation with data starting in 1978?

    I'm not sure what the question is here. Are you asking how I knew it is done? Are you asking how it is done? Are you asking if it is different before vs after 1978? Are you asking how we know bias mitigation is necessary?

    20 hours ago, ChescoWx said:

    Plus UAH satellite data does NOT measure temperature directly - it can only INFER temperature from the radiances in wavelength bands. I will take the direct temperature measurements!

    Now I'm confused. Over here you posted the UAH satellite data timeseries and you seemed to accept it. 

    And when you say you will take direct temperature measurements, do you really mean you will only take direct temperature measurements as long as they are contaminated with the time-of-observation change bias, instrument package change bias, station relocation bias, etc.? I ask because that is the message we're all receiving right now.

  19. 20 minutes ago, ChescoWx said:

    The loss of nearly 2/3 of the GHCN network since 1970, much of which represented rural stations, is another overall big problem and biases the data warm...

    I don't see anywhere near 2/3 loss of active GHCN stations since 1970. And it's moot because neither Chester County nor Philadelphia was lost. 

    And because I didn't see an answer I'll ask again...Is the start date of the dataset used as criteria for your acceptance of bias mitigation? In other words, are you okay with bias mitigation in UAH because the record starts in 1978 but not okay with GHCN because it starts prior to that?
