Everything posted by bdgwx

  1. Here is weather historian Christopher Burt's list of barometric pressure records for each state from his blog. The February 1960 cyclone that Hoosier mentioned is listed frequently.
  2. A 974mb low near Little Rock, AR? What is the deepest a mid-latitude cyclone has gotten in Arkansas?
  3. I was just made aware today that NOAAGlobalTemp was upgraded last week from 5.0 to 5.1. The trend from 1979/01 to 2022/12 increased from 0.172 C/decade to 0.180 C/decade. https://www.ncei.noaa.gov/products/land-based-station/noaa-global-temp The big change is the implementation of full spatial coverage. See Vose et al. 2021 for details on the changes. Here is the updated graphic based on preliminary data I had to download manually.
  4. Paleoclimate and paleontology records. The latest evidence suggests "ice-free" (defined in the literature as < 1e6 km2) will first occur around 2050 (IPCC AR6 WG1 SPM pg. 16). This is the most aggressive prediction I've seen based on the broad evidence. It is more aggressive than the IPCC's previous prediction of 2070 in the early 2000's and 2100 in the 1990's. The IPCC's predictions are based on a consilience of evidence approach and so adequately represent the scientific expectation. Regarding Al Gore...his prediction of 2013 (made in 2008) and then later 2016 was based on a single cherry-picked source that never said what Gore claimed. That source is Maslowski. Specifically Maslowski et al. 2008 and later Maslowski et al. 2013. I encourage you to read the publications yourself. In fact, Maslowski goes to great lengths warning his audience to be careful with predictions based on extrapolation of recent trends and even warns against taking dynamically modeled predictions (like those from his NAME model) verbatim due to the large uncertainty and limitations of sea ice modeling in general at the time. And note that Maslowski said in response to Gore's statement "It’s unclear to me how this figure was arrived at. I would never try to estimate likelihood at anything as exact as this.” Let that be a cautionary tale to 1) discount scientific predictions coming from non-peer-reviewed sources and 2) always check the sources provided.
  5. Correct. This fallacy is common enough that it has a name. It is called affirming a disjunct. I see it all of the time. Two options are presented: A (natural) or B (anthropogenic). Then the argument is because A is true therefore B is not true. This is essentially how the null hypothesis test I discussed just above plays out as well. Two options are presented: A (breakpoint analysis doesn't matter) and B (breakpoint analysis does matter). A test is performed showing A is true for an isolated case and then the erroneous conclusion that therefore B is not true follows. Both arguments (that it is only ever natural and that breakpoint analysis never matters) are absurd. They are absurd because in both cases A does not preclude B.
  6. Nevermind. I found it. I'm not sure of the exact details of the test, but it only covered the overlap period between your PWS and Coatesville 2W from 2003/12 to 2007/12. That is important because Coatesville 2W (USC00361591) did not have any documented changes during that period, so the expectation is that there would be no difference between the region and thus your PWS. Using a period in which 2 stations are expected to be equivalent and then finding that they are equivalent is not proof of equivalency for other stations and other time periods. BTW...Berkeley Earth's analysis did not find any breakpoints for Coatesville 2W during this period. However, their analysis did find two breakpoints prior to this period. Both breakpoints biased the observations higher and so the breakpoint adjustment reduced the warming in this case [1]. [1] Note that although Berkeley Earth performs a breakpoint adjustment for each station they actually don't use it for the spatial averaging step. It is only provided for informational purposes. They actually use what they call the "scalpel" method. When a breakpoint is found they split the station timeseries and treat it as if it were another station. This is quite clever because it addresses the concern about adjustments and the impact they may have on the final global average temperature product. See Rohde et al. 2013 and Rohde & Hausfather 2020 for details.
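      For anyone curious what the scalpel looks like in practice, here is a minimal sketch in Python. This is just my own illustration, not Berkeley Earth's code, and it assumes the station series is a date-indexed pandas Series along with a list of detected breakpoint dates.

          import pandas as pd

          def scalpel_split(series: pd.Series, breakpoints: list) -> list:
              # Split a date-indexed station timeseries into independent segments at each
              # breakpoint; each segment is then treated as if it were a separate station
              # in the spatial averaging step (no adjustment is applied to the values).
              segments = []
              start = series.index.min()
              for bp in sorted(breakpoints):
                  seg = series[(series.index >= start) & (series.index < bp)]
                  if len(seg):
                      segments.append(seg)
                  start = bp
              segments.append(series[series.index >= start])
              return segments

          # Example: a station with breakpoints detected in 1998 and 2005 becomes three
          # "virtual stations" that enter the spatial average like any other station.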
  7. Arctic sea ice was likely lower during the Holocene Climate Optimum and almost certainly lower prior to the current ice age. It is a testament to the fact that given a big enough nudge not only can Arctic sea ice go lower, but it can entirely disappear. And I don't mean go "ice-free" in the summer with < 1e6 km2. I mean literally go to 0 km2 year round. Anyway, per Walsh et al. 2016 (see also Walsh et al. 2019) Arctic sea ice is lower than at any point since 1850 AD. And per Kinnard et al. 2011 Arctic sea ice is lower than at any point since 600 AD.
  8. Say what? Core samples prove that natural forces will always be greater than anthropogenic? In all my years of studying the climate I have never heard that. And considering that the 2 biggest contributors to climate forcing today are GHGs and aerosols which have both been implicated in numerous climatic shifts in the past and which are supported by core sample evidence I am left perplexed by this statement. Look at what happened during the PETM when large quantities of carbon got released into the atmosphere. Look at what happened when Tambora or even more recently Pinatubo released large quantities of SO2 into the atmosphere. Remember, the laws of physics say that anthropogenically modulated GHGs and aerosols work the exact same way in regards to their radiative properties as naturally modulated GHGs and aerosols.
  9. I must have missed it. Can you post details on that null hypothesis test or link to where you did it? I'd like to review it.
  10. First, I think you are deflecting and diverting here. The topic is the temperature in Chester County, PA. You are claiming that the observations show no increase in temperature in that region but not addressing the fact that the claim is based on biased observations. That is what I'm asking you to address. Second, if by "popular global story" you mean the global average temperature (GAT) then know that the actual real world data in Chester County, PA is aligned with the GAT (aka "popular global story"). It is one of the inputs to the GAT after all. The temperature in Chester County, PA has the same effect on the GAT as any other region of equal area.
  11. We know with a high degree of confidence that you are not addressing the land temps for Chester County, PA. You are only addressing biased measurements of them. That's a problem.
  12. The abundance and consilience of evidence says otherwise. Right now anthropogenically modulated forcings are about 30x higher than naturally modulated forcings. See IPCC AR6 WG1 Annex III for details. See Peterson et al. 2008 for details on this topic.
  13. This post is part 2 regarding the question of how ENSO and specifically the current La Nina is affecting the global average temperature (GAT) trend. Using the model we created and trained above we can see that the ENSO modulation term is [0.11 * ONIlag4]. We can visualize the ENSO effect by plotting the ENSO residuals from our model against the detrended GAT. As can be seen ENSO explains a lot of the GAT variability with an R^2 = 0.32. One quite noticeable problem is the Pinatubo 1991 eruption. If we then apply the ENSO residuals to the GAT we get an ENSO-adjusted GAT timeseries. Each dataset has been ENSO-adjusted and presented with new trend (C/decade) figures. The graph below is in the exact same format as the one above so it can be easily compared. Notice that applying the ENSO residual adjustment reduces the monthly variability and causes all of the trends to increase by about 0.01 C/decade, including the composite trend, which is +0.195 ± 0.049 C/decade vs +0.186 ± 0.058 C/decade without the adjustment. Because we are removing a significant autocorrelation component in the process our AR(1) corrected uncertainty drops as well. So that is the answer: ENSO and specifically the current La Nina has reduced the trend by about 0.01 C/decade. Another thing that may jump out at you is the Pinatubo 1991 eruption era. The discrepancy between CMIP5 and observations may be in the handling of the eruption and its effects. It looks to me like CMIP5 underestimates the effect. As a result it does not cool the planet enough in the succeeding months. Had CMIP5 pulled the temperature down another 0.2 C it looks like the prediction would have been a much better match to observations later in the period.
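      If anyone wants to replicate the adjustment, here is a minimal sketch in Python. It assumes monthly GAT anomalies and ONI values are already loaded as pandas Series on the same monthly index; the 0.11 coefficient and 4-month lag are the values from the model in part 1, and the function names are just for illustration.

          import numpy as np
          import pandas as pd

          def enso_adjust(gat: pd.Series, oni: pd.Series, coef=0.11, lag=4) -> pd.Series:
              # Subtract the modeled ENSO contribution (coef * ONI lagged by `lag` months)
              # from the GAT anomalies to produce an ENSO-adjusted timeseries.
              return (gat - coef * oni.shift(lag)).dropna()

          def trend_per_decade(series: pd.Series) -> float:
              # Ordinary least squares trend of a monthly series expressed in C/decade.
              t = np.arange(len(series)) / 120.0  # months -> decades
              slope, _ = np.polyfit(t, series.values, 1)
              return slope

          # Usage (with gat and oni loaded from the sources listed in part 1):
          # adjusted = enso_adjust(gat, oni)
          # print(trend_per_decade(gat), trend_per_decade(adjusted))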
  14. You saw the breakpoint analysis clearly showing the low bias later in the period... No. That is not correct. First, the warming is not homogeneous. There is no expectation that it will be the same everywhere on the planet or that every location will even experience warming at all. It is the opposite actually. We expect the temperature trends to be different from location to location. We even expect some locations to cool. Second, as I already pointed out land stations are notorious for having breakpoints that bias their trends downward due to time-of-observation changes and instrument package changes. See Vose et al. 2003 and Hubbard & Lin 2006 for details on these particular issues and Menne et al. 2009 for details on how they are handled.
  15. I want to address the observation that the current La Nina may be pulling the trend down that @HailMan06 noticed. This will be a two part topic. The first part will focus on determining how we can model the global average temperature (GAT). We will take the result from this post to answer the question directly in the next post. The goal is to find a model that explains and predicts the GAT and minimizes the root mean squared error (RMSE). We will use the following rules.
      - Components of the model must be based on physical laws and known to directly force changes in atmospheric temperature. This rule eliminates things like population growth which may be correlated with atmospheric temperature but does not directly force it.
      - Autocorrelation will not be considered. While autocorrelation is incredibly powerful in predicting how a phenomenon evolves in time, it does not help in explaining why it evolved or, more precisely, why it persisted in the first place.
      - The components should be as independent as possible. For example, since MEI and ONI are different metrics of the same phenomenon we should not use both.
      - ENSO must be considered. This is necessary because in part 2 we will use the information from the model regarding ENSO to see how it affects the GAT trend.
      - It must use a simple linear form so that it is easy to compute and interpret.
      With these rules in mind here is a list of components that I felt were easy to obtain and would adequately model the GAT.
      - Oceanic Nino Index (ONI) https://www.cpc.ncep.noaa.gov/data/indices/oni.ascii.txt
      - Atlantic Multi-Decadal Oscillation (AMO) https://psl.noaa.gov/data/correlation/amon.us.data
      - Volcanic Aerosols (AOD) https://data.giss.nasa.gov/modelforce/strataer/tau.line_2012.12.txt
      - Total Solar Irradiance (TSI) https://lasp.colorado.edu/lisird/data/nrl2_tsi_P1M/
      - Atmospheric Carbon Dioxide Concentration (CO2) https://gml.noaa.gov/webdata/ccgg/trends/co2/co2_mm_mlo.txt
      The model will be in the form T = Σ[λ_i * f_i(D_i), i = 1, n] where λ_i serves as both a tuning parameter and a unit translation factor that converts the component into degrees K (or C), D_i is the component data, and f_i is a simple function that acts upon the component data. In most cases f_i just returns the raw value of the dataset. The model is trained with multiple passes where each pass tunes both the λ parameter and the lag period for a single component one-by-one. After the 1st pass initializes the λ parameter with a 1-month lag for each component, a 2nd pass perturbs the parameter and lag period up/down to see if the model skill improves. This continues for as many iterations as needed until the model skill stops improving. It is important to understand that the λ parameters and lag periods are not chosen or selected. They appear organically from the training. Without further ado here is the model.
      GAT = 0.09 + [2.3 * log2(CO2lag1 / CO2initial)] + [0.11 * ONIlag4] + [0.20 * AMOlag3] + [-2.0 * AODlag3] + [0.05 * (TSIlag1 - TSIavg)]
      This model has an RMSE of 0.091 C. Considering that the composite mean GAT has an uncertainty of around σ = 0.05 C, that is an incredible match to the GAT observations, leaving maybe 0.04 C of skill on the table. The astute reader will notice that the λ parameter on the CO2 component is λ = 2.3 C per log2(PPM). Care needs to be taken when interpreting this value. It is neither the equilibrium climate sensitivity (ECS) nor the transient climate sensitivity (TCS). However, it is most closely related to the TCS, which would imply an ECS of about 3.0 C per 2xCO2. And again...I did not pick this value. I did not manipulate the training of the model so that it would appear. I had no idea that the machine learning algorithm would home in on it. It just appears organically. Anyway, that is neither here nor there. The important point here is that we now have an estimate for how ENSO modulates the GAT which can be used later to see how the current La Nina may have affected the trend.
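      For anyone who wants to reproduce the model output, here is a minimal sketch of the evaluation step in Python. It assumes the monthly component data from the sources above have already been merged into a pandas DataFrame with columns 'co2', 'oni', 'amo', 'aod', and 'tsi' on a common monthly index; the column names and helper names are my own.

          import numpy as np
          import pandas as pd

          def model_gat(df: pd.DataFrame) -> pd.Series:
              # Evaluate GAT = 0.09 + [2.3 * log2(CO2lag1 / CO2initial)] + [0.11 * ONIlag4]
              #              + [0.20 * AMOlag3] + [-2.0 * AODlag3] + [0.05 * (TSIlag1 - TSIavg)]
              co2_initial = df['co2'].iloc[0]
              tsi_avg = df['tsi'].mean()
              return (0.09
                      + 2.3 * np.log2(df['co2'].shift(1) / co2_initial)
                      + 0.11 * df['oni'].shift(4)
                      + 0.20 * df['amo'].shift(3)
                      - 2.0 * df['aod'].shift(3)
                      + 0.05 * (df['tsi'].shift(1) - tsi_avg))

          def rmse(predicted: pd.Series, observed: pd.Series) -> float:
              # Root mean squared error over the overlapping non-NaN months.
              diff = (predicted - observed).dropna()
              return float(np.sqrt((diff ** 2).mean()))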
  16. Correct. Like I said earlier, Dr. Hausfather provides a good introduction to the topic.
  17. Let me get this straight. You want to use data that has not been adjusted for known biases caused by instrument package changes, time-of-observation changes, station siting changes, and measurement procedure changes (transition from bucket to ship intake to buoy)? Yes/No? If that is your bar for acceptance then no thank you. Using data that is known to be contaminated with biases without making any attempt to address those biases is unethical at best and fraudulent at worst. I'm going to have no part of it. And BTW...don't think I didn't notice that you completely ignored the fact that the "non NOAA adjusted long term weather data" shows more warming vs the adjusted data. It's worth repeating in all caps and bold...the unadjusted/raw data shows MORE warming than the adjusted data. So if your bar for acceptance includes the unethical (and potentially fraudulent) analysis of data known to be contaminated with biases then you are going to have to accept that the warming is MORE than has been reported by scientists since 1880.
  18. In the chart above the composite trend is labeled as +0.186 ± 0.058 C/decade (2σ). The question might be...why is the uncertainty so high on that trend? The reason is that the global average temperature exhibits high autocorrelation. Autocorrelation is the correlation of a timeseries with a time-lagged copy of itself. What this means is that the value 1 month (or 2 months, etc.) ago partially determines (acts as a predictor or estimator of) the value of the current month. This can be interpreted as: if the value is high/low it tends to stay high/low. This would then cause the trend to nudge up/down significantly as the monthly values stayed high/low for extended periods of time. In other words, the trend is variable as new monthly values are added to the timeseries. This creates uncertainty on top of the already existing linear regression trend standard uncertainty.
      The way I dealt with this is to use the Foster & Rahmstorf 2011 method. In this method they calculate the ordinary linear regression standard uncertainty via the canonical formula σ_w = sqrt[ Σ(y_actual - y_predicted)^2 / ( (n-2) * Σ(x_actual - x_mean)^2 ) ]. That results in σ_w = 0.0048 C/decade. We then multiply that by the square root of the correction value v such that σ_c = σ_w * sqrt(v). Our correction value v is defined as v = (1+p) / (1-p) and can be interpreted as the number of data points per effective degree of freedom. The p parameter is the autoregressive coefficient from an AR(1) model of the timeseries. For the composite timeseries in the graph above our AR(1) coefficient is p = 0.948. This makes our correction factor v = (1+0.948) / (1-0.948) = 37.5. So sqrt(v) = 6.15. Thus the expanded/corrected standard uncertainty of the trend is σ_c = 0.0048 * 6.15 = 0.03 C/decade. And, of course, 2σ_c = 0.06 C/decade. There are some rounding errors here. The full computation is 0.058 C/decade. Anyway, that is how we get ±0.058 C/decade for the trend over the period 1979-2022. Note that F&R say that the AR(1) correction still underestimates the trend uncertainty. They recommend applying an ARMA correction as well which factors in the n-month lagged autoregressive decay rate. Based on the calculations I've seen that additional correction is insignificant when analyzing long duration trends like 1979 to 2022 so I'll ignore it for now.
      To demonstrate just how wildly the trend can behave over short periods of time I plotted the trend from all starting months from 1979 to 2022 ending in 2022/12 with the confidence intervals computed from the method above. For example, the trend at 2014/09 is -0.005 C/decade. It happens to be the oldest trend that is <= 0 C/decade. But using the F&R method above we see that the uncertainty on that is a massive ±0.305 C/decade. The AR(1) corrected trend uncertainties get smaller and stabilize as you go back further in time toward 1979. The point...the global average temperature trend is highly variable over short periods of analysis. This is why the infamous Monckton Pause which starts in 2014/09 has an uncertainty of ±0.305 C/decade and so is statistically insignificant. You simply can't look at short recent trends and draw any meaningful conclusion. This is also true when selecting 2010/12 as the starting month and reporting the very high warming rate of +0.316 C/decade since the uncertainty there is ±0.273 C/decade. What we can definitively conclude is that all trends older than 2011/08 show statistically significant warming.
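      For anyone who wants to check these numbers, here is a minimal sketch of the calculation in Python. It assumes a 1-D numpy array of monthly anomalies; the helper name is mine, and the AR(1) coefficient is estimated here from the lag-1 autocorrelation of the detrended residuals, in the spirit of F&R.

          import numpy as np

          def ar1_corrected_trend(y: np.ndarray):
              # Returns (trend, 2-sigma AR(1)-corrected uncertainty), both in C/decade,
              # for a monthly anomaly series following the Foster & Rahmstorf 2011 approach.
              n = len(y)
              x = np.arange(n) / 120.0                              # months -> decades
              slope, intercept = np.polyfit(x, y, 1)
              resid = y - (slope * x + intercept)
              # Ordinary (white noise) standard error of the slope
              sigma_w = np.sqrt((resid ** 2).sum() / (n - 2) / ((x - x.mean()) ** 2).sum())
              # Lag-1 autocorrelation of the residuals and the correction factor v
              p = np.corrcoef(resid[:-1], resid[1:])[0, 1]
              v = (1 + p) / (1 - p)
              sigma_c = sigma_w * np.sqrt(v)
              return slope, 2 * sigma_c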
  19. Correct. I'll post more information about this when I get time. Another thing that is affecting the composite trend is the fact that only HadCRUT, GISS, BEST, and ERA are full sphere measurements. This is pulling the observed trend down because the highest rates of warming are occurring in the Arctic. Another possible (and probable IMHO) explanation is that UAH is an outlier not because it holds a monopoly on truth, but because it may have one or more significant defects. If UAH showed the same warming rate as the next lowest dataset (NOAA) then the composite trend would jump up to +0.190 C/decade.
  20. The 2022 data is in so here is an update. I wanted to present the data in a way that highlights the divergence of trends of the various datasets. To do this an ordinary linear regression is performed and the y-intercept is then used as the anchor point for each dataset. In other words, each dataset is normalized (or re-anomalized) to its starting point. This puts each dataset on equal footing and allows us to see how the differences in trends affect the values later in the period. I have included an equal weighted composite of the datasets which serves as an ensemble mean and the best estimate of the true global average temperature change. Notice that RATPAC and RSS show the most warming while UAH shows the least. The composite trend has a 2σ uncertainty of ±0.058 C/decade. I'll dedicate an entire post to how I computed that uncertainty when I get time. I have also included the CMIP5 RCP4.5 prediction for comparison. Notice that CMIP5 predicted a higher trend than actually occurred. However, with an uncertainty of ±0.06 C/decade on the composite we cannot say that CMIP5 is inconsistent with the observations.
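      In case anyone wants to reproduce the re-anchoring step, here is a minimal sketch in Python. It assumes the monthly anomalies for each dataset are columns of a pandas DataFrame on a common index; the data layout and function name are illustrative only.

          import numpy as np
          import pandas as pd

          def anchor_to_intercept(df: pd.DataFrame) -> pd.DataFrame:
              # Re-anomalize each dataset (one column each) to the y-intercept of its own
              # ordinary linear regression so all datasets share a common starting point.
              x = np.arange(len(df))
              out = {}
              for col in df.columns:
                  slope, intercept = np.polyfit(x, df[col].values, 1)
                  out[col] = df[col] - intercept
              return pd.DataFrame(out, index=df.index)

          # The equal weighted composite is then just the row mean:
          # composite = anchor_to_intercept(monthly_anomalies).mean(axis=1)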
  21. Seriously @ChescoWx? I made a good faith effort to engage with you. You provided no rationalization for your continued use of data that you know is contaminated with breakpoint biases. You provided no response to the forecast models that do accurately predict 2m temperatures decades ahead. You provided no peer reviewed literature showing how the prediction that Atlantic City casinos would be flooded by now were made. And now you want us to all go join your "dissenting views" forum? Message received...there is nothing that will change your mind.
  22. Thanks. I'm not sure what I'm supposed to be looking at. What page does it discuss the Atlantic City casinos? Where does it say sea level would rise 1.8 meters by 2022?
  23. Can you post the peer reviewed literature which made that prediction? I'd like to review it if you don't mind.
  24. Here are the inputs for each scenario the IPCC considered with their inaugural prediction from 1990 documented in AR1. CO2 is running just slightly above B. CH4 is running at about C. And CFCs (incredibly potent GHGs) are way below scenario D due to the Montreal Protocol. Therefore the scenario that most closely matched the path humans actually chose was either B or, more likely, C. And here are the predictions for each scenario. Notice that 0.55 C of warming was predicted for scenario C with about 0.50 C for scenario D and 0.60 C for scenario B. According to Berkeley Earth the actual amount of warming was 0.66 C. The IPCC was very close and if anything they actually underestimated the warming.
  25. I already posted the breakpoint analysis for the Coatesville station. There were 16 breakpoints. How did you handle them? Dr. Hausfather's article here has links to all of the data. Here is the adjusted vs unadjusted comparison.