OCTOBER PATTERN INDEX (OPI) MONITORING WINTER SEASON 2014-2015



It's interesting that those -OPI 500mb maps coincide with the Aleutian trough and strongly -AO Octobers, as well as, on the Atlantic side, the longer-term inverse correlation for Oct NAO --> DJF NAO.

 

This year, the inverse October/DJF NAO correlation is the one part of that equation that isn't all that promising...just based on the CPC values since roughly 10/1 and the forecast over the next week or so.  Hopefully, we'll see a turn-around there.



Sorry for the stupid question: is the value at the top of the screen the daily value, or the daily value plus past values plus the next 10 days?

 

Here's my understanding...the value at the top (from the OPI link) would be the most recent calculation. The most recent calculation for Oct 10 would be based on the actual daily NH 500mb pattern that occurred during days 1-9 of October (from reanalysis maps) plus 10 days of the NH 500mb pattern from the GFS forecast.

 

If today was Oct 20, the calculation would be for days 1-19 in October (actual) + 10 days from the GFS (forecasted).
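The running-window idea described above can be sketched in a few lines. This is an illustrative sketch only; `opi_window` is a made-up name, not the actual OPI code:

```python
from datetime import date, timedelta

def opi_window(today: date):
    """Input days for the running OPI estimate on a given October date:
    observed NH 500mb days (reanalysis, Oct 1 through yesterday) plus a
    10-day GFS forecast window starting today."""
    first = date(today.year, 10, 1)
    observed = [first + timedelta(days=n) for n in range((today - first).days)]
    forecast = [today + timedelta(days=n) for n in range(10)]
    return observed, forecast

obs, fcst = opi_window(date(2014, 10, 20))
print(len(obs), obs[0], obs[-1])     # 19 observed days: Oct 1 .. Oct 19
print(len(fcst), fcst[0], fcst[-1])  # 10 forecast days: Oct 20 .. Oct 29
```

So each day's headline number blends a growing observed portion with a fixed 10-day modeled tail, which is why it can swing with the GFS.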


YES...the BIG screw-up in this method is using the wretched GFS...at least using the GEFS would be better.

The OPI calculation last winter was a TOTAL bust, a useless forecast that made many seasonal forecasts look a lot worse.

I urge folks to downplay their focus on the OPI...

 

Anyone like to explain how the OPI went from -3.15 yesterday to just -0.19 today?  Is this a GFS pattern flip that has caused this or something else?


Their OPI forecast was a disaster waiting to happen. Forecasting +1.64 (very strong +AO) versus the +0.188 (essentially neutral) that verified was a massive bust, and we need to call it for what it was.

 

IMO, the reason it busted last year is the one I stated last Oct: they only looked at Octobers with a +OPI and Octobers with a negative OPI. They never looked at Octobers that had one signal but where the signal "switched" during the winter.

 

Thanks for providing this. It will be fun to track throughout the month.

Just curious about last year's OPI DJF AO prediction of +1.64 when, at the end of the day, the DJF AO came in at +0.188, barely positive. Was the large discrepancy the result of an X factor such as the strong Pacific tropical typhoons last fall, or perhaps the record -EPO and stubborn Scandinavian block?


YES...the BIG screw-up in this method is using the wretched GFS...at least using the GEFS would be better.

 

 

There's no screw-up. The current report of daily OPI values (based on the GFS's 10-day forecast) is just for "fun".

The real OPI value will become complete on November 1. For now, the current values just give an idea.

 


Their OPI forecast was a disaster waiting to happen. Forecasting +1.64 (very strong +AO) versus the +0.188 (essentially neutral) that verified was a massive bust, and we need to call it for what it was.

 

You don't seem to understand what the coefficient of determination means, do you?

Let's look at the diagram again: [attached image: opiao_zps4e655ef3.png]

 

r = 0.91 and r^2 = 0.83.

Those are not 100%, so a single failure in one year, for example, is expected and does not erase the validity and the results of the method. We are talking about 37 years of r^2 = 0.83. So don't act like last year's failure (+1.65 predicted, if I recall correctly, versus a real value of about +0.20) makes it r = 0 or something.
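To illustrate the point with synthetic numbers (these are not the OPI data, just 37 made-up seasons with a comparably tight linear fit), one outlier year barely moves r:

```python
import numpy as np

# 37 synthetic "hindcast" seasons with a strong linear OPI -> DJF AO relationship.
rng = np.random.default_rng(0)
opi = rng.uniform(-2.5, 3.5, 37)
ao = opi + rng.normal(0.0, 0.8, 37)

def r_and_r2(x, y):
    r = np.corrcoef(x, y)[0, 1]
    return r, r * r

r37, r2_37 = r_and_r2(opi, ao)

# Append one big miss like 2013-14 (+0.18 predicted vs +1.65 observed) and recompute.
opi_38 = np.append(opi, 0.18)
ao_38 = np.append(ao, 1.65)
r38, r2_38 = r_and_r2(opi_38, ao_38)

print(f"37 years: r = {r37:.2f}, r^2 = {r2_37:.2f}")
print(f"38 years: r = {r38:.2f}, r^2 = {r2_38:.2f}")
```

With ~37 points, one bad year shifts r² by only a few hundredths; it takes repeated out-of-sample misses to pull the correlation apart.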


Personally, I am not prepared to accept the r = 0.91 until I see how they calculate it. A quick look at monthly average AO values in Oct shows what appears to be a correlation for strongly -ve AO Octobers and a much weaker signal for others. So one wonders what other wonder variable they have that lifts the correlation up to r = 0.91? Plus, WSI have shown in their winter outlook that Oct NAO signals have been trending increasingly negative over the last 20-25 years, suggesting more blocking in the N Atlantic region, but they also claim it is "uncorrelated to winter". How does this show up in the OPI? Frankly, I am not convinced about the OPI unless they can show me how they obtained their astonishing correlation stats.


Personally, I am not prepared to accept the r = 0.91 until I see how they calculate it. A quick look at monthly average AO values in Oct shows what appears to be a correlation for strongly -ve AO Octobers and a much weaker signal for others. So one wonders what other wonder variable they have that lifts the correlation up to r = 0.91? Plus, WSI have shown in their winter outlook that Oct NAO signals have been trending increasingly negative over the last 20-25 years, suggesting more blocking in the N Atlantic region, but they also claim it is "uncorrelated to winter". How does this show up in the OPI? Frankly, I am not convinced about the OPI unless they can show me how they obtained their astonishing correlation stats.

 

 

We will have to wait until it is published to get the answers to those questions.

 

One thing I always become concerned about with very high correlations is "curve fitting". Now, I'm not saying that was done here, but until the details are presented in a paper, there will always be some level of skepticism.


The AO average from Oct 1-14 is -1.56, with 2 more days of sub -2 values to come before it rises. So I think a conservative average through mid-Oct is -1.7; then the second half of Oct, according to EC forecasts, probably averages around +0.2 to +0.4. This means the Oct average AO is likely to be only around -0.75 to -0.65, not to mention November probably starting strongly +ve. So one suspects this OPI will then only mean a risk of one month of weak to moderate -ve AO, which is hardly worth going long on snow-clearing-equipment stock.


 Their  OPI   forecast was   a disaster  waiting to happen

Forecasting  +1.64  ... very strong +AO   vs     +0.188   ..essentially Neutral ...  was  a massive bust. and we need to call

it  for what it was.

 

IMO.. the  reason it busted  or failed  last year was  the  reason I stated last OCT ...   they only looked at  Octobers with  +opi and OCT with Negative OPI.  They  never looked at  Octobers that   had one signal  but the signal "switched"  during the winter.

 

 

The NAO forecast you endorsed in your winter outlook last year busted even worse.


The AO average from Oct 1-14 is -1.56, with 2 more days of sub -2 values to come before it rises. So I think a conservative average through mid-Oct is -1.7; then the second half of Oct, according to EC forecasts, probably averages around +0.2 to +0.4. This means the Oct average AO is likely to be only around -0.75 to -0.65, not to mention November probably starting strongly +ve. So one suspects this OPI will then only mean a risk of one month of weak to moderate -ve AO, which is hardly worth going long on snow-clearing-equipment stock.

The Euro ENS and GEFS have the AO below -1.5 through at least the 20th, so based on your numbers it should be at -1.6 or so for 2/3 of October; if we say the last 10 days of Oct average +0.3, per your numbers, that leaves the AO at -1.45 for Oct.


The Euro ENS and GEFS have the AO below -1.5 through at least the 20th, so based on your numbers it should be at -1.6 or so for 2/3 of October; if we say the last 10 days of Oct average +0.3, per your numbers, that leaves the AO at -1.45 for Oct.

 

Plus, there has been a +ve bias in the LR for the AO lately as well.


The Euro ENS and GEFS have the AO below -1.5 through at least the 20th, so based on your numbers it should be at -1.6 or so for 2/3 of October; if we say the last 10 days of Oct average +0.3, per your numbers, that leaves the AO at -1.45 for Oct.

 

Pack,

 Actually, this works out closer to -1.0 for October as a whole. Regardless, as CR stated, the GEFS has had some +bias for days 8+.
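Doing the bookkeeping with the round numbers in these posts (roughly 20 observed days near -1.6 and the remaining 11 days near +0.3; these are the posters' estimates, not official CPC values) does come out close to -1.0 rather than -1.45:

```python
# Day-weighted October mean AO from the two-segment estimate above.
days_obs, ao_obs = 20, -1.6    # observed portion of the month
days_rest, ao_rest = 11, 0.3   # forecast portion of the month

oct_mean = (days_obs * ao_obs + days_rest * ao_rest) / (days_obs + days_rest)
print(f"October mean AO ~ {oct_mean:+.2f}")  # ~ -0.93
```

The earlier -1.45 figure comes from treating -1.6 as if it covered nearly the whole month; weighting each segment by its day count fixes that.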


We will have to wait until it is published to get the answers to those questions.

 

One thing I always become concerned about with very high correlations is "curve fitting". Now, I'm not saying that was done here, but until the details are presented in a paper, there will always be some level of skepticism.

 

This is my concern as well. It's only bolstered by the fact that the first year this thing was operational was also the biggest "bust" year. It's quite possible the index is useful, but the correlation values from the "hindcasts" may be somewhat inflated.


Thanks GaWx!  What is the AO so far for October?  Is it -1.5 or so?

 

Yes. I have it at ~-1.6 so far and at ~-1.8 through 10/18. That part is pretty certain. How far it rises afterward is the question. A -1.0 for the month is still very doable, but the -1.5 shown on the prior two days of GEFS looks highly unlikely as of now.


Pack,

 Actually, this works out closer to -1.0 for October as a whole. Regardless, as CR stated, the GEFS has had some +bias for days 8+.

 

Global ensembles all agree that the AO will go neutral around the 21st and stay near neutral or positive. The member spread looks biased toward + readings through the end of the month. In the grand scheme, I don't think it matters much; we already booked the extreme blocking event. Both 02 and 03 had one in Oct but returned to neutral afterwards.

 

2002: [attached AO chart: post-2035-0-88707100-1413387976_thumb.gi]

2003: [attached AO chart: post-2035-0-16291700-1413387990_thumb.gi]

Other extreme years persisted longer (79 & 09):

[attached AO charts: post-2035-0-17995200-1413388053_thumb.gi, post-2035-0-49673100-1413388106_thumb.gi]

 

 

All 4 years featured big blocking events during DJF but with different behaviors. 09-10 was door-to-door and very unusual; I personally doubt there's much chance of that occurring again.

 

79-80 featured a + December but went strongly negative in JFM. Dec of 79 featured an extreme +AO event that peaked at +5 SDs near the beginning of the month.

 

02-03 was front-loaded in DJ but relaxed in late Jan and stayed that way. The anomalous period began early, though, in Nov or even Oct depending on how you look at it.

 

03-04 was neutral in Dec but solidly negative in JF.

 

I've noticed that anomalous blocking periods "usually" last 45 days or so during winter. They can reassert negative or flip; it depends on which year you look at.

 

Personally, if we do get a good -AO period during DJF, I would like to see it develop in December and not November. Not saying that a negative Nov would be bad; it's just that the odds of the AO state relaxing during the prime winter months go up.

 

I took a look at Dec -AO data back in the fall of 2012 and pulled up some old data spreads. You can add 2012 to the list: even though the winter sucked (at least in my yard it did), the AO behaved similarly, with a big negative departure in Dec that lasted through JFM.

 

[attached chart: post-2035-0-78153200-1413389527_thumb.jp]

 

There's some compelling evidence that anomalous Dec -AOs have legs into Jan and can set the tone for the entire DJF period, especially when the monthly mean comes in at -1.75 or lower.


We will have to wait until it is published to get the answers to those questions.

 

One thing I always become concerned about with very high correlations is "curve fitting". Now, I'm not saying that was done here, but until the details are presented in a paper, there will always be some level of skepticism.

 

 

This is my concern as well. It's only bolstered by the fact that the first year this thing was operational was also the biggest "bust" year. It's quite possible the index is useful, but the correlation values from the "hindcasts" may be somewhat skewed too high.

 

 

This was discussed a bit last year as well. I think it has to be assumed that there was most definitely "curve fitting", as with any new predictive model; the first iteration may very well have had a much lower r value when tested against historical data, but naturally, that wasn't the point where they called it a wrap. Not that there is anything wrong with that from a process standpoint, but we have one year of data to test the model's predictive capabilities, and it didn't do very well.
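The worry can be made concrete with a toy example (all numbers synthetic, nothing to do with the actual OPI): tune an overly flexible model on 37 "hindcast" years of a genuinely weak relationship, and the in-sample r^2 looks impressive while fresh out-of-sample years score far worse:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2.5, 3.5, 37)           # stand-in October index
y = 0.3 * x + rng.normal(0.0, 1.0, 37)   # weak true signal, lots of noise

# "Curve fit": an overly flexible 9th-degree polynomial tuned on the hindcasts.
coeffs = np.polyfit(x, y, 9)
r2_in = np.corrcoef(y, np.polyval(coeffs, x))[0, 1] ** 2

# Fresh "operational" years drawn from the same weak relationship.
x_new = rng.uniform(-2.5, 3.5, 37)
y_new = 0.3 * x_new + rng.normal(0.0, 1.0, 37)
r2_out = np.corrcoef(y_new, np.polyval(coeffs, x_new))[0, 1] ** 2

print(f"in-sample r^2:     {r2_in:.2f}")
print(f"out-of-sample r^2: {r2_out:.2f}")
```

That gap between hindcast skill and operational skill is exactly what only successive real winters can measure.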


This is my concern as well. It's only bolstered by the fact that the first year this thing was operational was also the biggest "bust" year

 

Not really. The predicted average AO was +1.65 and the final (real) value was something like +0.20.

As we see in the graph I gave, the 1982 prediction was -1.1 and the final value was +0.3 or so. In 1999 the predicted value was -0.2 and the final value was +1.1. 1989 was also way off, and 1988 too.

So last year wasn't the only year it was way off.

 

I'm very curious about this year and about the final paper (with the cooperation/co-authorship of Cohen). I have the Italian paper (about 28 pages long), but Google's translation doesn't do a good job of helping me understand what is going on.

 

 

Also, I personally give the OPI a break for its failure last year, because last year was extraordinary, very odd and exceptional. Europe had one of its mildest winters, the UK had one of its windiest and rainiest of all time, and the USA one of its coldest and snowiest, etc. The polar jet camped over the eastern USA for the whole winter after December; it was not a "normal" winter. I don't know if anyone here or elsewhere predicted that kind of circulation. That would have been tremendous.


Also, I personally give the OPI a break for its failure last year, because last year was extraordinary, very odd and exceptional. Europe had one of its mildest winters, the UK had one of its windiest and rainiest of all time, and the USA one of its coldest and snowiest, etc. The polar jet camped over the eastern USA for the whole winter after December; it was not a "normal" winter. I don't know if anyone here or elsewhere predicted that kind of circulation. That would have been tremendous.

 

 

You can't give a break to an index based upon perceived notions of anomalous weather patterns. How exactly would we define a "normal" winter? One could make an argument that each and every winter is "extraordinary" in a particular way. The most objective way of examining it is simply testing the validity through successive years of observation: do the resultant AO values closely mirror what was forecast or not? Last year was the first observational period, and the modality ended up being correct but the magnitude fell significantly short. We've got to include all years, even the perceived "abnormal" ones, in the observation basket; otherwise there's no legitimate correlation. This new research definitely sounds promising, and it's quite possible last year was just one of the "miss" years. But we're going to need many winters to verify this (and of course reading the paper when it's released will help).


I think you need to cut the OPI a break on last year...think of it this way...

Over the last 37 years there is a scale of approximately +3.5 to -2.5, so the scale has a range of 6...

The +AO of last year was the 3rd highest of the 37 years at 1.64...truly a positive outlier.

What was the OPI value in those outlier years?

1988 AO 3.6? OPI 2.4? "Bust by -1.2"

1992 AO 1.7? OPI 1.7 ...nailed it

2014 AO 1.65 OPI 0.18 "Bust by -1.4"

So does this 2nd larger bust of -1.4 invalidate the model?

Hardly...although it appears to be the largest variance, you have to add it to the other 37 occurrences and average it..

The correlation itself, with an R squared of .83, suggests that 17% of the variance in the AO is NOT explained by the OPI...

Also..the worst bust in 2014 is only 1.4/6.0 or 23% of the range of values...

2014 was an outlier AO and an outlier variance by the OPI...but has to be considered in context and surely doesn't suggest that you throw out the model..

Also it seems Cohen is endorsing this...and we know his SCE/SAI work is solid...

It's too early to throw this model in the dumpster..

Their OPI forecast was a disaster waiting to happen. Forecasting +1.64 (very strong +AO) versus the +0.188 (essentially neutral) that verified was a massive bust, and we need to call it for what it was.

 

IMO, the reason it busted last year is the one I stated last Oct: they only looked at Octobers with a +OPI and Octobers with a negative OPI. They never looked at Octobers that had one signal but where the signal "switched" during the winter.


Not really. The predicted average AO was +1.65 and the final (real) value was something like +0.20.

As we see in the graph I gave, the 1982 prediction was -1.1 and the final value was +0.3 or so. In 1999 the predicted value was -0.2 and the final value was +1.1. 1989 was also way off, and 1988 too.

So last year wasn't the only year it was way off.

 

I didn't say it was the only bust year, but the biggest bust year. The two years you mentioned were relatively close (something like 1.3-1.4 versus the 1.45 bust last year), but it's still notable that last winter was both the first "test" of the index and the biggest bust.


2014 was an outlier AO and an outlier variance by the OPI...but has to be considered in context and surely doesn't suggest that you throw out the model..

Also it seems Cohen is endorsing this...and we know his SCE/SAI work is solid...

It's too early to throw this model in the dumpster..

 

I don't think anyone said to throw the model in the dumpster. It is too early, however, to say that the model can predict the DJF AO with 83% accuracy.


Not really. The predicted average AO was +1.65 and the final (real) value was something like +0.20.

As we see in the graph I gave, the 1982 prediction was -1.1 and the final value was +0.3 or so. In 1999 the predicted value was -0.2 and the final value was +1.1. 1989 was also way off, and 1988 too.

So last year wasn't the only year it was way off.

 

I'm very curious about this year and about the final paper (with the cooperation/co-authorship of Cohen). I have the Italian paper (about 28 pages long), but Google's translation doesn't do a good job of helping me understand what is going on.

 

 

Also, I personally give the OPI a break for its failure last year, because last year was extraordinary, very odd and exceptional. Europe had one of its mildest winters, the UK had one of its windiest and rainiest of all time, and the USA one of its coldest and snowiest, etc. The polar jet camped over the eastern USA for the whole winter after December; it was not a "normal" winter. I don't know if anyone here or elsewhere predicted that kind of circulation. That would have been tremendous.

 

 

I wouldn't give it much of a break until it produces going forward, rather than just having a high correlation in the past, especially considering we don't have any published work on this index. If something correlates well due to curve fitting, it will usually start diverging beyond the hindcasts. 2013-2014 was strike 1.

 

Obviously, as we said, that doesn't mean it is a bad index or that it won't be very useful, but it is very easy to understand how it creates skepticism. If/once it is published, I think that will create more confidence.


Your use of "curve fitting" implies bias...no one knows what goes into the black-box equation that converts October 500mb heights to the OPI yet, but you can be sure the data were somehow mathematically manipulated to standardize and scale them to fit the AO values...all necessary to test for any LINEAR relationship between two variables..as long as the math was the same for each year the OPI was calculated..

I think mets don't like this index at first glance in the same way that MDs hate the multiple-regression calculators that are becoming rampant in medicine. Life-and-death decisions are made on the basis of multiple-regression calculators taking in as many as 20 to 30 variables, with R values in some cases as low as .5 or .6...but in practice they work better than the guesstimates and experience-based judgment of the MD. The data are what the data are. If something seemingly so complex can be predicted with such precision by something so simple, it is threatening to those who have spent the time to get the letters behind their name..

I see posts all the time where someone posts a sample size of 4 or 5 or 6 with temps or snowfall or tele values and throws around the word "correlation", and no one calls them out...despite the fact that the sample size is far less than the n of 30 where natural variability fades, and no R value is calculated..and here comes solid science, 2 variables with an n of 36 and an R2 of .83, and everyone tries to poke holes in it before it's published..


Your use of "curve fitting" implies bias...no one knows what goes into the black-box equation that converts October 500mb heights to the OPI yet, but you can be sure the data were somehow mathematically manipulated to standardize and scale them to fit the AO values...all necessary to test for any LINEAR relationship between two variables..as long as the math was the same for each year the OPI was calculated..

I think mets don't like this index at first glance in the same way that MDs hate the multiple-regression calculators that are becoming rampant in medicine. Life-and-death decisions are made on the basis of multiple-regression calculators taking in as many as 20 to 30 variables, with R values in some cases as low as .5 or .6...but in practice they work better than the guesstimates and experience-based judgment of the MD. The data are what the data are. If something seemingly so complex can be predicted with such precision by something so simple, it is threatening to those who have spent the time to get the letters behind their name..

I see posts all the time where someone posts a sample size of 4 or 5 or 6 with temps or snowfall or tele values and throws around the word "correlation", and no one calls them out...despite the fact that the sample size is far less than the n of 30 where natural variability fades, and no R value is calculated..and here comes solid science, 2 variables with an n of 36 and an R2 of .83, and everyone tries to poke holes in it before it's published..

 

It's always a bit treacherous to use purely statistical methods in a forecast without any sense of the underlying dynamics. That's one of the positives of Dr. Cohen's work: although it is based on the statistical relationship between SCE/SCA and the AO, the proposed dynamical mechanism (wave-mean flow interaction) at least has some modelling/observational support. See their recent paper, Cohen et al. (2014) in JCLI, for details.

 

I'm also not sure why the OPI correlation does not include data before 1976. If the index is based solely on the 500 mb "pattern" (height, wind, or temperature, I assume), it should be trivial to use the reanalysis data that extends back to 1948 to increase the sample.
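On the sample-size point, a quick Fisher z calculation (a standard statistics formula, nothing from the OPI authors) shows how much a 1948-start record would tighten the uncertainty around a reported r = 0.91:

```python
import math

def r_confidence_interval(r, n, zcrit=1.96):
    """Approximate 95% confidence interval for a correlation coefficient
    via the Fisher z-transform (assumes roughly bivariate-normal data)."""
    z = math.atanh(r)
    se = 1.0 / math.sqrt(n - 3)
    return math.tanh(z - zcrit * se), math.tanh(z + zcrit * se)

print(r_confidence_interval(0.91, 37))  # the ~37-year hindcast sample
print(r_confidence_interval(0.91, 66))  # if the record ran back to 1948
```

Even at n = 37 the interval is fairly wide on the low side, which is another reason the extra pre-1976 years would be worth having.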

