January/February Mid/Long Range Disco IV: A New Hope


stormtracker

After looking across all the overnight guidance, there was a noticeable positive trend.  We are in better shape now than 24 hours ago.  But expectations should still be low.  We are seeing a bit more interaction between the STJ waves and the cold boundary in the last few runs.  Not an amazing amount; most runs are still kinda anemic with the intersection of moisture and cold on these waves, but better than it was 48 hours ago, when 90% of guidance pretty much had no interaction until north of our latitude, which would do us no good.  

Averaging the 3 major ensembles gives the DC/Balt metro areas about a 51% chance of 1" of snow through day 10 and a 27% chance of 3" through day 10.  Those are by far the best odds we have had inside day 10 yet this year.  But keep in mind those numbers are still below climo.  This is the snowiest period of the winter, and the average for this 10-day period is actually about 2", so a 50% chance of 1" and a 30% chance of 3" is still slightly below normal.  But of course a crumb looks like a feast to a starving man.  


Just now, psuhoffman said:

After looking across all the overnight guidance, there was a noticeable positive trend. […]

I thought so too!  I never understand the bit of gloom and doom around here sometimes.  It's like we're looking at two different sets of models.


Just now, jayyy said:

It’s the ICON, and it’s at range, and the low takes an unfavorable track; otherwise we likely get snow out of this. Plenty of time for adjustments.



I think the ICON is the only one that showed this pumped-up solution?  Or was there other guidance?   We'd like a blend of this and the southern scrapers. 


I think the ICON is the only one that showed this pumped-up solution?  Or was there other guidance?   We'd like a blend of this and the southern scrapers. 

I believe the 00z CMC did as well. A blend of this and the southern scrapers would be swell. The ICON juices that wave up for sure.

Edit — yep, the 00z Canadian. Very similar outside of HP location.

2 minutes ago, Solution Man said:

We have good bars though 

we notched a couple wins in the snow column...been here since '99...when $200K seemed like a lot of money for a house...I will ride it out and then try to dupe my wife into a location that has better snow climo, doing the Jedi mind trick into thinking it's warmer than here...."there are palm trees in northern PA??"......"yes, yes there are"


I think the ICON is the only one that showed this pumped-up solution?  Or was there other guidance?   We'd like a blend of this and the southern scrapers. 

The CMC had a pumped-up wave 1 as well, FWIW. BTW, if you bought the ICON solution verbatim it would be the worst-case scenario, because it leaves no energy out west for wave 2; it’s likely out to lunch though….



1 minute ago, Heisy said:


CMC had a pumped up wave 1 as well fwiw. […]

Ah, that's right, the CMC did have that solution.  I still have hope for a blend.  A NEW HOPE.

See what I did there?  Sophisticated humor is my forte.  


I have teased before the idea of trying to find a way to calculate snowfall probabilities using all the major ensembles.  The mean is always a bad tool to use alone because it can be skewed by extreme outliers.  The probabilities are a better tool, but the problem with any one ensemble is that it is susceptible to internal bias error.  All the permutations are still based on the same parent model and its equations.  They tinker with the initial conditions and the model's equations a bit, but only within certain parameters.

Each model system has to deal with certain problems.  One is how to initialize the atmosphere given our incomplete data and resolution limitations.  How each model deals with these limitations and how it compensates for them affects the outcome.  There is only so much they can perturb these factors within each model's parameterization schemes.  If a bias error for a specific synoptic event is inherent in the parent model, it is likely to infect the ensemble permutations as well. 

Another issue is how the models resolve factors that are impossible to depict accurately, either because of the complexity of the process or because of the spatial resolution limitations of the model.  Some processes take place mostly at the molecular level and are too small-scale for the model to resolve the way they actually occur.  Other factors are too complex, and trying to model them with all the variables would create ridiculous exponential errors.  So the guidance comes up with ways to compensate and model the effects of these processes.  But each model handles this problem slightly differently.  An error caused by these factors in the parent model would also be likely to infect the ensemble permutations.  

The problem with using probabilities produced by any one model ensemble system is that the whole system is infected with some of the same error biases, and the system does not know anything outside the system.  In short, the model does not know what it does not know.  

By creating a probability using multiple systems we can offset some of these biases.  It's still not perfect, because at the end of the day we are using a still-limited physical understanding to apply the primitive equations to mathematically represent a chaotic fluid system like the atmosphere, with nearly infinite permutations based on nearly infinite processes at nearly infinite levels.  We're just not even close to being able to do that accurately at long leads.  But I do think using all 3 major global ensembles will turn out to be more accurate than any one.

The next issue is how to weight them based on their overall accuracy.  I decided to go EPS 40%, GEFS 35%, and GEPS 25%.  Further investigation based on verification scores might move me to tinker with that calculation some, but for now let's see how it goes.  

Using this math, here is where we stand based on 0z guidance using BWI as a central location. 

These probabilities are through day 10, 0Z Feb 6th

51% chance of 1" of snow

27% chance of 3" of snow

9% chance of 6" of snow

I will try to update these numbers when I have time after each run going forward (when there is a realistic chance of snow, not wasting time on this during shit the blinds patterns).  
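
For the curious, the weighted blend described above can be sketched in a few lines of Python. The weights follow the post (EPS 40% / GEFS 35% / GEPS 25%); the per-ensemble member fractions below are hypothetical placeholders, not actual 0z member counts.

```python
# Sketch of the multi-ensemble snowfall probability blend described above.
# Weights follow the post: EPS 40%, GEFS 35%, GEPS 25%.
WEIGHTS = {"EPS": 0.40, "GEFS": 0.35, "GEPS": 0.25}

def blended_probability(member_fractions, weights=WEIGHTS):
    """Weighted average of per-ensemble exceedance probabilities.

    member_fractions: {ensemble: fraction of members at/above a snowfall threshold}
    """
    return sum(weights[name] * member_fractions[name] for name in weights)

# Hypothetical member fractions for the 1" threshold (illustration only):
p_one_inch = blended_probability({"EPS": 0.55, "GEFS": 0.50, "GEPS": 0.46})
print(round(p_one_inch, 2))  # 0.51
```

The same call would be repeated for the 3" and 6" thresholds; all the real work is in counting, for each ensemble, what fraction of its members reach the threshold at the point of interest.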


I have teased before the idea of trying to find a way to calculate snowfall probabilities using all the major ensembles. […]
Appreciate the thought you put into that.



People need to remember there is an ignore feature on this site. Using it will make your experience much better. Just saying. 

I am actually feeling quite confident that just about everyone will see at least an inch of snow over the next week. This winter has performed exactly how you would expect a Nina to perform. The Midwest has won and we have lost. But the advertised pattern is pretty much how we should expect to score in a Nina. If we get a 4th Nina next year I am not even going to look at the models. They are simply miserable for our area.


14 minutes ago, psuhoffman said:

I have teased before the idea of trying to find a way to calculate snowfall probabilities using all the major ensembles. […]

I really do think that advanced statistical analysis is the way forward in making more accurate forecasts. I think another component is using normal distributions to help account for climo. This would be especially helpful if these distributions could change based on current indices or other correlations developed through AI. Many times the modeling spits out a solution to the far right of the distribution and everyone gets all excited, but more often than not, the error is to the left rather than to the right.
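
A minimal sketch of that climo-anchoring idea, assuming (purely for illustration) a normal climo distribution with the 2" mean mentioned earlier, a made-up 2.5" spread, and an arbitrary 70/30 shrink of the model probability toward climo. Snowfall is right-skewed and floored at zero, so a real implementation would need a better-fitting distribution.

```python
from statistics import NormalDist

# Hypothetical climo for this 10-day window: mean 2" (per the earlier post),
# with an assumed 2.5" standard deviation. Illustration only.
climo = NormalDist(mu=2.0, sigma=2.5)
p_climo_3in = 1.0 - climo.cdf(3.0)  # climo chance of 3"+, roughly a third

def climo_adjusted(p_model, p_climo, trust=0.7):
    # Shrink a model-derived probability toward the climo probability.
    # The 0.7/0.3 split is arbitrary, not a verified calibration.
    return trust * p_model + (1.0 - trust) * p_climo

# 0.27 is the blended-ensemble 3" probability quoted upthread:
p_adjusted = climo_adjusted(0.27, p_climo_3in)
```

The shrink pulls an excitable model number back toward what the season normally delivers, which is one crude way to encode "the error is usually to the left."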



Yeah, and the 6z CMC trended better; you can tell wave 1 would have ended farther S vs 00z (yes, there is a 6z CMC, but it only goes out to 84 hours, and no it isn’t the then).




Looks really similar to me. Is it hanging back more energy? I’m looking on a mobile device too FYI so I may be missing something.



26 minutes ago, clskinsfan said:

This winter has performed exactly how you would expect a Nina to perform. […]

You better hope next year isn't ENSO neutral...those are even worse.  This is BWI snow data by ENSO state over the last 30 years. 

Neutral              Nina                 Nino
Avg 13.1             Avg 17.2             Avg 28.6
Median 11.7          Median 14.4          Median 18.3
% above mean 12.5%   % above mean 25%     % above mean 44.4%
1994  17.3           1996  62.5           1995   8.2
1997  15.3           1999  15.2           1998   3.2
2002   2.3           2000  26.1           2003  58.1
2004  18.3           2001   8.7           2005  18
2013   8             2006  19.6           2007  11
2014  39             2008   8.5           2010  77
2017   3             2009   9.1           2015  28.7
2020   1.8           2011  14.4           2016  35.1
                     2012   1.8           2019  18.3
                     2018  15.4
                     2021  10.9
                     2022  14.4
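
As a sanity check, the Avg and Median rows can be reproduced from the yearly values in the table (the table rounds to one decimal):

```python
from statistics import mean, median

# BWI seasonal snowfall (inches) by ENSO state, from the table above.
neutral = [17.3, 15.3, 2.3, 18.3, 8, 39, 3, 1.8]
nina = [62.5, 15.2, 26.1, 8.7, 19.6, 8.5, 9.1, 14.4, 1.8, 15.4, 10.9, 14.4]
nino = [8.2, 3.2, 58.1, 18, 11, 77, 28.7, 35.1, 18.3]

for name, years in (("Neutral", neutral), ("Nina", nina), ("Nino", nino)):
    print(f"{name}: avg {mean(years):.2f}, median {median(years):.2f}")
```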

You better hope next year isn't enso neutral...those are even worse. […]

This formatted very odd on my phone; it's all in a line. What ENSO state was 2010?


Edit — nvm!

@Terpeast made an astute observation last night in another thread.  This year looks more like ENSO neutral than a Nina.

All ENSO neutral years in the last 30 years, H5 composite 


Nina Composite 


Obviously no one year is going to match a composite exactly, but we have had a more typical ENSO neutral pattern overall than a Nina.  I've said before I don't understand the obsession with rooting for a Nina to fade during winter, since there is no objective evidence it helps improve our odds later in a season AND ENSO neutral is even worse for snow here than La Nina.  But the good news for next year could be that the atmosphere is already transitioning away from the Nina base state, and maybe that could be helpful if we do get a Nino by next year.

Just a warning though...ENSO neutral following a Nina is actually WORSE than a Nina.  So hopefully we are getting that out of the way this year.  But if the projections of a Nino fail and next year ends up ENSO neutral...well, historically we are probably looking at another awful snowfall year. 

Pray for a Nino.  Do a dance.  Light the candles.  Sacrifice whatever and whoever it takes.  Do it now.  


1 hour ago, psuhoffman said:

I have teased before the idea of trying to find a way to calculate snowfall probabilities using all the major ensembles. […]

So you’re saying there’s a chance…


3 minutes ago, psuhoffman said:

@Terpeast made an astute observation last night in another thread.  This year looks more like an esno neutral v nina. […]

Now the general consensus (at least as it stands right now) is indeed for a Niño next year, right? (This may be a question for the ENSO thread though, lol.) I've been hearing that things are already warmer deep underneath.


11 minutes ago, brooklynwx99 said:

ha, the stronger NS in the end actually leads to a stronger first wave. more energy just gets sheared out

rooting against this evolution, definitely a lower ceiling


It's close to a good setup, but the real limiting factor on this whole period is that the true mid-latitude ridge axis is about as horrible as you can get; look at that heat bubble in the Gulf.  There is no blocking, the Pac ridge is too far west, and the only thing suppressing the SE ridge is the TPV.  But that is a double-edged sword, because the thing that will prevent the SE ridge from going ape is also a suppressive factor.  So we are left with very small margins for error.  On wave 1 we need the cold to press, but not so much that it shunts the wave south.  IOW we need perfect timing.  Cold press too slow and it goes north; too fast and it goes south.  

With wave 2 we need perfect timing with the TPV movement and associated high pressure.  There is a very narrow window where something can amplify enough to get precip to our latitude but not press the boundary too far north.  Really only like a 12-24 hour window where the flow is relaxing but has not relaxed too much.  

It's a real threat, which is more than we have had...but there is a serious cap on the probabilities here unless we get perfect timing with these features.  

