"We're gonna need a bigger plow..." Massive, persistent signal now emerges discretely in the models, 20th-23rd


Typhoon Tip

15 minutes ago, Baroclinic Zone said:

if, big IF

this ends up with a huge trough setup and low originates from south, beware the latent heat release and a coastal hugger.  Could mean a mess along coast.  Just something to ponder.  Not a forecast.

I imagine that's a nightmare being right on the coast.


14 minutes ago, Baroclinic Zone said:

if, big IF

this ends up with a huge trough setup and low originates from south, beware the latent heat release and a coastal hugger.  Could mean a mess along coast.  Just something to ponder.  Not a forecast.

Which is why, for the most part, we’d like it just a bit SE once (if) we get inside D3. 


37 minutes ago, Sey-Mour Snow said:

That’s bc some members cut and not all qpf is snow. 

I don't get that.  If the temps and qpf are means of all members, then so is the snow a mean of all members.  If the mean temps for an area are low to mid 20's, and upper level means are well below freezing for the same area, shouldn't the mean snow for that area correspond with those indicators?

Otherwise, it would appear that the cutting members are skewing the snow totals.  The placement of the low on the GEFS is also a mean, and the resulting metrics should be based on that. 

I think the problem with most of the ensemble models is that they take a mean of the snow totals of all members (including the cutters showing little or zero) and plug that into their maps, when in reality the maps should be reflecting the snow resulting from the placing of the mean slp and resultant lower and upper level means.  The storm vista maps appear to take the latter approach with the GEFS.
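The difference between the two approaches is easy to see with a toy example. Here's a minimal Python sketch (all numbers invented, not from any actual GEFS run) of how cutter members drag a straight mean down versus what a mean-field-based map would imply:

```python
# Toy ensemble: 20 members "hit" with 12" of snow, 10 members cut with 0".
# All numbers are made up for illustration.
hits = [12.0] * 20
cutters = [0.0] * 10
members = hits + cutters

# What most vendor maps plot: the straight mean over every member,
# cutters included. The zeros drag the average well below the hit amount.
mean_of_members = sum(members) / len(members)  # 8.0"

# The alternative argued for above: if the mean SLP placement and mean
# thermal profile support snow, the map would instead show something
# near the typical hit amount (12" here) rather than the diluted 8".
```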

 


36 minutes ago, 78Blizzard said:

I don't get that.  If the temps and qpf are means of all members, then so is the snow a mean of all members.  If the mean temps for an area are low to mid 20's, and upper level means are well below freezing for the same area, shouldn't the mean snow for that area correspond with those indicators?

Otherwise, it would appear that the cutting members are skewing the snow totals.  The placement of the low on the GEFS is also a mean, and the resulting metrics should be based on that. 

I think the problem with most of the ensemble models is that they take a mean of the snow totals of all members (including the cutters showing little or zero) and plug that into their maps, when in reality the maps should be reflecting the snow resulting from the placing of the mean slp and resultant lower and upper level means.  The storm vista maps appear to take the latter approach with the GEFS.

 

So if 50 members showed rain and 1 showed a 51” blizzard, you’d want the mean to be 51” of snow? 


11 minutes ago, Sey-Mour Snow said:

So if 50 members showed rain and 1 showed a 51” blizzard, you’d want the mean to be 51” of snow? 

Wrong. You're misinterpreting what I said.  Using your example, the GEFS mean would have shown an SLP whose lower- and upper-level metrics clearly indicated rain, and the mean snow would be zero.  As I said, it should all be based on what emanates from the placement of the mean SLP.


7 minutes ago, RUNNAWAYICEBERG said:

I think he means don’t take a mean of the member clown maps but instead base the mean qpf/clowns off the mean upper air, sfc low placement, pressure, etc. 

Yeah, that's how I read it too, but it's just not how those maps are generated. They aren't post-processing "if the low was here, it would show this average QPF/snow", it just takes an average of all the members. 

We're getting pretty close to having means of clusters of members though. Pretty soon you should be able to sort out all the cutters and only see what's left.
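The clustering idea can be sketched crudely in Python. Real cluster products group members on full fields (EOF/k-means style); this toy version just splits invented member tracks on low-center longitude to show why a within-cluster mean reads differently than the all-member mean:

```python
# Hypothetical member solutions: (member id, low longitude at closest
# approach, snowfall in inches). Everything here is made up.
members = [
    ("p01", -75.0, 0.0), ("p02", -74.5, 0.5), ("p03", -70.5, 10.0),
    ("p04", -69.8, 14.0), ("p05", -74.8, 0.0), ("p06", -70.2, 12.0),
]

# Crude one-dimensional "clustering": split on track longitude.
# West of -72 = cutter, east of -72 = coastal.
coastal = [snow for _, lon, snow in members if lon > -72.0]
cutters = [snow for _, lon, snow in members if lon <= -72.0]

coastal_mean = sum(coastal) / len(coastal)        # mean of the snowy cluster
all_mean = sum(s for _, _, s in members) / len(members)  # diluted by cutters
```

Sorting out the cutters and looking only at the coastal cluster's mean is exactly the "see what's left" product described above.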


3 minutes ago, OceanStWx said:

Yeah, that's how I read it too, but it's just not how those maps are generated. They aren't post-processing "if the low was here, it would show this average QPF/snow", it just takes an average of all the members. 

We're getting pretty close to having means of clusters of members though. Pretty soon you should be able to sort out all the cutters and only see what's left.

Ninja’d. I thought weathermodels already lets you do that? I want to say Ginxy cat has done it before on there, but I could be wrong. He may just be a magician…like calling for a Dec10/Jan11 start to winter a very long time ago.


48 minutes ago, 78Blizzard said:

I don't get that.  If the temps and qpf are means of all members, then so is the snow a mean of all members.  If the mean temps for an area are low to mid 20's, and upper level means are well below freezing for the same area, shouldn't the mean snow for that area correspond with those indicators?

Otherwise, it would appear that the cutting members are skewing the snow totals.  The placement of the low on the GEFS is also a mean, and the resulting metrics should be based on that. 

I think the problem with most of the ensemble models is that they take a mean of the snow totals of all members (including the cutters showing little or zero) and plug that into their maps, when in reality the maps should be reflecting the snow resulting from the placing of the mean slp and resultant lower and upper level means.  The storm vista maps appear to take the latter approach with the GEFS.

 

There's no way they have the ability to code that.

I write/say this all the time, but they should mention exactly what they used to produce their plots. Snowfall typically doesn't come straight from a model so post-processing is usually necessary. Post-processing is solely reliant on the vendor/user so don't be surprised if you see discrepancies from vendor-to-vendor. 

Some questions that come to mind:

1) Which members did they use to calculate mean snowfall? 

2) Which fields did they use to post-process snowfall using 10:1 ratios?

Without looking at the GEFS' output, there are a couple ways they can get snowfall using the 10:1 algorithm ->

2.a) if ptype == snow, SF = LWE*10.

2.b) Use SWE output from an ensemble member then simply multiply it by 10.

I'm sure there are other options too.

2.a wouldn't make sense to me. Determining ptype for global ensemble members would be a waste of resources. My guess is that they use different ensemble members (or SWE fields) to calculate their mean. Unfortunately, you'll never know unless you see their code.
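The two routes above can be sketched like this (field names, numbers, and the hourly structure are all invented for illustration; no claim that any vendor actually runs this):

```python
def snow_10to1_from_ptype(ptype, lwe_in):
    # Route 2.a: if ptype == snow, SF = LWE * 10; other ptypes add nothing.
    return lwe_in * 10.0 if ptype == "snow" else 0.0

def snow_10to1_from_swe(swe_in):
    # Route 2.b: take the member's SWE field and multiply it by 10.
    return swe_in * 10.0

# Hourly series for one member in a mixed event (made-up numbers):
# (diagnosed ptype, liquid equivalent in inches, model SWE in inches)
hours = [("rain", 0.10, 0.02), ("snow", 0.20, 0.20), ("snow", 0.15, 0.15)]

route_a = sum(snow_10to1_from_ptype(pt, lwe) for pt, lwe, _ in hours)  # ~3.5"
route_b = sum(snow_10to1_from_swe(swe) for _, _, swe in hours)         # ~3.7"

# The routes diverge whenever the model's internal SWE partitioning
# disagrees with the ptype diagnosis (the "rain" hour here still carried
# a little SWE), which is one way vendor-to-vendor discrepancies appear.
```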


7 minutes ago, OceanStWx said:

Yeah, that's how I read it too, but it's just not how those maps are generated. They aren't post-processing "if the low was here, it would show this average QPF/snow", it just takes an average of all the members. 

We're getting pretty close to having means of clusters of members though. Pretty soon you should be able to sort out all the cutters and only see what's left.

 

4 minutes ago, MegaMike said:

There's no way they have the ability to code that.

I write/say this all the time, but they should mention exactly what they used to produce their plots. Snowfall typically doesn't come straight from a model so post-processing is usually necessary. Post-processing is solely reliant on the vendor/user so don't be surprised if you see discrepancies from vendor-to-vendor. 

Some questions that come to mind:

1) Which members did they use to calculate mean snowfall. 

2) Which fields did they use to post-process snowfall using 10:1 ratios.

Without looking at the GEFS' output, there are a couple ways they can get snowfall using the 10:1 algorithm ->

2.a) if ptype == snow, SF = LWE*10.

2.b) Use SWE output from an ensemble member then simply multiply it by 10.

I'm sure there are other options too.

2.a wouldn't make sense to me. Determining ptype for global ensemble members would be a waste of resources. My guess is that they use different ensemble members (or SWE fields) to calculate their mean. Unfortunately, you'll never know unless you see their code.

Thanks for the input.  I agree with you.  So basically all these vendor snow maps are somewhat useless without knowing their coding.


5 minutes ago, 78Blizzard said:

Thanks for the input.  I agree with you.  So basically all these vendor snow maps are somewhat useless without knowing their coding.

More or less. Most ensemble snow maps are going to be 10:1, because it's easiest. Some may be Kuchera. Rarely are you going to see anything else besides those. 

Some models have more sophisticated methods, like the HRRR's variable-density snow depth. Another good trick is to look at model or ensemble snow depth, which often gives you a more accurate representation of what will fall than the clown maps.
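For reference, the commonly cited Kuchera formulation converts the warmest temperature anywhere in the column into a snow-liquid ratio. This is stated from memory and worth verifying against the original before relying on it:

```python
def kuchera_ratio(t_max_k):
    """Commonly cited Kuchera snow-liquid ratio, from the maximum
    temperature (Kelvin) in the column. Stated from memory; the exact
    original formulation should be double-checked."""
    if t_max_k > 271.16:
        # Warm column: ratio drops quickly below 12:1 toward paste.
        return 12.0 + 2.0 * (271.16 - t_max_k)
    # Cold column: ratio climbs above 12:1 -- fluffier snow.
    return 12.0 + (271.16 - t_max_k)

kuchera_ratio(265.0)  # ~18:1, cold fluff
kuchera_ratio(272.0)  # ~10:1, heavy and wet
```

The appeal over flat 10:1 is obvious: the same inch of liquid verifies very differently in a 265 K column than in a 272 K one.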


2 hours ago, Baroclinic Zone said:

if, big IF

this ends up with a huge trough setup and low originates from south, beware the latent heat release and a coastal hugger.  Could mean a mess along coast.  Just something to ponder.  Not a forecast.

I hate Miller As...so much more that can go wrong...that is one.
