Winter model performance discussion


cae

April 7, 2018

Below is the stage IV precipitation analysis (verification data) for the event.  The color scale is the same as used for the model runs.  This captures 12z 04/06 to 12z 04/08. 

2C4AeLO.jpg
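Since Stage IV is the verification field here, a quick objective check of any model's storm-total QPF is its grid-wide bias and mean absolute error against the analysis. A minimal sketch, assuming both fields have already been regridded onto a common grid (the arrays below are made-up placeholders, not real data):

```python
import numpy as np

def qpf_scores(model: np.ndarray, analysis: np.ndarray) -> dict:
    """Mean bias and mean absolute error of model QPF vs. the analysis."""
    diff = model - analysis
    return {"bias": float(diff.mean()), "mae": float(np.abs(diff).mean())}

# Hypothetical 2x2 storm-total precip grids (inches), already co-located.
model_qpf = np.array([[1.0, 0.5], [0.2, 0.0]])
stage4    = np.array([[0.8, 0.4], [0.3, 0.1]])
print(qpf_scores(model_qpf, stage4))
```

In real use the model output would first have to be interpolated onto the Stage IV grid before scoring.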

Below are the 00z and 12z model runs leading up to the event. The Euro is top left, GFS is top right, GGEM is bottom left, and ICON is bottom right.  This gif starts 48 hours before the last run; before that there was another event, and weather.us doesn't provide a way to separate the two events' precip totals.

ZGhzDsY.gif

I'm still looking for a consistent way to compare model output for snowfall.  Below are the available Kuchera ratio plots from pivotalweather for the GFS (top left), 12k NAM (top right), GGEM (middle right), RGEM (bottom left), and 3k NAM (bottom right).  The 12z and 00z runs are shown. The middle left image is the actual snowfall total from the national snowfall analysis using the same color scale.

tKwFUdR.gif
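For reference, the Kuchera ratio in these plots derives the snow-to-liquid ratio from the warmest temperature anywhere in the column. Below is a sketch of the commonly cited formulation; this is an assumption on my part, and pivotalweather's exact implementation may differ:

```python
def kuchera_ratio(tmax_k: float) -> float:
    """Snow-to-liquid ratio from the column-maximum temperature (Kelvin)."""
    if tmax_k > 271.16:
        return 12.0
    return 12.0 + 2.0 * (271.16 - tmax_k)

def snowfall_inches(qpf_in: float, tmax_k: float) -> float:
    """Convert liquid-equivalent precip (inches) to snowfall (inches)."""
    return qpf_in * kuchera_ratio(tmax_k)
```

So any column topping out above roughly -2 C (271.16 K) gets a flat 12:1, and the ratio climbs by 2 for every degree the column max sits below that.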

Here are the tropicaltidbits maps for the ICON (top right, model-calculated snow ratios), 3k NAM (bottom left, 10:1 ratios), and HRDPS (bottom right, 10:1 ratios).  The top left image is the actual snowfall total from the national snowfall analysis using the same color scale.  The 12z and 00z runs are shown.  I added the 3k NAM to this image as well because pivotalweather is missing some images from one of its runs.

mT61hai.gif

Finally, here are the Euro snow depth maps from weather.us.  The image on the right is the actual snowfall total from the national snowfall analysis using the same color scale.

dSIrt3s.gif


@Fozz posted a link to this site, which has snowfall total files I can use to create maps for comparison with model output (above).  I was hoping for a better system to try it out on, as we barely got any snow; LWX didn't even put up a map of spotter reports.  There have been storms this winter that gave our region more snow than this one that I didn't bother adding to this thread, but I thought this one was worth a write-up because the models all showed a much bigger event until about 72 hours before snow started to fall.
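For anyone wanting to try the same comparison, here is one way the point reports in those files could be binned onto a grid for plotting against model output. The row layout (lat, lon, snowfall) is an assumption; the actual files may be formatted differently:

```python
import numpy as np

def grid_reports(rows, lat_edges, lon_edges):
    """Average point snowfall reports into lat/lon bins; NaN where empty.

    rows is an iterable of (lat, lon, snowfall) tuples; lat_edges and
    lon_edges are increasing 1-D arrays of bin boundaries.
    """
    total = np.zeros((len(lat_edges) - 1, len(lon_edges) - 1))
    count = np.zeros_like(total)
    for lat, lon, snow in rows:
        i = np.searchsorted(lat_edges, lat, side="right") - 1
        j = np.searchsorted(lon_edges, lon, side="right") - 1
        if 0 <= i < total.shape[0] and 0 <= j < total.shape[1]:
            total[i, j] += snow
            count[i, j] += 1
    return np.where(count > 0, total / np.maximum(count, 1), np.nan)
```

Each cell holds the mean of the reports that fall inside it, and cells with no reports stay NaN so they show up as missing on a map.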

A few comments on model performance:

1.  The Euro, like the other models, showed a significant event fairly late in the game compared to what happened.  But if you look at the total precip panels above, you'll see that it was the first model to catch on to the coming bust.  The ICON caught on around the same time.  The GFS and GGEM had precip coming too far north up until their last runs before the event started.  Arguably, the Euro was the best global model for this event, but none of them were very good.

2.  The ICON caught on late too, but it had one anomalous 18z run that might have been a warning sign.  Around a time when the models appeared to be showing a north trend, the ICON had a run that brought the snow down to the VA / NC border before coming back north.

76nWrRs.gif

This was a good indication that the north trend was unlikely to continue, because the ICON probably wouldn't be off by that much 96 hours out.  However, I didn't expect the system to come back down so far south.

3.  Something weird is going on with the ICON maps on TT.  I'm not sure if it's a problem with the snow ratio algorithm or something else, but they look blotchy.  The precip maps on weather.us look fine. 

4.  This is almost completely unrelated, but I think I might have figured out what model sets the boundary conditions of the Swiss model on weather.us.  At first I thought it might be the Euro, then the ICON, but after looking through many maps (I'll spare you the images) I think it's actually the GFS.  The Swiss model often appears to be a high-resolution (and colder) version of the corresponding GFS run.
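The eyeball comparison can be made a bit more objective: pattern-correlate a Swiss-model field against the same field from each candidate driving model for the same cycle, and the candidate that consistently correlates highest across cycles is the likely boundary-condition source. A minimal sketch with made-up placeholder arrays standing in for real fields:

```python
import numpy as np

def pattern_corr(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson pattern correlation between two fields on the same grid."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

# Hypothetical co-located fields (e.g., 2 m temperature) from one cycle.
swiss = np.array([[1.0, 2.0], [3.0, 4.0]])
gfs   = np.array([[1.1, 2.1], [3.0, 4.2]])
euro  = np.array([[4.0, 3.0], [2.0, 1.0]])
print(pattern_corr(swiss, gfs), pattern_corr(swiss, euro))
```

A single field from a single cycle wouldn't settle it; the signal only becomes convincing over many runs and variables.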


I’d like to see a peer-reviewed study that divides pro mets into two groups: the first group gets to look at current conditions plus models, the second group gets to look at current conditions only (no models). They are both tasked with giving a forecast at different intervals (one day, two days, three days, etc.)

I bet the results would show that beyond one or two days, both groups' forecasting accuracy was roughly the same.

 

