Everything posted by cae

  1. NCEP used to post GEFS and GEPS verification scores, but for some reason this year they no longer post scores for the GEPS (or NAEFS). From what I've seen, the GEPS is generally better than the GEFS in the long range (where its greater spread helps) and worse in the short range (where that same spread hurts). At this range, it should be similar to the GEFS.
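A quick aside on what "spread" means here, since it does double duty in that post: it's just the member-to-member standard deviation at each grid point. A toy sketch with made-up numbers:

```python
import numpy as np

# "Spread" = standard deviation across ensemble members at each grid point.
# More spread at long range helps the ensemble cover the true range of
# outcomes; at short range the same spread makes the forecast less sharp.
rng = np.random.default_rng(1)
members = rng.normal(5520.0, 40.0, size=(21, 73, 144))  # 21 synthetic members on an H5 grid
spread = members.std(axis=0)    # per-gridpoint spread (meters)
print(round(spread.mean(), 1))  # domain-average spread, ~40 m by construction
```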
  2. No problem. I don't know how much time I'll have this season, but I'll try to post them when I take a look at them.
  3. Still some big hits on the GGEM ensemble north of DC. I count about 5/21 that give me more than 0.5" qpf as snow. About 6 shut me out.
  4. There was some discussion earlier about the performance of the ICON, so I thought I'd fill in the gap between the EPS and NAM extrapolation with some data from last year. The best site I've found with ICON verification stats is this one. It comes with the following disclaimer: "These scores are provided by the WMO LC-DNV for testing and demonstration purposes only. They should only be used to give feedback to the WMO LC-DNV on the layout and functionality of these web pages." So the numbers might not be reliable, but they match up well with other sources I've seen. Below are some scores from last winter for the 12z runs of models commonly mentioned here, except the NAVGEM (no scores available). "NCEP" is the GFS, "MetOffice" is the UKMET, and "DWD" is the ICON. All are three-month averages for January-March 2018 over North America and go out to 120 hours. [Charts: H5 anomaly correlation; sea-level pressure anomaly correlation; 850 temps] Of course these are for last winter, and there's a lot of day-to-day variability hidden in these averages. The ICON performed about as well as the (old) GFS, which I think is consistent with what we saw last winter in a number of events.
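For anyone curious how an anomaly correlation score like the H5 numbers above is computed, here's a minimal sketch on a synthetic grid. (Operational centers also apply latitude weighting and may remove the mean anomaly; this skips both.)

```python
import numpy as np

def anomaly_correlation(forecast, analysis, climatology):
    # Correlate forecast and verifying analysis after removing climatology
    # from both. 1.0 is a perfect pattern match; ~0.6 is often cited as
    # the rough limit of useful skill.
    f = forecast - climatology  # forecast anomaly
    a = analysis - climatology  # analysis (observed) anomaly
    return np.sum(f * a) / np.sqrt(np.sum(f ** 2) * np.sum(a ** 2))

# Toy check on a made-up 2.5-degree grid of H5 heights (meters):
rng = np.random.default_rng(0)
clim = np.full((73, 144), 5500.0)              # flat climatology
anal = clim + rng.normal(0, 50, clim.shape)    # synthetic verifying analysis
print(anomaly_correlation(anal, anal, clim))   # perfect forecast -> 1.0
noise = clim + rng.normal(0, 50, clim.shape)   # unrelated pattern
print(anomaly_correlation(noise, anal, clim))  # near 0.0
```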
  5. Ukie, ICON, and 00z Euro are all similar. But they're from Europe. They won't be able to properly sample this storm until it hits Portugal.
  6. I agree. Especially from this range. We've seen the models collectively adjust one way or another with 5+ days to go. Hopefully we get some more support in the ensembles.
  7. Yeah, the spread has narrowed, closing in on about where we probably should have expected it to land. Looks like DC might be the northern fringe, or close to it. Up here we're hoping for some systematic model bias.
  8. I like that about the ICON - it doesn't seem to be highly correlated with the other models, which gives it extra value in the ensemble of globals. Last year it did well on several systems, but it seemed to struggle more with coastals.
  9. I just took a look at FV3 bias scores... there still seems to be some room for improvement. The charts below are for temps at 850 hPa and 1000 hPa (near sea level) over North America for the last month. There's a similar bias at H5, but I'm not sure how meaningful that is. The FV3 does well on H5 anomaly correlation, which I think is more important.
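For reference, "bias" in those charts is just the mean error, forecast minus analysis, averaged over the region and period. A minimal sketch with synthetic fields:

```python
import numpy as np

def mean_bias(forecasts, analyses):
    # Mean error (bias): average forecast-minus-analysis over all grid
    # points and cases. For temperature, positive = the model runs warm.
    return float(np.mean(np.asarray(forecasts) - np.asarray(analyses)))

# Made-up month of 850 hPa temperature fields (degrees C):
rng = np.random.default_rng(2)
truth = rng.normal(-5.0, 8.0, size=(30, 73, 144))     # 30 daily analyses
fcst = truth + 0.5 + rng.normal(0, 1.0, truth.shape)  # model with a +0.5 C warm bias
print(round(mean_bias(fcst, truth), 2))               # recovers ~0.5
```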
  10. I haven't seen any evidence that the Canadian does better with northern stream systems, but I've never really looked. There's plenty of evidence that it's not so good with tropical systems, though. There do seem to be some regions that models handle better than others; for example, all of the globals seem to do better in the northern hemisphere than in the southern, and I suspect it's because we have better assimilation data in the northern hemisphere. I usually look at the North American verification scores, as that appears to be the smallest region for which scores are widely available. LWX recently developed an in-house tool that lets them quickly compare recent verification scores in our area. They gave a presentation on it at AMS this year. https://ams.confex.com/ams/29WAF25NWP/videogateway.cgi/id/47907?recordingid=47907&uniqueid=Paper345889&entry_password=580936 Unfortunately I don't think they plan on releasing it to the public.
  11. We do have scores (purple, below). More here: http://www.emc.ncep.noaa.gov/gmb/STATS_vsdb/. It has been consistently doing better than the GFS, but that's no reason to ignore the GFS; I'd just give it less weight. Thanks for posting those. There are also "PNA" (North America) scores, which might be better for our area than the NHX (Northern Hemisphere) scores. Unfortunately the GGEM has been lagging in those scores as well (at least at H5), but not by as much.
  12. It's doing very well. I've been checking in on it for the last few months and it has been consistently better than the GFS.
  13. BWI: 27.6 DCA: 21.8 IAD: 31.3 RIC: 12.7 Tiebreaker SBY: 11.7
  14. Just got back in town. Did I miss anything?
  15. Another 1"+ so far today. Over 13 inches for the month.
  16. Another day, another 1+ inches of rain. Just hit 12 inches for the month. One of my kids hasn't had soccer practice in four weeks.
  17. I just noticed that Baltimore could close out the year with average rainfall and still break the annual precipitation record.
  18. Thanks - I bookmarked that site. I'm not sure how accurate it is, though. It has me at close to 2" in the last 24 hours through noon today, but my rain gauge has me at nearly 4" over the same period, including over 1" from today's late-morning downpour. Today's rain put me over 9" for the month. Good job by the Euro on picking up on the stripe of heavy rainfall. It didn't get the details right, but none of the other globals saw anything like this (though the Ukie was close).
  19. @Fozz posted a link to this site, which has snowfall total files I can use to create maps for comparison with model output (above). I was hoping for a better system to try it out on, as we barely got any snow. LWX didn't even put up a map of spotter reports. There have been storms this winter that gave our region more snow than this one that I didn't bother adding to this thread, but I thought this one was worth a write-up because the models all showed a much bigger event until about 72 hours before snow started to fall. A few comments on model performance:

      1. The Euro, like the other models, showed a significant event fairly late in the game compared to what happened. But if you look at the total precip panels above, you'll see it was the first model to catch on to the coming bust; the ICON caught on around the same time. The GFS and GGEM kept precip too far north up until their last runs before the event started. Arguably the Euro was the best global for this event, but none of them were very good.

      2. The ICON caught on late too, but it had one anomalous 18z run that might have been a warning sign. Around the time the models appeared to be trending north, the ICON had a run that brought the snow down to the VA/NC border before coming back north. That was probably a good indication that the north trend wasn't going to continue, since it's unlikely the ICON would be off by that much 96 hours out. However, I didn't expect the system to come back down so far south.

      3. Something weird is going on with the ICON maps on TT. I'm not sure if it's a problem with the snow ratio algorithm or something else, but they look blotchy. The precip maps on weather.us look fine.

      4. This is almost completely unrelated, but I think I might have figured out which model sets the boundary conditions of the Swiss model on weather.us. At first I thought it might be the Euro, then the ICON, but after looking through many maps (I'll spare you the images) I think it's actually the GFS. The Swiss model often appears to be a high-resolution (and colder) version of the corresponding GFS run.
  20. April 7, 2018. Below is the stage IV precipitation analysis (verification data) for the event, covering 12z 04/06 to 12z 04/08. The color scale is the same as the one used for the model runs.

      Below are the 00z and 12z model runs up to the event: Euro top left, GFS top right, GGEM bottom left, ICON bottom right. This gif starts 48 hours before the last run; before that there was another event, and weather.us doesn't have a way to separate the precip totals.

      I'm still looking for a consistent way to compare model output for snowfall. Below are the available Kuchera-ratio plots from pivotalweather for the GFS (top left), 12k NAM (top right), GGEM (middle right), RGEM (bottom left), and 3k NAM (bottom right), showing the 12z and 00z runs. The middle left image is the actual snowfall total from the national snowfall analysis on the same color scale.

      Here are the tropicaltidbits maps for the ICON (top right, model-calculated snow ratios), 3k NAM (bottom left, 10:1 ratios), and HRDPS (bottom right, 10:1 ratios), again for the 12z and 00z runs. The top left image is the actual snowfall total from the national snowfall analysis on the same color scale. I added the 3k NAM to this image as well because pivotalweather is missing some images from one of its runs.

      Finally, here are the Euro snow depth maps from weather.us. The image on the right is the actual snowfall total from the national snowfall analysis on the same color scale.
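Since Kuchera-ratio plots come up in the posts above, here's a minimal sketch of the commonly cited form of the Kuchera method; the sites linked above may implement it differently. The snow-to-liquid ratio is set by the warmest temperature anywhere in the column, and snowfall is then that ratio times the liquid-equivalent precipitation.

```python
import numpy as np

def kuchera_ratio(column_temps_k):
    # Snow-to-liquid ratio from the max temperature in the column (Kelvin),
    # per the commonly cited Kuchera formula. Columns warmer than 271.16 K
    # drop quickly below 12:1; colder columns climb above 12:1.
    max_t = float(np.max(column_temps_k))
    if max_t > 271.16:
        return max(0.0, 12.0 + 2.0 * (271.16 - max_t))
    return 12.0 + (271.16 - max_t)

# Hypothetical soundings (column temperatures in Kelvin):
print(kuchera_ratio([258.0, 263.0, 268.0]))  # cold column -> ~15:1
print(kuchera_ratio([260.0, 270.0, 272.5]))  # warm layer  -> ~9:1

# snowfall = kuchera_ratio(column) * qpf, applied grid point by grid point
```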
  21. Filler post so we don't have too many graphics on one page.
  22. Filler post so we don't have too many graphics on one page.
  23. Filler post so we don't have too many graphics on one page.
  24. Filler post so we don't have too many graphics on one page.
  25. Filler post so we don't have too many graphics on one page.