cae Posted January 13, 2019 (Author)

Here's the LWX preliminary snowfall map for this event. I'll go through the major models below, and then I'll write up some extra thoughts at the end. The gifs below show the runs from 12z 1/09 to 18z 1/12. Some snow had fallen in the western areas by 18z 1/12, but I chose 18z instead of 12z so we could get more model runs in. The precip totals are from the start of the run to 12z 1/14. For the RGEM and HRDPS, I include runs that end as early as 00z 1/14 because they only go out to about 48 hours.
cae Posted January 13, 2019 (Author)

Euro

Here's the precip analysis. The Euro predictions are below that, using the same color scale. Again, the Euro was too dry around us, only catching on near the end. For the first part of the storm (up to 12z on Saturday), it was arguably the worst model. Below is its 24-hour total precip forecast at 12z on Friday. This is what we actually got over those 24 hours. Another global did much better with the first part of the storm. We'll get to that below.
cae Posted January 13, 2019 (Author)

UKMET

The Ukie was also too dry near us, even at the end.
cae Posted January 13, 2019 (Author)

GFS

Like the Euro, the GFS also kept the precip too far to our south but started to catch on at the end. It did better than the Euro with the first part of the storm, though.
cae Posted January 13, 2019 (Author)

GGEM

From the model discussion thread: "I tell you what too... CMC has consistently pointed at higher totals (little ebb and flow) and better coastal enhancement. The 12z is even better than 0z lol. If it scores a coup on the gfs/euro I'll build a mini shrine to it on my desk."

In some ways this was similar to the last storm, with the GGEM generally showing higher totals around Washington than most of the other globals. It ended up being too dry in the end, but for a while it looked like a wet outlier.
cae Posted January 13, 2019 (Author)

ICON

The ICON bounced around a lot but ended up looking pretty good in the end. Arguably the best of the globals at game time.
cae Posted January 13, 2019 (Author)

FV3

The best global model overall might have been the FV3. Like the ICON, it looked pretty good at the end, but unlike the ICON it had been fairly consistent. I mentioned above that the Euro was arguably the worst model for the first 24 hours of the storm. The FV3 was arguably the best. Below are the actual precip totals from the first 24 hours, followed by the FV3's final 24-hour prediction. Looks good to me.
cae Posted January 13, 2019 (Author)

NAM

The long-range NAM was way off, and in the short range it was overdone, but overall it was a signal that some of the globals were too dry.
cae Posted January 13, 2019 (Author)

3k NAM

The 3k NAM was way off in its early runs, but after it caught on (about 54 hours before the end of the storm) it was arguably the best of the mesoscale models.
cae Posted January 13, 2019 (Author)

RGEM

The RGEM did better than most of the globals, but like the GGEM it missed the higher precip totals in central MD.
cae Posted January 13, 2019 (Author)

HRDPS

The HRDPS was similar to the RGEM on this one.
cae Posted January 13, 2019 (Author)

REPS (RGEM ensemble)

I added the REPS to this one because I think this was a good example of why a short-range ensemble is useful. At 00z on 1/11, less than 48 hours before snow started falling, the Euro, Ukie, and GFS had DC getting less than 0.3" of precipitation. The wettest global was the GGEM, which showed DC in the 0.4"-0.6" contour. The RGEM and HRDPS were both out of range, and the NAMs effectively were. This is what the 3k NAM was showing at the time, with precip having moved offshore at the end of the run. But the RGEM ensemble mean put DC in the 0.6"-0.8" contour, which was even wetter than the GGEM. In retrospect, it was a good sign that this system still had a lot of upside potential. At the time it was an outlier, but it arguably ended up busting low. The images below use the weather.us color scale.
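To make the "upside" reading concrete, here's a minimal sketch of what an ensemble mean and spread summarize across a plume of member QPF forecasts. The member values are made up for illustration; they are not actual 00z 1/11 REPS output.

```python
import numpy as np

# Hypothetical plume: total QPF (inches) at DC from 20 ensemble members.
# Illustrative values only, not real REPS data.
members = np.array([
    0.25, 0.30, 0.45, 0.60, 0.55, 0.80, 0.35, 0.70, 0.90, 0.50,
    0.65, 0.40, 0.75, 0.85, 0.55, 0.60, 1.00, 0.45, 0.70, 0.65,
])

mean_qpf = members.mean()             # what the ensemble-mean map contours (0.60" here)
spread = members.std()                # member-to-member disagreement
p_over_half = (members > 0.5).mean()  # fraction of members above 0.5"

print(f'mean {mean_qpf:.2f}", spread {spread:.2f}", P(>0.5") {p_over_half:.0%}')
```

A mean that sits wetter than every deterministic global, with a sizable fraction of members wetter still, is exactly the upside signal described above: placement errors wash out in the mean, while the member distribution shows how much room there is on the high side.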
cae Posted January 14, 2019 (Author)

Final thoughts

The FV3 might have been the most impressive global model. It and the GGEM were consistently the wettest models, but the FV3 did better than the GGEM at game time. The Euro and ICON eventually caught on, but only very late. Of the mesos, the 3k NAM was probably the best once it got in range. Below is the actual snowfall, compared with the 3k NAM runs from 12z Saturday. Kuchera was similar.

However, even if you'd looked at the above maps, you might have been surprised by the 1 foot+ numbers that were put up in central MD. It turns out that area outperformed not just on precip, but on ratios. If you divide the snow analysis by the precip analysis, you can get maps of the ratios (see the sketch after this post). Unfortunately, I can only get these for 24 hours at a time. Here's the first part of the storm (12z Saturday to 12z Sunday). And here's the second (12z Sunday to 12z Monday). According to those maps, parts of the jackpot zone in central MD saw greater than 15:1 ratios on Sunday. Kuchera was generally more like 11:1. This is one of the limitations of the Kuchera method. I believe it only looks at the maximum column temperature, and it doesn't consider factors like lift in the dendritic growth zone.
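For the curious, here's a minimal sketch of both the ratio-map division and the commonly cited form of the Kuchera snow-to-liquid ratio. The grids are hypothetical numpy arrays, and the formula shown is the widely circulated version (it depends only on the warmest temperature in the column), which may differ in detail from any particular site's implementation.

```python
import numpy as np

def ratio_map(snow_analysis, precip_analysis, min_qpf=0.05):
    """Snowfall analysis divided by liquid-equivalent analysis, cell by cell.
    Cells with very little QPF are masked, since tiny denominators produce
    meaningless ratios."""
    qpf = np.where(precip_analysis < min_qpf, np.nan, precip_analysis)
    return snow_analysis / qpf

def kuchera_slr(max_column_temp_k):
    """Commonly cited Kuchera snow-to-liquid ratio. Note it uses only the
    maximum temperature in the column -- no DGZ lift, moisture depth, etc.,
    which is the limitation described above."""
    t = np.asarray(max_column_temp_k, dtype=float)
    slr = np.where(t > 271.16,
                   12.0 + 2.0 * (271.16 - t),  # warm column: ratio drops quickly
                   12.0 + (271.16 - t))        # cold column: ratio rises slowly
    return np.maximum(slr, 0.0)

# Hypothetical 2x2 grids: 0.9" QPF under 14" of snow implies ~15.6:1.
snow = np.array([[14.0, 9.0], [6.0, 0.2]])
qpf = np.array([[0.9, 0.8], [0.6, 0.01]])
print(ratio_map(snow, qpf))   # last cell masked as NaN
print(kuchera_slr(267.16))    # column topping out near -6 C: 16.0
```

With that form, ratios above 15:1 require a column max near -6 C or colder; and because the formula ignores where the lift is, it can undersell events where the lift is concentrated in the dendritic growth zone even though the column is only modestly cold.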
gopper Posted January 15, 2019

cae, excellent post analysis. You obviously put a good deal of time into collecting all of the map comparisons! Always interesting to see which models are picking up on certain aspects of a storm. Thank you!
cae Posted January 16, 2019 (Author)

Thanks! The new format takes a bit more time than the old one did, but I think it works better. I'm working on streamlining the process. Sometimes gathering the maps is a good way to make use of the time while waiting for the radar to fill in.
MegaMike Posted February 21, 2020

I stumbled upon this thread by chance while looking for a precipitation analysis method to evaluate 44 winter weather events. The idea was that by running MODE analysis comparing Stage IV precipitation against modeled hourly precipitation, I could determine how accurately several modeling systems (ICLAMS/WRF) capture banded precipitation. Unfortunately, the developers strongly recommend avoiding that methodology, because liquid water equivalent observations are poorly ingested under snowy conditions: the rain gauges struggle to measure liquid water equivalent when snow is falling. I'm now considering the RTMA, URMA, or possibly a satellite-derived product instead.

"Each River Forecast Center (RFC) has the ability to manually quality control the Multisensor Precipitation Estimates (MPE) precipitation data in its region of responsibility before it is sent to be included in the Stage IV mosaic. The precipitation values, however, are not intentionally modified downwards during snow events. Rather, due to inaccurate measuring of liquid equivalents at many gauge locations (e.g., a lack of the equipment to melt and measure the frozen precip), zero or very low values are reported at these locations. These "bad" gauge values then go into the MPE algorithm, resulting in liquid precip estimates that are too low during winter storms. There are also problems with zero or too low precipitation values at many RFC gauge locations even outside of heavy snowfall events."

"There are problems with the RFC precip data in the eastern U.S. during heavy snow events. While ASOS stations have the equipment to melt the snow and derive the liquid equivalent precip, the RFC stations in the East do not. So when there are big snowfall events such as the January 2016 blizzard, the snow accumulations get recorded, but the corresponding liquid equivalents often come in as zero or near zero amounts, which are incorrect."

If you're curious (Model Evaluation Tools user's guide, including MODE): https://dtcenter.org/sites/default/files/community-code/met/docs/user-guide/MET_Users_Guide_v8.1.2.pdf
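To make the failure mode in those quotes concrete, here's a minimal sketch of how implausibly low gauge reports could be flagged during a snow event. The station records, thresholds, and helper names are all hypothetical; this is not how MPE or MET actually performs quality control.

```python
from dataclasses import dataclass

@dataclass
class GaugeReport:
    station_id: str
    lwe_in: float        # reported liquid water equivalent (inches)
    snowfall_in: float   # co-located snowfall observation (inches)
    melts_frozen: bool   # gauge has equipment to melt and measure snow

def flag_suspect_reports(reports, max_plausible_slr=40.0, min_snow=1.0):
    """Flag LWE reports that are implausibly low given observed snowfall.
    A gauge reporting 0.02" of liquid under 8" of snow implies a 400:1
    snow-to-liquid ratio, which is undercatch, not weather."""
    suspect = []
    for r in reports:
        implied_slr = r.snowfall_in / max(r.lwe_in, 1e-6)
        if (r.snowfall_in >= min_snow
                and implied_slr > max_plausible_slr
                and not r.melts_frozen):
            suspect.append(r.station_id)
    return suspect

# Made-up reports: the ASOS site melts snow (plausible ~10.6:1 ratio);
# the RFC site does not, and its near-zero LWE would drag a gridded
# analysis low if ingested as-is.
reports = [
    GaugeReport("ASOS_1", lwe_in=0.85, snowfall_in=9.0, melts_frozen=True),
    GaugeReport("RFC_7", lwe_in=0.02, snowfall_in=8.0, melts_frozen=False),
]
print(flag_suspect_reports(reports))  # ['RFC_7']
```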