usedtobe

Article on how winter weather forecasting has changed

The article below was rejected by the Capital Weather Gang as being too technical, so I thought I'd post it here for anyone interested in how forecasting has changed. I'm a little bummed out about the rejection. I hope some here find it interesting.

Forecasting winter weather has changed significantly since the 1970s, driven by improved weather models and by data coverage that has grown exponentially thanks to satellites. Increased understanding of winter storms has led to conceptual models that have helped forecasters visualize the structure and dynamics of storms. Finally, the advent of personal computers and workstations has revolutionized how forecasters utilize those same weather models. The following article discusses some of the challenges forecasters faced when attempting to predict winter weather, from the perspective of a forecaster who worked at NMC (now NCEP's Weather Prediction Center, http://www.wpc.ncep.noaa.gov/#page=ovw), and attempts to outline a few of the techniques and methods that were utilized in the 1970s and '80s to make forecasts.

The weather models back then had much coarser vertical resolution than they have today. The two primary forecast models, the LFM and the PE, had only seven layers, and the LFM's grid spacing varied between 127 and 190.5 km. To resolve a wave you need at least 4 grid points, so the smallest wavelength the LFM could resolve was over 500 km. The models were also run only twice a day, as opposed to four times daily today. The NAM/WRF model has at least 60 layers, and a high-resolution version of it runs at 4 km grid spacing, allowing it to sometimes forecast those pesky smaller-scale bands of heavy precipitation that the old LFM and PE models had no hope of capturing. The lack of vertical and horizontal resolution limited how small a feature the models could predict and also hampered their handling of low-level cold air. With only seven layers in the vertical, there is no way to resolve shallow warm layers or important upper-level features like jet streaks.
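A quick back-of-envelope sketch of that arithmetic (the grid spacings are the LFM values above; the four-grid-point rule is the common rule of thumb, and effective model resolution is often quoted as even coarser):

```python
# Back-of-envelope: smallest wavelength a grid-point model can represent,
# using the common rule of thumb that a wave needs about 4 grid points.
# The 127-190.5 km grid spacings are the LFM values quoted above.

KM_PER_NMI = 1.852  # kilometers per nautical mile

def min_resolvable_wavelength_km(grid_spacing_km, points_per_wave=4):
    """Shortest wavelength representable at the given grid spacing."""
    return points_per_wave * grid_spacing_km

for dx_km in (127.0, 190.5):
    wl_km = min_resolvable_wavelength_km(dx_km)
    print(f"dx = {dx_km:6.1f} km -> min wavelength ~ {wl_km:6.1f} km "
          f"(~{wl_km / KM_PER_NMI:4.0f} nmi)")

# Anything smaller than ~500 km -- a mesoscale snow band, for example --
# simply cannot exist on the LFM grid, let alone be forecast.
```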

Back then, PCs were not available. Forecast fields were received either by facsimile machine or on a plotter. Four basic forecast fields were received: a mean sea level pressure and thickness plot; an 850 mb (around 5,000 ft.) height and temperature map; a 700 mb height, relative humidity and vertical velocity map; and a 500 mb height and vorticity (spin) plot. Once we received the model data, forecasting would commence. Our snow forecasts were made on acetates with grease pencils, traced onto a paper copy, and then transmitted by fax to other locations. Today, winter weather forecasts by meteorologists at NCEP's Weather Prediction Center are drawn on workstations, and probabilistic forecasts of various snowfall amounts are made. In the 1970s, paper maps cluttered the walls; today the office is virtually paperless.

Most forecasters would start the forecast process with a hand analysis of the 500 and 850 mb heights and compare them with the initial analyses from the models, to try to glean whether the model initial field was handling the various waves in the atmosphere correctly. For example, if a trough looked quite a bit sharper on the hand analysis than on the model analysis, the system might end up stronger than forecast, especially if there was more of a 500 mb ridge behind that wave. Often these differences proved fruitless in trying to modify a model forecast, but occasionally they would lead a forecaster to correctly modify the guidance. A forecaster would also note where the strongest pressure falls were located relative to a low pressure system, to get a feel for whether the model was handling its movement correctly. Pressure falls often pointed to the short-term direction the storm would move.

Forecasts were often grounded not just in the models but also in pattern recognition and rules of thumb. For example, Rosenbloom's rule held that model forecasts of rapidly deepening cyclones were almost always too far to the east. Therefore a forecaster had to adjust not only the forecast track but also realign the model's rain-snow line and axis of heaviest snow a bit farther to the west. There were rules for forecasting the axis of heaviest snow based on the track of the surface low (around 150 nautical miles to the left of the track), the 850 mb low (90 nm to the left of the track) and the 500 mb vorticity center (around 150 nm to the left of the track). The heaviest snow typically was forecast during the period when the low was deepening the most, and then you damped down snowfall amounts as the surface low started to stack vertically with the upper center of circulation. Of course, if the model had the storm track wrong, your heavy snow forecast might go down in flames.
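A rough sketch of how those track-offset rules translate into an axis of heaviest snow; the offset distances are the rules of thumb above, while the track points and the flat-earth geometry are purely illustrative assumptions:

```python
import math

NMI_PER_DEG_LAT = 60.0  # one degree of latitude is ~60 nautical miles

def heavy_snow_axis(track, offset_nm):
    """Shift a storm track left (relative to motion) by offset_nm.

    track: list of (lat, lon) forecast positions for the low.
    Uses a flat-earth approximation -- fine for a rule-of-thumb sketch.
    """
    axis = []
    for i, (lat, lon) in enumerate(track):
        # Local direction of motion, from the segment nearest this point.
        j = min(i + 1, len(track) - 1)
        k = j - 1
        dlat = track[j][0] - track[k][0]
        dlon = (track[j][1] - track[k][1]) * math.cos(math.radians(lat))
        heading = math.atan2(dlat, dlon)   # angle of motion
        left = heading + math.pi / 2.0     # rotate 90 degrees to the left
        off_deg = offset_nm / NMI_PER_DEG_LAT
        axis.append((lat + off_deg * math.sin(left),
                     lon + off_deg * math.cos(left) / math.cos(math.radians(lat))))
    return axis

# Hypothetical coastal low track, Cape Hatteras toward the Benchmark:
track = [(35.0, -75.0), (37.0, -73.5), (39.0, -71.5), (41.0, -69.5)]
print(heavy_snow_axis(track, offset_nm=150.0))  # surface low rule
print(heavy_snow_axis(track, offset_nm=90.0))   # 850 mb low rule
```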

One of the trickiest winter weather problems was, and still is, where to forecast the rain-snow line. In the 1970s and early 80s, there was no way to look at the vertical structure of the atmosphere in enough detail to parse out whether there was a warm layer located somewhere above the ground. Forecasters relied on model forecasts of the thickness between two pressure levels: 1000 mb (a level near the ground) and 500 mb, which is located at around 18,000 feet. That thickness is not fixed; it varies with the average temperature of the layer between the two pressure levels, shrinking when the layer is cold and expanding when it is warm. For the DC area, the critical thickness value was around 540 decameters: below that value, snow was deemed more likely than rain.

Unfortunately, in the middle of winter when low-level temperatures are really cold, it can snow at a 546 thickness, and it can rain with a thickness below 534 when the near-ground temperatures are warm. To further parse the probabilities of different precipitation types, forecasters started looking at other thickness layers (1000-850 mb and 850-700 mb), which worked better but was still much less accurate than using model soundings as we do today. Model output statistics (MOS) (https://www.weather.gov/mdl/mos_home) also offered guidance about the most likely precipitation type and was quite good, unless the model was having serious problems with a cold air damming case or an arctic air mass. Then relying on MOS was a prescription for a big wintry bust.
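A minimal sketch of that critical-thickness reasoning, using the 540 dam DC-area threshold from above; the surface-temperature overrides are my own illustrative stand-ins for the mid-winter caveats just described, not fixed rules:

```python
def ptype_from_thickness(thickness_dam, sfc_temp_c):
    """Crude rain/snow call from 1000-500 mb thickness (decameters)."""
    # Boundary-layer temperature can trump the thickness value in either
    # direction; the 0 C and +3 C cutoffs are illustrative assumptions.
    if thickness_dam >= 540 and sfc_temp_c <= 0.0:
        return "snow (cold surface air overrides high thickness)"
    if thickness_dam < 540 and sfc_temp_c >= 3.0:
        return "rain (warm surface air overrides low thickness)"
    # The basic DC-area rule: below ~540 dam, snow is more likely.
    return "snow" if thickness_dam < 540 else "rain"

print(ptype_from_thickness(538, -1.0))  # classic snow signature
print(ptype_from_thickness(543, 5.0))   # classic rain signature
print(ptype_from_thickness(546, -4.0))  # cold mid-winter air mass: snow anyway
print(ptype_from_thickness(533, 4.0))   # warm low levels: rain despite 533 dam
```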

The lack of enough vertical resolution also played havoc when trying to predict how far south an arctic air mass might push. Both the LFM and NGM (a model introduced in 1987) surface and thickness forecasts consistently busted by holding arctic fronts too far north. The 36-hour LFM forecast below is a case in point. Note how the model erroneously has a low in central Illinois and hints that the front may have pressed southward only to the Oklahoma-Texas border, while the analysis for the same time shows the front (annotated blue line) located across Kentucky and pressed well south of the Oklahoma-Texas border. Forecasters learned that the leading edge of where the lifted index gradient slackened was usually a good approximation of where the front would end up. In Oklahoma, where the model was predicting rain, the end result might be freezing rain, sleet or even snow.

[Figure: Hist_fig_1.png]

The LFM's and PE's rather crude horizontal and vertical resolution also led to problems trying to resolve smaller-scale features. Cold air damming and coastal fronts were notoriously poorly forecast, as the models often predicted high pressure systems to move off the coast prematurely. Note on the figure below that the 36-hour LFM forecast lowered the pressures too much over Tennessee and pushed the surface high off the coast quicker than was observed. Instead of being off the coast, the high pressure system ended up still located over New England, an ideal location for cold air damming. Just as important is the coastal trough that was setting up. By 7 AM January 8, a low had formed on that trough just off the North Carolina coast. Instead of the LFM forecast that suggested the snow might change to rain, the damming helped produce a 6-inch-plus snowstorm. I can remember using one of those rules of thumb to make a similar correction to an LFM forecast: the model's thickness and 850 mb temperature forecasts suggested a rainstorm, I forecast freezing rain, and the event ended up as a snowstorm. On those bad damming forecasts, parsing out precipitation type was really tough.

[Figure: hist_fig_2.png]

Forecasts beyond 48 hours relied heavily on the PE model, and forecasts extended out to 5 days. Correct model predictions of major storms beyond 72 hours were rare. One exception was when the PE hit the February 7, 1978 Boston blizzard on an 84-hour forecast. When the PE forecast development of major snows along the east coast beyond the first 3 days, more often than not the low, if it developed at all, would end up considerably west of the forecast. That led one forecaster to proclaim that "all lows go to Chicago." Today, day 3 forecasts are on average better than day 1 forecasts from the 70s and 80s. Forecasts now routinely extend out to day 7, with today's day 7 forecast being better on average than the day 5 forecast from the late 80s.

The lack of resolution also played a role in the LFM's under-prediction of snow during the Presidents' Day storm of 1979. The 36-hour LFM and PE forecasts drastically underplayed the strength of the 500 mb system approaching the coast and were 22 mb too high with the storm's central pressure off the North Carolina coast. Both the new model on the block, the NGM, and the LFM failed to forecast the 1987 Veterans Day storm, which was a rather small-scale event compared to most of our big snowstorms. The NGM was an improvement over the LFM in most cases, but it was horrid at forecasting the southward movement of arctic fronts, often even worse than the LFM, and it had its own biases of tracking lows too far west and keeping them too strong over land.

By the 1990s, workstations and PCs became commonplace and started revolutionizing how we looked at model data. Forecasters could now look for important upper-level features that often play a role in storm development and help produce banding features and the various kinds of instability that can lead to heavy snow. The famous "no surprise" snowstorm that was a surprise in January 2000 helped pave the way for developing ensemble products. Post-storm reruns of a model with simple tweaks to its initial analysis produced some members that predicted the storm to lift northward and spread snow into Washington, something the operational models failed to do leading up to the storm.

Today, ensemble model runs with slightly different initial conditions or tweaks to their physics are routinely available to forecasters. Their guidance offers a more probabilistic approach to forecasting storms. Last winter they allowed forecasters to start crowing about the potential of a major, possibly crippling snowstorm almost 5 days in advance of the storm. By early on January 19th, the European ensemble forecast system gave portions of the DC area a greater than 90% probability of having 12" of snow on the ground by 7 AM (see below). The U.S. ensembles (GEFS) were similarly bullish, as they had been days in advance of the February 5-6 blizzard of 2010.

[Figure: hist_fig_3.png]
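For the curious, probabilities like those above boil down to simple counting across ensemble members. A minimal sketch (the member snowfall amounts are made up; real GEFS/European ensembles carry 20-50 members on a full grid):

```python
def prob_exceeding(member_amounts, threshold_in=12.0):
    """Fraction of ensemble members at or above a snowfall threshold."""
    hits = sum(1 for amt in member_amounts if amt >= threshold_in)
    return hits / len(member_amounts)

# Hypothetical 20-member set of snowfall forecasts (inches) for one point:
members = [14.2, 18.0, 11.5, 22.3, 16.8, 13.1, 19.4, 15.0, 9.8, 17.6,
           20.1, 14.9, 12.2, 16.3, 18.8, 13.7, 21.0, 15.5, 12.8, 17.1]

print(f"P(snow >= 12 in) = {prob_exceeding(members):.0%}")  # 90% for this set
```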

Forecasters and weather enthusiasts can now, with the click of a mouse, pull up a model forecast sounding from the GFS or NAM anywhere on a map and see whether a warm layer is present or whether there is an elevated unstable layer that might produce convection. Not only that, but sophisticated precipitation-type products are available that automatically assess the thermal structure at each grid point of the model and map the resulting precipitation-type guidance, so anyone can look at them to see where the model thinks the rain-snow line will be located. These precipitation-type products, taken together with forecasts from the 3 or 4 best operational models and the GEFS and European ensemble output, allow forecasters to lay out the possible scenarios a potential storm might follow and weight their probabilities. The models can often provide a heads-up days in advance of a snowstorm.
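A minimal sketch of the warm-layer check those sounding and precipitation-type products automate, using a made-up sounding (real input would come from GFS or NAM output):

```python
def classify_ptype(sounding):
    """sounding: list of (pressure_mb, temp_c), ordered surface upward."""
    sfc_temp = sounding[0][1]
    warm_layer_aloft = any(temp > 0.0 for _, temp in sounding[1:])
    if sfc_temp > 0.0:
        return "rain"
    if not warm_layer_aloft:
        return "snow"
    # Sub-freezing surface under a warm nose: freezing rain vs. sleet
    # depends on the depth and strength of the layers; this crude check
    # only flags the setup.
    return "freezing rain or sleet (warm layer over sub-freezing surface)"

# Classic cold-air-damming profile: shallow cold wedge under a warm nose.
damming_sounding = [(1000, -2.0), (925, -1.0), (850, 2.5), (700, -4.0), (500, -18.0)]
print(classify_ptype(damming_sounding))
```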

The current models can now usually resolve the development of the coastal trough and forecast cold air damming, though they still sometimes have problems holding onto the cold air at the surface long enough. Winter weather forecasting has come a long way and is much more grounded in science than it was back in the 70s and 80s. Forecasts have improved significantly; despite all the improvements, however, busts still happen.


The WaPo audience isn't limited to weather enthusiasts and professionals, so of course the tone of the article has to be less technical. I think the above could easily be adapted to a mass audience while keeping the basic premise, that winter weather forecasting has changed greatly, that we have much better tools and do a heck of a lot better than 40 years ago, and that we're constantly developing new ways to do even better in the future and to incorporate uncertainty and mitigate surprises.

1 hour ago, billgwx said:


This is what we wanted to do and still may do. I think there was some confusion and miscommunication. We'll try to clear this up with Wes tomorrow.


Here's a list of things no weather weenie under fifty ever heard ...

... wait a minute, I have to wash the Alden fax crap off my hands

... can you plot me up a quick 14z map? Just the local stations. 

... now they have a 96 hour map ... why bother? 

... I just saw the radar, let me read it to you

... this looks just like your prog from yesterday


Gotta find a way to publish that in the Post. Brings back great memories from checking height falls to shaving 20 degrees off MOS. Write up is amazing!

Water vapor is a valuable tool for checking PVA, jet streaks, and moisture trends. New GOES satellite will offer priceless data in that and other regards. Yes, we still do water vapor and looks like a lot more coming with GOES-R.

Cook's Method, related to potential vorticity, was/is another favorite of mine for snow forecasting. Well, I still use it! Look at the high and low temps at 200 mb along the general path of the forecast snow. One is looking for WAA even that high up. For example, stratospheric folding puts warmer air at 200 mb over Texas. The top of the downstream ridge, say over Pennsylvania, should have colder air at 200 mb because the tropopause is higher. The bigger the difference, the bigger the Mid-South to Ohio Valley snow.
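A bare-bones sketch of that 200 mb comparison (just the core warm-upstream vs. cold-downstream signal, not the full method; the temperatures are made-up illustrations):

```python
def cooks_signal_c(t200_upstream_c, t200_downstream_c):
    """Warm-upstream minus cold-downstream 200 mb temperature difference.

    The bigger the difference, the bigger the implied snow threat.
    """
    return t200_upstream_c - t200_downstream_c

# e.g. a stratospheric fold over Texas vs. the ridge top over Pennsylvania
print(cooks_signal_c(-48.0, -58.0))  # 10 C spread: a healthy signal
```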

On 2/2/2017 at 5:42 PM, usedtobe said:


Great article! I miss the wall of DIFAX. We used those in school and at SC Air Quality.

Perhaps they'd publish a "watered down" version?   

 

17 hours ago, nrgjeff said:


Ha, I forgot about this. We used it often at DMX, late 90s. 


I love Cook's Method because it is based on the upper air chart, not a model. Magic is great when the model is accurate. Garcia is a powerful tool, but requires isentropic charts. I love those; however, outside AWIPS it is like pulling teeth to find isentropic charts. Unfortunately I'm not NWS so no AWIPS. Current isentropic charts are at weather.cod.edu but no forecast charts. So, Cook's is still my main reality check.


22 hours ago, nrgjeff said:


Yeah isent progs are hard, if not impossible, to find on the internet. They make a huge difference in weeding out distinct forcings/locations of moist ascent associated with "weaker" or uncertain systems. We use them a lot and I'm not sure why they're not more popular or available.

