
usedtobe

Meteorologist
  • Content count

    8,642
  • Joined

  • Last visited

About usedtobe

Profile Information

  • Gender
    Male
  • Location:
    Dunkirk, Maryland

Recent Profile Visitors

3,835 profile views
  1. January Banter String

    Getting ready to put up new bluebird houses. We've seen a couple of males in the area. Thinking about spring; it feels like it sprung a while ago.
  2. January Med/Long Range Disco Part 2

    Exactly, it was a bowling ball upper low where the dynamics cooled us just enough for snow. The neat thing is that the models did a great job with the storm once we were within a day or two of it. Too bad the local governments didn't believe the forecasts. It was a fun storm to forecast.
  3. I wrote this article for the Capital Weather Gang in 2011. I've attached the link, but for those who don't get the Post, I've reproduced it below. https://www.washingtonpost.com/blogs/capital-weather-gang/post/why-are-snowstorm-forecasts-sometimes-so-wrong-part-one/2011/11/23/gIQA4ZfaoN_blog.html?utm_term=.a23049645d7c

Almost every year, at least one snow forecast ends up busting in our region. Many readers probably remember last year's December 26 bust (when we called for 3-6" of snow, and little fell). The fallout elicited remarks like "weather forecasting is the only job where you can be wrong 90 percent of the time and still keep your job." While that's a huge overstatement about the state of weather forecasting, it certainly captures the frustration that many feel when a forecast fails.

A number of factors can contribute to a poor forecast: 1) many of the physical processes that govern the atmosphere act non-linearly, 2) uncertainty about the initial state of the atmosphere, 3) certain parts of a model's physics have to be approximated, 4) there is often more than one stream of flow that has to be handled correctly by the models, 5) we live close to a huge heat and energy source (the ocean), and 6) forecasters making poor decisions. Any one of these factors can help lead to a poor forecast and a perceived bust. In the following discussion I'll attempt to explain how the first three factors can sometimes negatively impact a forecast and how meteorologists try to mitigate them. Next week, I'll tackle the last three factors in part two of this series.

The non-linear nature of weather

The non-linear nature of the atmosphere causes forecasting problems in several ways. Forecasters cannot just extrapolate features as they come eastward, expecting them to move and change in a linear manner. Weather systems cannot be anticipated to change in a steady manner. Imagine a series of numbers as representing the development of a storm system. A linear extrapolation of the development of such a system would be 2, 4, 6, 8, essentially a steady increase in the system's strength. However, a non-linear change is represented by a number sequence like 2, 5, 15, 60. Weather systems can and do change and develop rapidly. These non-linear changes impact not only the strength of the system but also how it tracks. That's why computer models are of such value: they can often anticipate the rapid changes. Without physically-based computer models, it is doubtful that forecasters would have been able to predict the massive October storm that hit the Northeast. The non-linear nature of weather is what makes it possible to get monster snowstorms, but it is also partly the reason why forecasting them is so difficult. Because atmospheric responses are non-linear, errors in a model can sometimes grow quickly.

Uncertain initial conditions

MIT scientist Edward Lorenz published two seminal papers in the 1960s showing that small differences between two model simulations of the initial state of the atmosphere can grow non-linearly when projected forward and produce two diametrically opposed solutions. Steve Tracton has previously written about Lorenz's work and its implications for forecasting. Unfortunately, there is no way to measure atmospheric variables (temperature, winds, moisture, etc.) accurately at every point on the globe. Furthermore, atmospheric measurements from various sources (balloons, satellites, radar, ships, planes) are imperfect. So models never have a 100% accurate representation of the actual atmosphere.
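(Forum aside, not part of the article: to make Lorenz's point concrete, here is a minimal sketch that integrates the textbook Lorenz-63 system twice from initial states differing by one part in a million in x. The parameter values and step count are standard demo choices, nothing specific to operational weather models; the separation between the two runs grows until the solutions bear no resemblance to each other.)

```python
import numpy as np

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the classic Lorenz-63 system one forward-Euler step."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

# Two "analyses" that differ by one part in a million in x.
a = np.array([1.0, 1.0, 1.0])
b = np.array([1.000001, 1.0, 1.0])

for step in range(1, 3001):
    a = lorenz63_step(a)
    b = lorenz63_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}  separation = {np.linalg.norm(a - b):.6f}")
```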
The incomplete set of imperfect observations has to be brought into a model in such a way as to minimize errors that might later grow and contaminate a simulation. This quality control and assimilation process somewhat smooths the data. Therefore, the initial state of the atmosphere is always somewhat uncertain, and that uncertainty can and does sometimes lead to major forecast problems. The two forecasts below are from the exact same model with identical physics but with slightly different (probably not discernible to the naked eye) initial fields (sets of data). Note that one has a strong low (left-hand panel) located north of D.C., implying a rain storm, while the other has a much weaker low farther to the south, suggesting the storm would either miss us to the south or would produce snow.

Any errors in the initial fields grow faster in some patterns than in others. That is the basis for developing ensemble forecasting systems. The National Centers for Environmental Prediction (NCEP) runs a number of simulations four times each day in which they perturb (tweak slightly) the initial conditions to try to get an idea of the probabilities associated with any storm system. The resultant array of solutions provides information that can be used to assess the probabilities of getting a snowstorm. However, even if every ensemble member is forecasting a snowstorm at the day 5 projection, that is no guarantee that a snowstorm will occur. Occasionally, the truth lies outside the spread of all of the solutions.

Approximations of some physical processes

Another source of error is that certain atmospheric processes (convection, clouds, radiation, boundary layer processes, etc.) are either too small to be represented in the model, not well understood, and/or too computationally expensive to simulate. Probably the most problematic process to deal with is convection. Convection occurs on a scale too small for models to simulate and must be parameterized, a procedure for representing it on a scale that the model resolves. Parameterization requires approximations, which can lead to forecast problems.

The uncertainty of the initial conditions, possible errors introduced by the approximations of the physics, and the non-linearity of the physical processes are a dynamic mix. Together they are the factors that lead to the models jumping from solution to solution leading up to a storm. In the longer ranges, the differences between solutions can be quite large. In shorter time ranges, the differences are not as large, but our location near the ocean makes small differences in the track and intensity of a storm crucial to getting a snow forecast right. The differences between two operational models, the GFS and NAM, prior to the October 29 storm are a case in point. The NAM suggested that the D.C. area would see accumulating snow while at least one run of the GFS suggested almost all the precipitation in the area would be rain. Because there is always some uncertainty about any forecast, meteorologists are evolving towards issuing probability-based forecasts.
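(Another forum aside, not from the article: the ensemble idea above can be caricatured with the same toy Lorenz-63 system. Perturb the best-guess initial state slightly, run every member forward, and count how many end up past some threshold. The perturbation size, member count, and "event" definition below are arbitrary illustration choices, not anything NCEP actually uses.)

```python
import numpy as np

rng = np.random.default_rng(0)

def lorenz63_run(state, n_steps=1500, dt=0.01,
                 sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate Lorenz-63 with forward Euler and return the final state."""
    x, y, z = state
    for _ in range(n_steps):
        x, y, z = (x + dt * sigma * (y - x),
                   y + dt * (x * (rho - z) - y),
                   z + dt * (x * y - beta * z))
    return np.array([x, y, z])

# Best-guess initial state plus 20 slightly perturbed ensemble members.
analysis = np.array([1.0, 1.0, 1.0])
members = [analysis + rng.normal(scale=1e-3, size=3) for _ in range(20)]

# Call z > 30 at the end of the run the "event" (purely illustrative).
finals = np.array([lorenz63_run(m) for m in members])
print(f"{np.mean(finals[:, 2] > 30.0):.0%} of members end with z > 30")
```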
Three other factors also make snow forecasts difficult:

* There is often more than one stream of flow that has to be handled correctly by the models
* The ocean to our east and mountains to the west
* Forecasters may make poor decisions

There is often more than one stream of flow that has to be handled correctly by the models

The ingredients needed to get a snowstorm are often governed by more than one stream of flow and can be impacted by what is happening both upstream and downstream of the approaching storm. Let's go back to our surface forecasts that showed one low over the Great Lakes and another forecast that had the low suppressed to our south. In the figures above, note the differences in the surface pressure patterns to the northeast of the storm (over western Pa. in the left panel, northern Al. and Ga. in the right panel). In the right panel, the pressure gradient (the change in pressure over some distance) implies that surface winds are still from the northwest (see arrow) over New England, as there is a strong low located near the Canadian Maritimes. That cyclonic circulation helps force the storm approaching the East Coast to take a southerly track. In the left panel, the low is weaker and farther to the east, allowing the winds across New England to be southeasterly (see arrow). Without the north winds over New England and the strong low near Nova Scotia, the low approaching the East Coast has more room to come northward and develop.

In the bottom panel, the ridge is located much farther west (near the West Coast) than in the top panel (over the Rockies). It also has two distinct upper-level impulses that have not yet phased (merged together), and it therefore has a weaker low located farther off the coast than the solution shown in the top panel. By contrast, the top panel, with the more eastward location of the ridge, has essentially phased the two upper-level disturbances, producing a sharper upper-level trough, which produces a strong low that is tucked in closer to the coast. In the above maps, the ECMWF forecast (the top two panels) was forecasting a major snowstorm while the GFS (bottom two panels) was predicting a near miss. The more upper-level features that are in play as a potential storm approaches, the tougher it is for the models to get the forecast right.

The warm waters off the coast

Most of our potential snowstorms track across the Southeast and then turn up the coast. They therefore usually tap into some of the air coming northward from near the Gulf Stream, setting up a very tight thermal gradient (how quickly the temperature changes as you move across the front). Having a strong frontal boundary along the coast is a double-edged sword. If you get a favorable track, lots of energy is available to crank up a storm. However, it also means that there is plenty of warm air nearby that can mess up a forecast with a slight deviation in the storm track. In the image shown above right, if you shift the center of the low a little to the west, it would introduce freezing rain or rain where heavy snow actually fell. Shift the storm track a little east and there would have been no mixing problems east of I-95, where snow changed to sleet for a time. Most major storms are associated with a very tight temperature gradient, so D.C. is usually right near the rain-snow line. Any small last-minute shift in the storm track can make a forecaster look really foolish.
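(Forum aside: for anyone who wants a number to attach to "pressure gradient" as used above, here is a rough geostrophic-wind estimate. The 4 hPa over 400 km figure and the latitude are made up for illustration; the relation itself, V_g = (1 / (rho * f)) * (dp / dx), is the standard one.)

```python
import math

# Rough geostrophic wind speed implied by a horizontal pressure difference:
#   V_g = (1 / (rho * f)) * (dp / dx)
# Illustrative numbers: a 4 hPa difference over 400 km at 40N.
rho = 1.25                                     # near-surface air density, kg/m^3
omega = 7.292e-5                               # Earth's rotation rate, 1/s
f = 2 * omega * math.sin(math.radians(40.0))   # Coriolis parameter at 40N
dp = 4.0 * 100.0                               # 4 hPa expressed in Pa
dx = 400.0e3                                   # 400 km expressed in m

v_g = dp / (rho * f * dx)
print(f"Geostrophic wind ~ {v_g:.0f} m/s")     # roughly 9 m/s (about 17 kt)
```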
Mountain challenges

Whereas the oceans supply warm air that can mess up a forecast, the mountains help promote cold air damming, which tends to keep low-level cold air across the area longer than it would last without them. It's often tricky to determine how long the cold air will stick around before being eroded by warmer air from the ocean that might trickle in (see above). Cold air damming requires cold high pressure to our north. The presence of the high results in cold flow from the north at low levels. The mountains then essentially act as a barrier, keeping cold air trapped to their east. Because the cold air is dense and difficult to dislodge, it can sometimes linger longer than forecast by the models, leading to unexpected icing problems. Just as the mountains can be conducive to wintry weather, they can also impede it. When the flow is from the west and northwest, the mountains lead to downsloping winds. These winds produce drying when a storm tracks just to our north, cutting off moisture even if the temperatures are cold enough to support snow.

Meteorologists may have biases or misinterpret the data

The most common reason meteorologists err in their forecasts is misjudging the probabilities associated with a storm, especially in those tricky situations where there is no consensus among the models.

Failure to communicate probabilities

Sometimes the errors are a result of hubris in trying to make a single deterministic forecast during an iffy forecast situation. The general public wants a best guess, so we try to provide that. However, if we fail to do a good job describing the uncertainty of the forecast situation, we can really get burned. That certainly was the case during the infamous December 26 non-storm last year. Despite cloaking the Dec. 26 forecasts in probabilistic terms, I made the mistake of saying I was "bullish" about accumulating snow, giving a false impression of the certainty of the forecast. The CWG forecasters were then slow pulling away from the snowy solution even though radar and satellite were indicating that the heavier clouds and weather to our southwest were moving more eastward than northward. For an in-depth discussion of what went wrong with that forecast, click here. In that case, CWG forecasters were afraid to write off the storm because of the rapid changes that sometimes occur to the precipitation shield during snowstorms (the 1987 Veterans Day storm comes to mind).

Overreliance (and underreliance) on models

Sometimes meteorologists bust in the shortest time ranges by relying too much on models. However, those same models are powerful tools that also correctly predicted this year's October snowstorm along the East Coast and suggested that the precipitation would linger longer than indicated by extrapolation of the radar images. It's very doubtful that anyone would have forecast such a storm without the computer simulations. Human psychology and emotions can also sometimes lead to forecast mistakes. For example, if a forecaster predicts that a storm will miss the area and then the area is gridlocked due to that same storm, he or she is often lambasted with severe criticism from the media and general public. The next time a similar-looking storm appears on the models, he or she might let recollections of the previous bust creep into the forecast. While it's important to learn from forecast mistakes, it's also possible to be blinded by them.
The CWG forecasters guard against that by sharing forecast ideas with other members of the team prior to issuing a forecast.

Forecaster bias (wishcasting and hero syndrome)

Most meteorologists quickly learn to rein in any bias that might result from their love or hatred of snow. However, a few suffer from the hero syndrome (no Capital Weather Gang forecaster), wanting to be the first to call for a major snowstorm based on one or two model runs when the storm is still several days from actually occurring. Trumpeting such a forecast is usually a mistake that implies more skill than actually exists in the longer time ranges. The same forecasters may then sometimes be slow to back away from their deterministic forecasts. Anytime you hear someone calling for a major snowstorm 4 or 5 days in the future, view it with lots of skepticism. The only sane way to forecast any storm is by assessing the probabilities of a storm and then conveying those probabilities to the public. That is why the Hydrometeorological Prediction Center routinely issues probability forecasts for various snowfall amounts and why the CWG team also tries to provide probabilistic forecasts of a storm's potential. Next time you get ready to wail about a bad forecast, think about all the different ways that a forecast can go wrong...
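(Forum aside: the probability forecasts mentioned at the end are, in spirit, just exceedance counts over a set of ensemble snowfall amounts. The member values and thresholds below are invented for illustration and have nothing to do with HPC's actual products.)

```python
# Turn a set of ensemble snowfall forecasts (inches) into exceedance
# probabilities. Member amounts and thresholds are made up for illustration.
members = [0.5, 1.2, 2.0, 2.5, 3.1, 3.8, 4.4, 5.0, 6.2, 8.0]
thresholds = [1, 2, 4, 6, 8]

for t in thresholds:
    prob = sum(amount >= t for amount in members) / len(members)
    print(f'P(snow >= {t}") = {prob:.0%}')
```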
  4. January Med/Long Range Disco Part 2

    Our only hope, I think, is for a low to go to our north and set up a 50/50 low, with a weaker wave then digging just enough to pop off a low before the next big storm takes a track to our north and west.
  5. January Med/Long Range Disco Part 2

    Actually, the PSD graphics I showed are from a low-resolution version of the GEFS, if I'm not mistaken. I'm more concerned about the positive EPO than the PNA, which on the PSD plot does go weakly positive by month's end but is pretty negative for most of the month. Isotherm also noted other reasons, like the MJO, to expect it to be negative for most of the month. Of course, the model forecasts could be wrong and the ridge could rebuild over AK faster than forecast, as even ensemble means aren't much good beyond 10 days.
  6. January Med/Long Range Disco Part 2

    I'm not very encouraged by the pattern and think our snow chances through the remainder of the month are below average. That doesn't mean there will be no chances, but with all the systems slated to start crashing into the west and a mean ridge in the east, snow chances may be hard to find. The ensembles from the Euro and from the ESRL PSD suggest that the EPO will be positive through the remainder of the month. The pattern suggests the storm track will be to our north. Yes, the NAO may go negative, but not in a manner very favorable to us. Here are two discouraging teleconnection forecasts. The combination of a negative PNA and positive EPO is pretty hard to overcome. The PNA pattern may improve towards the end of the month, but that's a long way off. Have to hope these ensembles are wrong.
  7. Jan 16/17 Event Obs/Disc

    Measured a whopping 0.5" this morning in extreme northern Calvert County. Just about what was expected. Looks like the dusting-to-2-inch CWG forecast wasn't too bad, though 2" amounts seem pretty scarce.
  8. January Banter String

    It was a horrid loss as I'm not sure what Cowens was supposed to do on defense the way it was set up.
  9. Jan 16/17 light snow event.

    They will be different, but if the low develops offshore and to the north, we easterners, who don't get the max lifting from PVA with the southern end of the vort, are likely to get screwed. I kind of agree with what PSU postulated.
  10. January Banter String

    DTL could answer it better than me. There are loads of issues: you need to parameterize physical processes that occur at scales below those resolved by the grid, and no matter how many obs you have, there is no way to know the initial conditions perfectly. The equations that describe the atmosphere in the models are non-linear, so small differences can amplify with time pretty dramatically. Heck, as you move towards higher resolution, how you do the math might even bias the forecast a bit (at least I think that is true). I have a slide from a presentation that deals with those issues. The remarkable thing is how good models actually are. Last year I wrote an article for the Capital Weather Gang on how much model forecasts and forecaster tools have improved since the 1970s. It's pretty remarkable.
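(Aside: as a flavor of what "parameterize" means in the post above, here is a toy dry convective adjustment, one of the oldest parameterization ideas: where a column is statically unstable (potential temperature decreasing with height), mix adjacent layers until it isn't. This is my own illustration with made-up numbers, not code from any operational model, and it ignores moisture entirely.)

```python
import numpy as np

def dry_convective_adjustment(theta, max_sweeps=100):
    """Toy parameterization: sweep the column and mix any pair of adjacent,
    equal-mass layers where potential temperature decreases with height
    (index 0 = lowest layer), repeating up to max_sweeps times."""
    theta = theta.astype(float).copy()
    for _ in range(max_sweeps):
        unstable = False
        for k in range(len(theta) - 1):
            if theta[k] > theta[k + 1]:            # warmer air sitting under colder air
                mean = 0.5 * (theta[k] + theta[k + 1])
                theta[k] = theta[k + 1] = mean     # mix the two layers
                unstable = True
        if not unstable:
            break
    return theta

# An unstable column of potential temperature (K): a heated surface layer
# beneath cooler air aloft.
column = np.array([305.0, 301.0, 299.0, 300.0, 302.0])
print(dry_convective_adjustment(column))
```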
  11. January Banter String

    Yes, at least in the two runs that closed off a 500h low to our south, one around day 5 and the other a 72-hour forecast. The EPS mean also closed off a center. The individual members were more dispersive, but the really amped-up 1/3 outweighed the other 2/3, or at least that is what it looked like. It's nice to see the GFS being closer to the truth, or seemingly closer to the truth (we won't know for sure until Wed), but having both model ensembles exhibiting herd instincts along the East Coast makes it tough to really hone in on a forecast with much lead time unless both herds stick together for several runs.
  12. January Banter String

    The bad thing is that the Euro EPS almost did it at 72 hours and supported the Euro idea of a 500h low closing off to our south. I used to have faith in the Euro ensembles, but now not so much.
  13. Jan 16/17 light snow event.

    Normally, you want the low in that location, but with no high to the north we're not gonna get any cold conveyor belt going, so we need it a tad north unless we can change the tilt a bit.
  14. Jan 16/17 light snow event.

    It's a good run, as it would give most of us at least a couple of inches. The tilt still hurts us some, but I think the odds of no snow are pretty small unless you call a dusting a miss. If I remember right, all the Euro members from last night gave us at least a dusting. This Euro run has a better-looking 500 than that and therefore spits out a little more precip.
  15. January Med/Long Range Disco Part 2

    The NAM has a sharper 500h than the GFS at the same time. Big differences. The NAM looks sort of like last night's Euro before it completely folded. Too bad we're probably being NAMed.