Everything posted by eduggs

  1. It's actually possible to have a significant snowstorm before or after a "bad" Pacific flow pattern and still get that 5-day average look. Heck, even jumbled, partially interfering shortwaves could blunt the Pacific influence and still produce a time-smoothed result to match that graphic. LR multi-day-averaged anomaly maps are ensemble- and time-averaged. That produces a very low-resolution, continental-scale overview. I think it's important to understand what we're looking at before we try to interpret it.
  2. The 18z GFS has a parade of between 6 and 10 successive shortwaves (depending on how you distinguish them) that partially interfere with each other over the next 10 days to prevent any significant local storm development. This highlights one of the problems of using LR time-blended height anomalies to try to identify favorable or active "periods." The averaged anomalies look interesting over the next week, but as usual, everything comes down to the evolution and orientation of the height fields. The actual weather could end up being quite boring depending on the fine details of wave interaction. I prefer Walt Drag's method of threat identification, which mostly stays inside of 10 days, using a mid-range multi-model super-ensemble focused on QPF and temperature distributions. To my knowledge Walt doesn't mention climate indices or height anomalies. And he doesn't frequently trigger annoyed disappointment with a lot of LR false alarms.
  3. I was merely pointing out that even when MR or LR model ensemble forecasts verify with a high degree of accuracy with respect to the general continental-scale height field, there is typically too much uncertainty at that range to make regional weather forecasts. This was in reference to someone suggesting a 5-day-old GEFS chart matched tomorrow's height field pretty well... and also to references from a week ago suggesting this period could produce a wintry event. Snapshot anomaly charts should never be used by themselves for synoptic forecasting. IMO they are massively overused, a result of increased interest in climate indices and LR forecasting.
  4. That's why anomaly charts are overrated. People assume blue always means good. I can't count on all of my fingers and toes the number of times over the past two winters that 10-day+ anomaly charts gave a false impression of a favorable period. It's much better to simply loop the raw 500mb heights with vorticity to observe the progression. But people have developed this bad habit of obsessing over the anomalies.
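The masking effect behind those false impressions can be sketched with a toy example (all numbers below are made up for illustration, not real model output):

```python
# Toy illustration (hypothetical values): ten days of 500mb height
# anomalies (dam) containing one sharp two-day trough amid ridging.
daily = [6, 6, 5, -18, -20, 4, 5, 6, 6, 5]

# A 5-day running mean -- the same operation behind a 5-day-averaged
# anomaly chart -- smears the transient trough into a weak, broad dip.
window = 5
smoothed = [sum(daily[i:i + window]) / window
            for i in range(len(daily) - window + 1)]

print(min(daily))     # the real trough: -20
print(min(smoothed))  # after averaging: -4.8
```

The -20 dam trough, which is what would actually drive a storm threat, survives the averaging only as a shallow -4.8 dam dip that looks like nothing special on a chart.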
  5. The "trace" threshold makes the southern fringes look quite a bit inflated
  6. I like your write-up. I also lived through the 80s and 90s. I remember several frustratingly low snow years. I agree that from a snow perspective it's impossible to know for sure how much influence a changing base state has vs. just being in a bad stretch. One thing that does stand out about recent years, however, is the warmth. The 80s had cold periods even when it didn't snow. Outside of 2015 it hasn't been cold recently. Ice is forming later (if at all) and melting sooner. Growing seasons are lengthening. Many places are exceeding 99th-percentile frequency statistics for warmth parameters. While we can't know for sure how much our average weather has been affected by a changing climate, I'm personally convinced that it is now observable over our lifespans.
  7. Those bins are a little too general IMO. Some periods straddle the boundary, and in any given season we go through phases of each state. Regardless, they are not causing our weather; they are reflections of it. What is likely, as our climate continues to warm, is that we will increasingly observe atmospheric circulation patterns in "warm phases." We will likely incorrectly attribute these warm phases to other geophysical parameters such as el nino etc. But in reality, what is more likely is that el nino and other warm-state indices are both correlated to a warmer base climate state, as opposed to one physically causing the other. This is a classic causal fallacy, and it is common amongst hobbyist meteorologists.
  8. Brutal for ski areas. I'm still holding out hope for the Jan. 1-3 period. It's just far enough out into the fuzzy period of modeling that if we squint we can imagine a snow treat.
  9. But it also happened last winter.
  10. The binary-threshold definitions of el nino and la nina are drastically too simplistic to explain continental-scale weather patterns by themselves. There are literally dozens of confounding variables, some already identified, some not. And in truth, the state of the coupled atmosphere and ocean system at any given moment is unique. It has never been in exactly this state before and never will be again. Efforts to characterize and lump together numerical indices to understand and predict these systems cannot fully capture their uniqueness and variability. To base a forecast months into the future on what happened decades ago during an "el nino" is laughably simplistic.
  11. I think I mostly agree, but with caveats. If you go back 10 days and compare the ensemble-forecasted 10-day 500mb chart to, say, last night's 6hr GFS 500mb chart (or the actual 500mb interpolated analysis), you'll see some of the features match up well and others not so well. Whether or not we can say a model correctly forecasted a "pattern" is completely subjective and dependent on the spatial scale in question, criteria for defining a "pattern," and reference thresholds for accuracy. People living in regions where the ensemble 10-day 500mb heights were poorly forecast would disagree that a model nailed a "pattern." In these areas, the airmass and surface features are drastically different than predicted 10 days ago. Since we never know for sure in advance which areas will be more accurately modeled and which less, it's very difficult to have confidence in even general "pattern" features at this range. What I have observed for many years on this forum is that posters (including meteorologists) confidently proclaim a particular "pattern" coming 10-15 days or more in advance, but the realization rate of those predictions is much lower than the confidence in the original claims would warrant. People instinctively clamor for understanding and predictability. There is desperation to see the light at the end of the tunnel. We cling to a simplistic understanding of the relationships between climate and regional weather. But we're collectively just not (yet) as good at seeing into the future as we think we are. And we rationalize it away instead of using honest assessment to understand our limitations.
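One way to make "did the model nail the pattern?" less subjective is to score the forecast with an anomaly correlation coefficient (ACC), a standard verification measure for height fields. A minimal sketch, with invented grid-point values purely for illustration:

```python
import math

def acc(forecast_anom, analysis_anom):
    """Centered anomaly correlation between two flattened anomaly grids."""
    n = len(forecast_anom)
    fm = sum(forecast_anom) / n
    am = sum(analysis_anom) / n
    num = sum((f - fm) * (a - am)
              for f, a in zip(forecast_anom, analysis_anom))
    den = math.sqrt(sum((f - fm) ** 2 for f in forecast_anom)
                    * sum((a - am) ** 2 for a in analysis_anom))
    return num / den

# Hypothetical 500mb height anomalies (dam) at six grid points:
forecast = [8, 6, -4, -10, 2, 5]   # day-10 ensemble mean
analysis = [7, 4, -2, -14, 6, 1]   # verifying analysis
print(round(acc(forecast, analysis), 2))  # about 0.91
```

Even a continental-scale ACC above 0.9, which verification centers would call a good forecast, leaves exactly the kind of local errors (the -10 vs -14 and 2 vs 6 points above) that decide regional sensible weather.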
  12. Totally agree about the weather apps. As for forecasting temperatures 40 days out: in general it's basically a coin flip whether a particular day will be warmer or colder than "average." Average is of course a moving target. In recent years I would always hedge warmer than average for future forecasts, so maybe a 60-40 warmer-than-average bet. A "snow supporting column" might be slightly easier to forecast in advance than surface temperatures, but at 40 days out it's essentially impossible to accurately predict.
  13. Do you really have confidence that we can predict February weather 5+ weeks in advance? Other than February being further away from a snowless period, I see no compelling reason to think January couldn't be better than February.
  14. IMO you are a little loose with your terminology. People get so used to certain phrases that they start taking them for granted. Terms like "nail" and "lock into place" are subjective. The parameters and spatial scale in question as well as your criteria for assessing model accuracy are not clear. Even the term "pattern" is only vaguely defined. It's easy to rationalize having a good handle on something if details and definitions are kept fuzzy. The magnitude and orientation of 500mb height values at the continental scale are modestly predictable out to about 10 days. But the point I've been trying to make is that regional weather forecasting at and beyond this time frame requires model accuracy that exceeds the current average error. Even if longwave trofs and ridges are roughly predictable, local sensible weather is highly dependent on fine-scale features and evolution that is outside the scope of model skill and only modestly correlated to large-scale features. It's hard enough to see a regional cold snap coming 10 days out. To detect a snowstorm at that range is really hard. And while everybody is looking far into the future for the perfect pattern, a decaying lake effect streamer could drop an inch or two almost without warning.
  15. Let's get it solidly inside 7 days before we celebrate. LR ensembles hedge towards climo at the extended ranges. And we've seen hints of this kind of change already this year that did not materialize.
  16. That's fair. My preference is to just not look out past 10 days at all. But since I sometimes can't resist, I just assume that any model ensemble run out in that range is very very low accuracy. You seem to profess more certainty with LR forecasting than I think is warranted. That's really my only subtle disagreement. Maybe it's more enthusiasm than anything.
  17. I can't believe any coach would actually make that claim. The average NBA player is in the 99th+ percentile for human height. But the average very tall person is not great at basketball, and certainly nowhere near good enough to be an NBA player. If we plotted anomaly charts of NBA players' heights, we would see that height is extremely well correlated with playing in the NBA. This is analogous to bluewave's favorite historical anomaly charts. Unfortunately, in both cases the underlying metrics are not very predictive of the thing we are trying to forecast, because of rarity (northeast snowstorms and NBA skill) and poor correlation.
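The rarity point in the analogy can be made concrete with Bayes' rule; every number below is a rough, made-up ballpark figure, not a real statistic:

```python
# Ballpark, made-up figures to illustrate the base-rate effect.
p_nba = 500 / 160_000_000     # ~500 NBA players among US adult males
p_tall_given_nba = 0.99       # nearly all NBA players are "very tall"
p_tall = 0.01                 # ~1% of adult males are that tall

# Bayes' rule: P(NBA | tall) = P(tall | NBA) * P(NBA) / P(tall)
p_nba_given_tall = p_tall_given_nba * p_nba / p_tall

print(f"{p_nba_given_tall:.6f}")  # ~0.0003: a tiny fraction
```

P(tall | NBA) is near 1, yet P(NBA | tall) is roughly 3 in 10,000, because the event itself is so rare. Swap "tall" for "favorable-looking anomaly composite" and "NBA" for "big northeast snowstorm" and you get the same asymmetry.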
  18. We clearly disagree then. LR ensembles are already averaged values, which significantly distorts the magnitude and location of features, though they are still useful out to day 10 or so. Additionally, averaging across time scales modulates the resulting values to a degree that renders them almost useless IMO. This completely masks the synoptic evolution of features, which is critical to regional forecasting. I believe that's a significant cause of so many head fakes and false alarms. Just look at the 5-day-averaged anomaly charts for next weekend and compare them to the operational models. A post mortem of the past few forecasting seasons should shine some light on this issue. But based on what you've already written, I don't think we're going to agree on this point. I wonder what it would take for you to ease off the LR multi-day-averaged ensemble charts.
  19. I don't think there is a more misleading model chart than a 5 day averaged 500mb anomaly chart. If I had a dime for every time Brooklynwx posted one of those and said how good it looked, I'd buy twitter.
  20. My first post today was about next weekend, i.e., about 180 hrs out. I pointed out that during this time period we briefly lose the western trof, which you suggested was something we needed to get snow. My sense is that you are good at diagnosing the big picture. I prefer to look at the details, which I believe are critical for regional and particularly local snowfall. In truth I believe both scales are important. If the big picture is unfavorable, the details don't get you squat.
  21. By the way, the reason why next weekend fails to deliver is not a lack of cold. It's because the modeled PVA is either too far offshore or fails to fully round the base of the northern stream trof to initiate surface low formation close enough to our region to produce precipitation. The difference between snow and no snow is in those fine-scale synoptic details and their evolution in time. Snapshot anomaly charts and "pattern" recognition just can't capture those details.
  22. I have been following intently, for years. I appreciate your enthusiasm a lot. I think your concept of "pattern" is on shaky ground. I also don't think the current state of LR forecasting allows you or anyone else to identify productive snow periods more than about 10 days in advance. I would respectfully encourage you and everyone else to follow Walt's lead and focus more on specific synoptic feature combinations in the mid-range and less on fleeting "patterns" out in fantasy land.
  23. I'll say it another way. The physical and psychological attributes that make a good basketball player are complex. If we relied only on simplistic metrics like height to predict basketball prowess, we would not be very successful basketball scouts. The forecasting of complex patterns requires very precise identification of causal factors and large practice-set sample sizes, both of which are currently lacking in LR weather forecasting.
  24. What you posted is not a "pattern." It's a graphical representation of a set of numerical values at the continental-scale. It's a purely static depiction. Any meaningful definition of weather "pattern" should incorporate the wave dynamics associated with evolution and propagation of airmasses. In that way, "pattern" and "cold" should always be interrelated. In fairness, everyone on this forum would be better off if we all stopped using the term "pattern" because it usually just leads to misunderstandings and unfulfilled expectations.
  25. IMO the coupled global atmosphere and ocean systems are way too complex for easily identifiable and repeatable patterns. Personally I think there are general recurrent features, but they are not as predictable as you suggest. "El Nino" is a numerical range used to represent a particular geophysical variable in a general region. No two El Nino seasons are even close to being the same.