Why are models so bad?


Recommended Posts

It's interesting that an ex-met who is in his 80s sent me this: "Forecasting has sure improved since the 22 years when I last forecasted." Part of his career was back when models were just getting started, and NMC (now HPC) ran tests to see whether forecasters not using models could beat forecasters using models at predicting precipitation amounts (QPF). The forecasters using models won even back then, when the models were not nearly as good as they are now. That's the reason forecasters rely so heavily on models: a forecaster who uses analogs only will, over a long period of time, lose to a forecaster who uses models. There are loads of verification scores showing that day 5 temperature forecasts are now as good as day 3 forecasts were 20 years ago, and that day 3 QPF forecasts are now better than day 1 forecasts were years ago.

Here are precipitation scores; the higher the score, the better.

http://www.hpc.ncep....vrf/hpc20yr.gif

Now compare this graphic for temperatures, where lower scores are better. A day 7 temperature forecast is now better than a day 3 forecast was before 1990.

http://www.hpc.ncep....rf/maemaxyr.gif
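For context on what these graphics measure: QPF skill is commonly summarized with a threat score (critical success index), and the temperature graphic's lower-is-better metric is a mean absolute error. Below is a minimal sketch of both metrics; the arrays and the precipitation threshold are made up purely for illustration.

```python
import numpy as np

def threat_score(forecast, observed, threshold=1.0):
    """Critical success index: hits / (hits + misses + false alarms).
    Higher is better; 1.0 is a perfect categorical forecast."""
    fcst_yes = forecast >= threshold
    obs_yes = observed >= threshold
    hits = np.sum(fcst_yes & obs_yes)
    misses = np.sum(~fcst_yes & obs_yes)
    false_alarms = np.sum(fcst_yes & ~obs_yes)
    return hits / (hits + misses + false_alarms)

def mean_absolute_error(forecast, observed):
    """Mean absolute error in the same units as the input; lower is better."""
    return np.mean(np.abs(forecast - observed))

# Hypothetical 24-hour precipitation amounts (inches)
qpf_fcst = np.array([0.2, 1.4, 0.0, 2.1, 0.6])
qpf_obs  = np.array([0.1, 1.1, 0.3, 1.8, 1.2])
print(threat_score(qpf_fcst, qpf_obs, threshold=1.0))   # precip skill

# Hypothetical max temperatures (F)
tmax_fcst = np.array([34.0, 41.0, 28.0, 55.0])
tmax_obs  = np.array([31.0, 44.0, 30.0, 52.0])
print(mean_absolute_error(tmax_fcst, tmax_obs))          # temp error (F)
```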

Forecasters also now have tools to help them assess the uncertainty in the forecasts via ensemble runs. If you look at the ensembles and at the various operational model runs and see lots of spread, it's usually unwise to jump to a specific deterministic forecast. Why lock yourself in when in reality you don't know? Many of us occasionally suffer from hubris, think we are better at guessing which model solution is right than we actually are, and jump to a forecast. Some mets even seem to jump toward the most extreme solution when in reality it is often the least likely of the solutions, then blame the models when they jumped too soon. Yes, the model consensus can occasionally blow a forecast, but there are other times when the models score a coup. I think it highly unlikely that the recent rush-hour snowstorm would have been so well forecast in the DC area without the models. They pretty much nailed when the changeover would occur and that there would be high snowfall rates and convection. Back in the old days, I'm guessing we would have had a true surprise snowstorm like the Veterans Day snowstorm.
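To make the "look at the spread before committing" point concrete, here is a minimal sketch that computes an ensemble mean and spread for a single forecast variable. The member values are invented; a real workflow would pull them from actual ensemble output.

```python
import numpy as np

# Hypothetical day-5 snowfall forecasts (inches) from ten ensemble members
members = np.array([0.0, 0.5, 1.0, 2.0, 3.5, 4.0, 6.0, 8.0, 10.0, 14.0])

mean = members.mean()
spread = members.std()  # standard deviation across members

print(f"ensemble mean: {mean:.1f} in, spread: {spread:.1f} in")

# A large spread relative to the mean is a warning not to lock in a single
# deterministic number; a probability statement is usually more defensible.
high_uncertainty = spread > 0.5 * mean
print("high uncertainty" if high_uncertainty else "members reasonably clustered")
```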


Well then you should have been paying attention for a while now (particularly to the UKMET... perhaps not for specific types of events [EC cyclogenesis?], but it's a pretty darn good modeling/analysis system now overall).

Since my interests lie mostly along the US East Coast, the UKMET is a terrible model for my purposes.


Wes, you're spot on with everything, and I applaud the statement that I put in bold font. We as a group have a bad habit of getting lucky and not realizing it. This oftentimes puts us in a position where we were "right for the wrong reason," and we fail to learn from a given event. And even when we glean some piece of data from an event, we are still not guaranteed to utilize it correctly and build off of it. As I stated in another post, we've allowed our minds to atrophy, and we often do not take guidance for what it is... guidance. Experience is a hell of a teacher, provided we're paying attention. And if one is truly paying attention, it should show through in their humility.


Randy, the one thing I left out is that even when the models are in pretty good agreement on a day 7 forecast, it's often not a good idea to jump to a forecast, especially of snow, since small changes in the forecast can turn a snowy solution into a rainy one for most of the East Coast. All you can probably be sure of is that there will be a storm somewhere near the East Coast in that time range; then sort out the details as you get closer to the event. The ensembles and models often do not have enough spread at those time ranges.
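One way to "sort out the details later" while still saying something useful at day 7 is to count members rather than pick one. A toy sketch, with made-up precipitation types per member:

```python
from collections import Counter

# Hypothetical day-7 precipitation type from each ensemble member
member_ptype = ["snow", "rain", "snow", "mix", "rain", "snow",
                "rain", "snow", "mix", "rain", "snow", "mix"]

counts = Counter(member_ptype)
total = len(member_ptype)
for ptype, n in counts.most_common():
    print(f"{ptype}: {n / total:.0%}")
# Rather than committing to "snowstorm" at this range, the forecast is better
# framed as roughly a 40 percent chance of snow, with rain not far behind.
```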


You know Wes, one thing I've noticed is that the GFS ensemble members oftentimes are fairly clustered in the long range, whereas there seems to be much more variance with the GGEM ensembles. Is this something you've noted as well?

And I agree about model convergence in the long range -- it's typically a good indicator that the long-wave pattern is "stable" in a sense. You're right, the details are unimportant because they're likely wrong, but the themes are what we need to focus on with some weight towards climo.
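A quick way to compare how clustered two ensembles are (the GFS versus GGEM impression above) is to compare member variance about each ensemble's own mean. A minimal sketch with invented member values:

```python
import numpy as np

# Hypothetical day-8 500-hPa trough axis position (degrees longitude) per member
gfs_members  = np.array([-78.0, -77.5, -79.0, -78.2, -77.8, -78.5])
ggem_members = np.array([-74.0, -81.0, -76.5, -83.0, -79.0, -71.5])

print("GFS spread: ", gfs_members.std().round(2))
print("GGEM spread:", ggem_members.std().round(2))
# The ensemble with the smaller spread looks more "clustered"; whether that
# reflects real confidence or under-dispersion needs verification over time.
```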


Under certain circumstances, I think it is IMPORTANT to look at a lot of different models. There have been many times this season when every model was busting except for the RUC, which was hinting at what would actually happen down the road.

As far as MOS goes, I think most mets realize that it's not perfect. Sometimes it is complete garbage, but as a general tool it's not bad. When it comes to short-range forecasting, I think it is critical to see which models (or techniques) are doing the best. For the medium range, stick with the operational runs, but look at the ensembles to get a better idea. I've finally come to grips with largely discounting the 6z and 18z runs, which have been terrible at times this year.

Another thing we have to remember, and I think most quality mets do, is CLIMO and TRENDS. There was a stretch this winter when the models kept busting too low on temps. Perhaps over-compensation for snow pack, who knows? Even if we can't easily explain why something is happening, sometimes you just have to run with it. Sometimes that doesn't work: today, temps ended up below guidance here, even though I was very confident they would wind up higher. I know that I personally must do more research and look more into the scientific explanations rather than just looking at things at a glance. I'll be the first to admit that I don't always do a lot of analysis, especially when the pattern is relatively quiet and inactive.
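One simple way to fold recent TRENDS into a temperature forecast is to track the guidance's recent bias and nudge today's number by it. A minimal sketch, with made-up values:

```python
import numpy as np

# Hypothetical guidance and verifying max temps (F) for the last five days
recent_guidance = np.array([30.0, 28.0, 33.0, 27.0, 31.0])
recent_observed = np.array([34.0, 31.0, 35.0, 31.0, 34.0])

recent_bias = np.mean(recent_observed - recent_guidance)  # +3.2 F: guidance too cold
todays_guidance = 29.0
adjusted = todays_guidance + recent_bias
print(f"bias-adjusted forecast: {adjusted:.1f} F")

# This only helps while the cause of the bias (e.g., snow-pack handling)
# persists; once the pattern changes, the correction can hurt.
```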

One final note is that even basic research, as I did in college, can go a long way. With some help from DT, I was able to come to a better understanding of how to use 850mb temps to forecast 2m temps. I have a system that often outperforms MOS, even though it has clear limitations.
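The kind of 850mb-to-surface relationship described above is often approximated with a simple linear regression. A minimal sketch using invented training pairs rather than any real station history:

```python
import numpy as np

# Hypothetical pairs of forecast 850mb temps (C) and observed 2m max temps (F)
t850 = np.array([-12.0, -8.0, -5.0, -2.0, 0.0, 3.0, 6.0])
t2m  = np.array([ 18.0, 26.0, 33.0, 39.0, 44.0, 52.0, 60.0])

slope, intercept = np.polyfit(t850, t2m, 1)  # least-squares linear fit

def predict_t2m(t850_forecast):
    """Estimate a 2m max temp (F) from a forecast 850mb temp (C)."""
    return slope * t850_forecast + intercept

print(predict_t2m(-4.0))
# In practice such a fit would be stratified by season, wind direction, cloud
# cover, and snow cover, which is where a local scheme can beat raw MOS.
```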

Q


The Canadians run a multi-model ensemble (different members actually have different physical parameterization specifications/groupings) to attempt to address model error/uncertainty. I do not think the methods used by NCEP or the EC to address model error (stochastic perturbations/backscatter algorithms) can generate quite as much spread as the multi-physics ensembles (I have no real evidence nor sound scientific basis for this, just my personal opinion). Of course, this comes with a cost, as it seems to me the GGEM ensemble suffers from clustering of solutions (much like the SREF) more so than the ensembles from the other centers.
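To illustrate the distinction being drawn, here is a toy sketch (not any center's actual scheme) contrasting stochastic-style perturbations around one set of physics with a multi-physics approach where each configuration carries its own systematic offset. All numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
base_forecast = 32.0  # hypothetical temperature forecast (F)

# Stochastic-style members: same physics, small random perturbations
stochastic_members = base_forecast + rng.normal(0.0, 1.0, size=10)

# Multi-physics-style members: each "physics package" contributes its own
# systematic offset in addition to random noise
physics_offsets = np.array([-3.0, -1.0, 0.0, 2.0, 4.0])
multiphysics_members = (base_forecast + np.repeat(physics_offsets, 2)
                        + rng.normal(0.0, 1.0, size=10))

print("stochastic spread:   ", stochastic_members.std().round(2))
print("multi-physics spread:", multiphysics_members.std().round(2))
# The systematic offsets widen the total spread but also group members around
# each physics configuration, echoing the clustering concern above.
```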


I thought there had to be something other than perturbing the initialization to produce such varied solutions... I agree with your assessment about the spread produced, and I actually think it's a good thing to have different physics packages with the same initialization for comparison purposes, judged against the GFS/EC ensemble members, to gauge how sensitive the evolving pattern is to the initialization and physics set.
