Why Are Models So Good?



Great thread, guys – keep it going. Great points comparing how the GFS and the NAM did with the QPF. I've always found that the GFS is a little too bloated with moderate to light QPF on the northern and western fringes of a system, even though in a lot of systems (though not this one) it does better than the NAM with the track and positioning of the low. The NAM does best at producing the most realistic-looking QPF envelope, especially on the mesoscale, even if it sometimes puts it in the wrong places. In this case the NAM also did remarkably well with the strength and track of the low, but I've found that in many cases it's a bit off compared to the GFS, GEM, and Euro in its placement/timing of synoptic-scale features, even though it does best at simulating realistic-looking mesoscale features. I discussed this in the other thread titled "Is the NAM too good?"

I think the short-range models have a tendency to smooth out their QPF fields, which is why they were too bullish on the western fringe. They were right on the track and the QPF max, but if experience has taught us anything, it's that you rarely get such a large area of high QPF when an intense storm is bombing just offshore; these storms are more prone to banding, with areas of higher QPF sandwiched between areas of lower-than-forecast QPF. They also have sharper cutoffs than modeled. I noticed that Upton applied this theory too, as they used the NAM's track and QPF maxima in the banding while using the GFS as the model of choice for the fringe areas. As you said, it was right for the wrong reasons. I said it before the event started and I'll say it again: a wise move by Upton in applying model physics. If you want a storm that dumps big snow over a much larger area, you need a weaker system overrunning a large dome of Arctic air.
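
For what it's worth, here is roughly what that kind of blend looks like in code. This is a minimal sketch, assuming two hypothetical QPF grids already decoded into NumPy arrays; the 0.75" banding threshold is purely illustrative, not anything Upton actually used.

```python
import numpy as np

# Hypothetical liquid-equivalent QPF grids (inches) on a common grid.
# In practice these would be decoded from each model's GRIB output.
qpf_nam = np.array([[0.2, 0.9, 1.6],
                    [0.1, 0.7, 1.4],
                    [0.0, 0.3, 0.8]])
qpf_gfs = np.array([[0.4, 0.8, 1.2],
                    [0.3, 0.7, 1.1],
                    [0.2, 0.5, 0.9]])

BAND_THRESHOLD = 0.75  # illustrative cutoff separating banded core from fringe

# Trust the NAM where it paints heavy, banded QPF (track and maxima),
# but defer to the GFS on the fringes, where the short-range guidance
# tends to be too bullish.
blend = np.where(qpf_nam >= BAND_THRESHOLD, qpf_nam, qpf_gfs)
print(blend)
```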

This is somewhat misleading, since the GFS was too weak and too far east with the coastal track... so it got some of the answer right (on the western periphery) for the wrong reason. However, it was very stubborn in hinting that the Mid-Atlantic wouldn't get in on the action much, which was useful in and of itself (as you point out, it was useful guidance and suggested keeping forecast totals down in some places).

I'm talking more about it from a public forecast standpoint. The general public wouldn't necessarily care WHY the forecast was right, they would only care IF it was right.

Yes, definitely true. It is important to differentiate between consumer verification and "system level" verification.

Even if it means relying on an aspect (qpf) of a particular model solution when you think it has other aspects (dynamics, surface track, etc.) quite wrong?

I see what you are saying, and you are right. I just meant that the public doesn't care about that. From their (consumer) perspective, the model was correct. From our perspective (scientific, system-level), we know it only happened to be right for the wrong reasons. That's all I'm saying. I agree that, since we as scientists need to care about the how and why, we can't just walk away from this simply concluding that the GFS did well for those areas without also looking at the whole picture.
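
To make the distinction concrete, here is a minimal sketch. The function names and tolerance numbers are mine, purely for illustration: "consumer" verification only asks whether the public-facing number landed, while "system-level" verification also asks whether the model earned it.

```python
def consumer_verified(fcst_lo, fcst_hi, obs_snow):
    """Public-facing check: did the observed total land inside the range?"""
    return fcst_lo <= obs_snow <= fcst_hi

def system_verified(consumer_ok, track_err_km, qpf_err_in,
                    max_track_err_km=100.0, max_qpf_err_in=0.25):
    """Scientific check: the number verified AND the model was right
    for the right reasons (track and QPF both within tolerance).
    The tolerances here are made up for illustration."""
    return (consumer_ok
            and track_err_km <= max_track_err_km
            and qpf_err_in <= max_qpf_err_in)

# The GFS scenario from this thread: a decent total on the western
# fringe, but from a low that was too weak and too far east.
ok = consumer_verified(3.0, 6.0, 4.0)    # True: the public is satisfied
print(system_verified(ok, 250.0, 0.30))  # False: right for the wrong reasons
```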

Perhaps, but relying on that luck alone won't do the customer much good in the long run. This case was a terrific example of a consistent model not being "right" anyway, and also of how models should not be taken verbatim, especially QPF.

As a side note, I do see you have changed your tune on models these days, Analog96; good to see, I guess.

Excellent post, baroclinic! As a fairly new meteorologist (degreed for two years; working on my Masters), I value your posts in the NYC thread. I have to agree with you that it is mind-boggling how many people just toss a solution for any reason that makes sense to them. It really makes it hard to read sometimes. This whole "the NAM is bad after 48 hours" thing is getting old too. Great post!

Getting old? I don't think enough people are aware that it generally does not perform as well as the other models beyond 48 hours. That doesn't mean a met should automatically toss it, though; you still have to analyze the situation, as the OP stressed so well. But as much as people shouldn't just toss it, they also shouldn't take it too seriously when there are good reasons to believe (after doing a good analysis) that it's not handling the situation well. Sometimes this is obvious, yet people still buy it more than they should because it's supposedly the state-of-the-art model (it is for certain things, but not everything). Bottom line: you need to do a good analysis and make a forecast based on sound meteorological reasoning, as baroclinic explained so well.

Quite right – just because the model doesn't have high verification scores beyond a certain time range doesn't mean it has no usefulness. I remember with the last storm, someone used the NOGAPS to point out that the storm would come in closer to the coast; since that model (which usually has a severe south-and-east bias) showed a coastal-hugger scenario, it indicated that the other models would probably yank it closer to the coast too.
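
Since "verification scores" keep coming up in this thread: one standard yes/no skill measure behind those rankings is the equitable threat score. A minimal sketch follows, with made-up contingency counts; real scores come from large samples over many forecast cycles.

```python
def equitable_threat_score(hits, misses, false_alarms, correct_negatives):
    """ETS (Gilbert skill score) for a binary event, e.g. QPF exceeding
    some threshold at each grid point: 1 is perfect, 0 is no skill
    over random chance."""
    total = hits + misses + false_alarms + correct_negatives
    hits_random = (hits + misses) * (hits + false_alarms) / total
    return (hits - hits_random) / (hits + misses + false_alarms - hits_random)

# Made-up 2x2 contingency counts for one model at a 72 h lead time:
print(round(equitable_threat_score(40, 25, 30, 105), 3))  # -> 0.239
```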

Like I said, John Q. Public has no idea what dynamics even means. He just wants to know how much snow he's going to get.

John Q. Public also assumes that you are making a forecast by assessing the weather and creating a well-reasoned forecast. Going with a forecast of 3-6" because the GFS has 0.4" of QPF is not really utilizing the guidance in any meaningful way, especially when said model's QPF only verifies because it was too progressive with the system. That said, I don't know the specifics of the forecast you referenced or what the modeled QPF was, but really that's not relevant to the point being made. We in the business hump the models too much sometimes and forget to think: to compare an initialization to reality, or to recognize a pattern and how it impacts a given locality.
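
For the arithmetic behind that example: a 0.4" QPF only becomes a 3-6" call through an assumed snow-to-liquid ratio. A quick sketch, where the ratios are the usual rules of thumb rather than anything specific to the forecast referenced above:

```python
def snowfall_from_qpf(qpf_in, slr=10.0):
    """Liquid-equivalent QPF (inches) times an assumed snow-to-liquid
    ratio (SLR). 10:1 is the classic default; the real SLR depends
    heavily on the thermal profile."""
    return qpf_in * slr

print(snowfall_from_qpf(0.4))        # 4.0 at 10:1, squarely inside a 3-6" range
print(snowfall_from_qpf(0.4, 15.0))  # 6.0 if the column supports fluffier snow
```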

(2 weeks later...)

I was aghast to see a lot of meteorologists and non-meteorologists simply "toss" the NAM and SREF guidance out the window without even giving it consideration. Typical examples of quotes included "It is just the NAM"...

NAM fail. And this is a case in point of why a lot of mets "toss" the model. It sucks. Correction: it sucks worse than the global models 80% of the time. It does do better every once in a while, but you just can't trust it.

Relative to the other models, Jan 26 was a spectacular NAM fail. Offices that used it (OKX) did not do well.

I'm not sure you could have missed the central thesis of this thread any farther.

How? Yes, there was a lot about looking at the general pattern itself instead of just watching models, which is fine... I agree. But a lot of the discussion was wondering why mets were throwing out the high-resolution non-hydrostatic guidance that showed the rapid development. My point is: the NAM is so bad at times that it is, IMO, unusable as reliable guidance. I'll give you another quote from the original post: "Given a good analysis, it would have been clear to a meteorologist that a ticking time bomb was in the waiting. The depth and strength of the dynamic tropopause over the coast was incredible as the leading S/W ejected, and it was clear that the potential for an extreme atmospheric response was possible to such a hydro-dynamically unstable setup. Yet, as we neared the event and the high resolution non hydrostatic guidance continued to suggest a threat while the global operational models were E, I continued to see trained mets "tossing" the guidance or simply regurgitating the "skill scores" of the global models as reasoning to toss the other guidance.."

So fast forward to Wednesday. A similar situation: a robust piece of energy sparking low pressure development off the coast. The potential was there. How dynamic was it? How many times do you see thundersleet? But alas, the NAM whiffed. There were runs as late as 00z the night before that showed less than 0.25" liquid for the entire event in the NYC/southwest CT region... and the low pressure system was far too weak.

I'm not tearing down most of what he said; I agree with much of it. I was just highlighting why mets toss the NAM. It should be scrapped, and more work focused on the GFS, like every other modelling center does: focus on improving one model. But that's for another discussion.

Right, and in this event the RUC and HRRR showed the dynamic environment you are writing about and performed the best. You have to understand the synoptic and mesoscale environments and then choose a model, or consensus of models, that best depicts your conceptual forecast. Sometimes the NAM does best, sometimes the RUC does best, sometimes the globals do best. You have to judge each model for a particular event on its own merits rather than making general statements about why any particular model is superior or garbage.
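
One hedged way to express that in code is a weighted consensus where the weights are the forecaster's per-event judgment rather than a fixed ranking. The grids and weights below are hypothetical, purely to illustrate the idea.

```python
import numpy as np

def weighted_consensus(model_grids, weights):
    """Blend model QPF grids with event-specific weights chosen after
    assessing how well each model's synoptic/mesoscale depiction
    matches the analysis. Weights are normalized to sum to 1."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * g for wi, g in zip(w, model_grids))

# Hypothetical event where the mesoscale guidance earns most of the weight:
nam = np.full((2, 2), 1.2)
ruc = np.full((2, 2), 1.6)
gfs = np.full((2, 2), 0.8)
print(weighted_consensus([nam, ruc, gfs], [0.2, 0.5, 0.3]))  # -> 1.28 everywhere
```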

And the NAM was the first to correctly show that this would not be a consolidated monster low like the GGEM/Euro/UKMET depicted.

For all the banging on the NAM, the snow-anistas want to run around saying the Euro was great, even though its 72-hour verification was poor and it totally misunderstood how the storm would evolve. Fact is, all but the RGEM were TERRIBLE inside of 48 hours, with only the RUC/HRRR doing well. The HRRR seemed to have less of a NW bias than the RUC, but then jumped too far SE up here late.

Actually, the NAM did really well with the Wednesday storm, and I used it to forecast thunder and lightning with the snow for the DC area. Without the models, I doubt we would have done so well. A met who retired 20 years ago sent me an e-mail saying that meteorology and forecasting had sure improved since he retired. He's right; I doubt anyone could have forecast this event more than 24 hours ahead of time like we did. People like to bash the models, but forecasting is way better than it was back in the '70s and '80s, and most of the improvement is because of the models. I guess there's a difference in expectations: now people expect a model to be correct several days ahead of an event. Years ago, we didn't have the same expectations.

The "just toss the NAM" argument is still missing the gist of the discussion. See below.

Right, and in this event the RUC and HRRR showed the dynamic environment you are writing about and performed the best. You have to understand the synoptic and mesoscale environments and then choose a model, or consensus of models, that best depicts your conceptual forecast. Sometimes the NAM does best, sometimes the RUC does best, sometimes the globals do best. You have to judge each model for a particular event on its own merits rather than making general statements about why any particular model is superior or garbage.

This.
