Atlantic Tropical Action 2013



I was careful with my words for a reason... I said the ECMWF was far and away the best model for forecasting the general intensity. Not the best verbatim forecast, but the best guide for forecasting the intensity of a TC.

 

The problem is that when you compare a model directly with SHIFOR, you are only comparing the maximum sustained winds (MSWs) of a global model vs. a statistical model of previous TCs in that same region. It's a grave mistake to take model-forecast MSWs at face value, because in most cases global models don't have the resolution to resolve the maximum sustained winds of a TC. The GFS, even with a good initialization of a major hurricane, will never have the resolution to resolve 100+ knot winds at 10 meters, because the grid spacing won't resolve the small inner core that contains those winds. The ECMWF has the highest spectral resolution of all the global models, so it can occasionally resolve the strong inner cores of larger TCs, but it doesn't do this consistently. Thus, the ECMWF does have the capability, in larger storms, to forecast a 10-meter wind intensity of a major hurricane. It tried to do this with Leslie last year, which was a colossal failure due to its inability to depict the proper SST cooling underneath the nearly stationary TC (global models don't yet have coupled ocean physics, so they won't do a good job at predicting weakening from upwelling). The GFS also forecast Leslie to be a very powerful TC, but its wind intensity forecast was lower because its resolution was not able to show 100+ knot winds.
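
To put rough numbers on the grid-spacing point, here is a quick back-of-the-envelope sketch (my own illustration, not anything from the models' documentation). It assumes the common rule of thumb that a feature needs to span several grid lengths to be resolved, plus approximate 2013-era grid spacings for the two models:

```python
# Back-of-the-envelope check: a model can only represent features that span
# several grid lengths.  The grid spacings and the 7-grid-length threshold
# below are rough assumptions chosen for illustration only.

grid_spacing_km = {"GFS (~T574)": 27.0, "ECMWF (~T1279)": 16.0}   # approximate
core_diameters_km = {"tight inner core": 50.0, "large inner core": 160.0}

for model, dx in grid_spacing_km.items():
    for label, diameter in core_diameters_km.items():
        n_points = diameter / dx
        verdict = "plausibly resolvable" if n_points >= 7 else "not resolvable"
        print(f"{model}, {label}: ~{n_points:.1f} grid lengths -> {verdict}")
```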

 

So let's think this through logically. 2012 was a year with many weak TCs but very few major hurricanes. A model like the GFS could therefore have higher skill in theory, because it can never predict the proper intensity of major hurricanes but does reasonably well with 35-50 knot intensities, while a model like the ECMWF might be unfairly penalized simply because it has the capability to resolve higher wind intensities.

The problem with this argument is that verification has actually shown the opposite to be true. That is, for the Atlantic, the ECMWF has a weak bias and the GFS has a strong bias. See Table 5b, page 35, of the NHC's 2012 Verification Report. So...that argument kinda goes out the window. The greatest source of discrepancy between the intensity forecasts of these two models is in the DA, imo. The GFS has vortex bogusing, the EC does not. This would largely explain the EC's tendency to not fully initialize the vortex and thus be too weak, as verification has shown.

With that said, we need more verification. NHC has only done ECMWF intensity verification for one year (2012). As you noted, most of the storms in 2012 were pretty weak. I'd be curious to see how these two models compare in a bigger year...
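
For anyone who hasn't dug into the verification report, the bias and error numbers behind tables like 5a/5b come down to very simple bookkeeping. A toy sketch with made-up numbers (not actual 2012 statistics):

```python
# Toy version of intensity verification: mean error (bias) and mean absolute
# error of forecast maximum sustained wind vs. best track.  All numbers are
# hypothetical and only illustrate the bookkeeping.

best_track_kt = [45, 60, 85, 100, 50]          # hypothetical verifying MSWs
forecasts_kt = {
    "ECMWF": [40, 50, 70, 80, 45],             # hypothetical model MSWs
    "GFS":   [55, 65, 90, 90, 60],
}

for model, fcst in forecasts_kt.items():
    errors = [f - v for f, v in zip(fcst, best_track_kt)]
    bias = sum(errors) / len(errors)
    mae = sum(abs(e) for e in errors) / len(errors)
    print(f"{model}: bias {bias:+.1f} kt, MAE {mae:.1f} kt")
```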


The tropics have heated up...if only by the discussion in this thread :D.

 

So the models that show development have shifted to the north and are stronger on intensity...with the notable exception of the Euro. I'd wait for some development before venturing a final track. The models agree that it should move into the NW Caribbean and then somewhere into the GoM...up to that point, I'm fairly confident. How strong the trough digging into the central Gulf states will be is the 64k question. The Euro is the flattest and the GFS is the sharpest...the CMC and UKMET are somewhere in between. Also, a stronger low-level vort farther north in the GoM would help erode the ridge and would tend to keep the gap open. We'll have to wait a little more.


The BOC system might still have a chance if it stays farther west. I have noticed there is usually less shear in the western part of the Gulf near the Mexican coast.

 

If the trough in the MS valley forms as deep as the GFS and Euro are advertising, then it won't have a good environment until after day 7, if it's not inland already. The GGEM is much weaker with the trough and allows the storm to get organized earlier. Glass 7/53 full.


The BOC system might still have a chance if it stays farther west. I have noticed there is usually less shear in the western part of the Gulf near the Mexican coast.

 

If the trough in the MS valley forms as deep as the GFS and Euro are advertising, then it won't have a good environment until after day 7, if it's not inland already. The GGEM is much weaker with the trough and allows the storm to get organized earlier. Glass 7/53 full.

Trough-driven cyclones are usually more prone to shear and dry air... yes, in the right spot a TC can benefit from the strong upper-level evacuation a trough might provide...but it has to be a deep trough, since that means the flow through the whole column has a relatively similar speed and direction (less shear).
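
For reference, the "similar speed and direction through the column" point is exactly what the standard 850-200 hPa deep-layer shear measures. A minimal sketch with illustrative (made-up) wind components:

```python
import math

# Deep-layer (850-200 hPa) vertical wind shear: the magnitude of the vector
# difference between the upper- and lower-level winds.  Winds below are
# illustrative values in knots (u = westerly, v = southerly components).

def shear_kt(u850, v850, u200, v200):
    """Magnitude of the 200-minus-850 hPa wind vector difference."""
    return math.hypot(u200 - u850, v200 - v850)

# Shallow trough: upper winds much stronger -> large shear over the TC.
print(f"shallow trough: {shear_kt(5, 5, 40, 10):.0f} kt")
# Deep trough: whole column moving together -> small shear.
print(f"deep trough:    {shear_kt(20, 10, 25, 12):.0f} kt")
```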


dtk, calling dtk to the tropical thread. I'm 99% sure there is no vortex bogusing in the GFS. There is relocation via PV inversion, but that's different from a bogus.

As of 2009: relocated.

http://www.nhc.noaa.gov/pdf/model_summary_20090724.pdf

The GFS makes a special accommodation for TCs in its initial fields by relocating the globally analyzed TC vortex in the first-guess field to the official NHC position.  An assimilation of the available data is then performed to create the initial state.  The globally analyzed vortex is, however, often an incomplete representation of the true TC structure.  For this reason, the GFS is typically more suited to producing track and outer wind structure forecasts than to producing intensity forecasts.
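
To make the relocation step a bit more concrete, here is a crude cartoon of the idea in code: split the first-guess field into a smooth environment plus a vortex-scale perturbation, then shift the perturbation so the vortex sits at the official position. This is only a sketch of the concept, not NCEP's actual relocation algorithm, and the field, positions, and smoothing length are all made up:

```python
import numpy as np
from scipy.ndimage import uniform_filter

# Cartoon of vortex relocation: environment + shifted vortex perturbation.
def relocate_vortex(field, guess_ij, official_ij, smooth_pts=15):
    env = uniform_filter(field, size=smooth_pts)   # smooth large-scale environment
    vortex = field - env                           # vortex-scale perturbation
    di = official_ij[0] - guess_ij[0]              # grid-point offsets (row, col)
    dj = official_ij[1] - guess_ij[1]
    return env + np.roll(vortex, shift=(di, dj), axis=(0, 1))

# Tiny example: an idealized low in sea-level pressure, 20 grid points off.
y, x = np.mgrid[0:100, 0:100]
slp = 1012.0 - 25.0 * np.exp(-(((x - 40) ** 2 + (y - 50) ** 2) / 150.0))
relocated = relocate_vortex(slp, guess_ij=(50, 40), official_ij=(50, 60))

print("min before:", round(float(slp.min()), 1),
      "at", np.unravel_index(slp.argmin(), slp.shape))
print("min after: ", round(float(relocated.min()), 1),
      "at", np.unravel_index(relocated.argmin(), relocated.shape))
```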

 


Question for modeling heavies.

 

ETA DTK is in the house!

 

Is there a perfect resolution based on model type? Adam mentioned the NAM and probably the FIM having convective feedback issues. I learned a while back in the Central subforum from a red tagger that a line of SREF members that produced what looked like supercells was based on an older, low-resolution GFS version, and said red tagger (forgot who) even linked to a NOAA product on diagnosing convective 'QPF bombs' that even falsely produced warm-core lows. If a very broad area is depicted as a solid thunderstorm, it leads to a process where the model suffers from convective feedback.

 

Now, I have spent a couple of days reading PDFs I don't understand on spherical harmonics, spectral models, and transforms, and I saw truncation presented as a solution to nonlinear instability. But then I couldn't decide if they meant limiting the terms of the spectral expansion, or limiting the terms before the truncation of the transform. (Did I mention a class in 'systems of non-linear differential equations' convinced me I would not get an MS in engineering? I could convert from physical space to Laplace space and solve; converting back killed me.)

 

Is the limit on resolution the fact that the truncation number increases faster than the resolution does, so it's mainly a matter of computing power? Or is there a limit where it either becomes pointless because of initialization errors or the model becomes inherently unstable and likely to fail?
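
On the truncation question: in the wave-space sense it really is just throwing away everything above a cutoff wavenumber. A toy 1-D Fourier version of the idea (real global models do this with spherical harmonics; this is only my illustration):

```python
import numpy as np

# Toy 1-D spectral truncation: transform to wave space, zero out everything
# above a cutoff wavenumber, transform back.  The small-scale ripple is gone,
# the large-scale wave survives.

n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
field = np.sin(3 * x) + 0.5 * np.sin(40 * x)       # large-scale + small-scale wave

coeffs = np.fft.rfft(field)
cutoff = 20                                        # keep wavenumbers 0..20 only
coeffs[cutoff + 1:] = 0.0
truncated = np.fft.irfft(coeffs, n)

# The wavenumber-3 signal survives; the wavenumber-40 ripple is filtered out.
print("max amplitude removed:", round(float(np.max(np.abs(field - truncated))), 2))
```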

 

Speaking of resolution and models

 

Is the FIM a spectral model?  (Sounds like a dumb question, I know).  Is there something special about the shape of the horizontal grid? (hexagons)  This looks cool, however.

[attached images: reiner_xsect.jpg, tk2.png]

 

And the 18Z FIM is close enough to forecasting a hurricane that I'd say it is probably forecasting a hurricane.

[attached image: 18Z FIM forecast]


dtk, calling dtk to the tropical thread. I'm 99% sure there is no vortex bogusing in the GFS. There is relocation via PV inversion, but that's different from a bogus.

This is correct, though not via PV inversion. There are, however, cases where we do bogusing instead of relocation. This happens when our tracker cannot find a vortex to relocate. In an effort to spin something up, we drop in bogus winds consistent with the advisories. The issue here is that the vortices generated are too coherent, as this typically occurs for weak storms or genesis events not picked up by the model. I don't have the numbers, but I think the bogus winds are assimilated in less than 5% of the DA updates with an active TC present.

Forgive typos, as I'm trying to reply from my mobile. I'll try to expand on this tomorrow.
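
As a rough picture of what "bogus winds consistent with the advisories" could look like, here is a sketch that builds synthetic wind observations from an idealized Rankine vortex using advisory-style values. The intensity, RMW, and vortex profile are my own assumptions; the real NCEP scheme is more involved:

```python
import math

# Sketch: synthetic "bogus" wind observations from an idealized Rankine vortex
# built from advisory-style values (35 kt intensity, 55 km RMW -- assumptions).
# This only illustrates the idea of handing the DA winds that spin up a vortex.

def rankine_wind_kt(radius_km, vmax_kt=35.0, rmw_km=55.0):
    """Tangential wind of an idealized Rankine vortex."""
    if radius_km <= rmw_km:
        return vmax_kt * radius_km / rmw_km      # solid-body rotation inside the RMW
    return vmax_kt * rmw_km / radius_km          # 1/r decay outside the RMW

# "Observations" on rings around the advisory center, ready to be assimilated.
for r_km in (25, 55, 110, 220):
    for azimuth_deg in (0, 90, 180, 270):
        speed_kt = rankine_wind_kt(r_km)
        # Cyclonic (counterclockwise) flow: the air moves 90 degrees to the left
        # of the outward radial, so its heading is azimuth - 90 (compass degrees).
        heading_deg = (azimuth_deg - 90) % 360
        print(f"r={r_km:>3} km  azimuth={azimuth_deg:>3}  "
              f"speed={speed_kt:5.1f} kt  heading={heading_deg:>3} deg")
```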


The problem with this argument is that verification has actually shown the opposite to be true. That is, for the Atlantic, the ECMWF has a weak bias and the GFS has a strong bias. See Table 5b, page 35, of the NHC's 2012 Verification Report. So...that argument kinda goes out the window. The greatest source of discrepancy between the intensity forecasts of these two models is in the DA, imo. The GFS has vortex bogusing, the EC does not. This would largely explain the EC's tendency to not fully initialize the vortex and thus be too weak, as verification has shown.

With that said, we need more verification. NHC has only done ECMWF intensity verification for one year (2012). As you noted, most of the storms in 2012 were pretty weak. I'd be curious to see how these two models compare in a bigger year...

 

Ahh good point, as I really only looked at 5a before attempting to make my argument. I am still a bit curious to see exactly how they are verifying these intensity forecasts, because there were a couple of obvious cases where the ECMWF had a high bias last year (Isaac and Leslie)... I didn't think the low sample size of relatively short-lived storms like Kirk, Michael, and Ernesto would skew the results toward the low-bias side, but that's just my subjective assessment.

 

 

dtk, calling dtk to the tropical thread. I'm 99% sure there is no vortex bogusing in the GFS. There is relocation via PV inversion, but that's different from a bogus.

 

 

This is correct, though not via PV inversion. There are, however, cases where we do bogusing instead of relocation. This happens when our tracker cannot find a vortex to relocate. In an effort to spin something up, we drop in bogus winds consistent with the advisories. The issue here is that the vortices generated are too coherent, as this typically occurs for weak storms or genesis events not picked up by the model. I don't have the numbers, but I think the bogus winds are assimilated in less than 5% of the DA updates with an active TC present.

Forgive typos, as I'm trying to reply from my mobile. I'll try to expand on this tomorrow.

 

Thanks for starting to clear this up... and this seems to correspond well to the blog that I cited earlier today. It sounds like vortex bogusing in the GFS is rare and typically only occurs during the genesis of a TC, if at all. However, just from investigating some of the text files on NOMADS, vortex relocation (while usually very minor) seems to happen quite frequently, with nearly every model cycle, if there are TCs.


This is correct, though not via PV inversion. There are, however, cases where we do bogusing instead of relocation. This happens when our tracker cannot find a vortex to relocate. In an effort to spin something up, we drop in bogus winds consistent with the advisories. The issue here is that the vortices generated are too coherent, as this typically occurs for weak storms or genesis events not picked up by the model. I don't have the numbers, but I think the bogus winds are assimilated in less than 5% of the DA updates with an active TC present.

Forgive typos, as I'm trying to reply from my mobile. I'll try to expand on this tomorrow.

Thanks for this. So usually it's just a relocation and bogus events only occur with weak storms in which the tracker can't pick up a vortex, which is about 5% of the time...I bet it happened a little more often than that last year though, considering we had all those weak frontal storms.


Ahh good point, as I really only looked at 5a before attempting to make my argument. I am still a bit curious to see exactly how they are verifying these intensity forecasts, because there were a couple of obvious cases where the ECMWF had a high bias last year (Isaac and Leslie)... I didn't think the low sample size of relatively short-lived storms like Kirk, Michael, and Ernesto would skew the results toward the low-bias side, but that's just my subjective assessment.

Yeah sometimes the ECMWF will really latch on to a storm and bomb it out.

 

I really wish the NHC had more in-depth documentation of the model biases. For example, we could look at intensity bias w.r.t. storm position, intensity bias w.r.t. storm strength at initialization, intensity bias w.r.t. storm genesis type (monsoonal, AEW, frontal, ULL, etc...). Same could be said for track biases. Assessing and properly understanding model bias is essential to making good forecasts.
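
Something like that breakdown is straightforward to compute once the cases are tabulated. A minimal sketch with entirely made-up cases, just to show the bookkeeping behind a stratified bias:

```python
import pandas as pd

# Stratified intensity bias: forecast-minus-best-track error grouped by genesis
# type.  All values are hypothetical; the point is only the bookkeeping.

cases = pd.DataFrame({
    "genesis": ["AEW", "AEW", "frontal", "frontal", "monsoonal", "ULL"],
    "fcst_kt": [85, 100, 45, 40, 70, 50],
    "btk_kt":  [95, 115, 40, 35, 75, 55],
})
cases["error_kt"] = cases["fcst_kt"] - cases["btk_kt"]

print(cases.groupby("genesis")["error_kt"].agg(["mean", "count"]))
```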


I maintain that the Euro has had too strong a bias overall in recent years. I've posted my analyses of this here in the past.

Again, I think it's only too strong for certain storms. Perhaps the ones in which the DA fully initializes a strong TC (i.e., storms which are already strong). For the most part, however, the ECMWF is too weak. 2012's verification showed that, and the same has been true for the four storms we've had so far this year. This bias may be occurring because the majority of the storms last year, and all of the storms so far this year, have been weak.


Great discussion today guys! 

 

I'll try to tackle a few of these questions...

 

Is there a perfect resolution based on model type? Adam mentioned the NAM and probably the FIM having convective feedback issues. I learned a while back in the Central subforum from a red tagger that a line of SREF members that produced what looked like supercells was based on an older, low-resolution GFS version, and said red tagger (forgot who) even linked to a NOAA product on diagnosing convective 'QPF bombs' that even falsely produced warm-core lows. If a very broad area is depicted as a solid thunderstorm, it leads to a process where the model suffers from convective feedback.

 

Convective parametrization schemes are always prone to error, but it's much more obvious at very poor resolution. Just as you said, you can get massive QPF bombs and land-locked warm-core lows when the model calls the convective parametrization and releases latent heat over an entire 50x50 km grid box.
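
The latent-heat point is easy to quantify. A quick back-of-the-envelope calculation (round numbers, my own illustration) of how much condensing vapor warms the air that a parametrization then spreads over the whole grid box:

```python
# Latent heating per unit mass of air: dT = L_v * dq / c_p.
# A convective parametrization deposits this warming over the entire grid box,
# which is why coarse-grid "QPF bombs" can build bogus warm-core lows.

L_v = 2.5e6      # J/kg, latent heat of vaporization
c_p = 1004.0     # J/(kg K), specific heat of dry air at constant pressure

for dq_g_per_kg in (1.0, 3.0, 6.0):
    dq = dq_g_per_kg / 1000.0                # kg of vapor condensed per kg of air
    dT = L_v * dq / c_p                      # resulting warming (K)
    print(f"condense {dq_g_per_kg:.0f} g/kg -> ~{dT:.1f} K of heating "
          "spread across the whole grid box")
```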

 

In general, for resolving hurricanes, you will get the best structure and intensity with convection-permitting (~3 km) or convection-resolving (<1 km) resolutions, for which the parametrization would be turned off. Actually, simulations at NCAR by Bryan and Rotunno have shown 1 km WRF simulations to have a high intensity bias. The model can generate a tighter core/RMW than simulations at lower resolution, which means a higher max wind, but it does not have high enough resolution to resolve the mixing processes along the inner edge of the eyewall that actually weaken the storm. Realistically (meaning you don't have a supercomputer all to yourself), anything below 15 km is usually sufficient to get a reasonable core.

 

Is the limit on resolution the fact that the truncation number increases faster than the resolution does, so it's mainly a matter of computing power? Or is there a limit where it either becomes pointless because of initialization errors or the model becomes inherently unstable and likely to fail?

 

It's true that there are diminishing returns with increasing resolution due to the lack of observations on a sufficiently dense grid. I wouldn't say there's an exact limit per se, but the fact that the observation network can never be infinitely dense, and that all observations will always have some amount of error (no thermometer or satellite measurement will ever be perfect), means that the analysis will always have error. Since errors will always grow with time in a dynamic system, there will always be a limit of predictability in the forecast (see Lorenz 1969 or 1982).
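
The predictability-limit point is easy to demonstrate with the classic toy example: two copies of the Lorenz (1963) system differing only by a tiny "analysis error" diverge until the forecast has no skill, no matter how good the model is. A minimal sketch (simple Euler stepping, chosen for brevity):

```python
import numpy as np

# Twin experiment in the Lorenz (1963) system: a 1e-6 initial-condition error
# grows until it saturates at the size of the attractor itself.

def lorenz63_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

truth = np.array([1.0, 1.0, 1.0])
forecast = truth + np.array([1e-6, 0.0, 0.0])     # tiny "analysis error"

for step in range(1, 3001):
    truth = lorenz63_step(truth)
    forecast = lorenz63_step(forecast)
    if step % 500 == 0:
        err = np.linalg.norm(forecast - truth)
        print(f"t = {step * 0.01:5.1f}  error = {err:.2e}")
```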

 

You're most likely not producing a more skillful forecast by increasing resolution when you don't have enough observations to generate a realistic analysis, unless you just happened to have a really good prior, which in general is not the case. 

 

 

Speaking of resolution and models

 

Is the FIM a spectral model?  (Sounds like a dumb question, I know).  Is there something special about the shape of the horizontal grid? (hexagons) 

 

If by spectral you mean solved via a spectral transform (spherical harmonic) method, then no. Instead the FIM uses finite-volume methods, which are kind of a variation of the finite element method. The icosahedral shape of the grid is just because it tiles a spherical planet well. You can read more about it here if you really want to know: http://fim.noaa.gov/fimdocu_rb.pdf
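
For anyone curious what "finite volume" means in practice, here is a toy 1-D version of the idea: keep track of cell averages and update them with the fluxes across cell faces. This is only a cartoon (first-order upwind advection), nothing like the FIM's actual icosahedral solver:

```python
import numpy as np

# Minimal 1-D finite-volume advection: cell averages change by the difference
# between the flux entering through the left face and leaving the right face.

n, dx, dt, u = 100, 1.0, 0.5, 1.0          # cells, grid spacing, time step, wind
q = np.where((np.arange(n) > 20) & (np.arange(n) < 40), 1.0, 0.0)  # a "blob"

for _ in range(60):
    flux = u * q                            # flux leaving each cell's right face (upwind)
    flux_in = np.roll(flux, 1)              # flux entering through the left face
    q = q + dt / dx * (flux_in - flux)      # conservation: in minus out

center = np.sum(np.arange(n) * q) / np.sum(q)
print(f"blob center moved from 30.0 to {center:.1f} (advected 30 cells, as expected)")
```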

 


Again, I think it's only too strong for certain storms. Perhaps the ones in which the DA fully initializes a strong TC (i.e., storms which are already strong). For the most part, however, the ECMWF is too weak. 2012's verification showed that, and the same has been true for the four storms we've had so far this year. This bias may be occurring because the majority of the storms last year, and all of the storms so far this year, have been weak.

 

Fiona of 2010, which never went below 998 mb: eight straight horrendous Euro runs, with the following lowest SLPs (in mb) in various locations between the SW Atlantic and the GOM: 921, 908 (then the lowest of any Euro run since Katrina), 934, 930, 926, 939, 960, and 962.

 

Edit: I've found the worst of the bombing bias to be above 25 N.


It's true that there are diminishing returns with increasing resolution due to the lack of observations on a sufficiently dense grid. I wouldn't say there's an exact limit per se, but the fact that the observation network can never be infinitely dense, and that all observations will always have some amount of error (no thermometer or satellite measurement will ever be perfect), means that the analysis will always have error. Since errors will always grow with time in a dynamic system, there will always be a limit of predictability in the forecast (see Lorenz 1969 or 1982).

 

You're most likely not producing a more skillful forecast by increasing resolution when you don't have enough observations to generate a realistic analysis, unless you just happened to have a really good prior, which in general is not the case. 

Yeah, this is an interesting point. It really makes me wonder just how useful increasing the grid resolution of our global models will be in the future. I fully support increasing model resolution for the time being; however, in the not-too-distant future, I question how useful it will be. For example, ECMWF has plans for a T7999 (~2.5 km) global model by ~2025. Provided our observation network doesn't dramatically improve (here I am mostly referring to satellite-based soundings, which have poor resolution and large errors -- GPS RO is an exception), I don't see how our forecasts will improve by running our models at these incredibly high resolutions. Yes, they will more realistically represent atmospheric processes, but will they be more accurate?
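
For what it's worth, the T-number-to-grid-spacing conversion behind the ~2.5 km figure is just the Earth's circumference divided by the number of grid points of a linear Gaussian grid (roughly 2N+1 around the equator). A quick sketch of that rule of thumb:

```python
# Rough conversion from spectral truncation to grid spacing, assuming a linear
# Gaussian grid (about 2N+1 points around the 40,000 km equator).  Only a
# back-of-the-envelope estimate, but it reproduces the "T7999 ~ 2.5 km" figure.

EARTH_CIRCUMFERENCE_KM = 40000.0

def linear_grid_spacing_km(truncation):
    return EARTH_CIRCUMFERENCE_KM / (2 * truncation + 1)

for t in (1279, 3999, 7999):
    print(f"T{t}: ~{linear_grid_spacing_km(t):.1f} km")
```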

 

 

Maybe I am way off here, but I feel that pushing our global (not regional) models to very high resolution (say <5 km) will not bring any significant increase in model skill unless our observation network is dramatically improved. There has to be a threshold at some point where, given a static observation system, the more you increase resolution, the worse the forecasts actually become. I'm not sure where that tipping point is, however. High-res global ensembles sound much more promising...


Fiona of 2010, which never went below 998 mb: eight straight horrendous Euro runs, with the following lowest SLPs (in mb) in various locations between the SW Atlantic and the GOM: 921, 908 (then the lowest of any Euro run since Katrina), 934, 930, 926, 939, 960, and 962.

 

Edit: I've found the worst of the bombing bias to be above 30 N.

Yep, as I said, it will unrealistically bomb out certain storms...I'm not arguing there.


Yep, as I said, it will unrealistically bomb out certain storms...I'm not arguing there.

 

Yeah, it appears we pretty much agree. I said this in 2010 after analyzing that season's storms:

 

"This is just more evidence to support the idea that the higher resolution of the Euro has a significant bias toward too low SLP's for well developed TC's once they reach north of 25N."

 


Fiona of 2010, which never went below 998 mb: eight straight horrendous Euro runs, with the following lowest SLPs (in mb) in various locations between the SW Atlantic and the GOM: 921, 908 (then the lowest of any Euro run since Katrina), 934, 930, 926, 939, 960, and 962.

 

Edit: I've found the worst of the bombing bias to be above 25 N.

 

...the funny part is it was eerily accurate on Sandy's pressure above 25 N despite this.


...the funny part is it was eerily accurate on Sandy's pressure above 25 N despite this.

 

 OTOH, the lowest Euro pressures for Irene were above 30N. For Irene above 30N, it had the following as the lowest for the 8/22/11 12Z through 8/26/11 0Z runs (in mb), respectively: 926, 936, 923, 927, 925, 921, 924, and 918. The lowest for 30N or higher verified to be 945 mb.
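
Just to put a single number on those Irene runs:

```python
# Quick arithmetic on the Irene numbers listed above: the average of the eight
# Euro run minima north of 30N versus the 945 mb that verified.

euro_run_minima_mb = [926, 936, 923, 927, 925, 921, 924, 918]
verified_mb = 945

mean_minimum = sum(euro_run_minima_mb) / len(euro_run_minima_mb)
print(f"mean Euro run minimum: {mean_minimum:.1f} mb")
print(f"mean error vs verification: {mean_minimum - verified_mb:+.1f} mb")
```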


Most likely because Sandy was huge without much of an inner core.

 

Gotta think that Sandy's "hybrid" status and the interaction with the 500 mb trough helped a lot with that one. If there had been an Irene-type 500 mb scenario and Sandy had just scooted up the coast, I doubt those pressures would have dropped to the extent they did.

 

 

 OTOH, the lowest Euro pressures for Irene were above 30N. For Irene above 30N, it had the following as the lowest for the 8/22/11 12Z through 8/26/11 0Z runs (in mb), respectively: 926, 936, 923, 927, 925, 921, 924, and 918. The lowest for 30N or higher verified to be 945 mb.

 

I wasn't really picking on you about the bombing-out point, because the Euro does have a tendency on occasion to over-amplify in the cold season (although not to the extent the Canadian does)...I figured I'd throw a log on the fire with the Sandy point, since it was the exception where that tendency ended up verifying.

 

I remember seeing run after run of 930-940ish on the Euro with Sandy and couldn't fathom it would verify, given how off the Euro had been with Irene on pressure, even with transition and baroclinic forcing being factored in for Sandy.


Gotta think that Sandy's "hybrid" status and the interaction with the 500 mb trough helped a lot with that one. If there had been an Irene-type 500 mb scenario and Sandy had just scooted up the coast, I doubt those pressures would have dropped to the extent they did.

Agreed. It was a cold season scenario with a transitioning tropical cyclone. I'd expect the Euro to excel in that kind of situation.

There's a big area of low-level vorticity in the SW Caribbean...there's also the low-level vorticity associated with a TW currently moving through the central Caribbean. The latter will move in just N of the former, and they will interact. Which will be the dominant one? The Euro and UKMET say the SW Caribbean piece of energy... the GFS and CMC say the TW energy. Down the road, very different forecasts, with the European models not developing much and hugging the coasts of Central America and Mexico, and the American models aiming at the Yucatan Channel and the northern GoM.


Nice surge in moisture, not well reflected in IR satellite imagery and currently under extremely unfavorable shear per CIMSS analysis.

 

The Euro is consistent on a weak, latitudinally challenged system that never really develops. Glass a quarter full until the Euro comes on board. And a step backwards, as far as probabilities, from this model.

 


