First Legit Storm Potential of the Season Upon Us


40/70 Benchmark

Recommended Posts

37 minutes ago, weatherwiz said:

This should be pinned at the top of the board :clap: 

As I've also mentioned before, AI can probably be very useful in the nowcasting department or short-term (6-12 hours) but beyond that...a very long ways to go

Thanks, dude. I always think of Ian Malcolm's quote from Jurassic Park when AI models are mentioned: Data scientists are so "preoccupied with whether or not they could, that they didn't stop to think if they should."

I think they're more useful for climatological/ensemble purposes. Their resolution is too coarse for nowcasting, and whether people like it or not, the best real-time product we have is the HRRR (the only model to update every hour, not counting the RRFS). Users just need to understand its limitations... Within a few hours = good ||| outside a few hours = meh ||| beyond a PBL cycle = ignore...

I've been thinking; theoretically, the ceiling for AI should be that of current NWP... I don't think it's possible to outperform the dataset it's trained on, so to improve AI, you must improve NWP <OR> increase the size of your training dataset. As a result, NWP will never be phased out.

:fist bump:

 

30 minutes ago, CoastalWx said:

The only thing I heard is that the Euro AI ensemble seems to have the best scores at 500 mb. But you and I both know that doesn't necessarily translate to a better and more accurate forecast for sensible weather.

If I remember correctly, the evaluation was conducted wrt an analysis dataset (not in-situ locations). To me, that implies they're evaluating its efficacy (can it 'hang' with a traditional modeling system?) and not its accuracy. I did this too when I compressed assimilation data and reran CMAQ simulations when I worked with the EPA. I won't trust AI until evaluations are conducted at remote sensing stations. Analysis datasets aren't entirely accurate.
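Roughly what I mean, for anyone curious: a minimal sketch (hypothetical file and variable names, plain numpy/xarray, assuming the grid uses lat/lon coordinates) of scoring the same forecast against an analysis grid versus against station observations.

```python
import numpy as np
import xarray as xr

# Hypothetical files: an AI forecast grid, the analysis it was scored against,
# and a CSV of in-situ observations (columns: lat, lon, observed t2m).
fcst = xr.open_dataset("ai_forecast_f048.nc")["t2m"]
anal = xr.open_dataset("analysis_valid.nc")["t2m"]
obs = np.loadtxt("stations.csv", delimiter=",", skiprows=1)

# "Efficacy"-style score: forecast vs. the gridded analysis.
rmse_vs_analysis = float(np.sqrt(((fcst - anal) ** 2).mean()))

# "Accuracy"-style score: forecast sampled at station locations vs. what was measured.
lats = xr.DataArray(obs[:, 0], dims="station")
lons = xr.DataArray(obs[:, 1], dims="station")
fcst_at_stations = fcst.sel(lat=lats, lon=lons, method="nearest")
rmse_vs_stations = float(np.sqrt(((fcst_at_stations.values - obs[:, 2]) ** 2).mean()))

print(f"RMSE vs analysis: {rmse_vs_analysis:.2f} | RMSE vs stations: {rmse_vs_stations:.2f}")
```

If those two numbers diverge a lot, that's exactly the gap between "can it hang with the analysis" and "did it verify against what was actually observed."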


5 minutes ago, MegaMike said:

I've been thinking; theoretically, the ceiling for AI should be that of current NWP... I don't think it's possible to outperform the dataset it's trained on, so to improve AI, you must improve NWP <OR> increase the size of your training dataset. As a result, NWP will never be phased out.

 

I disagree with this. ECMWF and NOAA/NWS should be the baseline. AI can use verification to improve modeling beyond the current state of physics-equation-based modeling, which is limited by its programming and unable to iterate and improve quickly.


7 minutes ago, MegaMike said:

Thanks, dude. I always think of Ian Malcolm's quote from Jurassic Park when AI models are mentioned: Data scientists are so "preoccupied with whether or not they could, that they didn't stop to think if they should."

I think they're more useful for climatological/ensemble purposes. Their resolution is too coarse for nowcasting, and whether people like it or not, the best real-time product we have is the HRRR (the only model to update every hour, not counting the RRFS). Users just need to understand its limitations... Within a few hours = good ||| outside a few hours = meh ||| beyond a PBL cycle = ignore...

I've been thinking; theoretically, the ceiling for AI should be that of current NWP... I don't think it's possible to outperform the dataset it's trained on, so to improve AI, you must improve NWP <OR> increase the size of your training dataset. As a result, NWP will never be phased out.

:fist bump:

 

If I remember correctly, the evaluation was conducted wrt an analysis dataset (not in-situ locations). To me, that implies they're evaluating its efficacy (can it 'hang' with a traditional modeling system?) and not its accuracy. I did this too when I compressed assimilation data and reran CMAQ simulations when I worked with the EPA. I won't trust AI until evaluations are conducted at remote sensing stations. Analysis datasets aren't entirely accurate.

Excellent post. Understanding models (strengths and weaknesses) is vital to forecasting success. Ultimately, forecasting is much more than just looking at the output of a model or comparing a few products. A forecaster should always be asking themselves, "does this output make sense given the pattern?"...obviously, when dealing with a time range beyond 3-5 days there is always, always going to be a degree of uncertainty. However, asking yourself that question and working through the details to answer it can give a forecaster enough of a basis to determine, with confidence, the likelihood of a scenario occurring.

I'm with you, the ceiling for AI should be that of current NWP, and I think AI should be thought of as a complement to current NWP. For example, if AI can do a better job of assessing the current state (initialization), and do it more quickly, integrate that into NWP. I believe this has always been done to an extent (again, a reason the Euro was superior for a while), but with the advancement in technology this could vastly improve NWP.

For your response to Scott, that is a very underrated point about re-analysis datasets. I think we take them too much at face value, but we need to understand they have limitations as well. For example, if you compare ERSSTv6 to v5 and previous versions, you can see some large discrepancies in various areas of the globe, particularly in the earlier years, when much of the re-analysis outside of ship routes was created via extrapolation methods.


Personally, I'm not pleased with how the industry has rolled out this new AI technology.

This needs a prioritized exposé of how this stuff works and what the expectations are. It's all very difficult to find, and I find that to be divisive.

It's quite obvious why. There is a sense of compete-first, ask-questions-later-ism going on, where different sub-sectors are afraid of losing a competitive edge, so they are rushing out these AI products that are probably based on a rudimentary model that can be "tweaked" later. In the meantime, no one gets to know that they don't really know what they're doing, nor how to do it very well. That part is kept very hidden. Everyone has AI this, that, and the other... so organizational ineptitude can remain lost in the noise of all this AI.

Either way, being left with no answers other than food for suspicion doesn't lend confidence in any of these AI model versions. And it is a little scary as more and more people uncritically get comfortable with the assumptions involved in using them.


20 minutes ago, MegaMike said:

Thanks, dude. I always think of Ian Malcolm's quote from Jurassic Park when AI models are mentioned: Data scientists are so "preoccupied with whether or not they could, that they didn't stop to think if they should."

I think they're more useful for climatological/ensemble purposes. Their resolution is too coarse for nowcasting, and whether people like it or not, the best real-time product we have is the HRRR (the only model to update every hour, not counting the RRFS). Users just need to understand its limitations... Within a few hours = good ||| outside a few hours = meh ||| beyond a PBL cycle = ignore...

I've been thinking; theoretically, the ceiling for AI should be that of current NWP... I don't think it's possible to outperform the dataset it's trained on, so to improve AI, you must improve NWP <OR> increase the size of your training dataset. As a result, NWP will never be phased out.

:fist bump:

 

If I remember correctly, the evaluation was conducted wrt an analysis dataset (not in-situ locations). To me, that implies they're evaluating its efficacy (can it 'hang' with a traditional modeling system?) and not its accuracy. I did this too when I compressed assimilation data and reran CMAQ simulations when I worked with the EPA. I won't trust AI until evaluations are conducted at remote sensing stations. Analysis datasets aren't entirely accurate.

I know the energy sector was all abuzz late last year about how the Euro AI ensembles outperformed the EPS. I heard a few Mets talking about that. I haven’t compared them recently though. 
 

Interesting way to evaluate if it can hang with the big boys. Definitely plenty of data now to collect and verify how they have been performing vs other guidance.


3 minutes ago, Typhoon Tip said:

Personally, I'm not pleased with how the industry has rolled out this new AI technology.

This needs a prioritized exposé of how this stuff works and what the expectations are. It's all very difficult to find, and I find that to be divisive.

It's quite obvious why. There is a sense of compete-first, ask-questions-later-ism going on, where different sub-sectors are afraid of losing a competitive edge, so they are rushing out these AI products that are probably based on a rudimentary model that can be "tweaked" later. In the meantime, no one gets to know that they don't really know what they're doing, nor how to do it very well. That part is kept very hidden. Everyone has AI this, that, and the other... so organizational ineptitude can remain lost in the noise of all this AI.

The other thing too is there is a ton of money in AI...lots of money. When it comes to technology, it's so easy to sucker people in...I mean, look at how many people go bonkers when a new iPhone or some new high-tech gadget comes out. But if you're in AI development...you can easily sucker people in and make a boatload of money.


10 minutes ago, weatherwiz said:

A forecaster should always be asking themselves, "does this output make sense given the pattern?"...

I don't agree with this at all. I think it leads many people off a cliff. People think their intuition regarding loosely defined concepts like "pattern" is superior to supercomputers developed specifically to model exactly what's possible in the atmosphere. It's pure ego. 


1 hour ago, weatherwiz said:

As stated, the idea that they're supposed to "learn" is totally overblown. Traditional models already have some AI built into them and already do this to an extent. From what I understand (and this may not apply equally to all AI models):

AI assists with the initialization scheme, where it combs through ingested data and will "remove" what it believes to be bad data or an outlier based on a slew of historical information. The idea is that this will lead to a more accurate initialization, which is important because once you move forward in time you start to introduce error, and that error becomes compounded over time...that is why operational forecast models are generally useless beyond D7-10 and can even be relatively useless past D5 if there is a lot going on. Error also occurs because of rounding and approximations, especially approximations.

AI models are built on a wealth of historical data: they look for similarities to the initialized field and then forecast based on how those similar situations evolved in the past.

The challenge in all of this is that there is still a lot we don't understand about weather, particularly the processes that occur during storm evolution, and it becomes even more of a challenge because for forecast models to ingest this data we have to be able to parameterize it.

There is much more to this than just verifying a specific level or variable, and even that leads to a lot of questions. In a tame weather pattern that is not hostile, AI will probably outperform, but what good is that, or what value is that really adding?

Thanks, I appreciate your response and perspective on this.  Going to be interesting to see how it ultimately evolves and plays out over time.  
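For what it's worth, the observation-screening step described in the quote above might look something like this rough sketch (made-up thresholds and toy numbers, plain numpy):

```python
import numpy as np

def screen_obs(obs, clim_mean, clim_std, max_z=4.0):
    """Flag observations that sit implausibly far from climatology.

    obs, clim_mean, clim_std: 1-D arrays aligned by station.
    Returns a boolean mask of observations kept. The threshold is made up.
    """
    z = np.abs(obs - clim_mean) / np.maximum(clim_std, 1e-6)
    return z <= max_z

# Toy example: the third "station" reports a value far outside its climatology.
obs = np.array([2.1, -0.5, 25.0, 1.8])
clim_mean = np.array([1.5, 0.0, 2.0, 1.0])
clim_std = np.array([3.0, 3.0, 3.0, 3.0])
print(screen_obs(obs, clim_mean, clim_std))   # [ True  True False  True]
```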


8 minutes ago, CoastalWx said:

I know the energy sector was all abuzz late last year about how the Euro AI ensembles outperformed the EPS. I heard a few Mets talking about that. I haven’t compared them recently though. 
 

Interesting way to evaluate if it can hang with the big boys. Definitely plenty of data now to collect and verify how they have been performing vs other guidance.

From what I've gathered, verification stats have had the Euro AI ensemble and the EPS pretty much neck and neck...sometimes the AI has had a slight edge, but it's definitely not the magic bullet some people imply. The AI has done better with individual storms, especially this hurricane season, but it also got its clock cleaned in others, just like every other model. My two cents: there is promise, but the jury is still out on when they are most useful and to what extent. There will always be limitations when the models are mostly data-driven instead of physics-based.
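For anyone who wants to run that comparison themselves, a rough sketch of the kind of headline score involved (hypothetical file names; both ensembles and the verifying analysis assumed to be on the same grid and lead time):

```python
import numpy as np

# Hypothetical arrays: (n_members, ny, nx) ensembles from two systems
# and the verifying analysis (ny, nx), all valid at the same time/lead.
ai_ens = np.load("ai_ens_f120.npy")
eps_ens = np.load("eps_f120.npy")
analysis = np.load("analysis_valid.npy")

def ens_scores(ens, truth):
    """Ensemble-mean RMSE and mean spread (std across members)."""
    rmse = np.sqrt(((ens.mean(axis=0) - truth) ** 2).mean())
    spread = ens.std(axis=0).mean()
    return rmse, spread

for name, ens in [("AI ensemble", ai_ens), ("EPS", eps_ens)]:
    rmse, spread = ens_scores(ens, analysis)
    print(f"{name}: RMSE={rmse:.2f}, spread={spread:.2f}")
```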


3 minutes ago, eduggs said:

I don't agree with this at all. I think it leads many people off a cliff. People think their intuition regarding loosely defined concepts like "pattern" is superior to supercomputers developed specifically to model exactly what's possible in the atmosphere. It's pure ego. 

Precisely: developed to model exactly what's possible in the atmosphere. A skilled forecaster will use fundamental knowledge of meteorology, its principles, and historical knowledge to make an educated forecast on how likely "possible" is.


10 minutes ago, eduggs said:

I disagree with this. ECMWF and NOAA/NWS should be the baseline. AI can use verification to improve modeling beyond the current state of physics-equation-based modeling, which is limited by its programming and unable to iterate and improve quickly.

 

That's a fair point. I didn't want to get technical, but I'll restate it as, "the ceiling for AI should be that of current NWP + bias correction."

I've mentioned in the past that AI should be used to bias-correct ICs/BCs (initial/boundary conditions), so I don't disagree.

On top of bias correction, I imagine the analysis datasets already incorporate 'nudging.' That is only done for the ICs/BCs prior to initialization though, so you'd still need to do gridded bias correction post-simulation.
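For the gridded post-simulation step, something as simple as a lead-time-dependent mean-error field would be a starting point (a rough sketch with made-up file names):

```python
import numpy as np

# Hypothetical arrays: past forecasts and matching analyses, shape (n_cases, ny, nx),
# all valid at the same lead time and on the same grid.
past_fcst = np.load("past_forecasts_f048.npy")
past_anal = np.load("past_analyses_f048.npy")

# Gridded mean-error (bias) field learned from history at this lead time.
bias = (past_fcst - past_anal).mean(axis=0)

# Apply it to a new run after the simulation finishes.
new_fcst = np.load("new_forecast_f048.npy")
corrected = new_fcst - bias
```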

 


3 minutes ago, Winter Wizard said:

From what I've gathered, verification stats have had the Euro AI ensemble and the EPS pretty much neck and neck...sometimes the AI has had a slight edge, but it's definitely not the magic bullet some people imply. The AI has done better with individual storms, especially this hurricane season, but it also got its clock cleaned in others, just like every other model. My two cents: there is promise, but the jury is still out on when they are most useful and to what extent. There will always be limitations when the models are mostly data-driven instead of physics-based.

That’s where I’m at too. Good post. I’m definitely curious how they perform in high leverage situations…like Sunday night lol.


Everything I’ve read about AI suggests deep learning, which SHOULD make it improve with time. I’ve been involved in AI models for screening major diseases, and the performance is NOT worse than humans'. Humans have to determine the relevance of the results in terms of management, but I’m seeing some evidence of a “head in the sand” approach to anything AI. I think that logic is inherently flawed.


8 minutes ago, weatherwiz said:

Precisely: developed to model exactly what's possible in the atmosphere. A skilled forecaster will use fundamental knowledge of meteorology, its principles, and historical knowledge to make an educated forecast on how likely "possible" is.

Where I agree is that meteorologists can use local knowledge combined with model output to occasionally outforecast a global model locally, in the short range, and for limited parameters like surface temperature. But forecasters who think they can outforecast a global weather model at the synoptic scale or in the mid-range are deluding themselves. They are susceptible to all sorts of biases that convince them that their gut feelings are superior (confirmation bias, availability heuristic, confidence bias etc).


2 minutes ago, eduggs said:

Where I agree is that meteorologists can use local knowledge combined with model output to occasionally outforecast a global model locally, in the short range, and for limited parameters like surface temperature. But forecasters who think they can outforecast a global weather model at the synoptic scale or in the mid-range are deluding themselves. They are susceptible to all sorts of biases that convince them that their gut feelings are superior (confirmation bias, availability heuristic, confidence bias etc).

Agreed on this


We should probably start a thread for model comparisons since the AI models have shaken things up.  There isn’t much of a storm presenting at this stage, NAM sucker hole notwithstanding.

Could also just rename this thread to keep the model disco rolling and not lose it when something else pops up.  It is interesting and people are engaged with it.  


Just now, weathafella said:

NAM significantly improved but still sucks.   And it’s out of its useful range.

Yeah this is all just fodder....if we can get a bump back west on the varsity models at 00z, then we're still in the game, but otherwise it's lights out. 

