American Weather
Posts posted by weatherwiz
-
What is this obsession with Nemo
-
5 minutes ago, vortex95 said:
It's not trolling. We go back and forth via email all the time.

Oh yes, CoastalWx has given me the third degree before about how "we never get big tstms" here. Until 6/1/2011, then he had a change of heart! Then he got a +CG shock on his PC from a wicked close strike, and freaked when the outside transformer kept arcing. And the tornado in Weymouth in Aug 2023. He started to go "hmmmm" and then I showed him the long-term CG plot for New England, and how there is a local max right in his area S of BOS.
High risk days often FAIL miserably, esp. in the Plains. I'd take a SLGT or MDT risk any day instead. My best chase days were in SLGT. Isolated supercells are way better logistically, other than the chase hordes.

There was the one high risk…feels like forever now, but maybe 2019? People were going insane because the HRRR was going bonkers with these warm-sector supercells, and I think a high risk was issued and virtually nothing happened. I think after this it became apparent the HRRR had a bias for supercells south of the warm front, well away from any forcing.
-
37 minutes ago, H2Otown_WX said:
That guy was really weird. He catfished me on PMs. Pretended he was a hot woman.
Meh I used to kick my feet up on the desk and fall asleep and fantasize
-
Probably results in a few inches of snow within GA and parts of the Carolinas and at least mixing into the FL Panhandle
-
2 minutes ago, eduggs said:
Where I agree is that meteorologists can use local knowledge combined with model output to occasionally outforecast a global model locally, in the short range, and for limited parameters like surface temperature. But forecasters who think they can outforecast a global weather model at the synoptic scale or in the mid-range are deluding themselves. They are susceptible to all sorts of biases that convince them that their gut feelings are superior (confirmation bias, availability heuristic, confidence bias etc).
Agreed on this
-
3 minutes ago, eduggs said:
I don't agree with this at all. I think it leads many people off a cliff. People think their intuition regarding loosely defined concepts like "pattern" is superior to supercomputers developed specifically to model exactly what's possible in the atmosphere. It's pure ego.
Precisely: developed to model exactly what's possible in the atmosphere. A skilled forecaster will use fundamental knowledge of meteorology, its principles, and historical knowledge to make an educated forecast on how likely "possible" actually is.
-
1 minute ago, ORH_wxman said:
I’d be shocked if we went through a large chunk of Feb with an El Niño N PAC pattern but stranger things have happened.
My guess is we revert back to RNA/-EPO pattern as we go into February. That can still be ok but you risk SE ridge getting too stout which happens frequently in Niña Februarys.
Yeah that would probably be a big ask given the Nina state. I guess what we could hope for is a big storm as that pattern developed and then something else as it reverted back...then take our chances with the gradient.
-
3 minutes ago, Typhoon Tip said:
Personally as an industry I'm not pleased with the rollout of this new AI technology.
This needs to have a prioritized exposé as to how this stuff works and what the expectations are. It's all very difficult to find, and I find that to be divisive.
It's quite obvious why. There is a sense of competition, an ask-later-ism that is going on, where different sub-sectors are afraid of losing a competitive edge, so they are rushing out these AI products that are probably based on a rudimentary model that can be "tweaked" - but in the meantime, no one gets to know that they don't know what they were doing, nor how to do it very well. That part is kept very hidden. It's just that everyone has AI this, that, and the other ...so organizational ineptitude can remain lost in the noise of all this AI.
The other thing too is there is a ton of money in AI...lots of money. When it comes to technology, it's so easy to sucker people in...I mean, look how many people go bonkers when a new iPhone or some new high-tech gadget comes out. But if you're in the development of AI...you can easily sucker people in and make a boatload of money.
-
1 minute ago, ORH_wxman said:
Almost get a classic El Nino N PAC there late in the ensembles with an Aleutian low and +PNA ridge.
Wouldn't mind seeing that moving into February. You want to talk about the prospects for a big February, there is the look right there. Ultimately, I'd like to see that ridge axis shifted east a bit and tilted a bit more directly poleward...but this is an ens mean so that detail is a bit minute but something to watch for when we get into OP range
-
7 minutes ago, MegaMike said:
Thanks, dude. I always think of Ian Malcolm's quote from Jurassic Park when AI models are mentioned: Data scientists are so "preoccupied with whether or not they could, that they didn't stop to think if they should."
I think they're more useful for climatological/ensemble purposes. Their resolution is too coarse for nowcasting, and whether people like it or not, the best real-time product we have is the HRRR (the only model to update every hour, not considering the RRFS). Users just need to understand its limitations... Within a few hours = good ||| outside a few hours = meh ||| beyond a PBL cycle = ignore...
I've been thinking; theoretically, the ceiling for AI should be that of current NWP... I don't think it's possible to outperform the dataset it's trained on, so to improve AI, you must improve NWP <OR> increase the size of your training dataset. As a result, NWP will never be phased out.
:fist bump:
If I remember correctly, the evaluation was conducted wrt an analysis dataset (not in-situ locations). To me, that implies they're evaluating its efficacy (can it 'hang' with a traditional modeling system?) and not its accuracy. I did this too when I compressed assimilation data and reran CMAQ simulations when I worked with the EPA. I won't trust AI until evaluations are conducted at remote sensing stations. Analysis datasets aren't entirely accurate.
Excellent post. Understanding models (strengths and weaknesses) is vital to forecasting success. Ultimately, forecasting is much more than just looking at the output of a model or comparing a few products. A forecaster should always be asking themselves, "does this output make sense given the pattern?" Obviously, when dealing with a time range beyond 3-4-5 days there is always going to be a degree of uncertainty; however, asking yourself that question and working through the details to answer it can provide enough of a basis for a forecaster to determine, with confidence, the likelihood of a scenario occurring.
I'm with you, the ceiling for AI should be that of current NWP, and I think AI should be thought of as a complement to current NWP. For example, if AI can do a better job of assessing the current state (initialization), and do it more quickly, integrate that into NWP. I believe this has always been done to a degree (again, a reason the Euro was superior for a while), but with the advancement in technology this could vastly improve NWP.
As for your response to Scott, that is a very underrated point regarding reanalysis datasets. I think we take them too much at face value, but we need to understand there are limitations with them as well. For example, if you compare ERSSTv6 to v5 and previous versions, you can see there are some large discrepancies in various areas of the globe, particularly in the earlier days when much of the analysis outside of ship routes was created via extrapolation methods.
-
2 minutes ago, Layman said:
Since this storm is gone...I don't necessarily feel like I'm derailing the thread by asking:
Do you have experience or knowledge about how the AI models are attempting to assimilate data and/or generate output? My very limited understanding is that they're supposed to "learn" but what are they learning from? Historical data? Current data? Compared to other model outputs? Or, comparing after the fact, what actually happened?
Curious about all this but not even sure where to look for a quality resource that may explain some of these things.
As stated, the idea that they're supposed to "learn" is totally overblown. Traditional models already have some AI built into them and already do this to an extent. From what I understand (and this may not apply equally to all AI models):
AI assists with the initialization scheme, where it combs through ingested data and "removes" what it believes to be bad data or outliers based on a slew of historical information. The goal, or the idea, is that this will lead to a more accurate initialization, which is important because once you move forward in time you start to introduce error, and that error becomes compounded over time...that is why forecast models (OP) are generally useless beyond D7-10 and can even be relatively useless past D5 if there is a lot going on. Error also occurs because of rounding and approximations, especially approximations.
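That compounding of initialization error can be sketched with a toy chaotic system. This is not any operational model, just the classic Lorenz-63 equations (a standard stand-in for atmospheric chaos), with all numbers invented for illustration:

```python
# Toy illustration of compounding initialization error: the Lorenz-63
# system, a standard stand-in for chaotic error growth in the atmosphere.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def max_divergence(s1, s2, n_steps):
    """Largest x-separation between two trajectories over n_steps."""
    worst = 0.0
    for _ in range(n_steps):
        s1, s2 = lorenz_step(s1), lorenz_step(s2)
        worst = max(worst, abs(s1[0] - s2[0]))
    return worst

# Two "analyses" differing by one part in a million.
a = (1.0, 1.0, 1.0)
b = (1.000001, 1.0, 1.0)

print("short range :", max_divergence(a, b, 100))   # still tiny
print("medium range:", max_divergence(a, b, 3000))  # error has exploded
```

Same idea as a real model: the two runs start nearly identical, stay close in the short range, then diverge completely at longer leads.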
AI models are built on a wealth of historical data: the model runs, looks for similarities to the initialized field, and then forecasts based on how those similar patterns evolved in the past.
The challenge in all of this is that there is still a lot we don't understand about weather, particularly the processes which occur during storm evolution, and it becomes even more of a challenge because for forecast models to ingest this data we have to be able to parameterize it.
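A minimal sketch of that "look for similar historical states" idea, with an entirely made-up, deliberately tiny history (real AI models learn far richer mappings than a nearest-neighbor lookup, but the flavor is the same):

```python
# Analogue-forecast sketch: find the historical state closest to today's
# analysis and use what followed it as the forecast. All values invented;
# each "state" is a toy 2-number feature vector (think H5 height anomalies).
import math

# Hypothetical history: (state then, state 24h later)
history = [
    ((1.0, -0.5), (1.2, -0.3)),
    ((-2.0, 0.8), (-1.5, 0.9)),
    ((0.2, 0.1), (0.4, 0.0)),
]

def analogue_forecast(analysis):
    """Return the 24h outcome of the closest historical match."""
    _, outcome = min(history, key=lambda pair: math.dist(pair[0], analysis))
    return outcome

print(analogue_forecast((0.1, 0.2)))  # closest match is (0.2, 0.1) -> (0.4, 0.0)
```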
Just now, Go Kart Mozart said:
There are many online articles and verification charts that show just the opposite, with AI outperforming beyond 4 days...at least at h500. It does have lots of flaws, such as black swan events with minimal historical corollaries. It will probably not be good for something like December 92, for instance.
There is much more to this than just verifying a specific level or variable, and even that leads to a lot of questions. In a tame pattern that is not hostile, AI will probably outperform, but what good is that, or what value is that really adding?
-
8 minutes ago, MegaMike said:
In my opinion, there's too much trust in AI for weather prediction. I've mentioned this a few times, but
- It was made operational recently
- There's nothing wrong with current NWP excluding (a) it takes longer to run and (b) it requires a lot of resources vs. AI
- There is no significant evidence the EC-AIFS/AIGFS outperforms NWP for sensible weather at the surface during inclement weather (please provide a source if I'm wrong).
- For AI, nobody knows how forcing(x,y,z,t) is calculated (doesn't rely on traditional methods). ie... what is 1+1? Human = 1 + 1 == 2 ||| AI = :performs multi-dimensional math on 'n' fields: == 2. Do you trust that?
- Given the initial state of the atmosphere is captured flawlessly, there's no guarantee AI will perform well.
AI is great when there is no known relationship/correlation between a predictor and many predictands. Weather is relatively predictable so I don't find AI useful unless the fields are bias-corrected then ingested back into data assimilation grids.
If the AIs outperform NWP for this, *** and it evaluates well ***, I'll take it a little more seriously.
Who knows... Maybe truncating/rendering certain fields may increase its accuracy for this one event <AND/OR> data assimilation is poor at the current location(s) where the disturbance(s) is/are, and AI could use historic events to predict this event with some level of accuracy.
This should be pinned at the top of the board
As I've also mentioned before, AI can probably be very useful in the nowcasting department or short term (6-12 hours), but beyond that...it has a very long way to go.
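The "bias-correct then ingest" idea from the quoted post can be sketched as a simple mean-bias removal over past runs. All station values below are invented:

```python
# Sketch of post-processing bias correction: estimate a model's mean error
# against verifying obs over a training window, then subtract it from the
# next raw forecast. Numbers are hypothetical 2m temps (F).

past_forecasts = [30.1, 28.4, 33.0, 25.5]   # model output
past_obs       = [28.9, 26.8, 31.2, 24.1]   # what verified

# Mean bias over the training window (positive = model runs warm)
bias = sum(f - o for f, o in zip(past_forecasts, past_obs)) / len(past_obs)

def bias_corrected(raw_forecast):
    """Remove the estimated systematic error from a new forecast."""
    return raw_forecast - bias

print(f"estimated warm bias: {bias:.2f} F")
print(f"raw 32.0 -> corrected {bias_corrected(32.0):.2f}")
```

Real schemes are fancier (regime-dependent, spatially varying), but this is the core operation before the corrected fields would be handed back to assimilation.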
-
49 minutes ago, ORH_wxman said:
Gonna be a couple threats next week…Friday looking like the most threatening, but perhaps a smaller one on Wednesday. Most are focused on the 1/18 failure, but next week may fly under the radar until then.
Yup...like I mentioned the other night, it will be active upcoming. We may not see stuff pop up on the SLP charts, but it's an active look with plenty of shortwaves.
-
1 minute ago, weathafella said:
Aren’t the physics built in?
I don't believe all of the AI models have physics built in. I would think the NCEP AI models do, but that is how some of them are able to process more quickly and roll out much faster: the model doesn't have to perform the physics calculations.
-
Just now, Typhoon Tip said:
man... this all happened when that relay took place overnight with the GFS.
this is when it finally pulled the plug on this event today, too.
i'm growing more and more convinced that the data assimilation is getting caught with its pants down
Agree, this is a big player I believe. I don't know a whole heck of a lot about the basic blueprints of forecast models (the math/physics, different schemes...I'm hoping that may actually be covered in my advanced forecasting class), but I do know this:
Data assimilation and the ability to properly and accurately parameterize are critical to the success and accuracy of a forecast model. This is what made the Euro superior for all those years: superior assimilation and parameterization. This is exactly why I am not sold on AI yet. We need to drastically improve these capabilities, and this is where (hopefully) quantum computing is going to come a long way.
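To illustrate why assimilation quality matters so much: at a single point, the analysis is a blend of the model's first guess and an observation, weighted by how much you trust each. This is the scalar form of the Kalman/optimal-interpolation update used (in vastly more elaborate form) in real assimilation systems; every number here is invented:

```python
# One-point data assimilation sketch: blend a background (prior forecast)
# with an observation, weighted by their error variances.

background = 5.0   # model first guess at this point
obs        = 8.0   # observation at the same point
var_b      = 4.0   # background error variance (we don't trust the model much)
var_o      = 1.0   # observation error variance (obs is pretty good)

# Gain: fraction of the obs-minus-background increment we accept
gain = var_b / (var_b + var_o)
analysis = background + gain * (obs - background)

print(f"gain = {gain:.2f}, analysis = {analysis:.2f}")
```

With these made-up variances the gain is 0.8, so the analysis lands close to the obs. Get the error statistics wrong and the starting state (and everything downstream) suffers, which is the whole point above.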
-
I could see AI being heavily influenced toward thinking there will be clean phasing because you have a nrn stream moving in and a srn stream coming up the coast. Like Runnaway said, maybe there is also resolution at play here...or something to that degree? I mean, the vorticity field almost seems "too smoothed"...too clean.
-
2 minutes ago, Spanks45 said:
check out the moisture profiles between the 2....hopefully the AI isn't missing something. I feel like it has had issues with drier sides of systems? Too expansive with precip fields? Maybe I am making that up....
Very well could be too expansive.
-
1 minute ago, radarman said:
Maybe I missed something but not sure I've seen an AI coup
It will eventually hit one after 1000 tries, and then everyone is going to get sucked in because it finally got a hit, ignoring the other 999 misses.
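That one-hit-in-a-thousand point is exactly what basic forecast verification stats capture. A quick false-alarm-ratio sketch with made-up counts:

```python
# Hypothetical tally: the model "called" a big storm 1000 times and
# verified once. The false alarm ratio is the fraction of calls that busted.
hits, false_alarms = 1, 999
far = false_alarms / (hits + false_alarms)
print(f"false alarm ratio: {far:.3f}")  # 0.999 -- nearly every call was wrong
```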
-
The AIGFS doesn't seem too drastically different from the 12z OP run yesterday in terms of the H5 evolution, both srn stream and nrn stream.
-
1 minute ago, Cyclone-68 said:
This is more like the Atlanta Super Bowl I fear
Nahhh, that would be more like it if we had model consensus for a big hit now, only to see the rug get pulled out from under us as we got closer.
-
4 minutes ago, dendrite said:
I don’t have a lot of hope for this. The last 2 gfs runs have been like the euro with a positively tilted shortwave with strung out vortmax. That 12z run was negative and consolidated and curled in with a strong punch of dPVA to really elicit the surface pressure falls. 18z backed off a bit, but still has some snow for eastern sections. 00z and 06z have trended toward a dud look.
Neither did Bears fans entering the 4th quarter.
Let's Chicago Bears this thing
-
1 minute ago, ineedsnow said:
Why isn't he on the board?
He probably prefers to keep his sanity
-
10 minutes ago, HIPPYVALLEY said:
I’m friends with him, he’s a pretty solid amateur weather guy.
Dave tries to call it as he sees it with no hype.

I enjoy his tweets. He definitely does not hype and is pretty straightforward. I really enjoy his posts during convective events.
-
2 minutes ago, Typhoon Tip said:
no, it's vestigial of S/W moving into/through a compressed field
Yeah, I don't think there is any convective feedback issue going on here...there's hardly any convection, if any, with this anyway in the Southeast.

First Legit Storm Potential of the Season Upon Us (in New England)
This is how I feel with the severe weather threads