Wake Me Up When September Ends..Obs/Diso



26 minutes ago, Typhoon Tip said:

I'm starting to lean more toward blocking, myself.

I think there is some synoptic oomph for imposing isentropic lift over the region on Saturday, but even that would seem more intense over the SW vs. the NE zones of SNE. By Sunday morning... I could see this trend in the models to "split" the overrunning momentum E, while the residual Ophelia is held back, closer to bombing eastern PA/NJ with a flood threat.

Obviously we've all been onto this weekend's synoptic shenanigans; no need to readdress. But I'm just thinking along years of experience with these evolving deformation axes, roughly ALB to BOS... and they usually end up correcting the moisture to a pretty sharp gradient as far as what ends up in buckets.

 

This is a great example of what I was inquiring about yesterday regarding relying solely on model output versus combining it with years of experience interpreting model data.

If years of experience shows that there's a higher likelihood of one thing happening over another, why do the models not pick up on that same evolution and output that?  Or do they?  

I recall seeing posts in the past (ensembles?) with all the numbered panels showing potential outcomes so I'm assuming they may kick out a more precise option similar to how one would interpret the data based on experience...?

Sorry if this is extremely basic and known by most here.  I simply don't know at this time!  My interest is largely in the data, model reliability, consistency, confidence, etc strictly from a hobbyist/inquisitive perspective.  


19 minutes ago, weatherwiz said:

Some forecast :lol:

They just threw all the strong El Niños together. Yes, strong El Niños have a tendency to be above average in terms of temperatures across the northern tier of the country, but there are strong El Niños which were colder. You can't just throw a bunch of years into a composite and call it a forecast or outlook.

[attached image: cd73.100.41.127.264.7.23.26.prcp.png (precipitation composite)]

Those just suck anyway…so vague and pathetic. They should be ashamed of themselves even releasing that stuff.
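
A minimal sketch of the point about composites quoted above, using made-up anomaly numbers rather than real ENSO data: the multi-year mean can look solidly warm even when individual years disagree, which is exactly why a composite alone isn't an outlook.

```python
# Minimal sketch: why a multi-year composite can mislead.
# The anomaly values below are invented for illustration, not real ENSO data.
import numpy as np

# Hypothetical DJF temperature anomalies (deg F) across the northern tier
# for six strong El Nino winters.
anomalies = np.array([+3.1, +2.4, -1.0, +1.8, -0.6, +2.9])

composite_mean = anomalies.mean()
spread = anomalies.std(ddof=1)

print(f"composite mean: {composite_mean:+.1f} F")       # looks warm...
print(f"year-to-year spread: {spread:.1f} F")            # ...but the spread is large
print(f"colder-than-normal years: {(anomalies < 0).sum()} of {anomalies.size}")
```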


8 minutes ago, Layman said:

This is a great example of what I was inquiring about yesterday regarding relying solely on model output versus combining it with years of experience interpreting model data.

If years of experience shows that there's a higher likelihood of one thing happening over another, why do the models not pick up on that same evolution and output that?  Or do they?  

I recall seeing posts in the past (ensembles?) with all the numbered panels showing potential outcomes so I'm assuming they may kick out a more precise option similar to how one would interpret the data based on experience...?

Sorry if this is extremely basic and known by most here.  I simply don't know at this time!  My interest is largely in the data, model reliability, consistency, confidence, etc strictly from a hobbyist/inquisitive perspective.  

Well... yeah, but don't take that as gospel. "Leaning toward" has some wiggle room.

As model evaluators vs. that experiential aspect, it's a bit of an 'art', because experience doesn't dictate the future. Then, on the model side, we have to understand that the models do not actually "predict" the future.

I know that sounds a bit weird, considering that they are projecting scenarios outward in time; that certainly looks like a prediction about the future. But what they can't really do is predict the "emergent" properties of motion and momentum in time. They only use physics to predict what those would be IF there were no emergence of new forces along the way. If the future did not have interacting forces causing new permutations, the models would probably be exceptionally correct out to exotically long leads. But as a nearly infinite number of counter-actions arise, some positively interfere (reinforcing) while others negatively interfere (terminating) with one another. This is very similar to "fractals" in chaos mechanics. Some fractals self-terminate; others go on to dictate a pattern modulation...
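
A minimal sketch of that sensitivity-to-interactions point, using the classic Lorenz-63 toy system rather than any operational weather model; the constants, step size, and nudge size are purely illustrative.

```python
# Minimal sketch: tiny initial errors grow in a chaotic system (Lorenz-63 toy model).
# This illustrates sensitivity to initial conditions; it is not a weather model.
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz-63 equations."""
    x, y, z = state
    dxdt = sigma * (y - x)
    dydt = x * (rho - z) - y
    dzdt = x * y - beta * z
    return state + dt * np.array([dxdt, dydt, dzdt])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-6, 0.0, 0.0])   # a one-in-a-million nudge to the initial state

for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 1000 == 0:
        print(f"step {step}: separation = {np.linalg.norm(a - b):.4f}")
# The separation grows by orders of magnitude: the two "forecasts" diverge.
```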

As far as the individual ensemble members, they are using slightly different physics in each member, so those are referred to as 'perturbed'... The physics they employ have some experimental basis for being valid, so they are allowed to offer scenarios that could take place within a plausible manifold of outcomes. But usually, the ensemble mean will (thus) perform better than any individual member. The operational (deterministic) solutions use the physics that have performed best.
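
And a purely synthetic sketch of why the ensemble mean usually verifies better than a typical member: the perturbation-driven part of each member's error largely cancels in the average, while the error shared by all members does not. The numbers below are invented, not output from any real ensemble system.

```python
# Minimal sketch: ensemble-mean error vs. average individual-member error.
# All numbers are synthetic; this is not real model output.
import numpy as np

rng = np.random.default_rng(42)
truth = 2.0                      # the "verifying" value, e.g. storm-total QPF in inches
n_members, n_cases = 30, 5000

# Each member = truth + an error shared by all members + its own perturbation error.
shared_error = rng.normal(0.0, 0.5, size=(n_cases, 1))
member_error = rng.normal(0.0, 1.0, size=(n_cases, n_members))
members = truth + shared_error + member_error

member_rmse = np.sqrt(((members - truth) ** 2).mean())
mean_rmse = np.sqrt(((members.mean(axis=1) - truth) ** 2).mean())

print(f"average individual-member RMSE: {member_rmse:.2f}")
print(f"ensemble-mean RMSE:             {mean_rmse:.2f}")   # smaller: perturbations cancel
```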


16 minutes ago, Layman said:

This is a great example of what I was inquiring about yesterday regarding relying solely on model output versus combining it with years of experience interpreting model data.

If years of experience shows that there's a higher likelihood of one thing happening over another, why do the models not pick up on that same evolution and output that?  Or do they?  

I recall seeing posts in the past (ensembles?) with all the numbered panels showing potential outcomes so I'm assuming they may kick out a more precise option similar to how one would interpret the data based on experience...?

Sorry if this is extremely basic and known by most here.  I simply don't know at this time!  My interest is largely in the data, model reliability, consistency, confidence, etc strictly from a hobbyist/inquisitive perspective.  

From my many years on the board (I first joined, I think, in late 2006 or 2007) and a bit from school, what I've learned is that forecasting is much more than just reading model output. Forecast models are obviously a significant part of the process, but forecasting goes well beyond the output itself.

When someone is forecasting and analyzing models, they should be asking themselves questions and coming to an understanding of what is going on within the model to generate that output. Does the evolution make sense? Is this a realistic solution? Are there any biases the model is known to have, and are those biases being reflected here? You also want to fully assess a wide variety of data and output from all levels of the atmosphere.

A lot of focus goes right to surface outputs (QPF, snow maps, precipitation totals, etc.), which I get, because we live at the surface; but a strong understanding of the upper levels, the pattern in place, and how the pieces are evolving and interacting will tell you more about what to expect at the surface than any surface product will. I also think having a strong background in the underlying mathematical equations can help.

You also want to be looking at run-to-run consistency, model-to-model consistency, and recent model performance; an understanding of these can give a forecaster confidence in which model to perhaps rely on more.

Ultimately, it's all about experience and understanding of the models, their strengths and biases, and of the overall pattern.
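
A minimal sketch of the run-to-run consistency check described above, with invented QPF numbers; the 0.5-inch spread cutoff is an arbitrary illustrative threshold, not an operational rule.

```python
# Minimal sketch: run-to-run spread as a rough confidence check.
# The QPF values and the 0.5 in threshold are invented for illustration.
import statistics

# Storm-total QPF (inches) at one point from the last four runs of two hypothetical models.
runs = {
    "model_A": [2.8, 2.6, 2.9, 2.7],   # consistent run to run
    "model_B": [1.1, 3.4, 0.8, 2.6],   # flip-flopping run to run
}

for name, qpf in runs.items():
    spread = statistics.pstdev(qpf)
    verdict = "steady signal" if spread < 0.5 else "low confidence, wait for trends"
    print(f"{name}: mean {statistics.fmean(qpf):.1f} in, spread {spread:.2f} in -> {verdict}")
```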


9 minutes ago, WinterWolf said:

Those just suck anyway…so vague and pathetic. They should be ashamed of themselves even releasing that stuff.

Yeah, I've always been a bit puzzled by it, but ultimately I just don't think they have the resources to dedicate much to long-range forecasting, plus it's supplemented quite well within the private sector.


7 minutes ago, weatherwiz said:

Yeah, I've always been a bit puzzled by it, but ultimately I just don't think they have the resources to dedicate much to long-range forecasting, plus it's supplemented quite well within the private sector.

I think some people see the deep red and expect last year...that isn't what it means. It just means they are relatively confident that it will be somewhat above normal...which isn't saying that much.


1 minute ago, 40/70 Benchmark said:

I think some people see the deep red and expect last year...that isn't what it means. It just means they are relatively confident that it will be somewhat above normal...which isn't saying that much.

Yes, great point; also, at our latitude "above" doesn't really mean too much (unless you're talking about something in the top percentile). As you know, we can still get a lot of snow and have temperatures be above average... though yeah, it could be walking a fine line (especially toward the coast).


5 minutes ago, weatherwiz said:

When someone is forecasting and analyzing models, they should be asking themselves questions and coming to an understanding of what is going on within the model to generate that output. Does the evolution make sense? Is this a realistic solution? Are there any biases the model is known to have, and are those biases being reflected here?

Thank you @Typhoon Tip and @weatherwiz for the details, I appreciate it.

I don't want to derail the thread with endless questions; however, I'm curious about the known biases within the models. I've seen references to this over the years, and it seems odd to me that they can't be, or haven't been, programmed out. Or is it more a matter that when those biases exist, the model generally performs better, so they're an accepted consequence? I'm not sure if it's possible or worth it, but it seems there's potential for a secondary program focused specifically on removing the bias(es) to be overlaid onto the model output to "correct" it. I imagine starting to do something like that could rapidly get out of hand, with derivations that are miles away from the original output.

Are there specific biases within the models suggesting their output for this weekend's rain won't play out like they're currently showing? If so, can they be easily explained (like "the NAM typically pushes these too far north"), or are they far more involved?
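
For what it's worth, the "secondary program" idea above is essentially statistical post-processing, the general family that MOS-style corrections belong to. Below is a minimal sketch of the simplest possible version, using invented forecast/observation pairs: estimate a recent mean bias and subtract it from the newest forecast. Real post-processing is far more involved than this.

```python
# Minimal sketch of a bias-correction "overlay": estimate a model's recent
# mean bias against observations, then subtract it from the newest forecast.
# All numbers are invented for illustration; real MOS-style schemes are far
# more sophisticated (regressions, multiple predictors, seasonality, etc.).
import numpy as np

# Hypothetical past high-temperature forecasts vs. what actually verified (deg F).
forecasts    = np.array([68, 71, 65, 74, 70, 69, 72])
observations = np.array([65, 69, 63, 70, 68, 66, 70])

mean_bias = (forecasts - observations).mean()   # positive = model running warm

new_raw_forecast = 73.0
corrected = new_raw_forecast - mean_bias

print(f"estimated warm bias: {mean_bias:+.1f} F")
print(f"raw forecast {new_raw_forecast:.0f} F -> bias-corrected {corrected:.1f} F")
```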


5 minutes ago, weatherwiz said:

Yes, great point; also, at our latitude "above" doesn't really mean too much (unless you're talking about something in the top percentile). As you know, we can still get a lot of snow and have temperatures be above average... though yeah, it could be walking a fine line (especially toward the coast).

Don't forget... global warming is impacting nighttime lows much more than daytime highs as well... so 1-2F above average at this latitude really isn't a big deal.


33 minutes ago, Layman said:

Thank you @Typhoon Tip and @weatherwiz for the details, I appreciate it.

I don't want to derail the thread with endless questions; however, I'm curious about the known biases within the models. I've seen references to this over the years, and it seems odd to me that they can't be, or haven't been, programmed out. Or is it more a matter that when those biases exist, the model generally performs better, so they're an accepted consequence? I'm not sure if it's possible or worth it, but it seems there's potential for a secondary program focused specifically on removing the bias(es) to be overlaid onto the model output to "correct" it. I imagine starting to do something like that could rapidly get out of hand, with derivations that are miles away from the original output.

Are there specific biases within the models suggesting their output for this weekend's rain won't play out like they're currently showing? If so, can they be easily explained (like "the NAM typically pushes these too far north"), or are they far more involved?

I don't think this constitutes derailing the thread.

I am not an expert in model development and diagnostics at all, so I don't really have an answer to this, but I would think that as biases are discovered, tweaks are worked on to try to eliminate them. Ultimately, though, I think biases are always going to exist. This is probably also tied to the resources available. NOAA has seen significant funding cuts over the past few decades, and that has really hurt the ability to further improve our modeling... there have been some significant upgrades to the GFS in the last decade, which have closed the gap a bit versus the European model, but if more funding were directed toward NOAA and modeling improvements, you'd probably see better performance and lessened biases.

Again... this is just my thinking, and I have no background in this, so it's a thought that should be taken with a grain of salt.

