Why are models so bad?



Read Deterministic Nonperiodic Flow by Lorenz. Perfection will never happen. Besides the fact that perfect data assimilation will never occur unless we have infinite, continuous observations with NO error associated with them, the model equations are truncated because we have not solved the full Navier-Stokes equations.

If you want to win a million dollars, it is a Millennium Prize Problem. Until then, some amount of turbulence will exist...and therefore chaos will as well.

http://en.wikipedia...._and_smoothness

Things like the Heisenberg Uncertainty Principle apply here as well (it's not just for quantum-scale interactions). The fact is that there is a ceiling on how much we can actually accomplish, and no matter how much technology improves (whether it be parallel processing or, eventually, quantum computers), we are going to have to face the law of diminishing returns and the fact that, as we go further out in time, inherent uncertainty will always increase. There is simply too much variability in the factors being assimilated, and that only increases with time. You deal with this in all the sciences..... there is no such thing as "perfection." The universe loathes absolutes!
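To make Lorenz's point concrete, here's a toy sketch (the classic Lorenz-63 system, not an actual weather model; the numbers are purely illustrative) of how an initial-condition error of one part in a hundred million destroys the forecast after enough model time:

```python
import numpy as np

def lorenz63(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Right-hand side of the Lorenz (1963) system."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    """One classical fourth-order Runge-Kutta step."""
    k1 = f(s)
    k2 = f(s + 0.5 * dt * k1)
    k3 = f(s + 0.5 * dt * k2)
    k4 = f(s + dt * k3)
    return s + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

dt, steps = 0.01, 3000               # 30 units of model time
a = np.array([1.0, 1.0, 1.0])        # the "true" initial state
b = a + np.array([1e-8, 0.0, 0.0])   # same state with a tiny observation error

for _ in range(steps):
    a = rk4_step(lorenz63, a, dt)
    b = rk4_step(lorenz63, b, dt)

# The two runs are now completely decorrelated: the initial error has grown
# from 1e-8 up to the full size of the attractor.
print(np.linalg.norm(a - b))
```

The two runs track each other for a while, then diverge completely. More computing power doesn't change that; only a smaller initial error delays it.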



Yeah, definitely agree. The overall size of the improvements will become smaller. I think this is why organizations such as the NWS are shifting to impact-based forecasting in the short term. Forecasting roles in general are changing. The typical role a met used to play will no longer exist, but forecasters in general will always exist (see my discussion in the "HPC" pinned thread). It is the same with private weather forecasting. Long-range teleconnections and whatnot are also a mainstay, but that is a different discussion.

I just read this-- it's what I was talking about in my last post. Also, the role of humans will change: when technology reaches a certain level, it will free human minds to concentrate on innovation rather than calculation.


Another thing not mentioned much (or at all) here is ensembling, which has decreased forecast variability even more, well beyond the individual model improvements themselves.

The ensembles can simulate the inherent variability and probabilistic nature of the atmosphere much better than any single model ever could. They remind me of the quantum probability charts used to determine the possible location of subatomic particles, lol. But, as you know, uncertainty always wins out-- there really is no absolute perfection.
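As a cartoon of what ensembling buys you (using the chaotic logistic map as a hypothetical stand-in for a forecast model, so every number here is made up): perturb the analysis within its error bars, run all the members, and the spread tells you how far out the forecast still means anything:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x, r=4.0):
    """Fully chaotic logistic map -- a stand-in for a forecast model."""
    return r * x * (1.0 - x)

truth = 0.37                 # hypothetical "true" analysis state
obs_error = 1e-6             # assumed analysis uncertainty
members = truth + obs_error * rng.standard_normal(50)  # 50 perturbed members

spread = []
for _ in range(40):               # 40 forecast steps
    members = model(members)
    spread.append(members.std())  # ensemble spread at this lead time

# Spread starts near the analysis error and saturates at climatology:
# past that point the deterministic run carries no real information.
print(spread[0], spread[-1])
```

That flow-dependent spread, not any single member, is the probabilistic information a lone deterministic run can't give you.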


This shouldn't even be a debate. When you have highly anomalous patterns like this, models will have their busts. Technology has allowed what was once private data to go out into public hands and cause mass hysteria 5 days out. You wouldn't believe how the media up here was running this potential storm as their top story. Years ago, this would never have happened...and much of it is probably because models had a hard time seeing threats more than 4 days out. In this age of media hype, all these busts get sensationalized and give the meteorologist a bad name. People remember the busts, not the good forecasts.

At any rate, even in my own short tenure, I feel that models have gotten a little better.


This shouldn't even be a debate. When you have highly anomalous patterns like this, models will have their busts. Technology has allowed what was once private data to go out into public hands and cause mass hysteria 5 days out. You wouldn't believe how the media up here was running this potential storm as their top story. Years ago, this would never have happened...and much of it is probably because models had a hard time seeing threats more than 4 days out. In this age of media hype, all these busts get sensationalized and give the meteorologist a bad name. People remember the busts, not the good forecasts.

At any rate, even in my own short tenure, I feel that models have gotten a little better.

Well, to be fair, the majority of on-cam "mets" have the IQ of a donut. The ones on here are a breath of fresh air.


The ensembles can simulate the inherent variability and probabilistic nature of the atmosphere much better than any single model ever could. They remind me of the quantum probability charts used to determine the possible location of subatomic particles, lol. But, as you know, uncertainty always wins out-- there really is no absolute perfection.

This is one thing I don't fully understand regarding the human drive towards perfection...variability is what makes everything so much better. The day we have perfect models will be the day life suddenly becomes more boring. This applies, as you said, to almost everything in science. We don't need to be perfect or have perfect solutions to everything.


It's not about on-cam mets...it's these producers who push to have this as a top story.

Yeah, I know; there are just a specific few, who I doubt have actual degrees, who you can tell have no clue what they're talking about. The ones I watch on ABC are actually quite good-- they always said there was a large amount of uncertainty and presented the different possibilities.

IMO weather should only be a top story (with the general public) when it's actually happening or imminent.


This shouldn't even be a debate. When you have highly anomalous patterns like this, models will have their busts. Technology has allowed what was once private data to go out into public hands and cause mass hysteria 5 days out. You wouldn't believe how the media up here was running this potential storm as their top story. Years ago, this would never have happened...and much of it is probably because models had a hard time seeing threats more than 4 days out. In this age of media hype, all these busts get sensationalized and give the meteorologist a bad name. People remember the busts, not the good forecasts.

At any rate, even in my own short tenure, I feel that models have gotten a little better.

I partially agree with this, but in this day and age there has arisen a breed of "doomsday forecasters" who only hype the very largest and worst-case scenarios when there is a sufficient chance (still less than 30%) they will happen. Paul Douglas, a local weatherman who used to do TV but now has his own company, is renowned for this. He is so famous because he calls the big storms: people remember when he was the only one hyping a storm before others did, and they forget when he called a big storm that didn't verify. Unfortunately, they have a niche in this media-hype day and age.


This is one thing I don't fully understand regarding the human drive towards perfection...variability is what makes everything so much better. The day we have perfect models will be the day life suddenly becomes more boring. This applies, as you said, to almost everything in science. We don't need to be perfect or have perfect solutions to everything.

I agree; I think variability and uncertainty actually give the whole operation an air of mystery, an ambiance of mystique. I think it's where science meets art-- because you have to have a certain knack, a "feeling" for what's going to happen-- and even then you often don't get things right-- but I think the effort being made is rewarding in itself, no matter which branch of science you're talking about. You learn even when mistakes are made. I, for one, am really intrigued by this opportunity to see how the atmosphere works with a strong La Nina and a strong negative NAO-- not just in our corner of the world, but in Europe and other areas also.

I'm coming more and more to the conclusion that it's a matter of personality-- that it's a characteristic of certain people (maybe the majority) to need certainty, and that's why (for example, with computer weather models) they get overly excited or overly depressed with each run-- because they can't cope with uncertainty. But the fact is, some of us find tracking these potential storms just as much fun as actually getting them. As a matter of fact, sometimes even more fun, because once they're upon you, you realize it's going to be a while before you see the next one.


I don't see the point of this thread at all, not from a professional standpoint.

It's kind of something JI would post.

It's a La Nina winter... if you have ever forecast in a mid/strong La Nina winter, crappy model performance is very common.

Agreed, it made me cringe to see that a red tag started this thread.


It does seem to me that modeling in general does better in El Nino winters than La Nina winters for the local area we forecast. Maybe statistically, globally, it's not that different. We looked up the GFS runs for the first three big winter storms last season, and it had double-digit snows forecast for the PHI CWA 60 hours in advance. Some of the more difficult events-- the Feb '89 2 feet at the shore, Jan 2000, the March 2001 hyperstorm-- have happened, coincidentally or not, in La Nina winters.

This is where I think the ensembling has helped considerably, on both the deterministic and the confidence side of forecasting. The last two ECMWF operational runs were at opposite ends of the ensemble spectrum, one way west, the other way east. Either way, it should have raised a flag. We all know the ECMWF likes to hold back energy in the Desert Southwest too long, and you could see how yesterday's 12z run could happen: it holds the shortwave back, and if it's being held back it must be stronger, and hence slower. This slower, stronger evolution made it possible for the northern-stream shortwave to catch the southern-stream shortwave and phase, and there we went.

Looking at the decay curves off the EMC site, the models have gained about a day to a day and a half of forecast skill at 500mb since 2001, so slow but steady improvement does go on. I believe the models are better this winter than if we were still using the models as they were in the 2007-08 La Nina winter; we're just too close to realize it.

As usual I totally agree with Don Sutherland's posts, he even says it better than I would.
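For anyone wondering what those 500mb skill curves actually measure: it's the anomaly correlation coefficient. A minimal sketch with synthetic height fields (the grid and numbers here are made up; only the formula is the real thing):

```python
import numpy as np

def anomaly_correlation(forecast, verification, climatology):
    """Anomaly correlation coefficient (ACC): correlate the forecast's
    departures from climatology with the observed departures."""
    fa = (forecast - climatology).ravel()
    va = (verification - climatology).ravel()
    return float(fa @ va / np.sqrt((fa @ fa) * (va @ va)))

# Synthetic 500mb height fields on a toy 10x10 grid (made-up numbers)
rng = np.random.default_rng(1)
clim = 5500.0 + 50.0 * rng.standard_normal((10, 10))
verif = clim + 100.0 * rng.standard_normal((10, 10))

good = verif + 20.0 * rng.standard_normal((10, 10))   # skillful short-range run
noise = clim + 100.0 * rng.standard_normal((10, 10))  # no-skill long-range run

print(anomaly_correlation(good, verif, clim))   # close to 1
print(anomaly_correlation(noise, verif, clim))  # close to 0
```

A common rule of thumb is that ACC above roughly 0.6 still represents useful synoptic skill, so "gained a day and a half since 2001" is read off where the decay curve crosses that threshold.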


I agree; I think variability and uncertainty actually give the whole operation an air of mystery, an ambiance of mystique. I think it's where science meets art-- because you have to have a certain knack, a "feeling" for what's going to happen-- and even then you often don't get things right-- but I think the effort being made is rewarding in itself, no matter which branch of science you're talking about. You learn even when mistakes are made. I, for one, am really intrigued by this opportunity to see how the atmosphere works with a strong La Nina and a strong negative NAO-- not just in our corner of the world, but in Europe and other areas also.

I'm coming more and more to the conclusion that it's a matter of personality-- that it's a characteristic of certain people (maybe the majority) to need certainty, and that's why (for example, with computer weather models) they get overly excited or overly depressed with each run-- because they can't cope with uncertainty. But the fact is, some of us find tracking these potential storms just as much fun as actually getting them. As a matter of fact, sometimes even more fun, because once they're upon you, you realize it's going to be a while before you see the next one.

Well, I am glad some agree with the whole weather scenario. I totally agree, and discussed this once in the central forums. For me, I am just as excited, if not more, by the actual dynamic interaction of weather systems as by the event actually nailing me. I certainly like a good hit, but observing and forecasting weather is equally exciting, and the best we can do is apply the science and learn from the mistakes.

It is indeed science meets art...I always thought of weather forecasting as an art.


Agreed, it made me cringe to see that a red tag started this thread.

I'm not defending him, but I think he's probably just frustrated. It can happen to anyone-- and probably happens to everyone at some point, even within their chosen profession.

edit: I'm just basing it on the first couple of posts he made; I just glossed over the first few posts to get a general feel for the thread-- so I'm not sure if it went downhill from there.


Well, I am glad some agree with the whole weather scenario. I totally agree, and discussed this once in the central forums. For me, I am just as excited, if not more, by the actual dynamic interaction of weather systems as by the event actually nailing me. I certainly like a good hit, but observing and forecasting weather is equally exciting, and the best we can do is apply the science and learn from the mistakes.

It is indeed science meets art...I always thought of weather forecasting as an art.

I agree; the best part of it is that it's a continual learning process. Let's face it, even if you get a big hit, the storm has to end sometime. But the knowledge you gain from it (hit or miss) should stay with you for a very long time.


I haven't read the entire thread...but I'll throw out a theory if I may...is it possible that as we trend toward higher-resolution models, that actually hurts us? Could it be that we're getting to the point where we're trying to model things down to a scale that we really have no business trying to model? Perhaps not because it can't ultimately be done...but maybe because it can't be done at this point in time, for reasons such as sparsity of input data, limited computing power, and an oversimplification of the processes actually occurring at the scales we're trying to predict? Yeah, we've got these cool complex equations to try to predict atmospheric motion...but the smaller the scale and the more features you attempt to model...the more muddy and chaotic the solution becomes, right?

In other words...perhaps we really are only to the point where we can model systems with some success on a synoptic scale...but the more we try to model down to meso-scale or even microscale the more inaccurate we get because the current and future state of the atmosphere is that much more complex.


It's true...unfortunately it can backfire on the community.

Yeah, that's the part I dislike-- perception becomes reality, and that's why a significant percentage of the general public doesn't take weather forecasting seriously. Their only experience with the profession is what they see on their local news. It's really unfortunate, because its intrinsic beauty gets squashed in favor of the one-minute news bite.


I haven't read the entire thread...but I'll throw out a theory if I may...is it possible that as we trend toward higher-resolution models, that actually hurts us? Could it be that we're getting to the point where we're trying to model things down to a scale that we really have no business trying to model? Perhaps not because it can't ultimately be done...but maybe because it can't be done at this point in time, for reasons such as sparsity of input data, limited computing power, and an oversimplification of the processes actually occurring at the scales we're trying to predict? Yeah, we've got these cool complex equations to try to predict atmospheric motion...but the smaller the scale and the more features you attempt to model...the more muddy and chaotic the solution becomes, right?

In other words...perhaps we really are only to the point where we can model systems with some success on a synoptic scale...but the more we try to model down to the mesoscale or even microscale, the more inaccurate we get, because the current and future state of the atmosphere is that much more complex.

Yeah, I can totally see this happening-- uncertainty increases along two axes-- one as you go further out in time, the other as you try to model smaller and smaller spatial scales. It actually goes hand in hand. At some point, the data just gets too "fuzzy" (just like in QM) to properly analyze. And we're trying to venture further out on both axes. I think there is still room for some improvement (and ensemble modeling actually extends the limits of effectiveness because you stack runs and simulate the atmosphere's inherent variability), but at some point you'll hit that chaos barrier where diminishing returns trump all.

Or to put it another way, you lose the forest for the trees......


I haven't read the entire thread...but I'll throw out a theory if I may...is it possible that as we trend toward higher-resolution models, that actually hurts us? Could it be that we're getting to the point where we're trying to model things down to a scale that we really have no business trying to model? Perhaps not because it can't ultimately be done...but maybe because it can't be done at this point in time, for reasons such as sparsity of input data, limited computing power, and an oversimplification of the processes actually occurring at the scales we're trying to predict? Yeah, we've got these cool complex equations to try to predict atmospheric motion...but the smaller the scale and the more features you attempt to model...the more muddy and chaotic the solution becomes, right?

In other words...perhaps we really are only to the point where we can model systems with some success on a synoptic scale...but the more we try to model down to meso-scale or even microscale the more inaccurate we get because the current and future state of the atmosphere is that much more complex.

High-resolution models are used with decent success in complex terrain like the Intermountain West and the Pacific ranges. If initialized properly, they can be quite useful within 48 hours. Of course, they have significant issues, and their errors tend to compound rapidly with time because less aggressive filters need to be used to preserve the high-resolution details. As we know with chaos, those details can also become an issue, since we know some of it is noise. I think this is why we won't be seeing any 4 km global models anytime soon.

High-res models have a place, though, and they can be used quite effectively. U of Washington has a nice implementation out West. They get some insanely complex weather, with the Strait of Juan de Fuca, the Olympic mountains, the crazy convergence zones, and the carved-out mountain ranges that can create forecasting headaches in general.

http://www.atmos.washington.edu/mm5rt/


Yeah, I can totally see this happening-- uncertainty increases along two axes-- one as you go further out in time, the other as you try to model smaller and smaller spatial scales. It actually goes hand in hand. At some point, the data just gets too "fuzzy" (just like in QM) to properly analyze. And we're trying to venture further out on both axes. I think there is still room for some improvement (and ensemble modeling actually extends the limits of effectiveness because you stack runs and simulate the atmosphere's inherent variability), but at some point you'll hit that chaos barrier where diminishing returns trump all.

Or to put it another way, you lose the forest for the trees......

Expanding on this a tiny bit: NWS Denver once did a brief study for their local office and suggested high-res models had a tendency to "lull" forecasters into believing they were better models because of how realistic their output looked. Of course, they added that the models are highly sensitive to the initializing model and that much care should be used with this type of guidance. Well said.


High-resolution models are used with decent success in complex terrain like the Intermountain West and the Pacific ranges. If initialized properly, they can be quite useful within 48 hours. Of course, they have significant issues, and their errors tend to compound rapidly with time because less aggressive filters need to be used to preserve the high-resolution details. As we know with chaos, those details can also become an issue, since we know some of it is noise. I think this is why we won't be seeing any 4 km global models anytime soon.

High-res models have a place, though, and they can be used quite effectively. U of Washington has a nice implementation out West. They get some insanely complex weather, with the Strait, the Olympic mountains, the crazy convergence zones, and the carved-out mountain ranges that can create forecasting headaches in general.

http://www.atmos.washington.edu/mm5rt/

And I do agree...high-res models have their place...but like you said, they have to be used properly, and they will give increasingly inaccurate solutions as you exceed a certain time threshold. I just wonder, without really knowing the specific evolution of the operational models, if we've been too aggressive in implementing much higher resolution in those models without fully understanding the implications of increased resolution. As a result, the operational models are more prone to flip-flopping and offering drastically different solutions over just one or two runs, since they are trying to break energy and flow down into smaller and smaller pieces that lead to a lot more uncertainty. Would a lower-res model do better in a more complex flow, since it's only going to pick up on the biggest and potentially most meaningful pieces of energy? Or to put it another way...a lower-res model would only see and forecast the impact of a broad piece of energy, as opposed to trying to parse it into a bunch of smaller vort maxes that do nothing but muddy up the whole solution.


Guest someguy

I wouldn't be surprised by a statement like that out of a member of the general public, but out of a trained met it's quite surprising to me. I know when I learned the entire process of numerical weather modeling in detail, its shortcomings, and what is actually going on, I was amazed how good it was. I still am amazed that they do so well with something so incredibly complicated, and they ARE getting better. Occasionally they all waffle on solutions, but what should be expected, perfection? The entire process is fraught with imperfections. But show me anyone else in the world who can so accurately predict the future about anything.

Another thing saying things like that does is stir up the weenies, they see a Met saying something like that and they go spewing it around the forum how the models suck, etc. I'll say it here, the models don't suck, they do a great job, sometimes amazing, at predicting an enormous, chaotic set of physical processes with relatively little to go on. To me, it's one of the top scientific achievements of the last 30 years.

I made this very point 2 hrs ago


Guest someguy

This implies that there aren't whole groups of folks who ARE continuously trying to improve the models. IMO, any system where you are forced to parametrize important things such as land-surface interaction and convection is bound to have errors. Considering those limitations, I think it's impressive they work as well as they do. Remember, errors propagate upward in scale.

Strongly agree... it is surprising that analog 96, as a professional met, would make such a stupid post.


And I do agree...high-res models have their place...but like you said, they have to be used properly, and they will give increasingly inaccurate solutions as you exceed a certain time threshold. I just wonder, without really knowing the specific evolution of the operational models, if we've been too aggressive in implementing much higher resolution in those models without fully understanding the implications of increased resolution. As a result, the operational models are more prone to flip-flopping and offering drastically different solutions over just one or two runs, since they are trying to break energy and flow down into smaller and smaller pieces that lead to a lot more uncertainty. Would a lower-res model do better in a more complex flow, since it's only going to pick up on the biggest and potentially most meaningful pieces of energy? Or to put it another way...a lower-res model would only see and forecast the impact of a broad piece of energy, as opposed to trying to parse it into a bunch of smaller vort maxes that do nothing but muddy up the whole solution.

I see where you are trying to go, but no, I disagree. There is good reason we don't use models such as the LFM and NGM anymore. :P

Seriously though, the latest GFS update is a good example showing that increased resolution doesn't result in worse forecasts per se. Also, some synoptic-scale systems are highly sensitive to sub-synoptic-scale forcings, so if we went back to really low-res models, we would simply be unable to model those solutions at all. I think that would be a step back.


I wouldn't be surprised by a statement like that out of a member of the general public, but out of a trained met it's quite surprising to me. I know when I learned the entire process of numerical weather modeling in detail, its shortcomings, and what is actually going on, I was amazed how good it was. I still am amazed that they do so well with something so incredibly complicated, and they ARE getting better. Occasionally they all waffle on solutions, but what should be expected, perfection? The entire process is fraught with imperfections. But show me anyone else in the world who can so accurately predict the future about anything.

Another thing saying things like that does is stir up the weenies, they see a Met saying something like that and they go spewing it around the forum how the models suck, etc. I'll say it here, the models don't suck, they do a great job, sometimes amazing, at predicting an enormous, chaotic set of physical processes with relatively little to go on. To me, it's one of the top scientific achievements of the last 30 years.

I am amazed that a computer-- electricity and metal-- can tell me on Monday that there will be a storm in the MA on Thursday. And then it happens. A storm that hasn't formed, that doesn't exist yet. That's pretty amazing. Now, it doesn't always get the details right, but it's impressive nonetheless.


Guest someguy

High-resolution models are used with decent success in complex terrain like the Intermountain West and the Pacific ranges. If initialized properly, they can be quite useful within 48 hours. Of course, they have significant issues, and their errors tend to compound rapidly with time because less aggressive filters need to be used to preserve the high-resolution details. As we know with chaos, those details can also become an issue, since we know some of it is noise. I think this is why we won't be seeing any 4 km global models anytime soon.

High-res models have a place, though, and they can be used quite effectively. U of Washington has a nice implementation out West. They get some insanely complex weather, with the Strait of Juan de Fuca, the Olympic mountains, the crazy convergence zones, and the carved-out mountain ranges that can create forecasting headaches in general.

http://www.atmos.washington.edu/mm5rt/

U of W is a great, great weather site, and I use their stuff all the time.... Their hi-res stuff and hi-res SREF are top notch.

Best in the world.


Guest someguy

This is one thing I don't fully understand regarding the human drive towards perfection...variability is what makes everything so much better. The day we have perfect models will be the day life suddenly becomes more boring. This applies, as you said, to almost everything in science. We don't need to be perfect or have perfect solutions to everything.

THE BLACK SWAN by Taleb

and

FOOLED BY RANDOMNESS, also by Taleb

If you are into day 3 to day 30 forecasting and you have NOT read those books ...twice... you're screwed.


I see where you are trying to go, but no, I disagree. There is good reason we don't use models such as the LFM and NGM anymore. :P

Seriously though, the latest GFS update is a good example showing that increased resolution doesn't result in worse forecasts per se. Also, some synoptic-scale systems are highly sensitive to sub-synoptic-scale forcings, so if we went back to really low-res models, we would simply be unable to model those solutions at all. I think that would be a step back.

Yeah...I'm kinda just throwing ideas around...but I guess what I'm getting at is that no, the LFM and NGM would lose to the GFS and NAM in almost all scenarios...but perhaps they would offer more stability in more complex situations. Maybe they are still wrong in the end...but maybe they trend more smoothly toward the correct solution, as opposed to flip-flopping back and forth several times.

