Will We Ever Get To The Point Where Models Are 100% Accurate?


Recommended Posts

If they were to become 100% accurate (which they won't, imo), wx discussions wouldn't be nearly as interesting. Everything would already be known ahead of time, so what forecasting issue would there be to discuss? Also, since there are typically many more possibilities for interesting events than actual interesting events at any particular location, following the wx as a hobby would become less interesting, since most of the time interesting weather isn't happening right now or in the immediate future.


As far as I know, no, because of chaos.  Even models with grid spacings as fine as 200 m, which must simulate eddies directly instead of parameterizing them in a PBL scheme, are still parameterizing the microphysics of the atmosphere.  I believe there is a theoretical limit of accuracy, around 80% (Lorenz? Going out on a limb here...), and it decreases as a function of lead time, of course.
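The chaos argument above can be sketched in a few lines of code. This is an illustrative toy, not a weather model: it integrates the classic Lorenz (1963) system with a simple forward-Euler step and shows a one-part-in-a-billion initial difference growing until the two runs are unrelated.

```python
# Toy illustration of chaotic error growth, using the Lorenz (1963)
# system with a simple forward-Euler step.  Two runs start almost
# identically; the tiny difference grows until the trajectories are
# unrelated, which is why forecast skill decays with lead time no
# matter how good the model physics are.

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Advance the Lorenz-63 equations one forward-Euler step."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

truth = (1.0, 1.0, 1.0)
forecast = (1.0, 1.0, 1.0 + 1e-9)   # "analysis" off by one part in a billion

err_at = {}
max_err = 0.0
for step in range(1, 4001):
    truth = lorenz_step(truth)
    forecast = lorenz_step(forecast)
    err = abs(truth[0] - forecast[0])
    max_err = max(max_err, err)
    if step in (100, 1000, 4000):
        err_at[step] = err

for step in sorted(err_at):
    print(f"step {step:4d}: |x error| = {err_at[step]:.3e}")
```

The error stays microscopic at first, then grows roughly exponentially until the two runs are no more alike than two random states of the system.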


The models will continue to improve, but very slowly. And when you say 100%, what time frame do you mean? Three days would be a considerable feat; personally I would not expect to see much better than a slight improvement at three days, and a somewhat larger improvement over current levels in the 5-10 day range. Beyond that, it's going to take theoretical research as well as better models, because if we don't know what factors cause weather events, and we don't know the length of the energy cycles involved, then vast computing power alone cannot advance accuracy at longer time frames.

 

This is already the case -- all long-range forecasting would be pretty much random if all we had to go on was day 16 of the GFS and we had to guess the 30 days beyond that. Day 16 of the GFS is only marginally better than a random shot in the dark now. But such forecasts are possible because rudimentary theories already exist, such as teleconnections and pattern matching. These are not predictive in the same way that precise physical equations are predictive. If we knew what forces were producing weather systems, then, like ocean tides, we could predict to a high degree of certainty. In fact, if we could do that, then our current very good predictions of ocean tides would also improve, because weather events are really the only major factor producing errors in tide tables (if you want to call a storm surge an error in a tide table; some might just say the correctly predicted tide was displaced by strong winds).

 

It's hard to say how this will play out. For several more years there will be very slow and irregular improvements in model performance; then possibly, but with no certainty, some future predictive-framework discovery will produce virtually perfect model runs out to quite some distance (if such a thing worked, it would keep working for a long time, and the only thing that might degrade it would be very slow shifts of the background grid). As some here know, I have been working on such theoretical research, and I can safely say that this kind of breakthrough is not likely coming from my direction; even if I figured it all out one day, you then have to convince others and get it programmed into models. But I don't claim to be anywhere near that point yet. All I see from my research is a slight indication of having narrowed the range of uncertainty. In terms of results this is more or less the same as applying more computing power to models, although as a paradigm it is different.

 

And I should point out that while anyone who engages in such research must assume there could be a cause-and-effect hypothesis capable of being reduced to a numerical prediction system, there is no guarantee. Weather could in fact be essentially random, and it is conceivable that perfect prediction would remain impossible unless we developed time travel, in which case we could get perfect models by going to the future and getting their observations.


First of all, we will never be able to get the initial conditions 100% perfect.  You would need to know the initial state everywhere, not only at the surface but through the entire depth of the atmosphere.  And these observations would have to have zero error.  As in not accurate to within 1/10,000 of a degree or one millionth of a m/s, but perfect accuracy; otherwise your model will be integrating the equations from the wrong initial values.

 

And secondly, I don't think we'll ever reach the point of being able to explicitly resolve microphysics.  As long as that's parameterized, there will always be a significant source of error. 

 

So no.
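The point above about initial conditions can be made concrete with a toy demonstration. A minimal sketch, assuming nothing beyond the Python standard library: the "model" here (the logistic map, a standard chaotic toy system) is iterated exactly, and the only imperfection is an initial value off by 1e-15, roughly the resolution of double-precision arithmetic.

```python
# The "model" here is exact; the only flaw is an initial value off by
# 1e-15, roughly the resolution of double-precision arithmetic.  The
# chaotic map doubles the error (on average) every iteration, so even
# that tiny flaw ruins the "forecast" within a few dozen steps.

def step(x):
    return 4.0 * x * (1.0 - x)   # logistic map: exact, deterministic "physics"

x_true = 0.3
x_obs = 0.3 + 1e-15              # the best measurement we could ever store

divergence_step = None
for i in range(1, 201):
    x_true, x_obs = step(x_true), step(x_obs)
    if divergence_step is None and abs(x_true - x_obs) > 0.1:
        divergence_step = i

print(f"runs disagree by more than 0.1 after {divergence_step} iterations")
```

Even with perfect "physics", an error at the limit of what a computer can represent wrecks the forecast after a few dozen iterations.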


A friend said they should be able to do better with the forecasting models, given the power of the supercomputers they have.

He's got a point; nothing has really changed in the past 10 years.

Computer power is only part of it. Remember, models are based on what we know, but also, more importantly, on assumptions that we make about the atmosphere. Some of those may not be totally correct, thus introducing error into the system. I also wonder how much increasing resolution will improve the models until our observational systems and model assimilation schemes improve.

 

I will leave you with a quote from my Master's thesis advisor: "The meteorologists who work on numerical weather prediction cannot believe how good their models are; the operational meteorologists wonder how they can be this bad."
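On the assimilation point above: the core of any data-assimilation scheme is blending a model background with observations, weighted by their error statistics. Here is a minimal scalar sketch of that idea (the textbook Kalman update, with made-up numbers, not any operational scheme like 3D-Var or 4D-Var):

```python
# Minimal sketch of the core of data assimilation: blend a model
# background with an observation, weighting each by how much you
# trust it (its error variance).  Operational schemes do this for
# millions of values at once, but the scalar version shows why better
# observations and better error statistics matter as much as raw
# computing power.

def analysis(background, obs, var_b, var_o):
    """Scalar Kalman update: optimal blend of background and observation."""
    gain = var_b / (var_b + var_o)   # 0 = trust the model, 1 = trust the obs
    x_a = background + gain * (obs - background)
    var_a = (1.0 - gain) * var_b     # the analysis beats either input alone
    return x_a, var_a

# Model first guess says 15.0 C (variance 4.0); a station reports 13.0 C (variance 1.0).
x_a, var_a = analysis(background=15.0, obs=13.0, var_b=4.0, var_o=1.0)
print(f"analysis = {x_a:.2f} C, variance = {var_a:.2f}")
```

With these numbers the analysis lands at 13.40 C with variance 0.80, closer to the more trusted observation and more certain than either input.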


A friend who used to work at NCEP as a modeller once told me to think of the atmosphere as a big pancake.  It's really thin vertically compared to the earth's surface area.  He was always amazed at how good the models were, given the difficulty of measuring the initial-state differences across that pancake.  Essentially, he was mirroring UNCCmetgrad's advisor's thoughts on how modelers view models, while I was complaining about how bad they can be.  The bottom line is we'll never be able to measure the initial state well enough to have perfect forecasts.  That's the idea behind developing ensemble techniques.
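The ensemble idea mentioned above can be sketched with a toy chaotic system standing in for a real model. This is a sketch under stated assumptions: the logistic map plays the role of the atmosphere, and only the initial condition is perturbed (real ensembles such as the GEFS or SREF also perturb the model physics).

```python
import random
import statistics

# Ensemble sketch: since the initial state can't be measured perfectly,
# run the same model many times from slightly different starting values
# and treat the spread of the results as the forecast uncertainty.
# The logistic map stands in for the atmosphere here.

def model(x, n_steps):
    for _ in range(n_steps):
        x = 4.0 * x * (1.0 - x)
    return x

random.seed(42)
best_guess = 0.3
# 50 members, each starting within the (tiny) observation uncertainty
members = [best_guess + random.gauss(0.0, 1e-6) for _ in range(50)]

spreads = {}
for lead in (5, 20, 60):
    outcomes = [model(m, lead) for m in members]
    spreads[lead] = statistics.stdev(outcomes)
    print(f"lead {lead:3d} steps: ensemble spread = {spreads[lead]:.3e}")
```

At short leads the members agree (high confidence); at long leads the spread saturates, which is exactly the signal forecasters read off ensemble products.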


Let's say money wasn't an object for a second... Is it feasible that in 50 years we could see a change in how we observe weather? What if the government supplied each person with a small, cheap device that measured the exact weather conditions at their location? Would having the exact wx conditions at all of these locations drastically change our model output? Imagine with me for a moment: what if we built small wx stations, almost like street lights, that launched a small wx balloon twice a day to different heights in the atmosphere? Would something like this help? Would something like that ever be possible?


Yes, of course we will.  One day we will probably control the weather and use it to run the Earth at maximum natural efficiency.

 

I am talking like 2200 onwards, assuming we continue on a somewhat exponential path of ascension.

 

There are lots of things that we "control" that we don't really have complete control of.

 

Weather prediction can certainly improve drastically, and probably will if the resources are allocated to doing it. But 100% is a silly number to even contemplate. It is rather nonsensical, like infinity.


We were drinking and he said we could build our own supercomputing system of model guidance as long as we had all the data to start with. 

 

I'd like to hear your plans on measuring the physical characteristics of all 109,000,000,000,000,000,000,000,000,000,000,000,000,000,000 molecules in the atmosphere, then getting them in a timely fashion into your supercomputing system which would need to be as big as the continental US.  Please expand on this. 
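For what it's worth, a figure of that order can be sanity-checked from rounded textbook values for the atmosphere's total mass and the mean molar mass of dry air:

```python
# Back-of-envelope check of the "~10^44 molecules" figure, using
# rounded textbook values.

AVOGADRO = 6.022e23          # molecules per mole
MASS_ATMOSPHERE = 5.1e18     # kg, total mass of Earth's atmosphere
MOLAR_MASS_AIR = 0.029       # kg/mol, mean molar mass of dry air

moles = MASS_ATMOSPHERE / MOLAR_MASS_AIR
molecules = moles * AVOGADRO
print(f"approx. {molecules:.2e} molecules")   # on the order of 1e44
```

Which lands right around the 1.09e44 quoted above, so the rhetorical point stands: no conceivable observing system samples anywhere near that.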


If anything, I'm concerned that NWP models are becoming degraded by trying to make them too perfect. For example, at my WFO we've been wondering whether parameterizations etc. that worked well at coarser resolutions are not working as well at finer resolutions. IMO the NAM has never been the same since becoming the NMM version of the WRF. And some SREF members have shown really poor skill of late, skewing our experimental snowfall probabilities to the high side. To paraphrase Dr. McCoy: I'm a forecaster, not a modeler, and I don't know the answer myself, but I would sure love some guidance here.


And some SREF members have shown really poor skill of late, skewing our experimental snowfall probabilities to the high side.

 

That answers an issue that I was thinking about, as I really like the experimental probabilistic table. The number of storms has been quite small so far, so I didn't really know whether the relatively high probabilities were representative or not.


I'd like to hear your plans on measuring the physical characteristics of all 109,000,000,000,000,000,000,000,000,000,000,000,000,000,000 molecules in the atmosphere, then getting them in a timely fashion into your supercomputing system which would need to be as big as the continental US.  Please expand on this. 

I never said he was right!  We were drinking!



Keep working on it.  As my friend was saying, "we can set up a supercomputer mainframe for 10,000 and do whatever we want."


There are lots of things that we "control" that we don't really have complete control of.

 

Weather prediction can certainly improve drastically, and probably will if the resources are allocated to doing it. But 100% is a silly number to even contemplate. It is rather nonsensical, like infinity.

No it's not.  I said 2200+.  If humans head toward a Star Trek-like civilization, likely minus anything like warp drive, our achievements will be astronomical.

 

Things are still changing at unreal speed, and that won't stop until man and machine cannot understand this reality any further.


Well of course, warp drive is just silly talk. The rest, spot on.

Oye.


Archived

This topic is now archived and is closed to further replies.
