
forecaster bias



I saw this in another thread this morning and I thought I'd post it, since I was excoriated by some and misinterpreted as bashing mets for simply mentioning that there was probably a certain amount of bias in the field. So don't take this weenie's word for it.

This is from an HPC met:

http://www.americanw...0/page__st__520

If you look over any individual's forecasts over a series of weeks, months, or years, you will notice a bias, whether it is in regard to the choice of a particular model or pair of models, or towards certain solutions for their backyard, whether extreme in impact or not. The bias could be always going for consensus. We all have them. Just search older discussions if you want to know if an individual forecaster has any sort of bias. The key to recognizing biases is to see if any individual forecaster is just emphasizing positives (in this case the most menacing possible solution), or is actually weighing both the positives and the negatives for either a certain meteorological event, or per your first question, a certain piece of model guidance. Just keep in mind that for the 09z medium range issuance, the individual 00z EC and 00z Canadian ensemble members and the 00z ECMWF ensemble mean are not available to HPC. Those show up between 5 and 6 am EST, or 10-11z. As it stands, this board seems to have access to the 00z guidance via the web about 10 minutes faster than HPC.

DR


...If you look over any individual's forecasts over a series of weeks, months, or years, you will notice a bias... We all have them...

When I was in the Air Force, the verification records included whether or not a forecaster was biased towards the pessimistic or optimistic side. The most successful forecasters, the ones with the best records against persistence, tended to be very close to neutral or slightly optimistic, as I tended to be (not going for the bad weather unless there was good reason to). Of course, in those days model bias tended to be absent, since the only models we had were the single-layer H5 barotropic and baroclinic out to 72 hours, and they weren't all that great. When I returned to forecasting in the SU in the 1980s, I was in UT and the model available was the LFM, which didn't work well in the West. I then went to AZ, where until recently it's been hard to find any model that worked. When PHX was doing the forecasting for all of AZ, there were two forecasters who always bought the most extreme solution on incoming storms (and who consistently busted in southern AZ), while another was known as "Dry Dave": if he forecast 40% POPs, you began building an ark.

Steve
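To put a number on "records against persistence": persistence simply carries today's weather forward as tomorrow's forecast, and a forecaster's record can be scored against it. A minimal sketch in Python with made-up temperatures, not any actual verification system:

# Persistence predicts that tomorrow's high equals today's high;
# a forecaster earns skill only by beating that trivial baseline.
observed_highs = [41, 38, 45, 52, 49, 47, 50]   # degrees F, invented
forecast_highs = [40, 39, 44, 50, 50, 46, 49]   # the forecaster's calls, invented

def mean_abs_error(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

# The persistence forecast for day i is the observation from day i-1,
# so only days 1..N-1 can be scored.
persistence = observed_highs[:-1]
fcst_err = mean_abs_error(forecast_highs[1:], observed_highs[1:])
pers_err = mean_abs_error(persistence, observed_highs[1:])

# Skill greater than 0 means the forecaster beat persistence.
skill = 1.0 - fcst_err / pers_err
print(f"forecaster MAE {fcst_err:.2f}, persistence MAE {pers_err:.2f}, skill {skill:.2f}")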


ASLKahuna said it right there. Quite honestly, forecasting is a game and an art, and you have to start out neutral, with no bias towards busting high or low. From a pure public forecasting standpoint, you have to temper the odds one way or the other with wording, and you need to sharpen that wording as an event becomes a higher-probability event. As for probabilities, they are absolutely KEY, and one must consider the actual dynamic background with a good analysis. This is why, to the consternation of some here, I break every storm down to the detail. How can I give a probability to an event if I only follow models without actually making the dynamic considerations? Impossible.

We talked about this in a different thread, but this is where a lot of new forecasters fail today. They follow models without breaking down the weather pattern and truly analyzing it. This East Coast storm is a classic example of that. For any budding meteorologists who wish to forecast someday, take these words seriously. Models are not the full answer, and to be a successful forecaster, model-casting won't get it done. You MUST be able to analyze a weather pattern and break it down to give a probability to an event.

A good place to start for non-mets and mets alike; you have to understand probability forecasting before you can do it.

http://www.cimms.ou....robability.html
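As a minimal illustration of what verifying a probability forecast looks like (separate from the linked page, with invented numbers), the Brier score is the usual starting point:

# Brier score = mean squared difference between the forecast probability and what happened.
# 0 is perfect; a perpetual 50% forecast scores 0.25.
prob_forecasts = [0.9, 0.7, 0.2, 0.4, 0.1, 0.8]   # forecast probability of measurable snow, invented
outcomes = [1, 1, 0, 1, 0, 1]                     # 1 = it snowed, 0 = it did not, invented

brier = sum((p - o) ** 2 for p, o in zip(prob_forecasts, outcomes)) / len(outcomes)
print(f"Brier score: {brier:.3f}")   # lower is better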

Social science is becoming big in forecasting today since we must know how people react to the forecast and the impacts weather/forecasts have on people. NWS is filtering a lot of money into this today.


...They follow models without breaking down the weather pattern and truly analyzing it. This East Coast storm is a classic example of that. For any budding meteorologists who wish to forecast someday, take these words seriously. Models are not the full answer, and to be a successful forecaster model-casting won't get it done. You MUST be able to analyze a weather pattern and break it down to give a probability to an event...

I agree, but I think the opposite can be true sometimes. I've also read discussions which go on forever analyzing feature after feature, trying to verify better than MOS, but a lot of the time it doesn't make sense, and there is a lot of speculation and assumption involved that isn't reality.

As far as bias, every forecaster has it whether they want to admit it or not. Scratch that: every human has bias, self-serving bias (e.g., "last year my forecasts were great, but this year the models are all over the place." Uhhh, NO, reality doesn't work that way). I don't think the acknowledgment of bias is well understood by most forecasters; honestly, I don't think it is covered in forecasting and synoptic classes, and meteorologists don't have time for in-depth social psychology classes.

I strongly recommend this book:

http://www.mistakesw...butnotbyme.com/

In most cases, when humans get involved in forecasting, the science ends and the human psychology begins. I remember in forecasting class back in school in the 1990s, it was extremely difficult over the semester to beat the consensus forecast (the mean of all the forecasters' predictions). One person could do it consistently, and that's because he had been forecasting in that area for decades and, from what I understand, he kept a book of observations and rules of thumb that he had learned throughout the years. Nobody else in my school, including the grad students and professors, could beat the consensus on a consistent basis. The consensus forecast removed all of our biases.
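A rough sketch of why that consensus is so hard to beat, with invented forecasts and observations; the point is only that averaging tends to cancel individual biases:

# Five hypothetical forecasters each predict the daily high for five days.
forecasters = {
    "A": [40, 44, 51, 48, 39],
    "B": [43, 47, 55, 52, 44],
    "C": [38, 42, 49, 46, 37],
    "D": [42, 46, 53, 50, 42],
    "E": [41, 45, 50, 49, 40],
}
observed = [41, 45, 52, 49, 41]   # invented verifying highs

def mae(pred, obs):
    return sum(abs(p - o) for p, o in zip(pred, obs)) / len(obs)

# Consensus = the simple average of everyone's forecast for each day.
consensus = [sum(day) / len(forecasters) for day in zip(*forecasters.values())]

for name, fcst in forecasters.items():
    print(f"forecaster {name}: MAE {mae(fcst, observed):.2f}")
print(f"consensus: MAE {mae(consensus, observed):.2f}")   # lowest of the bunch with these numbers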

If I look at more than one model, I immediately second-guess myself and start thinking things against my better judgment. A good example is that this morning I peeked at the 06Z GFS and started rationalizing various scenarios that would send this low out to sea. Stupid, stupid, stupid. It is well known the GFS has problems in the medium range, and it is well known it tends not to verify well (actually, some meteorologists, especially those involved in the government near "The Beltway," will deny this, but look at the objective verification numbers). But that nice map and those crisp MOS numbers make a lot of sense at first glance. The ECMWF has been superior ever since I started in the 1990s; it isn't always correct, but honestly, whenever I've put some weight behind the AVN/MRF/GFS beyond 72 hours, I think I've gotten nailed. Always. I do better when I look at one reasonably good model and analyze the atmosphere if I think something is fishy. That's me...

Forecasting is a lot different now than it was 10 or 20 years ago. I think there is data overload, and it is easy for most people to get completely overwhelmed. There is always some model output somewhere that will show you what you sub-consciously want, whether it is 85° and sunny or the next super-blizzard. It is extremely difficult to beat consensus forecasts. I think most meteorologists are kidding themselves when they think they can do it; the models analyze so much data and do so many billions of calculations that it is very hard for a human to out-perform them, especially when ensembles or consensus are thrown into the fray.

Consensus is often talked about, but unless it is done objectively by a computer, it isn't consensus. I believe "consensus" as talked about in discussions is, in reality, self-serving bias. I think it is usually impossible for a human to visualize a consensus; there are so many sub-conscious defense mechanisms and biases, and usually the forecaster is not aware of them or won't acknowledge them.

For me, learning how to forecast is all about trial and error, and it will be till the day I stop doing it. A lot of forecasters (and they don't have to be degreed meteorologists by any means) have really big egos; self-justification is extremely important to them, even if they are dead wrong. I think self-justification is the most important factor in forecasting, more so than skill, education level, and where someone went to college. A really intelligent forecaster can be wrong almost constantly if they constantly need to stroke their ego (I'm thinking of a town in central PA).

So, in conclusion, we are all biased. We all err. The sooner someone can get beyond that and recognize and learn from their mistakes, the sooner their forecasting improves. As humans we never see things objectively; it isn't in our nature... but we can get pretty close if we dump all the ego B.S. I hesitate to bring religion into this discussion, but I think the concept is very similar... we all sin; the question is whether we acknowledge it and what we do about it.


...There is always some model output somewhere that will show you what you sub-consciously want, whether it is 85° and sunny or the next super-blizzard... So, in conclusion, we are all biased. We all err. The sooner someone can get beyond that and recognize and learn from their mistakes, the sooner their forecasting improves...

You are going all over the place here on all sorts of tangents and off-topic discussions, but one thing is worth mentioning.

Bias:

"a particular tendency or inclination, esp. one that prevents unprejudiced consideration of a question; prejudice."

In bold, you almost sound as if you are wish-casting and implementing that in the operational world. I sure hope you aren't doing that! Any operational meteorologist/forecaster should be fully unbiased towards a solution, but should always be able to recognize a potential threat and break it down into potential impacts and probabilities without hyping it or under-forecasting it. In your final paragraph, saying "we are all biased. We all err" misinterprets the definition of bias. There is a big difference between not having a correct forecast and being biased towards a particular solution when the evidence strongly suggests otherwise.


In bold, you almost sound as if you are wish-casting and implementing that in the operational world. I sure hope not...

No, I was trying to say just the opposite.

With the amount of numerical guidance available today, I think there is a tendency for some forecasters to find model output that they find "stimulating" and then attempt to justify that specific output, even though it is clear to more unbiased folk that the model is not likely to verify in the future.

I don't think forecasters can be unbiased. There is always some bias with everyone, and most of the time it is hidden in the sub-conscious. Not realizing this means a forecaster is likely to do worse than consensus.

We seem to be thinking of bias in different terms; for example, I'm thinking of it in the sense of under-forecasting the day's maximum temperature by 1.5°F over a period of several months.
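Bias in that verification sense is just the mean signed error over a long run of forecasts. A minimal sketch with made-up numbers:

# Mean signed error: negative = running cold on the forecast high, positive = running warm.
forecast_max = [34, 41, 37, 45, 50, 43, 39, 48]   # invented forecast highs (F)
observed_max = [36, 42, 39, 46, 52, 44, 41, 49]   # invented verifying highs (F)

errors = [f - o for f, o in zip(forecast_max, observed_max)]
bias = sum(errors) / len(errors)
print(f"mean error (bias): {bias:+.1f} F")   # prints -1.5 F, i.e. a consistent cold bias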


...We seem to be thinking of bias in different terms; for example, I'm thinking of it in the sense of under-forecasting the day's maximum temperature by 1.5°F over a period of several months.

It all comes down to the probability of a particular event happening. If a forecaster approaches the situation properly and analyzes the weather pattern at hand, the numerous guidance solutions should help the forecaster make an informed decision regarding the variability/probability of an event. If a forecaster is confused by differing model suites and cannot interpret trends, model bias, the variability of a particular weather event, etc., then they are approaching it all wrong. I would argue that bias is not a factor here but, rather, a lack of knowledge and/or lazy forecasting. That is really a different discussion, it seems.


...I don't think forecasters can be unbiased. There is always some bias with everyone, and most of the time it is hidden in the sub-conscious...

I was mostly going off of this in my latest discussions: "The key to recognizing biases is to see if any individual forecaster is just emphasizing positives (in this case the most menacing possible solution), or is actually weighing both the positives and the negatives for either a certain meteorological event..."


...I don't think forecasters can be unbiased... We seem to be thinking of bias in different terms...

I get what you are saying.

What I would like to know is this: using synoptic meteorology and analysis of the developing and unfolding 500 mb pattern, is it plausible that the European solution will be correct? Why?


I'm a rank amateur and know nothing about forecasting, but it sounds like Chagrin Falls is biased toward the Euro. I'm just using this as an example of bias. Just because one model outshines all the others in terms of verification, it doesn't mean that all other guidance should be poo-pooed.

I would hope that NWS mets would weigh positives and negatives to come up with an unbiased forecast for a pending event. I can understand that other mets (i.e., AccuWeather, etc.) want to hype the most positive outcome toward the extreme possibilities.

Therefore, I think that the difference in the amount of bias depends on who is paying for the forecast.


I'm a rank amateur and know nothing about forecasting, but it sounds like Chagrin Falls is biased toward the Euro... Just because one model outshines all the others in terms of verification, it doesn't mean that all other guidance should be poo-pooed...

Do realize that the European model has excellent verification superiority as well as run-to-run consistency.

The best way to critique a scientific diagnostic tool is by its:

specificity

validity

repeatability


...Social science is becoming big in forecasting today since we must know how people react to the forecast and the impacts weather/forecasts have on people. NWS is filtering a lot of money into this today.

For very good reason. If the forecast isn't communicated well to the general public (a difficult job, and one I used to do at AccuWeather), then it has very little value.


I'm a rank amateur and know nothing about forecasting, but it sounds like Chagrin Falls is biased toward the Euro... Just because one model outshines all the others in terms of verification, it doesn't mean that all other guidance should be poo-pooed...

Yes, I think saying I'm biased toward the Euro is reasonable. And yes, I agree with you in saying that other guidance shouldn't be poo-pooed.

With that said there is only a certain amount of time to make a forecast. I've found I don't make a more accurate forecast when I look at multiple model runs. Other meteorologists would strongly disagree with this method, but I've found it works for me.

Getting back to bias, I would consider the HPC a bit biased, since they seem to spend an equal amount of time discussing the GFS/ensemble members and weigh the GFS equally when determining their model consensus. For example, since the GFS does not verify as well as the ECMWF in most cases, wouldn't it be biased to treat it equally? Hypothetically, if there are only two medium-range models and one is a bit more accurate than the other, does a 50/50 forecast split indicate a bias? I would say it does; perhaps in such a case a 70/30 weighted consensus would be unbiased.

Unless we are dealing with objective numerical verification for specific elements, perhaps bias is a bit subjective?
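For illustration only, here is what a weighted two-model blend looks like next to a straight 50/50 average; the 70/30 weights and pressure values are hypothetical, not anything HPC actually uses:

# Two models' forecasts of a storm's central pressure (mb), invented numbers.
ecmwf_slp = 992.0
gfs_slp = 1000.0

even_blend = 0.5 * ecmwf_slp + 0.5 * gfs_slp        # treats both models equally
weighted_blend = 0.7 * ecmwf_slp + 0.3 * gfs_slp    # leans toward the better-verifying model

print(f"50/50 blend: {even_blend:.1f} mb")    # 996.0
print(f"70/30 blend: {weighted_blend:.1f} mb")  # 994.4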


Through 72 hours, the GFS is a better piece of guidance pressure-wise than the ECMWF, NAM, or UKMET. This has been true since the GFS upgrade and hasn't changed in the cool season. Does this mean we ignore the ECMWF during the short range? No, because its QPF verification is better than the GFS and its loss to the GFS is marginal in the pressure field. From a pressure perspective, the 00z ECMWF is only marginally better than the 12z GFS through the medium range period. If the model were significantly worse, like the NOGAPS or to some degree the UKMET and overphased Canadian during the cool season, we wouldn't weigh it as heavily or choose it very often. And we don't choose the Canadian, UKMET, or NOGAPS frequently, especially outside the warm season. As of Monday, for example, when you looked at Day 7 verification, the 12z GFS had more wins than the 00z ECMWF for December in the pressure pattern in and near the lower 48. However, it also did worse than the ECMWF on other days. Pressure verification is important for the medium range period since, later on, many field offices use wind grids derived off of the HPC pressures. Having a surface high too strong is just as bad as a surface low too strong...your winds will be overdone. You have to pick your battles.
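To make the day-by-day "wins" bookkeeping concrete, here is a toy head-to-head count; the error values are invented, not actual GFS or ECMWF verification numbers:

# Hypothetical Day 7 pressure-field errors (mb RMSE) for two models over a week.
gfs_day7_err = [5.1, 6.3, 4.8, 7.0, 5.5, 6.1, 4.9]
ecmwf_day7_err = [4.7, 6.8, 5.2, 6.5, 5.9, 5.8, 5.3]

# A "win" is simply having the smaller error on a given day.
gfs_wins = sum(g < e for g, e in zip(gfs_day7_err, ecmwf_day7_err))
ecmwf_wins = sum(e < g for g, e in zip(gfs_day7_err, ecmwf_day7_err))
print(f"GFS wins: {gfs_wins}, ECMWF wins: {ecmwf_wins}")   # 4 vs 3 with these made-up numbers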

In case anyone hasn't noticed, even when the ECMWF was producing extreme amounts of snow in the Mid-Atlantic states with this upcoming storm, its surface low track has been edging eastward since Monday, back towards its Sunday solution (part of a multi-day waver rather than a trend), and it has been wavering moderately with the strength of the high in the wake of this system. Since yesterday, it also looks slightly quicker. On Monday, the 00z ECMWF took the low through Massachusetts and quite close to Maine into the Bay of Fundy, which likely would have meant rain for southeast New England rather than snow. Since then, its low track has swung wider out from the Mid-Atlantic states, and to a lesser extent New England. Which is worse...a slight east trend which changed precipitation type and amounts substantially, or a west trend which always showed the same p-type (snow) for the Mid-Atlantic and New England states, even if the amounts have somewhat (and irregularly) increased for New England? None of the guidance has been especially consistent with amounts in the Mid-Atlantic states, so I leave their precipitation impact out of this argument.

DR


