Probabilistic convective forecasts


As we begin to enter the severe weather season, a lot of attention will begin to be placed on severe weather forecasts. These forecasts, including those from SPC, are often probabilistic forecasts, not deterministic forecasts. This is entirely appropriate given the fickle nature of moist convection and the fact that hail, tornadoes, and high winds are often conditional threats. Probabilistic forecasts are conveying to you the probability that some event is going to occur, not that it is or is not. As such, evaluating these forecasts is not elementary. Calling "bust" on a single probability forecast makes no mathematical sense, unless the given forecast probability is 0% or 100%.

The appropriate way to evaluate such forecasts requires a large sample of some % forecast (for example, let's say a large sample of 10% tornado point forecasts for simplicity) and seeing how often that event occurred. If the answer is 10% of the time, those forecasts are said to have been "perfectly reliable"; in other words, the 10% forecasts conveyed the appropriate probability of the event. To reach that perfectly reliable forecast, both true and false outcomes of the 10% forecast were necessary (tornado and no tornado). If no tornado had occurred any of the times the 10% probability was issued, the forecasts, collectively, would have been worse. Thus, you can see that calling a bust on any single one of our hypothetical 10% tornado forecasts would have been in error regardless of whether there was a tornado or not.
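
To make that concrete, here is a minimal sketch in Python with made-up numbers (purely hypothetical counts, not an actual SPC verification): collect every case where a given probability was forecast and compare the forecast value to the observed frequency.

```python
# Minimal reliability check for a single forecast probability, using made-up data.
forecast_prob = 0.10             # the 10% tornado point forecasts
n_cases = 200                    # hypothetical number of times that forecast was issued
outcomes = [1] * 21 + [0] * 179  # 1 = tornado observed near the point, 0 = none

observed_freq = sum(outcomes) / n_cases
print(f"forecast {forecast_prob:.0%}, observed {observed_freq:.1%} over {n_cases} cases")
# ~10.5% observed vs. 10% forecast -> close to "perfectly reliable" for this bin
```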

There are various conference Preprints that provide these types of evaluations for different types of SPC forecasts and the results are very good. This is just something to keep in mind when dealing with convective forecasts, as people tend to get overly emotional and treat the forecasts as deterministic, probably because that gives them the ability to instantly judge a forecast.

Also, if the above all sounds familiar, it is. I made a similar thread about this on Eastern, but I lost it in The Purge and didn't have it backed up like the others.


Ya, great topic to start right now. I remember this had been stressed and rehashed a trillion times over on Eastern; best to bring this up before the inaugural start of severe weather season.


I understand how these forecasts may probabilistically verify, but how useful are these forecasts really to the general public? If I'm reading this right, a 10% probability means a tornado will occur in only one out of 10 cases where this probability is issued for an area. Heading into a severe event, why is it useful to know how probable tornado occurrence would be over a certain number of threats?


10% chance within a 25-mile radius. That is what is so cool about Dr. Forbes' "TorCon" on TWC: he uses a 50-mile radius, so he can issue much higher apparent probabilities.
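
(For scale, taking those stated radii at face value: a 50-mile radius covers pi * 50^2 ≈ 7,850 square miles, versus pi * 25^2 ≈ 1,960 square miles for a 25-mile radius, i.e. four times the area, so the same expected density of events translates into a noticeably higher chance of at least one event falling somewhere inside the circle.)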


I understand how these forecasts may probabilistically verify, but how useful are these forecasts really to the general public? If I'm reading this right, a 10% probability means a tornado will occur in only one out of 10 cases where this probability is issued for an area. Heading into a severe event, why is it useful to know how probable tornado occurrence would be over a certain number of threats?

It's less useful to be told something is or is not going to happen when the event is so conditional, and in the case of tornadoes, so unpredictable. Probabilistic forecasts are the most accurate way to convey the nature of the threat. I think it's better to tell the public we don't know for sure (because convection initiates and evolves on spatial scales not resolved by most observing platforms) and to express that uncertainty in the form of a probabilistic forecast rather than pretending to know with a deterministic forecast.


That is not to say there is no room for deterministic forecasts. In cases where the atmospheric phenomenon is well observed and the processes associated with the phenomenon are well understood, deterministic forecasts are valuable. They are better understood by the public and the nature of the phenomenon ensures that the FAR (false alarm ratio) is relatively low.


It's less useful to be told something is or is not going to happen when the event is so conditional, and in the case of tornadoes, so unpredictable. Probabilistic forecasts are the most accurate way to convey the nature of the threat. I think it's better to tell the public we don't know for sure (because convection initiates and evolves on spatial scales not resolved by most observing platforms) and to express that uncertainty in the form of a probabilistic forecast rather than pretending to know with a deterministic forecast.

Yeah, that makes sense given how truly unpredictable tornadoes can be. I guess it is a little unrealistic to expect a verifiable deterministic forecast at this point.


Do you have some of the articles mentioned or links to them? They would be an interesting read.

"There are various conference Preprints that provide these types of evaluations for different types of SPC forecasts and the results are very good."

Yeah, I had a bunch of articles in the old thread at Eastern. I'll look around and post them some time today.


I'm going to play devil's advocate for a second from a statistical perspective.

In your hypothetical verification scenario, at which point do you decide that enough outlooks were issued to warrant verification? A week, month, season? In the end it is all moot because each environment upon which an outlook was issued was different than every other issued outlook. This renders verification impossible.

As far as SPC contoured probabilities are concerned, they are based on the chance that an event occurs within x number of miles of any given point within the contoured area. That's a nightmare from a verification standpoint even if you ignored the fact that each environment was different (again, making verifications a moot point in the first place). Additionally, locations within x number of miles of the edge of the contoured area have some increased risk of experiencing the given event, as they are within the prescribed distance from the edge of the area encircled. Would you count that as an affirmative, or is it null since the event occurred outside of the contours in question?

Have fun! :weight_lift:


I'm going to play devil's advocate for a second from a statistical perspective.

In your hypothetical verification scenario, at which point do you decide that enough outlooks were issued to warrant verification? A week, month, season?

I don't know, but that's not a problem unique to severe weather verification. The more samples, the lower the sampling error.
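
(A rough rule of thumb, assuming the cases are more or less independent: the sampling uncertainty in the observed frequency for a forecast probability p verified over n cases scales like sqrt(p*(1-p)/n), so roughly 100 10% forecasts pin the observed frequency down to about +/- 3 percentage points, and roughly 1,000 forecasts to about +/- 1 point.)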

In the end it is all moot because each environment upon which an outlook was issued was different than every other issued outlook. This renders verification impossible.

What?

As far as SPC contoured probabilities are concerned, they are based on the chance that an event occurs within x number of miles of any given point within the contoured area. That's a nightmare from a verification standpoint

Maybe for you or me, but like I said, they've done it a number of times.

even if you ignored the fact that each environment was different (again, making verifications a moot point in the first place).

The verifications are not a moot point, and I'm not sure how that got to be the default assumption. No weather event or forecast anywhere occurs under the same set of circumstances, and verification is done successfully all the time.

Additionally, locations within x number of miles of the edge of the contoured area have some increased risk of experiencing the given event, as they are within the prescribed distance from the edge of the area encircled. Would you count that as an affirmative, or is it null since the event occurred outside of the contours in question?

If the event occurs within X miles of a point but isn't within the contours, I would assume it counts. I think that's what you're asking.


Attica, a true verification is impossible because it's a probabilistic outlook and not a deterministic forecast. A 10% probability of a tornado is not confirmed by an observation of a tornado, and you realize this, which is why you stated that one would need to take a large sample of outlooks with the given probability and then see what percentage of those outlooks later contained a tornado within their bounds. Unfortunately, this does not assess the robustness of any single outlook. Nor does it shed light on the SPC's ability to gauge the likelihood of a tornado. The reason? Each forecast outlook is issued with the atmosphere in some particular state. Well, can you replicate that state? I'll go out on a limb and answer no. So with this in mind one learns little.

Additionally, what if you found that in 10% of the cases you looked at, a tornado occurred within the bounds prescribed? OK, fine. Aside from the limitation I just laid out above, a second problem is that one case could have one tornado confirmed across a bounded area the size of Texas while another could have 30 confirmed tornadoes in a bounded area the size of Connecticut. Do you treat these cases the same? I wouldn't... because by "verifying" the entire sample you are implying that each individual case was a "perfectly reliable" outlook... or that the SPC can accurately assess risk/probability.

I don't believe that we're as good as we think we are, and we use all sorts of statistical tools to dupe ourselves into a false sense of understanding. We have, however, come a long way, so perhaps there is some utility in these methods (again, just not as much as many of us think).


Attica, a true verification is impossible because it's a probabilistic outlook and not a deterministic forecast. A 10% probability of a tornado is not confirmed by an observation of a tornado, and you realize this, which is why you stated that one would need to take a large sample of outlooks with the given probability and then see what percentage of those outlooks later contained a tornado within their bounds. Unfortunately, this does not assess the robustness of any single outlook. Nor does it shed light on the SPC's ability to gauge the likelihood of a tornado. The reason? Each forecast outlook is issued with the atmosphere in some particular state. Well, can you replicate that state? I'll go out on a limb and answer no. So with this in mind one learns little.

I disagree completely with almost every word of this. Every setup is different, but the forecasters go through a similar exercise of formulating the probabilities based on the same parameters (e.g., CAPE, LL shear, sfc moisture, boundary position, low position, etc.). The idea is not to verify a single outlook, but to assess how the forecasters do overall. This is pretty standard stuff.

You're telling me that, if over a large sample of 10% tornado forecasts, it was found that tornadoes occurred 13% of the time rather than, say, 32% of the time, that tells you nothing about the forecaster skill level? You'd feel no differently about forecaster skill in the latter case compared to the former?

Additionally, what if you found that in 10% of the cases you looked at, a tornado occurred within the bounds prescribed? OK, fine. Aside from the limitation I just laid out above, a second problem is that one case could have one tornado confirmed across a bounded area the size of Texas while another could have 30 confirmed tornadoes in a bounded area the size of Connecticut. Do you treat these cases the same? I wouldn't... because by "verifying" the entire sample you are implying that each individual case was a "perfectly reliable" outlook... or that the SPC can accurately assess risk/probability.

If the forecast says there's a 10% chance of a tornado within X miles of a point, then that's the forecast and that's what you verify. If you want to deal with tornado quantity and severity, I think those are better handled by the convective outlooks (slight, moderate, high risk). It's not like areal probabilities have never been verified before; these are relatively minor issues.

I don't believe that we're as good as we think we are, and we use all sorts of statistical tools to dupe ourselves into a false sense of understanding. We have, however, come a long way, so perhaps there is some utility in these methods (again, just not as much as many of us think).

Who is we? Who is duping who? Which of the statistical tools are being used to dupe? Any evidence in support of this belief? Citations?


I disagree completely with almost every word of this. Every setup is different, but the forecasters go through a similar exercise of formulating the probabilities based on the same parameters (e.g., CAPE, LL shear, sfc moisture, boundary position, low position, etc.). The idea is not to verify a single outlook, but to assess how the forecasters do overall. This is pretty standard stuff.

You're telling me that, if over a large sample of 10% tornado forecasts, it was found that tornadoes occurred 13% of the time rather than, say, 32% of the time, that tells you nothing about the forecaster skill level? You'd feel no differently about forecaster skill in the latter case compared to the former?

If the forecast says there's a 10% chance of a tornado within X miles of a point, then that's the forecast and that's what you verify. If you want to deal with tornado quantity and severity, I think those are better handled by the convective outlooks (slight, moderate, high risk). It's not like areal probabilities have never been verified before; these are relatively minor issues.

You can get a sense of how functionally aware a forecaster is with regard to severe weather, but not much more can be gleaned from it. And quantity is important since the probability is based on a tornado being observed within 65 miles of any given point (or whatever distance it is) within the bounded area. And that probability is arbitrarily drawn up based on how strong the set-up is to the particular forecaster(s) in question. Taking the outlooks in aggregate does not properly assess a particular case, nor does it provide much meaningful insight into the tornado potential of any individual case. It just means that (s)he will undervalue a few events and overvalue many more, all with unique environments, and in the end our lack of a deep understanding will wash out. You can broadly assess who is marginally better overall, but anything more than that? Nope...

Think about this hypothetical... what if we had ten outlooks issued, all 10% contours, and in one case 50 tornadoes were observed within a contoured area the size of Connecticut and in every other case no tornadoes occurred within a bounded area the size of Texas. Does this mean the forecaster's outlooks are "perfectly reliable"? It does not. What it does mean is that we're just not very good at this... we're light years ahead of where we once were, but the truth is we've got a long way to go yet. We still can't agree on how precisely a tornado develops... but somehow we are capable of generating accurate probabilities of their occurrence? Something's amiss.

Who is we? Who is duping who? Which of the statistical tools are being used to dupe? Any evidence in support of this belief? Citations?

We meteorologists are duping ourselves with verification schemes that seem great mathematically but do not have a firm anchoring in reality. The point regarding the atmosphere always being different is certainly valid when it comes to verifying our arbitrary statistics. If you had 150 cases, 5 of which produced a tornado in a low shear, high CAPE environment; another 5 which produced a tornado in a low CAPE, high shear environment; and finally 5 cases of tornadoes occurring within a moderate shear, moderate CAPE environment.... what have you learned?

My evidence is epistemic and you will not find it cited, because, well, we are not trained to approach the field in such a way. I never stated that the outlooks issued have no utility -- but it's not as robust as we are led to believe.


What it does mean is that we're just not very good at this... we're light years ahead of where we once were, but the truth is we've got a long way to go yet.

Pretty much all you need right there. Could you suggest a better system that we could accurately use given the current technology and knowledge of atmospheric processes?


You can get a sense of how functionally aware a forecaster is with regard to severe weather, but not much more can be gleaned from it. And quantity is important since the probability is based on a tornado being observed within 65 miles of any given point (or whatever distance it is) within the bounded area. And that probability is arbitrarily drawn up based on how strong the set-up is to the particular forecaster(s) in question. Taking the outlooks in aggregate does not properly assess a particular case, nor does it provide much meaningful insight into the tornado potential of any individual case.

Right, that's the whole point. Since convection is highly conditional, assessing one case, regardless of the outcome, does not give you meaningful information. In the aggregate it does, maybe small threats are being under-forecast, maybe there's a bias of over-forecasting tornadoes in New England, etc. I don't know, but I think the idea that because setups are different, you can't assess how forecasters do forecasting convection over large samples is rather myopic.

Probabilistic forecasting has been, by almost every account I can find, a big success. My guess is SPC agrees, as well as NSSL and the pertinent parts of NSF, and several reviewers, who approved the 10-year NSSL Warn-on-Forecast initiative (see also Stensrud et al. 2009 in BAMS). There is a long history of probabilistic forecasting and verification in many fields. As Harold Brooks discusses in one of the links I provided, it took only a short period of probabilistic forecasting before forecasters were able to match their subjective analyses to accurate probabilities (10% forecasts verified ~10% of the time), thereby providing useful information about the chances of severe weather occurring in an area.

It just means that (s)he will undervalue a few events and overvalue many more, all with unique environments, and in the end our lack of a deep understanding will wash out. You can broadly assess who is marginally better overall, but anything more than that? Nope...

How do you know this? Have you looked at the reliability diagrams? Are the results just a coincidence?

Think about this hypothetical... what if we had ten outlooks issued, all 10% contours, and in one case 50 tornadoes were observed within a contoured area the size of Connecticut and in every other case no tornadoes occurred within a bounded area the size of Texas. Does this mean the forecaster's outlooks are "perfectly reliable"? It does not. What it does mean is that we're just not very good at this

This is not correct. I think there is some confusion based on my (admittedly oversimplified) example, which was for illustrative purposes. Verification uses a grid (since like a model, we can't evaluate the infinite number of points in space), so that the size of a given probability area matters. So for your example, the answer is no, the forecasts were not perfectly reliable and they wouldn't be verified as such because the probability estimates would be binned.
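
To illustrate the binning idea with made-up numbers (a toy sketch, not the actual SPC verification scheme): every verification grid point inside an outlook contributes a forecast-probability/outcome pair, so a large 10% area contributes many more pairs than a small one, and reliability is computed per probability bin.

```python
from collections import defaultdict

# Toy (forecast probability, event observed) pairs, one per verification grid point.
# A sprawling 10% area contributes many pairs; a small 5% area contributes few.
grid_pairs = [(0.10, 0)] * 950 + [(0.10, 1)] * 90 + [(0.05, 0)] * 400 + [(0.05, 1)] * 15

bins = defaultdict(list)
for prob, event in grid_pairs:
    bins[prob].append(event)

for prob in sorted(bins):
    events = bins[prob]
    obs = sum(events) / len(events)
    print(f"forecast {prob:.0%}: observed {obs:.1%} over {len(events)} grid points")
```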

... we're light years ahead of where we once were, but the truth is we've got a long way to go yet. We still can't agree on how precisely a tornado develops... but somehow we are capable of generating accurate probabilities of their occurrence? Something's amiss.

Well, yeah, that's the whole point of having a probabilistic forecast. Would you prefer a deterministic tornado forecast? Maybe just no forecast at all?

We meteorologists are duping ourselves with verification schemes that seem great mathematically but do not have a firm anchoring in reality. The point regarding the atmosphere always being different is certainly valid when it comes to verifying our arbitrary statistics. If you had 150 cases, 5 of which produced a tornado in a low shear, high CAPE environment; another 5 which produced a tornado in a low CAPE, high shear environment; and finally 5 cases of tornadoes occurring within a moderate shear, moderate CAPE environment.... what have you learned?

These are made-up numbers, right? So they tell us nothing. There are actual CAPE and shear parameter-space graphs in the Preprint link I provided, which discuss what is learned from the real numbers (see Fig. 2 and Fig. 3).

My evidence is epistemic and you will not find it cited, because, well, we are not trained to approach the field in such a way. I never stated that the outlooks issued have no utility -- but it's not as robust as we are led to believe.

That's fine, of course, your opinion, but I don't know what way you're speaking of, how you would approach the problem, or what you would do differently.


Double posted from another thread, but only because one of the red taggers thought it was ok to post here...

****************************************************************************************************

I, personally, as an amateur end product consumer, will call a severe weather watch a bust if a box is issued and no severe storms occur within it.

This has happened to me: I've been inside a tornado watch box, and no tornadoes occurred in the box during the issuance time. Regardless of the probability table in the watch box for various modes of severe, it's a complete bust IMHO if there is no severe weather of any type, and a mild bust if there are only a few marginal hail and wind reports.

Central/SE Texas seems to fall victim to this often, cursed by low clouds caused by high dewpoint air from the Southern Gulf coming over cooler near shore waters and the Mexican highlands to our Southwest...


Watches are treated differently, I think. Yes, they have probabilities associated with them but I think the guideline is a watch is issued if 2 or more tornadoes or 1 violent tornado is expected (someone please correct me if I'm wrong). That can be treated as a form of a deterministic forecast, though based not on a point but an area. That being said, there are a number of more rigorous ways to validate and/or evaluate the watches that are more useful than Ed Mahmoud calling "bust," though I understand this is an important exercise for many. I think the probabilities provide more useful information and provide far greater utility than saying this or that is expected.

My original post was about the tornado, wind, and hail probabilities, which is more where this branch of the field is headed, and I think, appropriately so.


If an outlook/watch is issued and not much happens, I think it's just easier for folks to say "bust" even if it's not an accurate term. I used to do that before I learned more about probabilistic forecasts. There should be some way for us to discuss individual events post-mortem. Is "imo, the potential with this setup was not realized" better than using the b-word?


I didn't mean to imply that the word itself is bothersome, just that the sentiment that leads to the reaction is misguided.


Right, that's the whole point. Since convection is highly conditional, assessing one case, regardless of the outcome, does not give you meaningful information. In the aggregate it does, maybe small threats are being under-forecast, maybe there's a bias of over-forecasting tornadoes in New England, etc. I don't know, but I think the idea that because setups are different, you can't assess how forecasters do forecasting convection over large samples is rather myopic.

Probabilistic forecasting has been, by almost every account I can find, a big success. My guess is SPC agrees, as well as NSSL and the pertinent parts of NSF, and several reviewers, who approved the 10-year NSSL Warn-on-Forecast initiative (see also Stensrud et al. 2009 in BAMS). There is a long history of probabilistic forecasting and verification in many fields. As Harold Brooks discusses in one of the links I provided, it took only a short period of probabilistic forecasting before forecasters were able to match their subjective analyses to accurate probabilities (10% forecasts verified ~10% of the time), thereby providing useful information about the chances of severe weather occurring in an area.

I don't dispute that probabilistic outlooks have utility, and part of the reason for this success is the arbitrary nature of how the verification works. You are now saying the size of the contoured probability area matters in verification -- you did not make that claim before. Nevertheless, this still does not make the SPC evaluation of probability as robust and concrete as it would appear. Which is more likely: that forecasters correctly assess potential and these claims are verified by statistics, or that forecasters get lucky more frequently than they are willing to admit and errors in both directions get washed out by the law of large numbers? I'll choose door number two.

I'll give you another example, this time the box is the state of Kansas in all ten events. In the first nine, no tornadoes were observed... and in the last case 80 were observed. In 10% of the outlooks at least one tornado occurred -- but does this mean that the forecaster had a firm understanding in each event, or that on the whole the forecaster understands tornado potential? I would say no, even though your conditions were met. And as I stated before, one can broadly assess the skill of the forecaster, but the evidence is not as concrete or robust as it is made out to be. The funny thing about statistics is that they can be fabulously elegant and deemed to be appropriate... until they're not. This problem is not endemic to meteorology, and in actuality it seems to be more insulated from Gaussian mirages... but it too is not immune.

How do you know this? Have you looked at the reliability diagrams? Are the results just a coincidence?

Yes and somewhat. I again do not find the verification scheme to be as accurate as it is claimed to be. It's no different from economic models that try to indicate how risky some investment is. That's blown up in our faces and shown we do not understand the system as well as we think we do, despite the fact that so many people have become ridiculously rich.

This is not correct. I think there is some confusion based on my (admittedly oversimplified) example, which was for illustrative purposes. Verification uses a grid (since like a model, we can't evaluate the infinite number of points in space), so that the size of a given probability area matters. So for your example, the answer is no, the forecasts were not perfectly reliable and they wouldn't be verified as such because the probability estimates would be binned.

Gotcha, so you did oversimplify it. And guess what, I'd argue that the more simplistic approach was better if you want a more favorable outcome with regard to your verification schemes!

Well, yeah, that's the whole point of having a probabilistic forecast. Would you prefer a deterministic tornado forecast? Maybe just no forecast at all?

These are made-up numbers, right? So they tell us nothing. There are actual CAPE and shear parameter-space graphs in the Preprint link I provided, which discuss what is learned from the real numbers (see Fig. 2 and Fig. 3).

Of course I wouldn't want a deterministic forecast, but I do use the probability outlooks as nothing more than the confidence that the given forecaster(s) has in a given event. I do not believe that the 10% contour actually represents 10%. But hey, nothing wrong with that.

That's fine, of course, your opinion, but I don't know what way you're speaking of, how you would approach the problem, or what you would do differently.

What would I do differently? I think confidence intervals are just as informative and do not require the statistical investigation of probability contours. Additionally, I would admit that I could not reliably gauge the chance of a tornado within 25 miles of a given point. I'm not going to claim that I can accurately state the probability of an event when I cannot precisely say how that event materializes in the first place.

I'll let you have the last word. But I will respectfully bow out so any questions you levy my way will likely go unanswered -- but I eagerly await your response.


Alright, somebody can pretty easily make a verification product... it would be kind of messy to look at but it could work:

1) map all SPC reports

2) calculate the density of reports (since the SPC metrics measure the probability of a report within a 25-mile radius, the verification measure should be # reports per (pi * 25 miles^2))

- In GIS you can draw a circle around an object and also perform an operation to count the number of objects within a shape and the area of that shape.

- so the procedure would be to draw a circle around every report, then draw a shape that traces the outline of all of those circles

3) colorfill the report density product and overlay SPC contours.
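
A rough sketch of steps 1-3 in Python/numpy, assuming the reports come in as latitude/longitude pairs and using a flat-earth distance approximation for brevity (the coordinates below are invented for illustration; the actual verification practice, e.g. smoothed "practically perfect" grids, is more involved):

```python
import numpy as np

# Rough sketch of steps 1-3: grid the domain, then count reports within a
# 25-mile radius of each grid point.  Report locations here are made up.
reports = np.array([[35.2, -97.4], [35.6, -97.1], [36.0, -96.8]])  # (lat, lon) of reports

lats = np.arange(33.0, 38.0, 0.25)
lons = np.arange(-100.0, -95.0, 0.25)
grid_lat, grid_lon = np.meshgrid(lats, lons, indexing="ij")

radius_miles = 25.0
miles_per_deg = 69.0  # flat-earth approximation

count = np.zeros_like(grid_lat)
for rlat, rlon in reports:
    dy = (grid_lat - rlat) * miles_per_deg
    dx = (grid_lon - rlon) * miles_per_deg * np.cos(np.radians(rlat))
    count += np.hypot(dx, dy) <= radius_miles

density = count / (np.pi * radius_miles**2)  # reports per square mile, as in step 2
hit = count > 0                              # grid points with any report within 25 miles
print(f"{int(hit.sum())} of {hit.size} grid points have a report within {radius_miles:.0f} miles")
```

Color-filling density and overlaying the SPC contours would give the comparison map described in step 3; it does nothing about the population-dependent reporting biases raised in the reply below.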


Well it isn't really that easy. Population and other factors need to be considered as well. So many factors come into play when it comes to verification.


not really... the conditions defining a severe thunderstorm either occur or they do not. A tornado is still a tornado if it hits an open field or a major city. The topic is about verifying the SPC maps, and I just think all you need is a map that shows something as close as possible to the SPC forecast for comparison.

This is a casual observation about the SPC products, but it appears that the center of the maximum SPC risk almost never gets a severe report cluster. Typically, severe events are characterized by a few long-lived entities which incur many reports, such as discrete cells or a line with bowing segments, and these rarely hit the middle of the risk. If I were issuing them, I would try to have my forecast probability maximized where I think the most reports will occur. THAT would need population factored in, since if nobody is around to report the storm, it's a false alarm.


I was talking about reports since the exact same storm over a large populated region will likely receive more storm reports than one over the plains of Kansas.
