Snowlover11 (Posted 18 hours ago):
Rigged!
WeatherGeek2025 (Posted 18 hours ago):
Who won?
hudsonvalley21 (Posted 18 hours ago):
Congrats Anthony, and thanks for your efforts, Roger!
MJO812 (Posted 14 hours ago):
I still can't believe I won. Thanks, everyone.
Roger Smith (Posted 13 hours ago, author):
I have checked the full calendar-day climate reports for the 23rd, and with one very minor exception the somewhat debatable numbers are confirmed. Annoyingly, EWR added 0.1" to their total, but NYC remains 19.7" despite the potential for an additional 1-2" after the early 1 p.m. report. I don't yet have any explanation for EWR showing up as 27.1" in the PNS when their two daily amounts are 8.1 and 17.1 (previously 17.0). I will edit the table for those minuscule EWR differentials; I doubt it makes much difference to any rankings.

Earlier in the process, I had hudsonvalley21 slightly ahead of MJO812 because of the higher EWR estimate. There were other changes after that: at some value higher than 27", hudsonvalley21 would have a lower total squared error, being a bit closer to any high value. That's why that part changed.

I am going to check the CF6 data, usually published around 5 a.m. for NYC-area stations, to see if anything changes there -- the CF6 document was stated to be the ultimate guide to scoring this contest. I will either edit in a "no changes found" note or tell you what changes I find.

Later edit -- no changes to any contest snowfall values in the CF6 tables. The two-day storm totals remain 19.7, 22.5, 29.1, 20.1, 25.2.
Roger Smith (Posted 3 hours ago, author):
I ran the scoring program using 21.0" for NYC instead of 19.7" and it had very little effect on the ranks, so it won't matter to the contest if they change that value within that range.

I then ran the scoring program with 27.1" at EWR. That would change the scoring order at the top: hudsonvalley21 would then have a 2.8" error at EWR and MJO812 5.9". The lead changes hands at any value above 26.6", and the NYC value makes no difference to this, since their forecasts are similar there. The late forecast from wannabehippie still finishes ahead of both, and would do so until around 28". MJO812 would still have a lower total error than hudsonvalley21; that is baked in, as they both continue to add the same differentials at any values.

The rest of the top ten (on the main contest scoring metric, sum of squared errors) with the two station changes would be:

3. Nswx516
4. hmdeutsch
5. kat5hurricane
6. dmillz25
7. UKweathergeek
8. NWS (tied)
8. CPcantmeasuresnow (tied)
9. jaysoner
10. Weathergeek2026

In that list, only kat5hurricane makes a significant move up the leaderboard, due to their forecast being closest to the EWR total. (Among the top ten, TriPOL has a higher EWR forecast of 26.0" but was not able to move up in rank.)

I would not want to revise the final standings much later than today if, one day later this month, there is any indication of changes in these values. So I think it's probably a good idea to make the point that we had two very accurate forecasts that are not separated by much on standard statistical measurements at any values in the ranges likely to be explored by later data.
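The sensitivity check described above (sweeping one station's verified total and re-scoring to see where the squared-error lead changes hands) can be sketched as follows. The forecast vectors are invented stand-ins, not the actual entries: the two EWR forecasts (24.3" and 21.2") are back-computed from the stated 2.8" and 5.9" errors at a 27.1" outcome, the other-station forecasts are pure assumptions, and the station assignment of the verified totals is also an assumption. The crossover this toy finds will therefore not match the 26.6" figure from the real entries.

```python
# Toy re-run of the sensitivity check: sweep the EWR verified total and
# find where one entry's sum of squared errors drops below the other's.
# All forecast values are hypothetical stand-ins.

def sum_sq_error(forecast, actual):
    """Main contest metric: sum over stations of (forecast - actual)^2."""
    return sum((f - a) ** 2 for f, a in zip(forecast, actual))

# Verified totals at the four non-EWR stations (quoted in the thread);
# which number belongs to which station is not spelled out there.
OTHER_ACTUAL = [19.7, 29.1, 20.1, 25.2]

# (forecasts at the other four stations, EWR forecast) -- invented,
# except the EWR forecasts, back-computed from the stated errors.
entry_a = ([20.0, 28.0, 21.0, 25.0], 24.3)   # stand-in for hudsonvalley21
entry_b = ([19.7, 29.1, 20.1, 25.2], 21.2)   # stand-in for MJO812

def total_sse(entry, ewr_actual):
    others, ewr_forecast = entry
    return sum_sq_error(others + [ewr_forecast], OTHER_ACTUAL + [ewr_actual])

# Scan EWR outcomes from 20.0" to 29.9" in 0.1" steps; report the first
# value at which entry A overtakes entry B on squared error.
crossover = next(
    v / 10 for v in range(200, 300)
    if total_sse(entry_a, v / 10) < total_sse(entry_b, v / 10)
)
print(f'lead changes hands above {crossover}" at EWR')
```

With the real entries' five-station forecasts substituted in, the same scan would locate the 26.6" crossover quoted above.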
Roger Smith (Posted 3 hours ago, author):
Later update: the CF6 for EWR has been edited to show the 27.2" outcome. That means the top ten for the contest is as shown above. I will wait for the NYC debate to play itself out before editing the table, but in any case I will declare MJO812 and hudsonvalley21 co-winners, because of these fluctuations and because MJO812 retains the lowest total error if not the lowest total squared error. Each of them wins one of the ranking metrics. I won't keep posting about this; just check back in a few days and see what the table looks like then. Maybe all five values will change and I will have made use of MAID (it's a Canadian joke).