MegaMike

Meteorologist
  • Posts

    474
  • Joined

  • Last visited

About MegaMike

Profile Information

  • Four Letter Airport Code For Weather Obs (Such as KDCA)
    KOWD
  • Gender
    Male
  • Location:
    Wrentham, MA

  1. I completely agree. It was only recently made operational (Feb. 2025), and its primary purpose (imo, at the moment) is to provide an efficient (few resources, fast to simulate) medium-/long-range ensemble... I consider it a less accurate version of the CFS, honestly. They're years, if not decades, away from making the AIFS comparable to any traditional NWP modeling system, and I'm not fully sold on that even being a possibility... I'll take it seriously when the AIFS outperforms the IFS at the surface, not just at 500-50 mb. Vendors will promote any modeling system to stand out, unfortunately... At this range, I'd primarily consider the ECMWF, GFS, CMC, ICON, and UKMET (with more emphasis on their ensembles). Maybe look at trends of the AIFS for S&Gs.
  2. I respect the effort. It takes a long time to do an analysis on one storm. You did it for 200+ events and manually conducted/plotted an interpolation. That's wild.
  3. Pretty cool looking! Consensus is that it's the exhaust plume from the European Space Agency's Ariane 6 rocket (launched from Kourou, French Guiana).
  4. Absolutely not. Maybe it performed well for this one event, but that doesn't mean it's better than traditional NWP. You really need to conduct a thorough evaluation at the surface and aloft (for forcing variables) to draw such conclusions. As an example, it's possible for something to be right for the wrong reason; you wouldn't know unless you evaluated it... So, if AI did well with forcing, relative to NWP, over a duration of 1 year, then you can entertain the idea. This is just imo, but we're years, if not decades, away from this. We likely need to significantly improve data assimilation for this to occur.
  5. Narragansett is look'n pretty wavy: https://northeastsurfing.com/narragansett-cam/
  6. More! (Tip's writing ability) x (Wiz's excitement over New England severe weather) I'm not a fan of heat, so I ran a script to figure out the median date of the maximum summer daily temperature via GHCN-Daily .csv files (TMAX field). Based on what I ran (32 records, 1 per year, 1994-2025), the median date is ~Jul. 13th for KBOX (labeled 'NWS BOSTON/NORTON' at https://www.ncei.noaa.gov/pub/data/ghcn/daily/ghcnd-stations.txt). After July 24th, there's a good chance (75%) KBOX has already experienced its warmest day of the year. Just for S&Gs.
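The script described above can be sketched roughly like this. This is my own minimal reconstruction, assuming the station's GHCN-Daily file has already been parsed down to (ISO date, TMAX) pairs; the real .csv files carry more columns (and TMAX in tenths of °C), and the function name is hypothetical:

```python
import statistics
from datetime import date

def median_peak_doy(rows):
    """rows: iterable of (iso_date_str, tmax) pairs for one station.
    Returns the median day-of-year on which each year's max TMAX fell."""
    best = {}  # year -> (tmax, day_of_year) of the warmest day so far
    for d_str, tmax in rows:
        d = date.fromisoformat(d_str)
        if d.year not in best or tmax > best[d.year][0]:
            best[d.year] = (tmax, d.timetuple().tm_yday)
    return statistics.median(doy for _, doy in best.values())
```

Feed it ~30 years of records and the result is the median day-of-year of the annual peak, which you can convert back to a calendar date.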
  7. Definitely! If CM1 missed this one (El Reno), it likely can't resolve tornadoes unless (maybe) you beef up the model specs. The amount of resources needed just to run that simulation still gets me... A quarter of a trillion grid points, for a 42-minute simulation (time step = 0.2 s), spanning an area of ~5,600 mi^2 (~6x the size of RI), and it took their cluster 3 days to run. That's crazy. Imagine running that for the entire U.S.?
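A quick back-of-envelope check on those numbers (this is just arithmetic on the figures quoted above, not anything from the paper itself):

```python
# Back-of-envelope on the El Reno CM1 run described above.
grid_points = 0.25e12            # ~a quarter of a trillion grid points
dt = 0.2                         # model time step (s)
sim_minutes = 42                 # simulated duration

steps = round(sim_minutes * 60 / dt)   # number of time steps
updates = grid_points * steps          # total grid-point updates
print(f"{steps:,} steps, {updates:.2e} grid-point updates")
```

That works out to 12,600 time steps and on the order of 3e15 grid-point updates, before you even count the work done per point per step.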
  8. Right? Super cool! I believe they used VAPOR to create most/all of their graphics. Agreed in that I doubt they'll be able to replicate their success for most other tornadoes.
  9. My advisor would always tell me that! Someone did manage to simulate a tornado (the 2011 El Reno EF5) using a modeling system intended for very fine-scale atmospheric phenomena (CM1): https://www.mdpi.com/2073-4433/10/10/578 If you wanted to, had the resources (19,600 nodes -> 672,200 cores & 270 TB worth of space), and had a lot of time, you could run the simulation too! In serial mode (a single CPU), it'd take decades for this simulation to complete. Really, we have the modeling systems to run highly accurate simulations, but unfortunately, data assimilation and (relatively) limited resources are inhibiting us. A nice video of the results:
  10. Logically, it doesn't make sense to me: let's bring in data scientists to create a standalone meteorological modeling system lol. I'm sure it'll get better (build dat' training dataset), but for now, I'd say they're 1-2 decades away from making anything comparable to traditional NWP. I still think using AI to bias-correct ICs/BCs is the way to go; I know that has merit. Yea, it's a bit misleading... They used HRRR analysis as ground truth to conclude that 'HRRR-Cast is comparable to HRRR...' I'd still rather see evaluations/comparisons at METAR/radiosonde sites.
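The bias-correct-the-ICs idea can be shown with a toy sketch. This is my own illustration with made-up numbers and a simple mean-bias correction standing in for the "AI" part; any real scheme would learn something far richer than a single offset:

```python
# Toy sketch of IC bias correction: learn a correction from past
# forecast-minus-analysis errors, then apply it to new initial
# conditions. All values are hypothetical (e.g., 2 m temperature, C).
past_forecasts = [20.1, 21.5, 19.8, 22.0]   # model IC values
past_analyses  = [19.0, 20.2, 18.9, 20.7]   # verifying analysis values

# Mean warm bias of the model relative to analysis
bias = sum(f - a for f, a in zip(past_forecasts, past_analyses)) / len(past_forecasts)

def correct_ic(value):
    """Subtract the learned mean bias from a new IC value."""
    return value - bias
```

The point is that the physics-based model still does the forecasting; the learned component only nudges its inputs toward the analysis.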
  11. New model development from NOAA: The Global Systems Laboratory is set to release an experimental, high-resolution AI product called 'HRRR-Cast.' It's been trained on 3 years' worth of HRRR analysis data... Lately, computational efficiency is starting to dominate the modeling world, and I'm incredibly skeptical of it. According to https://arxiv.org/html/2507.05658v1 (assuming this is the same modeling system): "HRRRCast outperforms HRRR at the 20 dBZ threshold across all lead times, and achieves comparable performance at 30 dBZ." "HRRRCast can produce ensembles that quantify forecast uncertainty while remaining far more computationally efficient than traditional physics-based ensembles." Big caveat: the study used HRRR analysis data as 'ground truth.' Therefore, it's no surprise that HRRR-Cast, and their other AI model, performed well compared to diagnostic HRRR/forecast output. Until I see AI outperform conventional NWP at METAR/radiosonde sites (surface and aloft), I'm not going to get excited.
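For readers unfamiliar with what "outperforms at the 20 dBZ threshold" means: reflectivity skill at a threshold is typically scored with contingency-table metrics. A minimal sketch of one such score, the Critical Success Index (the function and sample values are mine, not from the paper):

```python
def csi(forecast, observed, threshold):
    """Critical Success Index for exceeding a reflectivity threshold.
    forecast, observed: paired sequences of dBZ values at grid points."""
    hits = misses = false_alarms = 0
    for f, o in zip(forecast, observed):
        f_ex, o_ex = f >= threshold, o >= threshold
        if f_ex and o_ex:
            hits += 1            # forecast and observed both exceed
        elif o_ex:
            misses += 1          # observed exceeded, forecast didn't
        elif f_ex:
            false_alarms += 1    # forecast exceeded, observed didn't
    denom = hits + misses + false_alarms
    return hits / denom if denom else float("nan")
```

The caveat in the post applies directly: if `observed` is HRRR analysis rather than independent radar/METAR data, a model trained on HRRR analysis starts with a built-in advantage.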
  12. Embrace the NAM while you can, weenies. Change is coming.
  13. It's an experimental model that doesn't predict moisture/heat flux based on fluid dynamics. As a result, it's totally unreliable in my opinion... It's probably the reason why the NWS doesn't refer to it in their forecast discussions. Can't wait to see how it performs during the warm season. With limited training data on hurricanes, I expect it to perform terribly with tropical disturbances.
  14. I agree with you both. To evaluate snowfall, you really need to evaluate SWE as well. For that matter, you'd need to evaluate the forcing fields too (to ensure SWE was predicted accurately for the right reasons). If SWE was underpredicted but a snowfall algorithm performed well, that algorithm isn't showing accuracy... it's showing a bias. Unfortunately, snowfall evaluations are tricky because of gauge losses wrt observations. Not everyone measures the same either... Can of worms, snowfall is. A met mentioned this earlier too, but the more dynamic an algorithm is, the more likely errors are to compound. The Cobb algorithm is logically ideal for snowfall prediction, but compounding error throughout all vertical layers of the atmosphere likely inhibits its accuracy. Snowfall prediction sucks, which is probably why there are only a handful of publications. Otherwise, these vague algorithms wouldn't be widely used by public vendors... It's the bottom of a very small barrel.
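The "accuracy vs. bias" point can be made concrete with hypothetical numbers: an SWE error and a snow-to-liquid ratio (SLR) error canceling each other out. All values here are made up for illustration:

```python
# Hypothetical: model underpredicts SWE by 30%, but an inflated SLR in
# the snowfall algorithm lands on the "right" total anyway.
obs_swe, obs_slr = 1.0, 10.0        # 1.0" of liquid at a true 10:1 ratio
obs_snow = obs_swe * obs_slr        # 10.0" of snow observed

model_swe = 0.7                     # SWE underpredicted by 30%
algo_slr = 14.3                     # biased ratio compensates
fcst_snow = model_swe * algo_slr    # ~10.0" -- right for the wrong reason
```

Verify snowfall alone and this forecast scores well; verify SWE too and the compensating bias is exposed.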
  15. I've never heard of Spire Weather before, but based on their website, it looks like it's another AI modeling system (someone correct me if I'm wrong). It doesn't take much to run an AI model... Especially since the source code for panguweather, fourcastnet, and graphcast is available online for free: https://github.com/ecmwf-lab/ai-models My recommendation: if they don't provide evaluations or modeling specifications, don't use it.