Clark
(Meteorologist)
Thu Jul 28 2005 02:02 AM
Re: models

As another heads-up, tropical-wise, the FSU Superensemble has been the best-performing model for two years running. Of course, it's not publicly available, so we can't use the output to make forecasts...only the NHC can. The NOGAPS model was best in 2002 and has remained near the top ever since; the UKMET was best in 2001 yet has been one of the worst since then. Its 2002 numbers were dragged down by one storm, but no such excuse can be made for the past two years. The UK Met Office apparently made some changes to the model during that time which haven't worked out quite as well as they would've hoped or expected. The GFS is always near the middle of the pack, and the GFDL is similar (it's run off of the GFS initial conditions). The GFDN, the GFDL run off of the NOGAPS initial conditions, generally performs slightly better. Both are usually either too low or too high with intensity. The Canadian (CMC) and European Centre (ECMWF) models aren't run as much for tropical activity, though the latter does quite well and the former has its moments...particularly with recurving storms.
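For what it's worth, "best performing" here is in terms of verification stats: the great-circle distance between each model's forecast position and the verified best-track position, averaged over all cases at a given lead time (intensity is just the difference in winds). A quick Python sketch of the track-error part -- the model names are real, but the positions are completely made up:

from math import radians, sin, cos, asin, sqrt

def great_circle_nm(lat1, lon1, lat2, lon2):
    """Great-circle (haversine) distance between two points, in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(a))  # mean Earth radius ~3440 nm

# Hypothetical 72-hour forecast positions (lat, lon) vs. the verified best-track fix.
verified = (25.0, -80.0)
forecasts = {
    "GFS":    (24.1, -81.2),
    "GFDL":   (25.6, -79.1),
    "UKMET":  (26.3, -82.4),
    "NOGAPS": (24.8, -80.5),
}

errors = {model: great_circle_nm(*pos, *verified) for model, pos in forecasts.items()}
for model, err in sorted(errors.items(), key=lambda kv: kv[1]):
    print(f"{model:7s} 72-h track error: {err:6.1f} nm")

Do that over a whole season (or two) of forecasts at each lead time and you get the rankings I'm describing above.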

On a global scale, the ECMWF model has been the best for some time. The UKMET model is up there as well, as is the Japanese (JMA) model. The GFS is toward the middle of the pack as far as global models go, but improving. Note that many of the tropical models aren't global models at all -- the GFDL and the various flavors of the WRF and MM5 are limited-area/mesoscale models, while the steering-layer (e.g. BAM series, LBAR, A98E) and statistical (e.g. CLIPER) models are simpler still -- so they aren't relevant to this part of the discussion.

In this decade, the dynamical models (GFS, GFDL, ECMWF, UKMET, and so on) heavily outperform the limited-area/statistical-dynamical models (BAM series, LBAR, etc.) and do even better against the statistical/persistence models. This wasn't always the case. Until the mid-90s, when the global models improved in the tropics to the point of relevance, the statistical-dynamical models were relied upon for track and intensity forecasting; back into the 80s and prior, it came down to forecaster experience, reading the flow regimes (as best they could from water vapor imagery; satellite analyses such as the UWisconsin products weren't around back then), and the statistical/persistence models. Kinda funny how we've come full circle, with a statistical model (the FSU Superensemble) at the head of the pack, though it really is better classified as a dynamical model with statistical modifications.
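Since I keep bringing up the Superensemble: the basic idea (in concept only -- this is nothing like FSU's actual formulation) is to take a training set of past member-model forecasts, regress them against what actually verified, and then apply those weights, rather than a straight equal average, to the current forecasts. A toy Python sketch with invented numbers, using latitude alone to keep it short:

import numpy as np

# Rows are past forecast cases from a training period; columns are 72-h latitude
# forecasts from three hypothetical member models.
past_forecasts = np.array([
    [24.5, 25.1, 23.9],
    [27.0, 27.8, 26.4],
    [30.2, 29.6, 31.1],
    [22.1, 22.9, 21.7],
    [28.4, 28.0, 29.2],
])
# Verified best-track latitudes for the same cases.
verified = np.array([24.8, 27.3, 30.0, 22.5, 28.6])

# Fit per-model weights plus a bias term by least squares -- the "statistical" part.
X = np.column_stack([past_forecasts, np.ones(len(verified))])
weights, *_ = np.linalg.lstsq(X, verified, rcond=None)

# Apply the trained weights to today's member forecasts -- the member models
# supply the dynamics, the regression corrects their systematic biases.
current = np.array([26.2, 26.9, 25.8])
consensus = np.dot(np.append(current, 1.0), weights)
print(f"Weighted consensus latitude: {consensus:.1f}")

That's why I call it a dynamical model with statistical modifications: the inputs are the dynamical members, the statistics just decide how much to trust each one.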

Obviously, models change from year to year (and occasionally more frequently than that), whether in terms of their resolution, the physics they employ, or even how they ingest data. That's why it's important to treat past performance as just one tool until you can determine how a given model is handling a given storm or a given season/environmental regime. It's also why the FSU Superensemble -- heavily based upon prior model performance -- tends to struggle early in the season; changes to the member models can't always be accounted for in its training, so the first few storms' forecasts aren't as accurate as they could be.
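One informal way to keep tabs on how a given model is doing right now is a running, recency-weighted error for each one, so this season's cases count more than last season's. Another toy Python sketch -- the 0.3 weight and the error numbers are made up:

# Exponentially weighted running track error per model: recent cases count more,
# so a model that changed over the winter gets re-evaluated quickly.
ALPHA = 0.3  # weight given to the newest case (assumed value)

def update_running_error(running, model, new_error_nm):
    """Blend the newest verified error into a per-model running score."""
    prev = running.get(model)
    running[model] = new_error_nm if prev is None else ALPHA * new_error_nm + (1 - ALPHA) * prev
    return running

running = {}
# Hypothetical sequence of verified 72-h errors (nm) as the season goes on.
for model, err in [("GFS", 140), ("UKMET", 220), ("GFS", 95), ("UKMET", 180), ("GFS", 110)]:
    update_running_error(running, model, err)

for model, score in sorted(running.items(), key=lambda kv: kv[1]):
    print(f"{model:6s} running 72-h error: {score:5.1f} nm")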

Hope this sheds some light on the model questions...


