Ed Dunham - Administrator
(Former Meteorologist & CFHC Forum Moderator; Ed passed away on May 14, 2017)
Sat Apr 07 2007 07:46 PM
Are They Reliable?

Clark's latest blog touched on something I've been looking into for the past couple of months, so it seemed like an appropriate time for a follow-on article. Many of us like to take a shot at a long-range forecast of expected tropical cyclone activity in the Atlantic basin for the upcoming season. Clark covered some of the parameters examined by those who participate in this yearly exercise (both pro and novice). I use some of them (ENSO, MJO, longwave pattern, SSTs, etc.), and I'm sure that you use some of them also. Sometimes even a gut hunch can produce what turns out to be a highly accurate forecast. This blog takes a look at recent history to see whether any accuracy can be found in these forecasts when compared against actual basin activity.

The short answer to the title question is a rather decisive 'NO'. They are not accurate within a reasonable expectation, and they have not improved any since the start of this century - thus they are not very reliable for those who use these types of forecasts: Emergency Management planners, insurance companies, transportation and agricultural agencies, aid and response organizations. Yet many of these organizations and companies have grown to expect this type of input as a starting point for their own seasonal planning. The general public also has an expectation of accuracy in meteorological forecasts from those of us who labour in this youngest of the natural sciences. If I forecast a low temperature tomorrow morning between 20 and 40 degrees for a specific location, I'd be run out of town. Even a forecast of 30 to 35 will not meet this public expectation - is it going to freeze or not?!? But if I refine my forecast and call for a low temperature of 30-32 degrees, the public will generally be satisfied. Even if the forecast does not verify, I've given them something to work with - bring in the sensitive potted plants or cover them, put the pets in a warm place, watch out for ice on the roads in the morning, and so on.

The public expects the same kind of accuracy when it comes to the degree of activity anticipated for a tropical cyclone season - and our skill at this type of forecast is still quite poor. Clark has covered two of the prime sources for these forecasts: Colorado State University (CSU) and Tropical Storm Risk (TSR). NOAA also issues a seasonal outlook, but they hedge their bets by forecasting a range of activity rather than a single number (maybe they are the smart ones), so measuring NOAA's skill level is not as precise an exercise as it is for CSU and TSR. Since NOAA does not archive their previous forecasts, I did not include them in this analysis.

The following statistics examine the tropical cyclone activity forecasts made by CSU and TSR over the past seven seasons (2000-2006). The April forecast was used whenever available (otherwise, the forecast closest to the April timeframe). The forecast and actual figures represent the total number of Named Storms, Hurricanes, and Major Hurricanes (Category 3 or higher). The Total Error is simply the combined deviation of all three categories from the actual recorded numbers. A '+' indicates an overforecast and a '-' indicates an underforecast.
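
To make the arithmetic concrete, here is a minimal Python sketch (Python is simply a convenient illustration here, not part of either group's method) that reads the 'combined deviation' as the signed sum of forecast minus actual across the three categories, worked through with the 2005 CSU numbers from the table below:

# Minimal sketch of the error arithmetic described above.
# Assumption: 'combined deviation' is read as the signed sum of
# (forecast - actual) over Named Storms, Hurricanes, and Major Hurricanes.

def category_errors(forecast, actual):
    # Per-category deviation: '+' = overforecast, '-' = underforecast.
    return tuple(f - a for f, a in zip(forecast, actual))

def total_error(forecast, actual):
    # Combined deviation across all three categories.
    return sum(category_errors(forecast, actual))

# Worked example: CSU's 2005 forecast (13/7/3) versus the actual season (28/15/7).
csu_2005_forecast = (13, 7, 3)    # Named Storms / Hurricanes / Major Hurricanes
actual_2005 = (28, 15, 7)

print(category_errors(csu_2005_forecast, actual_2005))     # (-15, -8, -4)
print(total_error(csu_2005_forecast, actual_2005))         # -27 (Total Error)
print(category_errors(csu_2005_forecast, actual_2005)[0])  # -15 (Named Storms Error)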

CSU/Year...Forecast Activity...Actual Activity........Total Error.....Total 'Named Storms' Error
2000...........11/7/3....................14/8/3....................-4..............................-3
2001...........10/6/2....................15/9/4....................-10............................-5
2002...........12/7/3....................12/4/2....................+4..............................0
2003...........12/8/3....................16/7/3....................-5..............................-4
2004...........14/8/3....................15/8/6....................-4..............................-1
2005...........13/7/3....................28/15/7..................-27...........................-15
2006...........17/9/5....................10/5/2....................+14...........................+7
2007...........17/9/5.........................?

TSR/Year...Forecast Activity...Actual Activity........Total Error.....Total 'Named Storms' Error
2000.............9/5/2....................14/8/3.....................-5..............................-4
2001...........11/6/2....................15/9/4.....................-9..............................-4
2002...........11/6/2....................12/4/2....................+3..............................-1
2003...........11/6/2....................16/7/3.....................-7..............................-5
2004...........13/7/3....................15/8/6.....................-6..............................-2
2005...........14/8/4....................28/15/7...................-24...........................-14
2006...........15/8/4....................10/5/2....................+10............................+5
2007...........17/9/4.........................?

Perhaps because of a conservative nature (not necessarily a bad thing), both organizations underforecast the season in 5 of the 7 years, as indicated by the Total Error. Of the 14 forecasts in this timeframe, only one hit the total number of Named Storms exactly (CSU in 2002), and only one hit the total number of Hurricanes exactly (CSU in 2004). If a Total Error of 4 or less is viewed as acceptable, then CSU made 3 good forecasts and TSR made 1 good forecast in seven years - and that is not a good 'skill' score. Looking at just Named Storms and using a tolerance of 2, CSU made 2 good forecasts and so did TSR - an accuracy of about 29% (2 of 7).
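
For anyone who wants to reproduce that scoring, here is a short Python sketch (again, just an illustration) that counts 'good' forecasts directly from the error columns in the tables above, using those same tolerances:

# Counting 'good' forecasts from the error columns in the tables above.
# Tolerance of 4 on the Total Error, tolerance of 2 on the Named Storms error.

csu_total_errors = [-4, -10, +4, -5, -4, -27, +14]   # 2000-2006
tsr_total_errors = [-5, -9, +3, -7, -6, -24, +10]
csu_named_errors = [-3, -5, 0, -4, -1, -15, +7]
tsr_named_errors = [-4, -4, -1, -5, -2, -14, +5]

def good_forecasts(errors, tolerance):
    # A forecast counts as 'good' if its absolute error is within the tolerance.
    return sum(1 for e in errors if abs(e) <= tolerance)

print(good_forecasts(csu_total_errors, 4), good_forecasts(tsr_total_errors, 4))  # 3 1
print(good_forecasts(csu_named_errors, 2), good_forecasts(tsr_named_errors, 2))  # 2 2
print(round(100 * good_forecasts(csu_named_errors, 2) / 7))  # 29 (percent)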

For the same 7 years, I didn't do any better. I hit the total number of Named Storms once (also in 2002) and the total number of Hurricanes once (in 2003), and had 2 good forecasts out of the 7 (2002 and 2003). I did manage a slightly more accurate forecast in 4 of the 7 years, but not necessarily a good forecast. Given that the Major Hurricane category is a subset of the Hurricane category (which is itself a subset of the Named Storms category), and therefore the numbers involved are smaller, I felt that perhaps the overall 'skill' in this category would improve - but it didn't. CSU hit the number of Majors once (in 2003), TSR also hit it once (in 2002), and I was a bit luckier with 4 correct forecasts in this category (2000, 2002, 2003 and 2006).

Let me emphasize the word 'luck' because that's all it really was - we still don't have the understanding or the capability to provide this type of forecast with anything that remotely approaches a high degree of accuracy. We may never attain that level of accuracy (some parameters are probably well beyond forecast capability) - but we'll keep trying. Someday maybe one of you will discover something that consistently produces a more accurate forecast - and thus advance the science.
Cheers,
ED


