Simon Mason from the IRI is visiting South Africa for a project with the South African Weather Service (SAWS). I had the opportunity to go to Pretoria and ask the expert himself a few questions. Below are a few things I learnt on my trip:
There are three attributes that one must consider in the verification of forecasts:
• Resolution: conditioning on the forecasts. If you forecast something, does it happen? A forecast is only useful if it has some resolution. It is difficult to measure (you need lots of data), so it is better assessed over regions than over individual grid boxes.
• Discrimination: conditioning on the outcomes. If it rains, did we forecast it? This needs less data than resolution, so it can be mapped spatially.
• Reliability: measures confidence and bias. It is not useful on its own; a verification should combine it with resolution or discrimination, i.e. (Resolution ∪ Discrimination) ∩ Reliability. A small numerical sketch of these quantities follows below.
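As a concrete aside, here is a minimal sketch (my own, not from Simon's notes) of how these quantities can be computed for binary-event probability forecasts: the standard reliability-resolution-uncertainty decomposition of the Brier score, plus the area under the ROC curve as a simple measure of discrimination. The binning, function names and toy data are assumptions for illustration only.

# Minimal sketch (illustrative): Brier score decomposition and ROC area
# for binary-event probability forecasts.
import numpy as np

def brier_decomposition(p, obs, n_bins=10):
    """Return (reliability, resolution, uncertainty) of the Brier score.

    p   : forecast probabilities in [0, 1]
    obs : observed binary outcomes (0 or 1)
    """
    p, obs = np.asarray(p, float), np.asarray(obs, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    # Assign each forecast probability to a bin.
    idx = np.clip(np.digitize(p, edges) - 1, 0, n_bins - 1)
    obar = obs.mean()                      # climatological base rate
    rel = res = 0.0
    for k in range(n_bins):
        mask = idx == k
        n_k = mask.sum()
        if n_k == 0:
            continue
        p_k = p[mask].mean()               # mean forecast in bin k
        o_k = obs[mask].mean()             # observed frequency in bin k
        rel += n_k * (p_k - o_k) ** 2      # reliability: forecast vs outcome
        res += n_k * (o_k - obar) ** 2     # resolution: outcome vs climatology
    n = len(p)
    unc = obar * (1.0 - obar)              # uncertainty of the observations
    return rel / n, res / n, unc

def roc_area(p, obs):
    """Discrimination: probability that a randomly chosen event received a
    higher forecast probability than a randomly chosen non-event."""
    p, obs = np.asarray(p, float), np.asarray(obs, int)
    events, non_events = p[obs == 1], p[obs == 0]
    # Mann-Whitney formulation of the area under the ROC curve.
    greater = (events[:, None] > non_events[None, :]).mean()
    ties = (events[:, None] == non_events[None, :]).mean()
    return greater + 0.5 * ties

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    p = rng.uniform(size=1000)                       # toy forecast probabilities
    obs = (rng.uniform(size=1000) < p).astype(int)   # toy calibrated outcomes
    rel, res, unc = brier_decomposition(p, obs)
    print(f"reliability={rel:.3f} resolution={res:.3f} uncertainty={unc:.3f}")
    print(f"Brier score = rel - res + unc = {rel - res + unc:.3f}")
    print(f"ROC area (discrimination) = {roc_area(p, obs):.3f}")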
Defining a region of study:
It is worth starting at 20 degrees of latitude and longitude on each side of the area of interest, and then narrowing down by comparing diagnostic maps from the GCM with the downscaled fields. This is easy to do with CCA. If geopotential height is used as a predictor, the region may need to be bigger to include larger-scale synoptic conditions.
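As a tiny illustration (again my own sketch, not from the discussion), the helper below pads a region of interest by 20 degrees of latitude and longitude on each side to give the suggested starting domain. The coordinates in the example are hypothetical.

# Illustrative sketch: build the starting domain by padding the area of
# interest with 20 degrees of latitude and longitude on each side.
def starting_domain(lat_min, lat_max, lon_min, lon_max, pad_deg=20.0):
    """Return (lat_min, lat_max, lon_min, lon_max) of the padded domain."""
    return (
        max(lat_min - pad_deg, -90.0),
        min(lat_max + pad_deg, 90.0),
        lon_min - pad_deg,   # longitudes may wrap across the dateline;
        lon_max + pad_deg,   # handling that is left out of this sketch
    )

# Example: a rough box over South Africa (hypothetical coordinates).
print(starting_domain(-35.0, -22.0, 16.0, 33.0))
# -> (-55.0, -2.0, -4.0, 53.0)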
Some extra literature for my spare time:
[1] R. Hagedorn and L. A. Smith. Communicating the value of probabilistic forecasts with weather roulette. Meteorological Applications, 2008.
[2] S. J. Mason. Recommended procedures for the verification of operational seasonal climate forecasts. 2010.
[3] A. H. Murphy and D. S. Wilks. A case study of the use of statistical models in forecast verification: Precipitation probability forecasts. Weather and Forecasting, 13:795–810, September 1998.
[4] M. S. Roulston and L. A. Smith. Evaluating probabilistic forecasts using information theory. Monthly Weather Review, 130:1653–1660, June 2002.
[5] M. K. Tippett and A. G. Barnston. Skill of multimodel ENSO probability forecasts. Monthly Weather Review, 136:3933–3946, 2008.