The following gives a brief overview of research I have conducted into observational fog prediction, which is the subject of a recently submitted article: Understanding and Reducing False Alarms in Observational Fog Prediction [under revision].
OBSERVATIONAL FOG PREDICTION
There are two broad categories of weather forecasting: numerical (using computer simulations) and observational. Observational fog prediction relies on diagnosing the likelihood of fog forming at a given time based on current observations and how close they are to being "fog-favourable" (i.e., likely to lead to saturation)*. One of the first to outline a method of fog prediction was Major G. I. Taylor who, in 1917, published an article titled "The Formation of Fog and Mist". In it, he outlined a method for predicting fog during the coming night based on observations at 8 pm. More recently, French researchers developed a statistical method (referred to here as the M14 method after the lead author, Laurent Menut, and the year, 2014) which assigns a likelihood of fog occurrence from 0 to 1 by comparing observations of key variables to pre-defined thresholds (see the references below).
*Interestingly, fog isn't just a predicted phenomenon, but also a predictive phenomenon! In ancient Greece, the presence of fog was used to forecast good weather in the coming day. We now know this is because radiation fog is often associated with high pressure systems.
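To give a feel for how such a threshold-based score works, here is a minimal sketch that computes the fraction of fog-favourable criteria currently met. The variable names and threshold values below are illustrative placeholders, not the published M14 thresholds (which use specific variables and values from Menut et al., 2014).

```python
# Sketch of a threshold-count fog score in the spirit of the M14 method.
# NOTE: the variables and thresholds below are hypothetical examples only.

def fog_likelihood(obs, criteria):
    """Return the fraction of fog-favourable criteria met (0 to 1)."""
    met = sum(1 for name, check in criteria.items() if check(obs[name]))
    return met / len(criteria)

criteria = {
    "relative_humidity": lambda rh: rh > 90.0,  # near saturation (%)
    "wind_speed": lambda ws: ws < 3.0,          # light winds (m/s)
    "cooling_rate": lambda dt: dt < 0.0,        # surface cooling (K/h)
}

obs = {"relative_humidity": 95.0, "wind_speed": 1.5, "cooling_rate": -0.4}
print(fog_likelihood(obs, criteria))  # all three criteria met -> 1.0
```

A likelihood near 1 then flags a fog-favourable night; the real method's skill depends entirely on how well the thresholds are chosen, which is the subject of the optimization discussed below.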
The Problem of False Alarms
When performing any forecast, the main goal is to get it right. But "right" is often a balance between capturing as many events as possible while not over-predicting. For example, if I were to predict fog every single night, I would never miss a fog event. That, however, is not a good forecast, as I would be falsely predicting fog far more often than I was correctly predicting it! So, forecasts can be measured in terms of both their hit rate (the fraction of true events predicted correctly) and their false alarm rate (the fraction of non-events incorrectly predicted). A good forecast is then one with as high a hit rate as possible, while having the minimum number of false alarms (ideally, 100% and 0%, respectively).
The major limitation of statistical/observational forecasting methods is that they are prone to very high false alarm rates. With the M14 method, for example, one can obtain hit rates above 90%, but at the cost of false alarm rates of around 40%. That means forecasters can have little confidence in the predictions made. Conversely, if one wants fewer false alarms (increased confidence), it immediately means a reduction in hit rate (more missed events).
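In code, both scores can be computed from the contingency table of forecast/observed yes-no pairs. A minimal sketch with made-up data:

```python
def hit_and_false_alarm_rates(forecast, observed):
    """Hit rate = hits / (hits + misses);
    false alarm rate = false alarms / (false alarms + correct negatives)."""
    pairs = list(zip(forecast, observed))
    hits = sum(f and o for f, o in pairs)            # fog predicted, fog seen
    misses = sum(not f and o for f, o in pairs)      # fog missed
    false_alarms = sum(f and not o for f, o in pairs)  # fog predicted, none seen
    correct_negs = sum(not f and not o for f, o in pairs)
    return hits / (hits + misses), false_alarms / (false_alarms + correct_negs)

# Made-up nightly fog forecasts vs. what was actually observed:
forecast = [True, True, True, False, True, False]
observed = [True, True, False, False, True, True]
hr, far = hit_and_false_alarm_rates(forecast, observed)
print(hr, far)  # 0.75 0.5: 3 of 4 events hit; 1 of 2 non-events falsely flagged
```

Note that the false alarm rate here is normalized by the number of non-events; the perfect forecast of the text corresponds to hr = 1.0 and far = 0.0.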
Prediction performance for the M14 method. The first three bars show previously published threshold sets (Menut et al., 2014; Román-Cascón et al., 2016). The remaining bars show threshold sets optimized with increasing strictness (accepting more risk of missed events in exchange for higher confidence in a positive prediction), with performance shown for forecast lead times from six hours down to one hour.
Tuning the Prediction to Different Needs Through Optimization
A truly perfect prediction is practically impossible, but a good prediction is achievable. The problem is that "good" is subjective, depending on the user of the forecast, with a trade-off between capturing every event and the permissible level of risk in a forecast. It is conceivable that in some scenarios overprediction is desired (i.e., there can be little to no risk of an event being missed). Conversely, some level of missed events may be acceptable when operations or needs are able to adjust. For example, this might occur at an airport where procedures are in place to adapt to inclement weather, and the financial loss of altering schedules for an incorrect fog forecast could be greater than that of delays caused by an occasional unforeseen fog event.
Testing different optimization schemes, the M14 method can easily be tuned to these different needs, with thresholds optimized according to a balance between confidence and risk. One of the key results is that even a slight increase in accepted risk (allowing a small drop in hit rate) leads to a much greater increase in confidence (a large drop in false alarm rate).
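One simple way to implement such tuning is a grid search over a threshold, maximizing the hit rate subject to a user-chosen false alarm ceiling. The sketch below does this for a single hypothetical relative humidity threshold on synthetic data; it is an illustration of the idea only, not the multi-variable optimization used in the study.

```python
def tune_rh_threshold(rh_values, fog_observed, max_far):
    """Pick the RH threshold (%) with the best hit rate whose false alarm
    rate stays at or below max_far. Returns (threshold, hit_rate, far)."""
    best = None
    for thresh in range(80, 101):  # candidate thresholds: 80% to 100%
        forecast = [rh >= thresh for rh in rh_values]
        pairs = list(zip(forecast, fog_observed))
        hits = sum(f and o for f, o in pairs)
        misses = sum(not f and o for f, o in pairs)
        fas = sum(f and not o for f, o in pairs)
        cns = sum(not f and not o for f, o in pairs)
        hit_rate = hits / (hits + misses) if hits + misses else 0.0
        far = fas / (fas + cns) if fas + cns else 0.0
        # Keep the best hit rate within the allowed false-alarm budget.
        if far <= max_far and (best is None or hit_rate > best[1]):
            best = (thresh, hit_rate, far)
    return best

# Synthetic nightly maximum RH (%) and whether fog was observed:
rh = [95, 92, 85, 97, 88, 91]
fog = [True, True, False, True, False, False]
print(tune_rh_threshold(rh, fog, max_far=0.34))  # -> (89, 1.0, 0.333...)
```

Lowering `max_far` makes the tuning stricter (fewer false alarms, more misses); raising it relaxes it. On ties in hit rate this sketch keeps the first (lowest) threshold found; a fuller version would also break ties on false alarm rate.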
Improving Forecasts through Reducing the Forecast Lead Time
Another way the forecasts can be improved is by reducing the forecast lead time. While the whole point of a forecast is to know what is coming (making a shorter lead time perhaps counter-intuitive), the conditions preceding fog events are not entirely distinct from those of non-events as far as six hours ahead of onset. Reducing the lead time to three hours, or even one hour, ahead of the event leads to a significant improvement in overall scores, which may serve to offset the shorter warning.
Why do False Alarms Occur?
While significant improvements to the prediction scores can be made by either optimizing the thresholds to different needs, or by using a shortened lead time, false alarms remain. These false alarms can be roughly broken into three categories:
- Forward Evolution of the System. The M14 method is purely diagnostic, based on recent observations (zero-dimensional in time), with no information about the future evolution of the system. This includes something as simple as the rising of the sun (more significant for the six-hour prediction), as well as a change in synoptic weather conditions.
- Advective History of the Airmass. The method assumes that the local conditions are also representative of the upwind conditions (spatial influence), with the advection of an air mass with different properties (e.g., warmer or drier air) making the conditions unfavourable for fog at a later time.
- Unidentifiable from the observations. I think this one is self-explanatory...in other words, I have no clue!
False False Alarms?
The determination of false alarms vs. hits relies on observations of visibility at Cabauw. These, however, are made at a single point, with the lowest measurement level at 1.5 m. It is therefore possible that fog is present at the time, but is too shallow, or too patchy, to be captured by the single-point observations. It is not uncommon, for example, to see such shallow fog layers, or to drive along a road at night and suddenly encounter a patch of fog drifting across the road. In this case, it is not necessarily a false alarm in the strict sense, as there is still fog present, just not observed (a "false" false alarm). This possibility poses many interesting questions that should be considered in future observational campaigns and analyses, with high-resolution observations in this otherwise missed layer. While such events are perhaps not as hazardous as deep, thick fog cases, even shallow or sparse fog can be dangerous, particularly for road traffic. Likewise, understanding whether or not shallow or patchy fogs continue to develop into thicker and deeper fog events is important for understanding the full lifecycle and evolution of fog. This is an avenue I am currently pursuing.
Further Reading
- Izett, J. G., B. J. H. van de Wiel, P. Baas, and F. C. Bosveld (2018). Understanding and Reducing False Alarms in Observational Fog Prediction. Boundary-Layer Meteorology. DOI: 10.1007/s10546-018-0374-2.
- Menut, L., S. Mailler, J. C. Dupont, M. Haeffelin, and T. Elias (2014). Predictability of the meteorological conditions favourable to radiative fog formation during the 2011 ParisFog campaign. Boundary-Layer Meteorology, 150, 277-297. DOI: 10.1007/s10546-013-9875-1.
- Román-Cascón, C., G. J. Steeneveld, C. Yagüe, M. Sastre, J. A. Arrillaga, and G. Maqueda (2016). Forecasting radiation fog at climatologically contrasting sites: Evaluation of statistical methods and WRF. Quarterly Journal of the Royal Meteorological Society, 142, 1048-1063. DOI: 10.1002/qj.2708.
- Taylor, G. I. (1917). The formation of fog and mist. Quarterly Journal of the Royal Meteorological Society, 43(183), 241-268.