The authors have provided a new version of the manuscript that attempts to address both reviewers' comments and suggestions. In this revised version the authors improved their analysis mainly by providing additional missing information (in fact, certain clarifications), as suggested by the reviewers, and also provided additional graphs in the supplementary materials.
I am a little disappointed that the authors did not try to find precipitation data (mainly precipitation frequency) for the areas of interest and correlate it with the scavenging of atmospheric pollutants and possible visibility improvement. There are many studies on historical precipitation data in the UK (e.g. Alexander, L.V. and Jones, P.D. (2001) Updated precipitation series for the U.K. and discussion of recent extremes, Atmospheric Science Letters, doi:10.1006/asle.2001.0025), and I think there must be a very dense network of precipitation measurements in the UK.
‘Figure S5’ in the supplement is not a ‘Figure’ but a ‘Table’ and should be referred to accordingly.
Regarding averaging procedures, visibility protocols etc., the authors have added the following in their response:
‘The details of visibility observations are provided within the UK Met Office guidelines (https://badc.nerc.ac.uk/data/ukmo-midas/ukmo_guide.html).’
Looking at this site, I found the following:
‘…Visibility is reported in m or km and is stored in MIDAS in dam. In the SYNOP message a non-linear code is used giving a reporting precision of 30 m (30 m to 100 m); 100 m (100 m to 5 km); 1 km (5 km to 30 km) and 5 km (30 km to 70 km). There is a further coarser reporting code for use where there are few visual reference points, which is principally used at sea. The accuracy requirement for observations of visibility from the synoptic network is ±10%. Where visibility is measured at climatological stations the accuracy achieved is generally less than this value…’
So I understand (as I expected) that the visibility reporting scale results in much higher absolute uncertainty at higher visibility values. For this reason I had already asked about the ‘averaging’ procedures.
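The non-linear reporting bands quoted above can be sketched as a simple lookup; this is a minimal illustration only (the function name and example values are mine; the band boundaries are those quoted from the MIDAS guide):

```python
def synop_reporting_precision_m(visibility_m):
    """Approximate SYNOP reporting precision (m) for a given visibility (m),
    per the band boundaries quoted from the UK Met Office MIDAS guide."""
    if 30 <= visibility_m < 100:
        return 30        # 30 m precision between 30 m and 100 m
    if 100 <= visibility_m < 5_000:
        return 100       # 100 m precision between 100 m and 5 km
    if 5_000 <= visibility_m < 30_000:
        return 1_000     # 1 km precision between 5 km and 30 km
    if 30_000 <= visibility_m <= 70_000:
        return 5_000     # 5 km precision between 30 km and 70 km
    return None          # outside the coded range

# Absolute reporting precision coarsens with distance:
for v in (500, 10_000, 50_000):
    print(v, synop_reporting_precision_m(v))
```

This makes the point quantitative: an observation of 50 km visibility is only reported to the nearest 5 km, whereas one of 500 m is reported to the nearest 100 m, which is why averaging across such heterogeneous precision needs to be documented.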
“How do you define good or poor visibility? In Fig. 2 the authors present long-term trends of the annual/seasonal visibility averages and find an overall positive trend at most stations. However, this cannot provide information on the relative improvement in different visibility ranges. Is the improvement higher at low, average or higher visibilities? I would like to see a frequency distribution of different visibility ranges for different sub-periods, which would be much more informative on visibility improvement.”
In response to the above comment, the authors produced probability density functions (pdfs) for each station and each decade (Figure S1 of the supplementary material). However, although the (long) figure looks detailed, it is not convenient for comparison purposes. Moreover, in this figure the authors do not examine ‘ranges’ of visibility but only individual visibility values.
I think that the (too long) figure should be replaced by a simpler and easier-to-read figure. For instance, I suggest a few visibility ranges (e.g. < 1 km, 1-5 km, 5-30 km, 30-70 km, > 70 km, as indicated above) and fewer sub-periods, so that the authors can produce histograms with frequencies (%) for each visibility range and each sub-period in the same graph for each station. This will keep the number of figures to a minimum and also enable comparison between different visibility ranges and different sub-periods in the same plot for each station. It will also show whether the improvement in visibility is more pronounced at ‘high visibility’ levels, at ‘low visibility’ levels, or across all ranges.
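The suggested computation can be sketched as follows; this is an illustrative outline only (the bin edges follow the ranges suggested above, while the function names and the synthetic data are mine, not from the manuscript):

```python
import numpy as np

# Suggested visibility ranges (km): < 1, 1-5, 5-30, 30-70, > 70
edges = [0, 1, 5, 30, 70, np.inf]
labels = ["<1", "1-5", "5-30", "30-70", ">70"]

def range_frequencies(visibility_km):
    """Return the percentage of observations falling in each visibility range."""
    counts, _ = np.histogram(visibility_km, bins=edges)
    return 100 * counts / counts.sum()

# One call per sub-period; stacking the resulting rows as grouped bars
# gives the single per-station histogram suggested above.
rng = np.random.default_rng(0)
period_a = rng.gamma(shape=2.0, scale=4.0, size=1000)  # synthetic km values
period_b = rng.gamma(shape=2.0, scale=8.0, size=1000)
for name, data in [("period A", period_a), ("period B", period_b)]:
    print(name, dict(zip(labels, np.round(range_frequencies(data), 1))))
```

Plotting the per-period rows side by side (e.g. grouped bars, one group per range) would show at a glance whether the frequency mass shifts from the low-visibility bins toward the high-visibility bins between sub-periods.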