As I’m collaborating with the author on another topic, I’m writing this review under my own name. Despite this collaboration, I believe that I’m able to provide an objective review of this manuscript.
In the manuscript, Schutgens uses high-resolution simulations (GEOS5) to evaluate the spatio-temporal representativity of AOT and AAOT observations made at AERONET and GAW sites. The topic is scientifically very interesting and the analysis is well executed. The author has satisfactorily addressed the concerns of the reviewers in the first round. I recommend that the manuscript be accepted for publication in ACP after minor revision.
General and specific comments:
Page 1, line 19: “correlate strongly throughout the year”, I’m not sure if I understood this correctly. Do you mean that the monthly representation errors correlate with each other or something else?
Introduction: Several abbreviations (e.g. AERONET, GAW, AAOT, AEROCOM) are mentioned in the text but not defined. Abbreviations should be defined in the abstract and then again at their first occurrence in the main text.
Page 3, line 21: Was there a minimum number of observations required for the calculation of an hourly average, or do you assume that even a single observation is representative enough?
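Just to be explicit about the kind of screening I have in mind, here is a minimal sketch (the function, its names, and the threshold of three observations are my own assumptions, not from the manuscript):

```python
import numpy as np

def hourly_mean(times_h, values, min_obs=3):
    """Average observations into hourly bins, keeping only hours with
    at least min_obs observations (NaN otherwise)."""
    t = np.asarray(times_h)
    v = np.asarray(values)
    hours = np.floor(t).astype(int)
    out = {}
    for h in np.unique(hours):
        sel = v[hours == h]
        out[h] = sel.mean() if sel.size >= min_obs else np.nan
    return out

# Three observations in hour 10 pass the threshold; the single one in
# hour 11 does not:
print(hourly_mean([10.1, 10.3, 10.8, 11.2], [0.21, 0.25, 0.23, 0.40]))
# -> {10: 0.23, 11: nan}
```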
Page 3, line 37: “although for dust and biomass burning aerosol higher AOT at 440 nm ≥ 0.5 were needed”, this was hard to follow. Do you mean that for dust and biomass burning aerosols the SSA errors are in the 0.03 range only for AOTs larger than 0.5?
Page 3, line 56: Please clarify here what the GAW-ABS measurements are and how you calculate the AAOT from them. “Surface properties” are mentioned, which makes me think of aerosol surface properties, but I’m guessing you mean ground-based in-situ observations of light absorption coefficients at some wavelength(s)? If I guessed right, how do you calculate the AAOT from the absorption coefficients? Do you assume some kind of vertical profile and integrate over it?
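To illustrate what I am asking, a minimal sketch under an assumption that is entirely my own (an exponential absorption profile with scale height H; the manuscript may do something quite different):

```python
def aaot_from_surface_absorption(sigma_abs_surface_Mm, scale_height_m=2000.0):
    """Column absorption AOT from a surface absorption coefficient,
    assuming (my assumption) an exponential vertical profile:
        AAOT = integral_0^inf sigma(0) * exp(-z/H) dz = sigma(0) * H
    sigma_abs_surface_Mm : surface absorption coefficient [1/Mm]
    scale_height_m       : assumed aerosol scale height H [m]
    """
    sigma_per_m = sigma_abs_surface_Mm * 1e-6  # convert 1/Mm to 1/m
    return sigma_per_m * scale_height_m

# e.g. 10 1/Mm at the surface and H = 2 km give AAOT = 0.02:
print(aaot_from_surface_absorption(10.0))
```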
Page 4, line 47: Figure S4 doesn’t seem to include any sites above 60 degrees latitude.
Page 5, line 11: “the simulation captures spatial variation rather well”, this seems to be true on a yearly time scale, but do you know if it holds for shorter time scales as well?
Page 5, line 16: “overestimation of dust”, could you clarify here whether you mean the dust load or the dust AOT? Overestimation of dust load could be explained by differences in meteorology and consequent changes in dust emissions, but overestimation of dust AOT could also be influenced by the optical properties of dust used in the simulation. Did you check how the comparison looks if you separate Africa into northern and southern parts? As the northern part is dominated by dust and the southern part by biomass burning aerosols, such an analysis could help disentangle the contributions from dust and carbonaceous aerosols.
Page 5, line 39: I’m not sure if you are aware, but the AERONET Inversion L1.5 data has a handy flag called If_Retrieval_is_L2(without_L2_0.4_AOD_440_threshold). You could use it to relax the AOT limit while keeping the other L2.0 requirements; see the sketch below.
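A minimal sketch of the filtering I mean (the flag name is from the AERONET files; the file name and the assumption that the flag equals 1 when the retrieval meets all L2.0 criteria except the AOT threshold are mine):

```python
import pandas as pd

# Hypothetical file of AERONET V3 L1.5 inversion data; real downloads
# may need header rows skipped before the column names appear.
inv = pd.read_csv("aeronet_inversion_L15.csv")
flag = "If_Retrieval_is_L2(without_L2_0.4_AOD_440_threshold)"
# Keep retrievals meeting all L2.0 criteria except AOT(440) >= 0.4
# (assumed flag convention: 1 = criteria met).
relaxed = inv[inv[flag] == 1]
```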
Page 5, line 77: “Inversion data is generally closer to the equator”, I’m not sure what you mean by this. Do you mean that inversion data has larger SZAs even though the sites that produce the most inversion data are close to the Equator? This could be related to the differences in the measurement principles (direct vs. almucantar).
Page 5, line 78: I didn’t understand how the overestimation of dust AOT is related to the observational coverage. Could you please clarify?
Page 5, line 85: Instrument malfunction and maintenance will likely affect all observations, not just inversion products, so they are not likely to explain the difference between comparisons with direct and inversion data.
Page 6, line 25: You mentioned in your replies to the first round of reviewers that the likely reason for the regional gradients is cloudiness. I believe it would be good to mention that in the text as well. This is an interesting detail, because the MODIS AOT also has/had this kind of east-west trend over the US; to my understanding, that was caused by land surface properties/orography.
Page 6, line 31: wet growth → hygroscopic growth
Page 6, line 39: Doesn’t the use of clear-sky data also make sense for the AERONET observations, especially the inversion products, as they are based on observations from cloudless parts of the atmosphere?
Page 6, line 101: “substantial reduction in representation error can be seen for r > 1 sites”, this is true if you compare r = 0 and r > 1 sites, but there doesn’t seem to be such a big difference between r = 1 and r > 1 sites, at least based on the mean errors.
Table 6: Please clarify in the heading what the “90 %” stands for.
Section 5.2: There’s a large gap between the heading and the text.
Page 7, line 50: Anthropogenic emissions didn’t have a diurnal cycle, but biomass burning did. Did you look at the results from South America and southern Africa from this perspective? This kind of regional analysis could strengthen the conclusions here.
Page 7, line 60: “Sect. 6, f”, there’s an extra “f” at the end of the line
Page 8, line 42: The shift from spatio-temporal to spatial representation errors comes rather suddenly. It would help the reader if there were a short description (and a reference) of how the spatial representation errors were calculated, either in this section or in the methods section.
Page 8, line 69: Thank you for sharing this ranking data with easy access! Would it be possible to also include AERONET sites above 60 degrees latitude? You mentioned in the text that near the poles the simulated pixels become too small, but is that already an issue at 70 or 80 degrees latitude?
Section 6: If I understood this correctly, Kinne’s ranking is based on a site-centered analysis, whereas in this study the grid was fixed, so the sites may not lie in the center of the grid boxes. For finer grids this probably doesn’t matter much, but it might affect the results on the 4-degree grid; see the sketch below. What is your view on this?
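To make the possible magnitude of this effect concrete, a quick sketch (entirely my own illustration; a grid origin at 0 degrees is assumed):

```python
import math

def offset_from_box_center(lat, lon, dx=4.0):
    """Distance (in degrees) between a site and the center of the fixed
    dx-by-dx grid box containing it. In a site-centered analysis this
    offset is zero by construction."""
    clat = (math.floor(lat / dx) + 0.5) * dx
    clon = (math.floor(lon / dx) + 0.5) * dx
    return math.hypot(lat - clat, lon - clon)

# A site near a box corner of a 4-degree grid can sit ~2.7 degrees
# from the box center:
print(offset_from_box_center(0.1, 0.1))  # ~2.69
```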
Page 8, line 3: You mention several examples where the behaviour of site-specific representation errors differs from the “rules” defined on the basis of all sites. It would be an interesting and valuable addition if you could give some explanation of why things do not go as expected. Does it depend on local meteorology, aerosol types or something else?
Page 9, line 26: “ground-based remote sensing observations”, can you say it like this? If I understood correctly, the GAW absorption observations are in-situ observations.
Page 10, line 24: There’s something wrong in the author list: “K??rcher”
Figures 4, 6, 7, 9, 10, 12, S2, S3, S5-S8: What are the black circle and bar? I’m guessing mean and median, but which is which?
Figure 8: Did you check how the representation errors behave as a function of AOT? I think GAW stations are often designed to observe background concentrations, meaning lower AOTs, so I’m wondering whether the difference in the altitude dependence between AERONET and GAW is caused solely by topography or whether aerosol concentrations also play a role. Of course they are linked, so it is hard to separate their effects.
Figure 10: The mean statistics for the yearly errors in this figure are not exactly the same as in Figure 4. Shouldn’t they be the same? Another question about the monthly representation errors: how can you calculate monthly representation errors from yearly averages (the brown bar)?
Figure 11: How do the number and spatial distribution of the sites change during the year? I would guess that not all sites provide data constantly throughout the year, due to seasonal changes and maintenance. Would the graph look the same if all months had the same sites?