I reviewed a previous version of this submission. In this revision, the authors have clearly put in a lot of extra work, and I commend them for that. It is now much clearer exactly what was done, and the errors and points needing clarification identified in the previous version have largely been addressed. The authors have also done a much better job of articulating the point of the study, which was not clear in the previous version.
That being said, I do feel that some more revisions are necessary before the manuscript is of a standard that can be accepted for publication in ACP. If the Editor would find it useful, I am willing to provide a review of the next version. My comments are below:
Abstract: This is quite wordy. Perhaps the authors can trim it down, which I think will help retain focus and help the reader quickly appreciate the main results. For example, perhaps the sentence on lines 9-12 could be removed from the abstract.
Section 2.2, general: I suggest the authors add some text here to discuss in more detail why both LAI and NDVI are used, and what is gained from using both rather than one or the other. These are distinct but related quantities: both depend on vegetation cover, type, and health, but in different ways, and they are derived from some of the same MODIS wavelengths (through different processing algorithms, so they are partly but not fully independent data). The authors demonstrate later that the two behave differently as predictors in their model, so it would be useful to expand in more detail on the reasons for using both and, if possible, on the reasons that they behave differently.
Page 5, line 143: Remer et al (2005) is the reference for MODIS Collection 4 aerosols. The reference for Collection 6 of this product is Levy et al (2013). This should be updated.
Page 5, lines 148-149: This statement is not correct. Firstly, the error term contains a ‘noise floor’ component as well as the AOD-proportional component listed here, so it is not only a relative uncertainty. Secondly, the magnitudes given are incorrect, and the error is known to be biased (with differing systematic biases in different conditions) rather than unbiased. The Remer et al (2013) reference given is for the MODIS Collection 6 3 km aerosol product, not the 10 km product which the authors actually use in the analysis. The reference for the 10 km product is Levy et al (2013). Some validation of the 10 km product is given in that paper (mostly over land, some ocean), and some validation of the ocean component in clean conditions is in Sayer et al (2012). The Collection 6 uncertainty estimates are ±(0.05 + 0.15·AOD) over land and from −(0.02 + 0.1·AOD) to +(0.04 + 0.1·AOD) over water (i.e. the over-water uncertainty is biased high and larger than previously thought). The authors’ point that this is probably still acceptable given the uncertainties in the models likely still stands, but the discussion of the uncertainties should be corrected.
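To illustrate what these envelopes imply in absolute terms, here is a minimal sketch (my own illustration; the function names are placeholders, not from the manuscript or the MODIS documentation):

```python
# Collection 6 Dark Target expected-error envelopes quoted above
# (Levy et al., 2013). Illustrative sketch only.

def c6_land_envelope(aod):
    """Symmetric expected error over land: +/-(0.05 + 0.15*AOD)."""
    half_width = 0.05 + 0.15 * aod
    return (aod - half_width, aod + half_width)

def c6_ocean_envelope(aod):
    """Asymmetric expected error over ocean: -(0.02 + 0.1*AOD)
    to +(0.04 + 0.1*AOD), i.e. biased high."""
    return (aod - (0.02 + 0.1 * aod), aod + (0.04 + 0.1 * aod))

for aod in (0.1, 0.5, 1.0):
    lo_l, hi_l = c6_land_envelope(aod)
    lo_o, hi_o = c6_ocean_envelope(aod)
    print(f"AOD={aod:.1f}  land: [{lo_l:.3f}, {hi_l:.3f}]  "
          f"ocean: [{lo_o:.3f}, {hi_o:.3f}]")
```

Note that the noise-floor terms (0.05, 0.02/0.04) dominate at low AOD, which is why the uncertainty cannot be treated as purely relative.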
Page 7, line 208: The Holben et al (1998) reference given for MISR is the main AERONET network reference, not a MISR reference. This should be corrected. I am not sure which reference the MISR team prefer people to use for this data product; Mike Garay or Olga Kalashnikova (at JPL) would be the best people to check with.
Page 7, line 212: It looks like there is a missing reference here (the paper PDF has “??” where I think a reference is supposed to appear). I believe this reference is the basis of the authors’ justification for MISR as an appropriate reference for evaluation of their predictive model. From the response to the previous reviewer comments, I think this is Cohen (2014), but after reading through I am not sure how that paper supports this argument. The use of MISR for evaluation is one of the issues I still have with this manuscript (it was an issue I had with the original version as well). The MISR swath width is around 350 km, in contrast to the 2,330 km of MODIS, which means that MISR observes a given location in the tropics about once per week, i.e. 4 or 5 times per month; assuming half of these are cloudy means that a MISR monthly mean probably has only 2 or 3 days of data contributing to it for a given grid cell. Even if the MISR retrievals were perfect (and they have uncertainties of order 0.2·AOD, see Kahn et al 2010, i.e. comparable to MODIS and other products), there is a huge question of how representative the 2 or 3 MISR observations per month are of the monthly average. The MISR team themselves say at meetings that they do not like to compare monthly means with other sensors, and prefer seasonal means, because only after a few months do these sampling errors become small. I am also not sure how the ‘smaller error’ referred to by the authors can be due in part to the narrow swath width (line 212); perhaps this can be clarified or reworded? Having a narrow swath does not decrease AOD retrieval error; it limits sampling, which causes the opposite problem of representativeness issues. Reid et al (2013) discuss some aspects of observability issues in this region and how they relate to climatologies and aggregated data. Reid’s perspective is that temporal variability of AOD remains an issue in this area, which contradicts what I take to be the authors’ assertion that 2 or 3 samples per month are enough. In short, I do not believe that MISR can be considered a useful tool to evaluate the authors’ predictive model. By all means do a comparison, but it should be called a comparison, not an evaluation, and I would not read too much into its quantitative results. So, as well as filling in the missing reference here, I suggest the authors focus more on AERONET and less on MISR in the later discussions, and be clearer about the strength of the conclusions that can be drawn.
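To make the sampling argument concrete, here is a back-of-envelope version of the arithmetic above (the round numbers are those quoted in this comment, not mission specifications, and the assumed cloud fraction is illustrative):

```python
# Roughly how many clear-sky MISR observations contribute to a monthly
# mean for a tropical grid cell? Back-of-envelope sketch only.

misr_swath_km = 350.0
modis_swath_km = 2330.0
days_per_month = 30
cloud_fraction = 0.5   # assumed fraction of overpasses lost to cloud

# MODIS (2330 km swath) revisits a tropical grid cell roughly daily;
# scale that revisit rate by the ratio of swath widths for MISR.
misr_overpasses = days_per_month * (misr_swath_km / modis_swath_km)
clear_samples = misr_overpasses * (1.0 - cloud_fraction)

print(f"MISR overpasses per month:   ~{misr_overpasses:.1f}")  # ~4.5
print(f"Clear-sky samples per month: ~{clear_samples:.1f}")    # ~2.3
```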
Section 2.4: I think the start of this section is another good place to go into more detail about the specific differences between LAI and NDVI which make both useful here. It sounds like the authors say (lines 275-277) that NDVI should recover more slowly than LAI? Is that right? What is the biological mechanism for this? Are there other studies that can be cited in support of the differing responses of these two retrieval products to burning?
Page 10, lines 325-326: Again, it is not clear to me why 5% of the total variability is the right threshold for confidence that a mode is “real” and not caused by the uncertainty in the measurements themselves. Can the authors provide more information here on how this measurement uncertainty is transformed into an estimated contribution to the variance? Is it as simple as saying that the MODIS AOD uncertainty is 5%, therefore a mode needs to explain at least 5% of the variance to represent something useful? Or is it something more complicated? If it is the former, then this should probably be updated to reflect the Collection 6 MODIS uncertainties (see prior comment), and what about the uncertainties in LAI, NDVI, rain, and fire count? In the response to reviewers the authors say “The value is not arbitrary, as it is based on the statistical robustness of the field of the PCA*EOF… More on this is to be included in the write-up as well.”, but I did not find these details in the paper. Or does the 5% come about because 5% (or p=0.05) is simply a commonly-used significance level in statistical tests? I looked through the Björnsson and Venegas (1997) reference cited earlier and that seemed to be what they were going by (their Section 4.3); if that is the case, perhaps that paper should be cited again at this point.
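If the criterion is indeed a noise-floor argument along the lines of Björnsson and Venegas’ Section 4.3, a generic sketch of such a test (Preisendorfer-style “Rule N”; my own illustration, not the authors’ procedure) would make the threshold explicit rather than a fixed 5%:

```python
# Generic "Rule N" sketch: compare each EOF mode's explained-variance
# fraction against the spectrum obtained from white-noise fields of the
# same dimensions. Illustration only.
import numpy as np

def explained_variance(data):
    """Fraction of total variance explained by each EOF mode."""
    anom = data - data.mean(axis=0)
    s = np.linalg.svd(anom, compute_uv=False)
    return s**2 / np.sum(s**2)

def rule_n_threshold(n_time, n_space, n_trials=200, pct=95, seed=0):
    """pct-th percentile eigenvalue spectrum of white-noise fields
    with the same dimensions as the data."""
    rng = np.random.default_rng(seed)
    spectra = [explained_variance(rng.standard_normal((n_time, n_space)))
               for _ in range(n_trials)]
    return np.percentile(spectra, pct, axis=0)

# Usage sketch: modes whose explained variance exceeds the noise
# threshold are candidates for physical interpretation.
# data = ...                    # (n_time, n_space) anomaly matrix
# keep = explained_variance(data) > rule_n_threshold(*data.shape)
```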
General: in their response, the authors confirmed that the term ‘correlation’ in the paper refers to R^2, the coefficient of determination (the square of the correlation coefficient). However, I did not see this stated explicitly in the revised version of the paper. I suggest the authors either change to the more standard terminology or state explicitly that by ‘correlation’ they mean the coefficient of determination.
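For the avoidance of doubt, the distinction is simply the following (a toy illustration with synthetic data):

```python
# Pearson correlation coefficient r versus the coefficient of
# determination R^2 (= r^2 for simple linear regression).
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0.0, 1.0, 100)
y = 2.0 * x + rng.normal(0.0, 0.3, 100)

r = np.corrcoef(x, y)[0, 1]   # correlation coefficient
print(f"r   = {r:.3f}")
print(f"R^2 = {r**2:.3f}")    # coefficient of determination
```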
Figures: The font size is somewhat small and hard to read when printed out, or when viewed on screen without a lot of zooming. I suggest the authors increase it substantially (by perhaps 2-6 points, depending on the figure). Figure 3 is ok but the others are hard to read. I know that for the time series plots this will mean some axis labels will have to be deleted, as the larger font will not fit (e.g. Figure 5), but in these cases I think 1 tick per 2 years is probably sufficient to track time (as opposed to the current 1 per 3 months).
Figure 10: I appreciate the intent of this figure, which shows that 4 AERONET sites follow more or less one pattern in terms of high-AOD months while the other 7 follow a different pattern. Perhaps there is another way it can be plotted? As it stands, the y-axis is numbered 1-11 without further legend, and the sites are referred to in the caption by symbol color and shape, which is hard to make out as the points are small and colors are repeated. Something more like a Hovmöller plot might work, where each month is binary shaded/unshaded (see the sketch below). The y-axis could give site names (or, if numbers are still used, these could be defined in the caption). If there is not enough space to fit the names, the figure could be flipped so that site is on the horizontal axis and time on the vertical.
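For example, a minimal sketch of what I have in mind (site names and the data array are placeholders, not the authors’ data):

```python
# Binary Hovmöller-style grid (site vs. month), shaded where a site
# has a high-AOD month. Placeholder data for illustration.
import numpy as np
import matplotlib.pyplot as plt

sites = [f"Site {i + 1}" for i in range(11)]                  # placeholder names
high_aod = np.random.default_rng(0).integers(0, 2, (11, 12))  # placeholder flags

fig, ax = plt.subplots(figsize=(7, 4))
ax.pcolormesh(np.arange(12), np.arange(11), high_aod,
              cmap="Greys", shading="nearest")
ax.set_yticks(np.arange(11))
ax.set_yticklabels(sites)
ax.set_xticks(np.arange(12))
ax.set_xticklabels(["J", "F", "M", "A", "M", "J", "J", "A", "S", "O", "N", "D"])
ax.set_xlabel("Month")
plt.tight_layout()
plt.show()
```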
Dennis et al (2005) reference: this appears to be typeset incorrectly, as the author names appear as a string of initials.
Fuller and Murphy reference: appears as “fuller” rather than “Fuller”.
References:
Kahn, R. A., Gaitley, B. J., Garay, M. J., Diner, D. J., Eck, T. F., Smirnov, A., and Holben, B. N.: Multiangle Imaging SpectroRadiometer global aerosol product assessment by comparison with the Aerosol Robotic Network, J. Geophys. Res., 115, D23209, doi:10.1029/2010JD014601, 2010.
Levy, R. C., Mattoo, S., Munchak, L. A., Remer, L. A., Sayer, A. M., Patadia, F., and Hsu, N. C.: The Collection 6 MODIS aerosol products over land and ocean, Atmos. Meas. Tech., 6, 2989-3034, doi:10.5194/amt-6-2989-2013, 2013.
Reid, J. S., et al.: Observing and understanding the Southeast Asian aerosol system by remote sensing: An initial review and analysis for the Seven Southeast Asian Studies (7SEAS) program, Atmos. Res., 122, doi:10.1016/j.atmosres.2012.06.005, 2013.
Sayer, A. M., Smirnov, A., Hsu, N. C., Munchak, L. A., and Holben, B. N.: Estimating marine aerosol particle volume and number from Maritime Aerosol Network data, Atmos. Chem. Phys., 12, 8889-8909, doi:10.5194/acp-12-8889-2012, 2012.