A Comprehensive Reappraisal of Long-term Aerosol Characteristics, Trends, and Variability in Asia
Shikuan Jin
Yingying Ma
Zhongwei Huang
Jianping Huang
Wei Gong
Boming Liu
Weiyan Wang
Ruonan Fan
Hui Li
Abstract. Changes in aerosol loadings and properties are important for understanding the atmospheric environment and climate change. This study investigates the characteristics and long-term trends of aerosols of different sizes and types in Asia from 2000 to 2020 by combining multi-source aerosol data with new analysis methods and perspectives, groundwork that yields findings differing from those of past studies. A geometric mean aggregation method is applied, and serial auto-correlation is accounted for to avoid overestimating trend significance. Among the regions of Asia, high values of aerosol optical depth (AOD) are mainly concentrated in East Asia (EA) and South Asia (SA) and are closely related to population density. The AOD in EA shows the most significant negative trend, -5.28×10⁻⁴ per year, mainly owing to decreases in organic carbon (OC), black carbon (BC), and dust aerosols. Notably, this observed large-scale decrease in OC and BC is unique to EA and occurs mainly around China. By contrast, aerosol concentrations in SA generally show a positive trend, with AOD increasing by 1.25×10⁻³ per year, mainly because of large emissions of fine-mode aerosols such as OC and sulphate. Additionally, the high aerosol loading in northern SA shows lower AOD variability than that over the East China Plain, revealing a relatively more persistent air pollution situation. Over Asia as a whole, the percentage changes in AOD by aerosol type are increases in BC (6.23 %) and OC (17.09 %) and decreases in dust (-5.51 %), sulphate (-3.07 %), and sea salt (-9.80 %). Besides anthropogenic emissions, the large percentage increase in OC is also attributable to summer wildfires in northern Asia. In contrast, AOD segregated by size shows only slight changes over Asia: small-size AOD decreases (-3.34 %) and total AOD shows no significant change, suggesting that, from a trend perspective, recent decreases in aerosol have largely offset earlier increases in anthropogenic emissions over Asia. In summary, these findings comprehensively characterize aerosol distributions and reappraise the long-term trends of different aerosol parameters, which will enhance the understanding of the regional and global aerosol environment and climatology, fill gaps in, and move beyond the limitations of, past knowledge.
Status: open (until 04 Apr 2023)
RC1: 'Comment on acp-2023-19 - good paper, a few statistical issues', Anonymous Referee #1, 09 Mar 2023
General comments and recommendation
This paper uses MODIS, MISR, and MERRA2 aerosol data to examine aerosol trends across Asia over the period 2000-2020 (plus AERONET to evaluate the satellite retrievals, and MODIS fires for additional context). There are a lot of papers on aerosol trends, particularly covering Asia owing to its high population and complicated, still-evolving aerosol system. This means it is important to question whether a submission really brings anything new to the state of knowledge. This study identifies itself as a “comprehensive reappraisal”, and I believe in this case that is warranted. While these data sets are often used for trend analyses, this study approaches trend calculations with different statistical approaches than previous studies (specifically: analyses based on geometric rather than arithmetic means, and autocorrelation-resistant trend and significance estimates). This compensates for some of the quirks of aerosol data (i.e. skewed distributions and autocorrelated data) which are often neglected in similar studies, which is a strength. It also accounts for the possibility of change points in trends, which is sensible, as over a 20-year time series trends might not continue linearly the whole time. The results and previous studies are discussed in the context of the methodological differences.
The topic is relevant and appropriate for the journal. The quality of the written language is ok (standard journal copy-editing should be sufficient). The quality of the figures is in general ok. The data availability statement is present and mostly sufficient (I am not sure that “available on request” for the Wuhan data is acceptable – I believe Copernicus Publications now requires all data to be on a public repository unless there is a compelling reason otherwise). The supplementary materials are relevant.
That said, there is some missing information needed to understand the study, and I have some typographical and figure suggestions, and a few questions. Some of this could affect the conclusions, so I favour major revisions and re-review. I would be happy to provide a further review.
I note my expertise is on the satellite retrieval and analysis side and not on the aerosol emissions/transport/policy side, so I recommend at least one other reviewer is an expert in those domains in case I have missed something in my review.
Specific comments
- Lines 105-115: I am not sure how relevant these lines are in the context of the study. It is true that algorithms like GRASP and MAIAC apply spatial and/or temporal smoothness constraints. In principle this could artificially decrease aerosol variability, but in practice I don’t think this is a major concern, as many aerosol events extend over more than one pixel in space or time. From my use of these data sets, such constraints instead work to reduce noise and to reduce artefacts resulting from e.g. sporadic cloud contamination. So that would be a net benefit to trend analysis, as there would be fewer artificial (positive) outliers – the current wording of the paper implies that these smoothness constraints are a problem. I suggest this text is modified or removed.
- Section 2.2: the AERONET references here are outdated. The reference for the current Version 3 direct sun (AOD) data set is Giles et al. (2019): https://amt.copernicus.org/articles/12/169/2019/ The inversion data set (size, SSA/AAOD) reference is Sinyuk et al. (2020): https://amt.copernicus.org/articles/13/3375/2020/
- Lines 144-145: Could the authors provide more information on the Wuhan site? It does not look like it is part of the official AERONET (checking on their website). The paper says it is the same instrument type – what about data processing? Does it run on the AERONET processing code or something else? How is it calibrated? The Jin paper cited provides a calibration reference (which should be included here too); it says it uses the same algorithms as AERONET but it’s still not clear whether that means the same code or a different implementation of the same approach.
- Section 2.3: the spatial resolution of the satellite data sets is described here, but it is not clear how they are aggregated in space-time for the trend analyses later. It cannot be a simple stacking because the products are on swath-referenced grids, and the trend analysis would need reprojection to an Earth-referenced grid. This is also important for the later spatiotemporal mapping analyses. Is it a simple nearest-neighbour remapping, or is there averaging if multiple pixels fall within a grid element on a given day? What is the spatial size of the grid – 10 km equal area or 0.1 degree equal angle, for example? Does it vary depending on the data set? How are MODIS Terra and Aqua data combined – are they treated as one data set? This should all be stated in the manuscript (a minimal gridding sketch follows after this list, as an example of the kind of detail needed).
- Line 152: the reference for Collection 6.1 MODIS Deep Blue is Hsu et al. (2019): https://agupubs.onlinelibrary.wiley.com/doi/full/10.1029/2018JD029688 (the paper covers both VIIRS and MODIS). The 2013 paper cited describes Collection 6.
- Line 190: the text about GCOS doesn’t really make sense as written. I know what the authors mean, but the wording needs improvement. This uncertainty represents a goal uncertainty for an aerosol climate record; GCOS itself is an organization, not a metric. Words like “goal uncertainty” or similar should be added to that sentence. I also suggest not naming the metric “GCOS”; maybe something like %GCOS (with GCOS subscripted) would make it clearer that this is a percentage metric relating to the GCOS goals.
- Section 3.1 and 4.1: linear least squares regression is not appropriate for this type of analysis because e.g. the data distribution is skewed, the errors depend on AOD magnitude (as seen in Figure 2), there can be multiple sub-populations with different error characteristics at a single site or region, the data are autocorrelated, etc. All of these violate the assumptions needed for the least squares slope and offset estimators to be unbiased. Calculating Pearson’s linear correlation coefficient is ok (I personally prefer Spearman’s rank correlation; a minimal sketch follows after this list), but the slope and offset calculated are likely biased and misleading and so should be removed from the manuscript, otherwise it is not statistically sound. I don’t believe they are necessary for the argument about uncertainty and bias anyway, as the relevant information is provided by the other statistics calculated and by the histograms in Figure 2. I have strong opinions on this point because inappropriate statistics are too common in papers and are sometimes justified by saying that other papers did it. The discussions of regression slope and intercept should be removed from the paper.
- Section 3.2.2: I have read this text and the accompanying supplementary figures a few times and am still not sure I understand it. I recommend rewording – maybe more detail is needed in the supplement to explain these figures, and some of the explanatory text could come after the equations instead of before. Moving Figure S2 to the main paper would also be useful. I am a bit lost on how the p-value is calculated in that figure. If I understand correctly, the threshold represents the magnitude of autocorrelation you would be no more than 5% likely to observe if the true underlying autocorrelation were zero? So the distributions narrowing and centering around zero mean that the method goes from having about 20% of grid cells with correlation smaller than this threshold to about 90%? Also, Equation 10 implies that correcting for (positive) autocorrelation makes the corrected trend estimate (beta prime) larger than the initial trend estimate (beta) – i.e., unaccounted-for autocorrelation would have made the trend appear smaller than it really is. Is that correct or is it backwards? (See the sketch of my understanding of the TFPW-MK procedure after this list.)
- Tables: in general I think too many significant figures are given in these; they add clutter and imply more precision in the metrics (which are all estimates of the true population values) than we likely have, given the sample sizes. For example, in Table 1 do we really need the GCOS goal percentage to 5 significant figures? I suggest two decimal places for MAE, RMB, RMSE, and R, and rounding to the nearest 1% for the GCOS goal.
- Lines 327-328: The wording about a significance test is a little strange. Is this saying that because AERONET and MISR AAOD are essentially uncorrelated (R = -0.023), MISR AAOD is not used in the trend analysis because it cannot capture the spatiotemporal variation? If so, I would say it that way. Since AERONET AAOD is also a retrieved (not measured) parameter with significant uncertainty, it would be worth stating that some of this may be due to limitations on the AERONET side and not just the MISR side.
- Figure 7: I like this figure but think that including the Pettitt scores here adds too much clutter. I suggest removing them (the right-hand axis, red time series, and threshold lines) and including only the vertical red lines where a change was detected. I think this will make the plots more readable; the most relevant point about the change detection is if and when it happens, not the precise Pettitt rank score.
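Regarding the Section 2.3 comment above, the kind of aggregation description I am asking for could be summarised in a few lines. The following is a minimal equal-angle binning sketch in Python/NumPy (a hypothetical helper, not the authors' code; it assumes 1-D arrays of valid L2 pixel latitudes, longitudes, and AODs for one day and a 0.1-degree grid):

```python
import numpy as np

def grid_swath(lat, lon, aod, res=0.1):
    """Average swath-level AOD pixels onto a regular equal-angle lat/lon grid.

    lat, lon, aod: 1-D arrays of valid L2 pixels from one day.
    res: grid spacing in degrees. Cells with no pixels are returned as NaN.
    """
    nlat, nlon = int(180 / res), int(360 / res)
    i = np.clip(((lat + 90.0) / res).astype(int), 0, nlat - 1)   # latitude bin index
    j = np.clip(((lon + 180.0) / res).astype(int), 0, nlon - 1)  # longitude bin index
    flat = i * nlon + j
    total = np.bincount(flat, weights=aod, minlength=nlat * nlon)  # sum of AOD per cell
    count = np.bincount(flat, minlength=nlat * nlon)               # number of pixels per cell
    with np.errstate(invalid="ignore", divide="ignore"):
        mean = total / count                                       # empty cells become NaN
    return mean.reshape(nlat, nlon)
```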
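Related to the comment on Sections 3.1 and 4.1, the following minimal sketch (synthetic lognormal data, not the paper's matchups) illustrates how Pearson's and Spearman's coefficients can be compared for skewed, AOD-like data with magnitude-dependent errors:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

# Synthetic "matchup" data: lognormal truth plus an AOD-dependent error term.
rng = np.random.default_rng(0)
truth = rng.lognormal(mean=np.log(0.2), sigma=0.8, size=2000)      # "AERONET-like" AOD
noise = rng.normal(0.0, 0.03 + 0.15 * truth, size=truth.size)      # error grows with AOD
retrieved = np.clip(truth + noise, 0.0, None)                      # "satellite-like" AOD

r_pearson, _ = pearsonr(truth, retrieved)     # sensitive to the heavy tail
r_spearman, _ = spearmanr(truth, retrieved)   # rank-based, robust to skewness
print(f"Pearson R  = {r_pearson:.3f}")
print(f"Spearman R = {r_spearman:.3f}")
```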
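For the Section 3.2.2 comment, this sketch shows my understanding of the standard trend-free pre-whitening + Mann-Kendall (TFPW-MK) procedure of Yue et al. (2002); I cannot verify whether the paper's Equation 10 follows this exact formulation. In this standard form the pre-whitening mainly affects the significance test, while variants that estimate the slope on the pre-whitened series rescale it by 1/(1 - r1), which would indeed make the corrected slope larger than the raw pre-whitened estimate when r1 is positive.

```python
import numpy as np
from scipy.stats import norm

def sen_slope(x):
    """Median of pairwise slopes (Theil-Sen estimator)."""
    n = len(x)
    i, j = np.triu_indices(n, k=1)
    return np.median((x[j] - x[i]) / (j - i))

def mann_kendall_p(x):
    """Two-sided p-value of the Mann-Kendall test (no tie correction)."""
    n = len(x)
    s = np.sum(np.sign(x[None, :] - x[:, None])[np.triu_indices(n, k=1)])
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return 2.0 * (1.0 - norm.cdf(abs(z)))

def tfpw_mk(x):
    """Trend-free pre-whitening + Mann-Kendall test (Yue et al., 2002)."""
    t = np.arange(len(x), dtype=float)
    beta = sen_slope(x)                                      # 1. initial Sen's slope
    detrended = x - beta * t                                 # 2. remove the trend
    r1 = np.corrcoef(detrended[:-1], detrended[1:])[0, 1]    # 3. lag-1 autocorrelation
    prewhitened = detrended[1:] - r1 * detrended[:-1]        # 4. remove the AR(1) part
    blended = prewhitened + beta * t[1:]                     # 5. add the trend back
    return beta, r1, mann_kendall_p(blended)                 # 6. MK test on blended series
```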
A more general comment: this paper frames a lot of analyses in terms of statistical significance at the 5% level. This is quite a binary way of thinking that is common in science and something of an arbitrary practice that has become entrenched. One issue which is not really discussed here but affects this analysis (and many other papers) is that the study is not doing a single hypothesis test: there are thousands of them in the paper (every grid point where a significance test is made is a hypothesis test). So it is fine to take the p=0.05 threshold and say we will mask the areas where the odds of seeing a result at least this big, if there is no underlying relation, are 5% or less. But the flip side is the false discovery rate – some of the apparently significant results will belong to those 5%. And due to the high spatiotemporal autocorrelation in aerosol data, some of these will be clustered and so look realistic as opposed to noise or artefacts. The danger is then in overinterpreting something which looks realistic but may be spurious. One of my favourite papers on this topic is Wilks (2016): https://journals.ametsoc.org/view/journals/bams/97/12/bams-d-15-00267.1.xml That paper and studies cited within suggest methods to control the false discovery rate by adjusting the p-value threshold (a minimal sketch follows below), so you can be more confident that apparently significant results are real. Even if that is not done, I think it is important that this aspect is acknowledged and discussed in the paper, as many readers might be statistically unaware. In general I prefer to focus less on p-values and more on uncertainty estimates for e.g. trends – an “insignificant” trend could arise because the likely trend is small and its uncertainty is also fairly small, or because the trend uncertainty is large (in which case we might not have a good estimate of its magnitude or sign). These are quite different situations. For example, in this study the MERRA2 trend maps have a lot more data labelled significant than MODIS – this is expected because MERRA2 is spatiotemporally complete, which means the uncertainty of the trend is lower and confidence in the estimate is generally higher (so there is more statistical power to detect a trend and smaller trends can be seen).
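For concreteness, here is a minimal sketch of the Benjamini-Hochberg threshold used for false-discovery-rate control, as discussed by Wilks (2016), applied to a field of per-grid-cell p-values. The array name p_map is hypothetical, standing in for whatever 2-D field of Mann-Kendall p-values the authors compute:

```python
import numpy as np

def fdr_threshold(p_values, alpha_fdr=0.05):
    """Benjamini-Hochberg threshold for a field of per-grid-cell p-values.

    Returns the largest p-value that may still be declared significant while
    controlling the false discovery rate at alpha_fdr.
    """
    p = np.sort(np.ravel(p_values))
    n = p.size
    below = p <= alpha_fdr * np.arange(1, n + 1) / n
    return p[below].max() if below.any() else 0.0

# Usage: mask the trend map with (p_map <= p_star) instead of (p_map <= 0.05).
# p_map = ...  # 2-D array of per-grid-cell p-values
# p_star = fdr_threshold(p_map, alpha_fdr=0.05)
```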
Citation: https://doi.org/10.5194/acp-2023-19-RC1
RC2: 'Comment on acp-2023-19', Anonymous Referee #2, 18 Mar 2023
General comments
The manuscript by Jin et al. evaluates the distributions, trends, and variabilities of aerosol parameters (concentration, size, and type) using multi-source, long-term aerosol records, including ground-based observations, satellite products, and atmospheric reanalysis, which ensures the quality of the study and its conclusions. More importantly, the geometric mean is used to better describe the lognormal distribution of aerosol, and a TFPW-MK method is introduced to avoid errors in the trend analysis of time-series data due to probable auto-correlation. Generally, the manuscript is well organized and written. However, before it can be considered for publication, there are still some problems that need clarification.
Specific comments
- Line 59: The statement is unclear to me. How can short-term variance help reveal natural aerosol emissions? Please give a brief explanation.
- Line 224: What is the type I error here? Please give a brief explanation.
- Line 296: This study uses the geometric mean of AOD to investigate the temporal trends, which is an essential point of the work. Meanwhile, it appears that the normal (arithmetic) mean is used for the standard deviation calculations. Does this lead to any biases? (See the sketch after this list.)
- Section 4.1 is helpful to understand the quality of different products before looking at the long-term trends. However, some further analysis/discussion would be beneficial for the readers. For example, how much data is removed due to cloud cover and/or unfavorable land cover such as snow? How does this impact the analysis of the long-term trends? Does this lead to more or less data in some areas during the same season?
- Was the AOD higher in spring than in winter due to emissions associated with heating and industry and a lower boundary layer? How much of this is due to cloud cover vs. dust from deserts?
- As another note on aerosol type, the authors report decreasing or increasing dust and sea salt. But why are these decreasing or increasing, given that the emissions of these two aerosol types are natural and not anthropogenic?
- Lines 105-109: Besides the consideration of adjacent pixels, a globally consistent assumption is also very important for obtaining continuous aerosol distributions in retrievals.
- Section 2.2: Please clarify the estimated uncertainties of the different parameters obtained from the CE-318 observations.
- How different would the results be if a normal distribution were used in the long-term aerosol analysis instead of a lognormal distribution?
- Line 280: What do the ‘pixel level’ and ‘region level’ refer to? Please clarify.
- Figure 10: Is the aggregation method for the aerosol percentages also the geometric mean?
- Table 2: The sea salt (SS) aerosol in SEA shows a slight decrease, but this is not reflected in the size-segregated AOD.
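To make the geometric-mean questions above concrete (see also the comment on Line 296), here is a minimal sketch with synthetic lognormal data (not the paper's data) contrasting arithmetic and geometric measures of central tendency and spread:

```python
import numpy as np

# Synthetic lognormal "AOD" sample, for illustration only.
rng = np.random.default_rng(1)
aod = rng.lognormal(mean=np.log(0.2), sigma=0.7, size=10000)

arith_mean = aod.mean()
arith_std = aod.std()

# Geometric statistics: exponentials of the mean/std of log(AOD).
geo_mean = np.exp(np.log(aod).mean())
geo_std = np.exp(np.log(aod).std())   # dimensionless multiplicative factor

print(f"arithmetic mean ± std  : {arith_mean:.3f} ± {arith_std:.3f}")
print(f"geometric mean ×/÷ GSD : {geo_mean:.3f} ×/÷ {geo_std:.3f}")
# For skewed data the arithmetic mean sits above the median (close to the
# geometric mean), and an additive standard deviation misrepresents the
# asymmetric, multiplicative spread.
```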
Citation: https://doi.org/10.5194/acp-2023-19-RC2