the Creative Commons Attribution 4.0 License.
Satellite-based evaluation of AeroCom model bias in biomass burning regions
Nick Schutgens
Guido van der Werf
Twan van Noije
Kostas Tsigaridis
Susanne E. Bauer
Tero Mielonen
Alf Kirkevåg
Øyvind Seland
Harri Kokkola
Ramiro Checa-Garcia
David Neubauer
Zak Kipling
Hitoshi Matsui
Paul Ginoux
Toshihiko Takemura
Philippe Le Sager
Samuel Rémy
Huisheng Bian
Mian Chin
Kai Zhang
Jialei Zhu
Svetlana G. Tsyro
Gabriele Curci
Anna Protonotariou
Ben Johnson
Joyce E. Penner
Nicolas Bellouin
Ragnhild B. Skeie
Gunnar Myhre
Download
- Final revised paper (published on 31 Aug 2022)
- Supplement to the final revised paper
- Preprint (discussion started on 28 Feb 2022)
- Supplement to the preprint
Interactive discussion
Status: closed
- RC1: 'Comment on acp-2022-96', Anonymous Referee #1, 30 Mar 2022
Review of Zhong et al., Satellite-based evaluation of AeroCom model bias in biomass burning regions

This paper presents an evaluation of AeroCom model aerosol optical properties in regions strongly influenced by biomass burning. In line with previous research, large biases are found. Several satellite products are used, and a valuable intercomparison of them is included. Furthermore, a useful disentangling of the biases associated with emissions and with lifetime is presented. The paper is well written and has the potential to be an important contribution to ACP. I have a number of minor comments which should be addressed before the paper is published.
Minor comments
Models and variables section: More historical context on when the simulations were run and what the differences in model versions are between the experiments would be useful here. Were the model versions the same, or did the models change between the BBE, 2016 and 2019 experiments? I don’t think you can expect your readers to be familiar with AeroCom protocols or to go through other AeroCom papers or the Excel sheet supplement, though of course all the details of specific changes from one experiment to the next do not need to be repeated here.
Even though the size distribution of the model output is not available, the size distributions of the simulated BB emissions inputs are mentioned in the Appendix Table, so it should be possible to infer the impact of these size distributions on lifetime and AOD to some extent. It would be useful to try to do this, and it seems odd to have such a long discussion on hygroscopicity when size is probably more important.
Why does the NMB for BBE5 reach up to 19? Isn’t it a bit surprising that it ever exceeds 7.5, given BBE1 has a maximum NMB of 1.5? Is this a linear increase (line 372)?
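For readers unfamiliar with the metric, the normalized mean bias (NMB) discussed in this comment is conventionally defined as the summed model-minus-observation difference divided by the summed observations; the sketch below assumes this standard definition, which may differ in detail from the paper's:

```python
import numpy as np

def nmb(model, obs):
    """Normalized mean bias: sum(model - obs) / sum(obs).

    NMB = 0 means no bias; NMB = 1.5 means the model overestimates
    the observed mean by 150 %.
    """
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return (model - obs).sum() / obs.sum()

# A model that is uniformly 2.5x the observations has NMB = 1.5
print(nmb([2.5, 5.0], [1.0, 2.0]))  # 1.5
```

Under this definition, scaling emissions by a factor of five would scale a linear-response NMB of 1.5 to at most 7.5, which is the arithmetic behind the question above.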
Figure 5: I find this figure hard to extract much meaning from – a great deal of the information is lost by just showing charts of the correlations. I did not understand the value of a correlation between spatial correlation and temporal correlation. I think it would be better to have AOD vs time line plots for POLDER and for all of the models, with one subfigure for each region (or similar). Then we could see which part of the season the biases are most apparent in, and where the biases are in the regions. It is surprising the spatial correlation can be so low for some models (GISS and INCA) – perhaps a scatter plot would be useful here of simulated AOD vs POLDER AOD for these models?
Also, why are the results in subfigures a, b and c so different? What differed between the three experiments to cause this? You comment in the text that the figures are pretty similar, but they look quite different to me.
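To make the spatial/temporal correlation distinction referenced above concrete, a minimal sketch is given below. It assumes gridded model and observed AOD fields shaped (time, gridcell); the array layout and function name are illustrative, not taken from the paper:

```python
import numpy as np

def spatial_temporal_r(model, obs):
    """model, obs: 2-D arrays shaped (time, gridcell).

    Spatial r:  Pearson correlation of the time-mean fields
                across grid cells (does the model put AOD in
                the right places?).
    Temporal r: Pearson correlation of the area-mean time
                series (does the model get the seasonality?).
    """
    spatial_r = np.corrcoef(model.mean(axis=0), obs.mean(axis=0))[0, 1]
    temporal_r = np.corrcoef(model.mean(axis=1), obs.mean(axis=1))[0, 1]
    return spatial_r, temporal_r
```

A model can score well on one measure and poorly on the other, which is why a scatter of simulated vs. POLDER AOD (as suggested for GISS and INCA) would reveal more than the correlation summaries alone.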
L381-390 this is a nice analysis, should be very useful.
What is the real distinction between section 4 and section 5, before section 5.1? The sections may need more thought.
L450-465: The interesting part here is not so much the negative correlation, which is presumably coded into the models by their parameterizations of Mie theory, but why the models deviate from the Mie curve, presumably due to the mixing of several broad size distributions.
L541 I did not see a discussion of the clear-sky assumption in the appendix, and the references given there are mostly generic model description papers, so it would take the reader unfeasibly long to reconstruct what difference the authors are referring to. Please clarify.
Technical corrections
Abstract: "comprise" at line 60 is the wrong word
L240 “proposed” is an odd word here.
L240-270 the paragraph is much too long and should be split up, with clearly defined topics introduced in the first sentence of each paragraph. That said, the paragraph from 271 to 274 does not have its own topic and seems to belong with the previous text.
L256 improve sentence
L520 not clear what ‘thoroughly’ means
Citation: https://doi.org/10.5194/acp-2022-96-RC1
- AC1: 'Reply on RC1', Qirui Zhong, 29 May 2022
- RC2: 'Comment on acp-2022-96', Anonymous Referee #2, 11 May 2022
Dear Authors,
Thank you for this exhaustive and well-described analysis of the factors governing uncertainty in simulation of atmospheric aerosols in regions affected by biomass burning. This is a problem of long-standing concern in the atmospheric composition community, and your study provides valuable information on the commonalities and differences of the atmospheric simulation models currently in use.
I have only minor recommendations for revisions. I encourage you to also attend closely to the revisions requested by the other reviewers.
Line 226 The Schutgens (2020) paper makes a number of interesting assertions about the potential effects of cloud contamination, but I do not see the suggestion there that southern hemisphere Africa during the burning season is subject to high cloud contamination. That is not consistent with other literature either. I would examine other explanations such as the extent of arid areas in southern Africa where satellite retrieval is more difficult.
Line 377: “For the aerosol lifetime and MEC which were mainly affected by other model aspects than emissions, there was no significant difference found among the three fire regions for the same model.” Are you saying that the models used each had uniform MEC among the three regions? Are you saying that the models did not have varying lifetimes for the three regions? Either of these findings is quite significant, as they represent model assumptions and outcomes that can be compared to observations.
Line 118: “regarding to knowing issues for BBA models for more than ten years” I would update this sentence and expand it to clarify that BBA has been acknowledged as a large source of uncertainty in atmospheric aerosol for a very long time (e.g. the AeroCom phase II paper from 2013: https://acp.copernicus.org/articles/13/1853/2013/, or before that the 2005 review by Kanakidou: https://acp.copernicus.org/articles/5/1053/2005/, or before that the 1992 Science paper by Joyce Penner: https://www.science.org/doi/abs/10.1126/science.256.5062.1432), and that this study was undertaken to examine uncertainties and variation in current state-of-the-art modeling systems.
Line 135 “in multi” => “in multiple”
Line 191 “To avoid sampling issues” => “To mitigate sampling issues associated with varying coverage of the observational data sources”
Line 256: “impacts of different were” => “impacts of verifying against different satellite data products were”
Line 270: “for the whole research” => “for the whole analysis.”
Line 421: “positive correlation” is this actually a positive correlation? Your figure shows a positive correlation between precip and deposition load.
Line 520: “thoroughly” choose a different word—perhaps you mean “uniformly?”
Citation: https://doi.org/10.5194/acp-2022-96-RC2
- AC2: 'Reply on RC2', Qirui Zhong, 29 May 2022