Satellite-based evaluation of AeroCom model bias in biomass burning regions
- 1Department of Earth Sciences, Vrije Universiteit, Amsterdam, The Netherlands
- 2Royal Netherlands Meteorological Institute, De Bilt, the Netherlands
- 3Center for Climate Systems Research, Columbia University, 2880 Broadway, New York, NY 10025, USA
- 4NASA Goddard Institute for Space Studies, 2880 Broadway, New York, NY 10025, USA
- 5Finnish Meteorological Institute, Kuopio, Finland
- 6Norwegian Meteorological Institute, Oslo, Norway
- 7Laboratoire des Sciences du Climat et de l'Environnement, IPSL, Gif-sur-Yvette, France
- 8Institute for Atmospheric and Climate Science, ETH Zurich, Zurich, Switzerland
- 9European Centre for Medium-Range Weather Forecasts, Reading, UK
- 10Graduate School of Environmental Studies, Nagoya University, Nagoya, Japan
- 11NOAA, Geophysical Fluid Dynamics Laboratory, Princeton, NJ, USA
- 12Research Institute for Applied Mechanics, Kyushu University, Fukuoka, Japan
- 13HYGEOS, Lille, France
- 14University of Maryland, Baltimore County (UMBC), Baltimore, MD, USA
- 15NASA Goddard Space Flight Center, Greenbelt, MD, USA
- 16Pacific Northwest National Laboratory, Richland, WA, USA
- 17Institute of Surface-Earth System Science, School of Earth System Science, Tianjin University, Tianjin 300072, China
- 18Department of Physical and Chemical Sciences, University of L’Aquila, L’Aquila, Italy
- 19Center of Excellence in Telesensing of Environment and Model Prediction of Severe Events (CETEMPS), University of L’Aquila, L’Aquila (AQ), Italy
- 20Department of Physics, University of Athens, Athens, Greece
- 21Met Office, Exeter, UK
- 22Department of Climate and Space Sciences and Engineering, University of Michigan, Ann Arbor, MI, USA
- 23Department of Meteorology, University of Reading, Reading, UK
- 24Center for International Climate and Environmental Research-Oslo (CICERO), Oslo, Norway
Abstract. Global models are widely used to simulate biomass burning aerosols (BBA). Thorough evaluations of how models represent aerosol distributions and properties are fundamental to assessing the health and climate impacts of BBA. Here we conducted a comprehensive comparison of Aerosol Comparisons between Observations and Models (AeroCom) project simulations with satellite observations. A total of 59 runs by 18 models from three AeroCom Phase III experiments (i.e., Biomass Burning Emissions, CTRL16, and CTRL19) and 14 satellite aerosol products were used in the study. Aerosol optical depth (AOD) at 550 nm was investigated during the fire season over three key fire regions reflecting different fire dynamics: the deforestation-dominated Amazon, Southern Hemisphere Africa, where savannas are the key source of emissions, and boreal North America, where boreal forest burning dominates. The 14 satellite products were first evaluated against AErosol RObotic NETwork (AERONET) observations, revealing large uncertainties; these uncertainties, however, had only a small impact on the model evaluation, which was dominated by modeling bias. Through a comparison with Polarization and Directionality of the Earth’s Reflectances (POLDER-GRASP) observations, we found that the modeled AOD values were biased by -93 % to 152 %, with most models showing significant underestimations, even for state-of-the-art aerosol modeling techniques (i.e., CTRL19). Scaling up BBA emissions significantly mitigated the negative biases in modeled AOD, although it yielded only negligible improvements in the correlation between models and observations, and the spatial and temporal variations of the AOD biases changed little. For the models in CTRL16 and CTRL19, the large diversity in modeled AOD was caused in almost equal measure by diversity in emissions, lifetime, and mass extinction coefficient (MEC).
We found that in the AeroCom ensemble, BBA lifetime correlated significantly with particle deposition (as expected), which in turn correlated strongly with precipitation. Additional analysis based on Cloud-Aerosol Lidar with Orthogonal Polarization (CALIOP) aerosol profiles suggested that the altitude of the aerosol layer in the current models was generally too low, which also contributed to the bias in modeled lifetime. Modeled MECs exhibited significant correlations with the Ångström exponent (AE, an indicator of particle size). Comparisons with the POLDER-GRASP-observed AE suggested that the models tended to overestimate AE (i.e., underestimate particle size), indicating a possible underestimation of MECs in the models. The hygroscopic growth in most models generally agreed with observations and might not explain the overall underestimation of modeled AOD. Our results imply that current global models contain biases in important aerosol processes for BBA (e.g., emissions, removal, and optical properties) that remain to be addressed in future research.
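As a minimal illustration of the two headline metrics used in this kind of evaluation, the sketch below computes a normalized mean bias (NMB) between modeled and observed AOD, and the Ångström exponent (AE) from AOD at two wavelengths. All array values and wavelengths here are invented example numbers, not data from the study.

```python
import numpy as np

def normalized_mean_bias(model, obs):
    """NMB = (sum(model) - sum(obs)) / sum(obs); negative means underestimation."""
    model = np.asarray(model, dtype=float)
    obs = np.asarray(obs, dtype=float)
    return (model.sum() - obs.sum()) / obs.sum()

def angstrom_exponent(aod_1, aod_2, wl_1, wl_2):
    """AE = -ln(AOD1/AOD2) / ln(wl1/wl2); larger AE indicates smaller particles."""
    return -np.log(aod_1 / aod_2) / np.log(wl_1 / wl_2)

# Invented example values: collocated fire-season mean AOD at 550 nm.
obs_aod = np.array([0.82, 1.10, 0.47, 0.65])
mod_aod = np.array([0.41, 0.58, 0.30, 0.33])

nmb = normalized_mean_bias(mod_aod, obs_aod)      # negative: model underestimates
ae = angstrom_exponent(0.60, 0.30, 440.0, 870.0)  # AE from AOD at 440 and 870 nm
```

The diversity decomposition mentioned in the abstract follows the same spirit: column AOD can be factored as emission flux × lifetime × MEC, so a proportional bias in any one factor propagates directly into AOD.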
Notice on discussion status
The requested preprint has a corresponding peer-reviewed final revised paper. You are encouraged to refer to the final revised version.
- Preprint (4690 KB)
- Supplement (21 KB)
Journal article(s) based on this preprint
Qirui Zhong et al.
Interactive discussion
Status: closed
-
RC1: 'Comment on acp-2022-96', Anonymous Referee #1, 30 Mar 2022
Review of Zhong et al., Satellite-based evaluation of AeroCom model bias in biomass burning regions
This paper presents an evaluation of AeroCom model aerosol optical properties in regions strongly influenced by biomass burning. In line with previous research, large biases are found. Diverse satellite products are used and a valuable comparison of satellite products is included. Furthermore, a useful disentangling of the biases associated with emissions and with lifetime is presented. The paper is well written and has potential to be an important contribution to ACP. I have a number of minor comments which should be addressed before the paper is published.
Minor comments
Models and variables section: More historical context on when the simulations were run and what the differences in model versions are between the experiments would be useful here. Were the model versions the same, or did the models change between the BBE, 2016 and 2019 experiments? I don’t think you can expect your readers to be familiar with AeroCom protocols or to go through other AeroCom papers or the Excel sheet supplement, though of course all the details of specific changes from one experiment to the next do not need to be repeated here.
Even though the size distribution of the model output is not available, the size distributions of the simulated BB emissions inputs are mentioned in the Appendix Table, so it should be possible to infer the impact of these size distributions on lifetime and AOD to some extent. It would be useful to try to do this, and it seems odd to have such a long discussion on hygroscopicity when size is probably more important.
Why does the NMB for BBE5 reach up to 19? Isn’t it a bit surprising that it ever exceeds 7.5, given BBE1 has a maximum NMB of 1.5? Is this a linear increase (line 372)?
Figure 5: I find this figure hard to extract much meaning from – a great deal of the information is lost by just showing charts of the correlations. I did not understand the value of a correlation between spatial correlation and temporal correlation. I think it would be better to have AOD vs time line plots for POLDER and for all of the models, with one subfigure for each region (or similar). Then we could see which part of the season the biases are most apparent in, and where the biases are in the regions. It is surprising the spatial correlation can be so low for some models (GISS and INCA) – perhaps a scatter plot would be useful here of simulated AOD vs POLDER AOD for these models?
Also, why are the results in subfigures a, b and c so different? What differed between the three experiments to cause this? You comment in the text that the figures are pretty similar, but they look quite different to me.
L381-390 this is a nice analysis, should be very useful.
What is the real distinction between section 4 and section 5, before section 5.1? The sections may need more thought.
L450-465: The interesting part here is not so much the negative correlation, which is presumably coded into the models by their parameterizations of Mie theory, but why the models deviate from the Mie curve- presumably due to the mixing of several broad size distributions.
L541 I did not see a discussion of the clear-sky assumption in the appendix, and the references given there are mostly generic model description papers, so it would take the reader unfeasibly long to reconstruct what difference the authors are referring to, so please clarify.
Technical corrections
Abstract: "comprise" at line 60 is the wrong word
L240 “proposed” is an odd word here.
L240-270 the paragraph is much too long and should be split up, with clearly defined topics introduced in the first sentence of each paragraph. That said, the paragraph from 271 to 274 does not have its own topic and seems to belong with the previous text.
L256 improve sentence
L520 not clear what ‘thoroughly’ means
- AC1: 'Reply on RC1', Qirui Zhong, 29 May 2022
-
RC2: 'Comment on acp-2022-96', Anonymous Referee #2, 11 May 2022
Dear Authors,
Thank you for this exhaustive and well-described analysis of the factors governing uncertainty in simulation of atmospheric aerosols in regions affected by biomass burning. This is a problem of long-standing concern in the atmospheric composition community, and your study provides valuable information on the commonalities and differences of the atmospheric simulation models currently in use.
I have only minor recommendations for revisions. I encourage you to also attend closely to the revisions requested by the other reviewers.
Line 226 The Schutgens (2020) paper makes a number of interesting assertions about the potential effects of cloud contamination, but I do not see the suggestion there that southern hemisphere Africa during the burning season is subject to high cloud contamination. That is not consistent with other literature either. I would examine other explanations such as the extent of arid areas in southern Africa where satellite retrieval is more difficult.
Line 377: “For the aerosol lifetime and MEC which were mainly affected by other model aspects than emissions, there was no significant difference found among the three fire regions for the same model.” Are you saying that the models used each had uniform MEC among the three regions? Are you saying that the models did not have varying lifetimes for the three regions? Either of these findings is quite significant, as they represent model assumptions and outcomes that can be compared to observations.
Line 118: “regarding to knowing issues for BBA models for more than ten years” I would update this sentence and expand to clarify that BBA has been acknowledged as a large source of uncertainty in atmospheric aerosol for a very long time (e.g. AeroCom phase II paper from 2013: https://acp.copernicus.org/articles/13/1853/2013/, or before that this 2005 review by Kanakidou https://acp.copernicus.org/articles/5/1053/2005/, or before that this 1992 Science paper by Joyce Penner https://www.science.org/doi/abs/10.1126/science.256.5062.1432), and this study was undertaken to examine uncertainties and variation in current state-of-the-art modeling systems.
Line 135 “in multi” => “in multiple”
Line 191 “To avoid sampling issues” => “To mitigate sampling issues associated with varying coverage of the observational data sources”
Line 256: “impacts of different were” => “impacts of verifying against different satellite data products were”
Line 270: “for the whole research” => “for the whole analysis.”
Line 421: “positive correlation” is this actually a positive correlation? Your figure shows a positive correlation between precip and deposition load.
Line 520: “thoroughly” choose a different word—perhaps you mean “uniformly?”
- AC2: 'Reply on RC2', Qirui Zhong, 29 May 2022
Peer review completion


Viewed

| HTML | PDF | XML | Total | Supplement | BibTeX | EndNote |
|---|---|---|---|---|---|---|
| 595 | 196 | 18 | 809 | 36 | 9 | 13 |