Modeled and observed properties related to the direct aerosol radiative effect of biomass burning aerosol over the southeastern Atlantic
Sarah J. Doherty
Pablo E. Saide
Paquita Zuidema
Yohei Shinozuka
Gonzalo A. Ferrada
Hamish Gordon
Marc Mallet
Kerry Meyer
David Painemal
Steven G. Howell
Steffen Freitag
Amie Dobracki
James R. Podolske
Sharon P. Burton
Richard A. Ferrare
Calvin Howes
Pierre Nabat
Gregory R. Carmichael
Arlindo da Silva
Kristina Pistone
Ian Chang
Robert Wood
Jens Redemann
Interactive discussion
Status: closed
- RC1: 'Comment on acp-2021-333', Anonymous Referee #1, 15 Jun 2021
Review for “Modeled and observed properties related to the direct aerosol radiative effect of biomass burning aerosol over the Southeast Atlantic” by Doherty et al.
This study presents a thorough comparison between a number of modelling frameworks and in situ observations made during the ORACLES field campaign that took place over the southeast Atlantic Ocean during three month-long periods in three consecutive years. The study focuses on parameters and quantities that are used in quantifying the direct radiative effect from biomass burning aerosol. A first order calculation of the direct effect using observations and comparisons to the models is presented. The study helps identify key failings in the ability of the models to reproduce the observations, which will be very useful for focusing future studies.
Although the manuscript is considerably long, I believe the in-depth presentation of methods and the authors’ treatment of uncertainties is, on the whole, necessary. However, I believe some figures and passages of text can be revised to improve the flow of the study and to highlight the key messages being presented throughout. I therefore recommend this manuscript be published in ACP once the, largely minor, comments below are addressed.
Main comments:
My main concern is the length of the manuscript. On reading it I often found that the key messages in each section or paragraph were not as clear as they could be. For shorter manuscripts this would be fine, but given the length of this manuscript I strongly suggest the authors rethink some aspects. The figures are very large and I found myself endlessly scrolling through them. I suggest replotting the figures to make them smaller and more compact; for instance, try to combine Figures 7-9, and likewise Figures 10-13, etc. The second aspect that can be improved is the construction of the paragraphs. The manuscript occasionally contains sections consisting of short paragraphs (for example, see lines 1560 to 1580 and 1600 to 1625) which break the flow of the manuscript and make it difficult to identify the key messages. To make the manuscript more readable I suggest the authors go through it and make sure the key messages of each section are clearly delivered. Some paragraphs can be reduced in length, with just the key result put forward; the addition of exact values and consideration of individual grid boxes sometimes made for difficult reading.
I don’t feel that Section 4.1 brings much to the manuscript. I believe the methodology is somewhat flawed and the outcomes are not used in the rest of the manuscript. Using the models to provide the test of representativeness is entirely dependent on the ability of the models to reproduce the observations, which, as shown later in the manuscript, isn’t great. This is further illustrated by the lack of consistency between the two models used. At the end of the section there is no discussion of what the results actually tell us and how they influence the rest of the manuscript. Even in the summary it is difficult to understand what the outcomes of the analysis are. Does this tell us anything that Shinozuka et al. (2020) does not? Does the analysis mean that the subsequent comparison is not representative of the climatology? I suggest either removing the section or making the outcomes clearer.
The DARE section is the highlight of the paper. In its current form it feels like a second paper that was appended to the manuscript. The authors may wish to consider moving the summary section to the end and integrating the DARE section into the body of the manuscript.
Minor comments:
Line 61. This one sentence hides a lot of the importance of, and uncertainties in, ARI and ACI. Maybe expand to give a better account of why understanding ARI and ACI is important?
Line 243. How sensitive is the weighting function to the chosen value for the standard deviation?
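As a concrete illustration of the sensitivity check being asked for, the sketch below assumes, purely hypothetically, that model grid cells are averaged with a Gaussian weight in distance from each aircraft sample point; the weighting form, distances, and sigma values are illustrative only and are not the authors’ actual implementation.

```python
import numpy as np

def gaussian_weighted_mean(values, distances_km, sigma_km):
    """Gaussian-weighted average of model grid-cell values around a sample point."""
    weights = np.exp(-0.5 * (distances_km / sigma_km) ** 2)
    return np.sum(weights * values) / np.sum(weights)

# Hypothetical grid-cell offsets and a field with a weak spatial gradient plus noise
rng = np.random.default_rng(0)
distances = rng.uniform(0.0, 200.0, size=500)                 # km from the sample point
field = 1.0 + 0.005 * distances + rng.normal(0.0, 0.1, 500)   # arbitrary units

# How much does the weighted mean move as the standard deviation is varied?
for sigma in (25.0, 50.0, 100.0):                             # km
    mean = gaussian_weighted_mean(field, distances, sigma)
    print(f"sigma = {sigma:6.1f} km -> weighted mean = {mean:.3f}")
```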
Line 250. Please provide a characteristic size for the accumulation mode.
Line 283. Please clarify what the ‘original PSAP instrument’ is referring to.
Line 305. It may be useful to include a sentence that sums up what satellite product you end up assigning to each cloud property variable.
Line 388. Can you include a concluding sentence that answers the question at the beginning of this paragraph?
Line 485. The figure shows ratios and the text discusses percentages. Please change one of them to make it consistent.
Line 508. Doesn’t Figure 3f show a bias up to +200% at one level? Or are you talking about the column mean?
Line 700. Is this consistent with Shinozuka’s conclusions for representativeness?
Line 940. Do the models also show weak wet deposition?
Line 956-958. Are you discussing the characteristics of the observations or comparing against the models?
Line 958. “a broader vertical extent towards the core of the plume”. This doesn’t make sense to me.
Line 963. There doesn’t seem to be much consistency in the ‘over-estimation above and below the plume centre’ for WRF-CAM5 in Figure 7 or in Table 1.
Line 965. I’m surprised you don’t point out the substantial and largely consistent overestimation at lower altitudes for GEOS.
Line 1136. Spelling mistake: humidification
Line 1144. Do you have a sense of how sensitive your results would be to gamma?
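To make the question concrete, here is a minimal sketch of a gamma-based humidification calculation, assuming the common form f(RH) = ((1 − RH_ref)/(1 − RH))^γ for scattering with absorption held fixed; the dry scattering and absorption values, RH values, and γ range below are illustrative only and are not taken from the manuscript.

```python
def f_rh(rh, rh_ref, gamma):
    """Scattering humidification factor in the gamma parameterization (RH as fractions)."""
    return ((1.0 - rh_ref) / (1.0 - rh)) ** gamma

scat_dry = 50.0   # Mm^-1, scattering at the (low) instrument RH -- hypothetical value
abs_dry = 8.0     # Mm^-1, absorption, assumed unchanged by humidification
rh_ref, rh_amb = 0.20, 0.80

# Vary gamma and see how the inferred ambient scattering and SSA respond
for gamma in (0.2, 0.4, 0.6):
    scat_amb = scat_dry * f_rh(rh_amb, rh_ref, gamma)
    ssa_amb = scat_amb / (scat_amb + abs_dry)
    print(f"gamma = {gamma:.1f}: f(RH) = {f_rh(rh_amb, rh_ref, gamma):.2f}, "
          f"ambient SSA = {ssa_amb:.3f}")
```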
Line 1152. Please can you include the calculated ranges?
Line 1173. Having subsections for each model would be beneficial for the reader.
Line 1173. The WRF-CAM5 section is difficult to follow, but the other models are better. I suggest the authors look at this model section and try to improve the clarity of it.
Figure 10. It would be useful to have a title above both columns to easily differentiate the two without having to refer to the caption.
Line 1224. ‘closer to or greater than 1.0’: this is not consistent, though.
Figure 11. ‘as in figure 8, but for the GEOS model’ I don’t think the cross ref is correct. There are also other instances of incorrectly cross-referenced figures so please check all captions.
Line 1520. ‘SSA is consistently higher than that observed’ but aren’t observations for low RH? If so, isn’t ‘consistently higher than observed’ actually good?
Line 1523. What trends are being referred to here?
Figure 15. The observations bar makes it seem like you are comparing like for like, but it may be more appropriate to make the observations bar unfilled, since it is for low RH and therefore more comparable to the UM-UKCA unfilled bar?
Line 1598. But the UM-UKCA dry vs ambient SSA are very different – doesn’t this go against this statement? Does it suggest the model is completely wrong?
Line 1614. Any ideas why this is occurring?
Line 1713. So the models are accidentally correct because they don’t include absorption by brown carbon. Do the models provide information on brown carbon content?
Line 1965 (and 1988). Do you mean QFED2 rather than GFED?
Line 1972. Could differences in model dynamics lead to discrepancies?
Line 2017. Has this been reported in previous literature?
Line 2021. ‘the plume top terminating at lower altitude in the observations than in some of the simulations’ I thought the HSRL data showed that this wasn’t actually the case and that the aerosol was indeed present at these higher altitudes? (see line 1236)
Line 2093. ‘8-11 for all four’: is this the range or the difference? Please clarify.
Line 2008. This estimation of the direct effect must implicitly assume that there are no rapid adjustments that may have influenced the underlying cloud field. Do you assume that all retrievals used are consistent with a sufficiently separated smoke-cloud scene? I think adding a sentence to clarify the assumption would be useful.
Citation: https://doi.org/10.5194/acp-2021-333-RC1
- RC2: 'Comment on acp-2021-333', Anonymous Referee #2, 02 Aug 2021
This comprehensive study utilizes aircraft observations from the NASA ORACLES campaign during three biomass burning seasons to analyze differences in modeled properties of aerosol plumes over the Southeast Atlantic. The modeling comparison to in situ observations was conducted using two regional models and two global models for the same temporal periods and specified aircraft transects. The work further extends insights into the importance of adequately modeling aerosol, cloud, and optical properties by demonstrating the propagation of biases in parameterized and simulated direct aerosol radiative effect (DARE) which leads to a range of largely positive and marginally negative values. Approaches and findings presented here are compelling and point to modeled parameters that require tuning for improved simulation of biomass burning aerosol on regional and global climate. This manuscript is overall well-written, though ordering and structure of the results sections contribute to some lack of clarity in the manuscript. Nevertheless, this work is worthy of publication in ACP if the results can be presented in such a way that they support the conclusions.
General Comments
- The problem with the presentation of these results is that the authors have not provided calculations with quantitative results or consolidated figures that support their main findings (plumes too diffuse, underestimates in plume properties, COT well simulated). The result is a meandering collection of multi-page graphs that the reader is expected to visually integrate and compare in order to reach the cryptic qualitative assessments described in the results. The authors need to provide summary figures that support their summary statements, preferably with numbers to describe them rather than qualitative descriptors like “low” and “high”. I am not disputing any of these results, but I find it a disservice to readers to not provide them with figures that actually support, in a condensed way, the very generalized conclusions reached. Almost all of the existing figure panels belong in the SI, as they are not discussed in the text.
- The next big concern for this submission is the presentation of DARE as one of the leading messages of the manuscript. The section as a whole is well written and its findings sound, though its current position, following the summary of the modeling comparisons, is awkward and reads as an addendum to the manuscript. Given the combined impact of aerosol-cloud-optical properties on DARE (the main assessment of this work), a more appropriate placement for this major section would be after the discussion of cloud fraction and optical thickness biases (Section 4.4) and before the summary and concluding remarks (Section 5), which is currently neither a summary (as it is long and winding) nor concluding (as it is followed by DARE). Many passages within Section 4.4 and Section 5 repetitively allude to findings that are provided later in the DARE section (Section 6) and add to the already exhaustive length of the manuscript. Removing these passages or placing them within the context of a reordered DARE section would improve the flow from the significance of aerosol-cloud-optical properties to climate impacts.
Minor Corrections:
Line 102-113: Other than the supplementary modeling/forecasting information provided by UM-UKCA and ALADIN during their respective SE Atlantic campaigns, are there further reasons for using these models? A single regional and global model comparison to observations is already a considerable effort. Are these additional models, which in some cases lack some of the variable fields necessary for the comparison, used only to expand the comparison discussion? A brief reference to this methodology choice would provide better perspective and support for the length of this manuscript.
Line 250: Provide the typical size range of accumulation-mode aerosol, particularly as it pertains to biomass burning aerosol. Redemann et al. (2021) [1] provides support for this claim.
Figure 3: Average symbols (black circles) are not clearly identifiable in these panels. Symbols with a bolder weight would improve the presentation and clarity of these figures. This should also be addressed in Figures 7-13 and subsequent supplementary figures (e.g. Figure S.4). This figure (and several others) was paginated as 3 separate pages, making it pretty unwieldy to review as a single figure. The authors should consider a format with less wasted space and higher density of information. Also, the scatter plot approach makes it challenging to see vertical trends for each color; I recommend adding a connecting line.
Figures 4-14: As with Figure 3, please rearrange, shrink, or break these up into figures that each fit on a single page, with the relevant axis labels and legends on the same page, and provide captions written upright.
Figure 15: This figure seems superfluous in the main text of the manuscript given the inclusion of vertical profiles of SSA in Figure 14. The statistical inter-model comparison of SSA is interesting, but may be better suited for the supplementary text.
Line 658: “Future studies may want to use a synthesis of modeled and satellite-retrieved properties (e.g. of AOD) for a more robust analysis of sampling representativeness.” Is there a reason this is not done here, i.e. at a minimum a comparison to the avg+SD of the models shown? If the models are too disparate to make this meaningful, then why use them?
Line 967: “UM-UKCA has a significant low bias in BC at plume altitudes in 2016 and a smaller low bias in 2017 (Figure 9).” This statement (and many others like it) is difficult to support from the figure noted (Figure 9). This reflects two problems: (1) there are no multi-flight-type results presented from which to infer the yearly differences noted, suggesting perhaps that one should be able to eyeball this to see and quantify the low bias; and (2) there is no metric or test presented for the significance of the 2016 result vs. 2017. Both of these issues are present throughout the manuscript, with figures showing individual results but text describing trends amalgamated over several graphs.
Of course there are many ways to consolidate these results (by region, by altitude, etc.), and the way chosen will of necessity be limited to the particular aim of this work. But the failure to present any such consolidation results in a mismatch between the text and the figures. A much more useful paper would present the consolidated results (for example, along the lines of the sketch below) and move the detailed figures and tables to the SI.
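For illustration only, a minimal sketch of the kind of consolidation being requested, reducing co-located profiles to a few numbers per altitude bin (mean bias and normalized mean bias); the variable names, bin edges, and synthetic data are hypothetical and stand in for quantities such as observed and modeled BC mass, not for the manuscript’s actual analysis.

```python
import numpy as np

def binned_bias(alt_km, obs, mod, edges_km):
    """Mean bias and normalized mean bias (%) of model vs. observations per altitude bin."""
    rows = []
    for lo, hi in zip(edges_km[:-1], edges_km[1:]):
        m = (alt_km >= lo) & (alt_km < hi) & np.isfinite(obs) & np.isfinite(mod)
        if not m.any():
            continue
        bias = np.mean(mod[m] - obs[m])
        nmb = 100.0 * np.sum(mod[m] - obs[m]) / np.sum(obs[m])
        rows.append((lo, hi, int(m.sum()), bias, nmb))
    return rows

# Synthetic co-located profiles standing in for one model, year, and transect
rng = np.random.default_rng(1)
alt = rng.uniform(0.0, 6.0, 2000)                      # km
obs = np.exp(-0.5 * ((alt - 3.5) / 1.0) ** 2)          # idealized plume shape
mod = 0.7 * obs + rng.normal(0.0, 0.05, alt.size)      # a model with roughly a 30% low bias

for lo, hi, n, bias, nmb in binned_bias(alt, obs, mod, np.arange(0.0, 7.0, 1.0)):
    print(f"{lo:.0f}-{hi:.0f} km (n={n:4d}): bias = {bias:+.3f}, NMB = {nmb:+6.1f}%")
```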
Line 1123: This is super interesting; how does it compare to past comparisons, e.g. Heald et al. 2008(?) or similar?
Line 1136: Correct “humification” to “humidification”.
Line 1233: “Except in the 2018 Meridional2 transect, WRF-CAM5 σep is generally 30-40% lower than measured by HSRL-2.” As for Line 967, where is this shown?
Line 1309: “The bias also has a less consistent dependence on altitude in 2017 (Figure 11). In 2018, the higher ambient RH could be compensating for some of the low bias in dry aerosol σep.” As for Line 967, where is this shown?
Line 1320: “In the 2016 Diagonal transect this produces significant low model biases for 3-5 km altitude and high biases for 2-3 km and 5-6 km (Figure 12; Table 1), much…” What is “significant low” and what is “high”? Which values or plots are referenced?
Lines 1601-1603: This sentence is hard to follow and should be revisited for clarity. Are the authors trying to say that there is a larger number of small particles (less than 1000 nm dry diameter), as is the case in the biomass burning plume?
Line 2055: Remove “the”
Lines 2201-2202: Suggest changing "to produce values of DAREavg that is a factor of" to "to produce a DAREavg that is a factor of".
References:
[1] Redemann et al. (2021): https://acp.copernicus.org/articles/21/1507/2021/
Citation: https://doi.org/10.5194/acp-2021-333-RC2
- AC1: 'Replies to Reviews of acp-2021-333', Sarah Doherty, 24 Sep 2021