the Creative Commons Attribution 4.0 License.
Characterization of errors in satellite-based HCHO ∕ NO2 tropospheric column ratios with respect to chemistry, column-to-PBL translation, spatial representation, and retrieval uncertainties
Amir H. Souri
Matthew S. Johnson
Glenn M. Wolfe
James H. Crawford
Alan Fried
Armin Wisthaler
William H. Brune
Donald R. Blake
Andrew J. Weinheimer
Tijl Verhoelst
Steven Compernolle
Gaia Pinardi
Corinne Vigouroux
Bavo Langerock
Sungyeon Choi
Lok Lamsal
Shuai Sun
Ronald C. Cohen
Kyung-Eun Min
Changmin Cho
Sajeev Philip
Xiong Liu
Kelly Chance
Download
- Final revised paper (published on 07 Feb 2023)
- Supplement to the final revised paper
- Preprint (discussion started on 15 Aug 2022)
- Supplement to the preprint
Interactive discussion
Status: closed
RC1: 'Comment on acp-2022-410', Anonymous Referee #1, 30 Sep 2022
This manuscript presents a detailed and comprehensive analysis of the use of the HCHO/NO2 ratio as measured by satellites to characterise the photochemical regimes for ozone production. The manuscript focuses on four different aspects: the usefulness of HCHO/NO2 as a proxy, the impact of the vertical distribution, spatial heterogeneity, and the retrieval uncertainties themselves. The analysis draws from a range of model and measured data and makes use of different statistical approaches. The manuscript provides a wealth of information, but it will be most valuable for the specialist community. I recommend publication in Atmos. Chem. Phys. (although it would also fit well into AMT) after consideration of my comments below.
For the different aspects, different methods and different statistical metrics are used. I would like to see some justification of why a specific metric is used, and more detail on the applied methods:
- Altitude dependency (section 3.5)
- Can you please provide some more details on the equation used to compute the first moment of the area (equation 9)? The moment of an area is the integral of distance over area. Also, dz is missing.
- Note that a satellite observes a column, which is given either by the integral of the concentration over altitude or of the mixing ratio over pressure, while here mixing ratios seem to be integrated over height, which is not correct.
- Why is the standard deviation of the ratio of the first moment of the interquartile range a good metric for the uncertainty?
- What is the impact of the altitude sensitivity of the satellite column measurement, as described by the averaging kernel, on the estimated uncertainty?
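To make the reviewer's points about Eq. 9 concrete, here is a minimal sketch (entirely synthetic profile; all values illustrative) of computing a column and its first moment from number density integrated over altitude, with the dz factor the reviewer notes is missing:

```python
import numpy as np

# Synthetic vertical grid and profile (all values illustrative)
z = np.linspace(0.0, 10000.0, 101)      # altitude levels (m)
dz = np.gradient(z)                     # layer thicknesses (m)
nd = 1.0e12 * np.exp(-z / 2000.0)       # number density profile (arbitrary units)

# A column is the integral of number density over altitude (note the dz),
# or equivalently of mixing ratio over pressure -- not mixing ratio over height.
column = np.sum(nd * dz)

# First moment: the concentration-weighted mean altitude of the profile
z_mean = np.sum(z * nd * dz) / column
```

For this exponential profile, z_mean lands near the 2 km scale height, as expected.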
- Spatial heterogeneity (Section 3.6)
- Please justify the use of the metrics given in equation 14 to quantify the representation error.
- It is important to point out that this is not an absolute but a relative metric (with 3x3 km2 as reference).
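One way to visualize such a relative representation metric — a generic sketch with a synthetic field, not the paper's exact Eq. 14 — is to block-average a fine-resolution reference field to a coarser pixel size and measure the relative departure from the reference:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative "reference" field at 3 km resolution (12 x 12 cells = 36 km domain)
fine = 1.0 + 0.3 * rng.standard_normal((12, 12))

def coarsen(field, factor):
    """Block-average a 2-D field by an integer factor (simulating a coarser pixel)."""
    n, m = field.shape
    return field.reshape(n // factor, factor, m // factor, factor).mean(axis=(1, 3))

coarse = coarsen(fine, 4)                        # 12 km pixels
coarse_on_fine = np.kron(coarse, np.ones((4, 4)))  # broadcast back onto the fine grid

# Representation "error" relative to the 3 km reference field
rel_err = np.abs(coarse_on_fine - fine) / np.abs(fine)
```

The result is only meaningful relative to the chosen reference resolution, which is the reviewer's point.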
- Satellite errors (section 3.7):
- Eq. 15 assumes uncorrelated random errors between the HCHO and NO2 retrievals. This is the case for measurement-noise-driven errors, but the scatter (standard deviation) in both will also result from variable geophysical parameters (e.g. aerosols), which will have some level of correlation.
- What is the role of the different averaging kernels between the satellite and ground-based DOAS instruments?
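The reviewer's point about correlated retrieval errors can be illustrated with standard first-order error propagation for a ratio (a generic sketch, not the paper's Eq. 15; the 30 % errors are illustrative):

```python
import numpy as np

def ratio_relative_error(h, n, sh, sn, rho=0.0):
    """First-order relative error of R = H/N given absolute errors sh, sn
    and an error correlation rho (rho = 0 reproduces the uncorrelated case)."""
    term = (sh / h) ** 2 + (sn / n) ** 2 - 2.0 * rho * (sh / h) * (sn / n)
    return np.sqrt(term)

# Illustrative columns with 30 % errors on each retrieval
uncorr = ratio_relative_error(1.0, 1.0, 0.3, 0.3, rho=0.0)  # ~0.42
corr = ratio_relative_error(1.0, 1.0, 0.3, 0.3, rho=0.5)    # 0.30

print(uncorr, corr)
```

A positive error correlation (e.g. a shared aerosol effect on both retrievals) partially cancels in the ratio, so assuming uncorrelated errors overstates the ratio uncertainty in that case.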
- Total error (Section 3.8)
- The different error terms are combined into a total error. However, only the assumed random components of uncertainty are included (and not systematic ones), so it should be called the total random error. To me, Eq. 16 is to some extent trying to combine apples and oranges, as the underlying metrics in the three components are very different and have different meanings.
Minor points:
- Please make sure that all acronyms and abbreviations are spelled out when used for the first time (e.g. NOx, P(O3), DISCOVER-AQ, PAN, VOC, SENEX, SZA, …)
- p. 4, l. 149: …FNR from a chemistry perspective…
- p. 5, l. 188: heterogeneous chemistry is not considered -> can you add a statement on the importance of that assumption for the study?
- p. 5, l. 206: hv -> hν, and define h and ν
- p. 6, Eqs. 1-3: define k and M, and state what the sum is summing over
- p. 6, l. 239: unconstrained observations -> independent observations
- p. 6, l. 255: contrary to an overestimation in clean ones
- p. 7, l. 262: of NO in the chemical mechanism
- p. 7, l. 262: some of the oxygenated VOCs
- p. 7, l. 264: with larger PAN because -> with larger PAN mixing ratios because
- p. 7, l. 277: to reproduce HO2 with -> to reproduce HO2 mixing ratios with
- p. 7, l. 286: 0.62 10^6 cm^-3 -> 0.62 x 10^6 cm^-3
- p. 7, l. 288: at least virtually representative -> what do you mean by 'virtually'?
- p. 7, l. 291: an analytical solution suggesting… -> a solution to what?
- p. 8, l. 328: PO3 -> this has been written as P(O3) before.
- p. 10, l. 399-402: I don't clearly see this larger decrease in NO2 than in HCHO. The median value of the ratio in Fig. 5 is more or less 5, with some variability.
- p. 31, Figure 3: the three green lines are very hard to distinguish.
- p. 37, Figure 9: I assume the y-axis is not given in %.
Citation: https://doi.org/10.5194/acp-2022-410-RC1
AC1: 'Reply on RC1', Amir Souri, 15 Nov 2022
The comment was uploaded in the form of a supplement: https://acp.copernicus.org/preprints/acp-2022-410/acp-2022-410-AC1-supplement.pdf
RC2: 'Comment on acp-2022-410', Anonymous Referee #2, 08 Oct 2022
Souri et al. present a detailed study highlighting four major shortcomings associated with FNRs and their ability to categorize ozone sensitivity. The sections about column-to-PBL translation, spatial representation error, and retrieval error are all well written. The manuscript as a whole has an understandable writing style and clear, well-made figures. However, I do have a few major concerns, mostly surrounding the modeling section of this work. I recommend that the manuscript be sent back to the authors for major revisions.
VOC inputs for the box model: The modeled radical environment can be incredibly sensitive to changes in VOC inputs, especially in polluted urban areas. This manuscript is lacking detail about how VOC inputs were created, leaving readers to assume the authors used a simplistic approach that excludes many potentially important VOCs. As written, the authors’ treatment of VOC inputs does not rise to the level established in previous modeling studies performed for the same field campaigns, leaving this reviewer wondering if the modeling presented in this study can represent the ambient radical environment.
The field campaigns modeled in this study have unique VOC measurement suites which require unique data engineering strategies to generate realistic VOC inputs. DISCOVER-AQ was served only by a quadrupole PTRMS, and features a very limited set of VOCs. The authors do not give adequate detail about how they generated VOC inputs based on these data. For example, previous studies (i.e. Schroeder et al 2017) generated speciated VOC box model inputs for DISCOVER-AQ using a fusion of VOC data from concurrent airborne campaigns (DISCOVER-AQ+SEAC4RS+FRAPPE). This enabled somewhat realistic estimation of VOCs that were not measured by the PTRMS during DISCOVER-AQ.
During KORUS-AQ, the whole air sampler was flown concurrently with a PTRMS, giving a richer suite of speciated VOCs. However, these two instruments had wildly different sampling cadences and integration times, with WAS measurements being incapable of resolving fine-structure details in pollutant gradients. As a result, previous studies (i.e. Schroeder et al 2020) fused the two datasets together to generate a pseudo-high-resolution set of VOC inputs for their box modeling work with KORUS-AQ.
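One plausible flavor of such a WAS/PTRMS fusion — not necessarily the Schroeder et al. method, and entirely synthetic here — is to interpolate the coarse WAS grab samples onto the fine PTRMS timeline and re-impose fine-scale variability using a normalized, concurrently measured PTRMS tracer:

```python
import numpy as np

# Synthetic timelines: PTRMS at 1 s cadence, WAS grab samples every 120 s
t_fine = np.arange(0.0, 600.0, 1.0)
t_was = np.arange(0.0, 600.0, 120.0)            # 5 coarse samples

tracer = 2.0 + np.sin(t_fine / 30.0)            # fine-structure PTRMS tracer signal
was_voc = np.array([1.0, 1.2, 0.9, 1.1, 1.0])   # coarse WAS mixing ratios (ppb)

# Interpolate the coarse series onto the fine timeline, then let the
# normalized tracer re-impose the fine-scale gradients the WAS cannot resolve
was_interp = np.interp(t_fine, t_was, was_voc)
pseudo_highres = was_interp * tracer / tracer.mean()
```

Any such scaling assumes the unmeasured VOC co-varies with the chosen tracer, which is itself an assumption that would need justification per species.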
As it is currently written, I have serious concerns about the VOCs used as model inputs, and thus have lower confidence in the results presented here. Can you show that the simplistic VOC inputs used in this study do not yield significantly different results from the two Schroeder papers?
Perhaps a more pointed observation: the box model inputs and outputs from the two Schroeder papers are publicly available online. What does this study gain by running its own model simulation – with questionable VOC representation – instead of using the freely-available Schroeder/Crawford data which has already been heavily vetted and used in multiple studies?
Model Setup: I have a few concerns with model setup:
Why use an arbitrary model run time of 5 days? Ideally, the model should be run indefinitely until it converges on a solution for key species, but I understand the desire to set a lower limit for the sake of computation. Do your outputs change if you use 4 days? Or 6 days? Or 20 days? Can you include a sensitivity analysis to back up your work – that is, show that your arbitrary choice of 5 days does not impact results?
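The spin-up sensitivity test suggested here can be sketched with a toy box model — a single first-order production/loss system, not the authors' chemical mechanism, with all rates illustrative:

```python
def run_box(days, production=1.0, loss=0.5, dt=0.01, c0=0.0):
    """Toy box model dC/dt = P - k*C, integrated with forward Euler.
    The steady state is P/k; longer spin-ups converge toward it."""
    c = c0
    for _ in range(int(round(days / dt))):
        c += dt * (production - loss * c)
    return c

# How sensitive is the end state to the (arbitrary) spin-up length?
for d in (4, 5, 6, 20):
    print(d, run_box(d))
```

In this toy case the 4-, 5-, and 6-day runs still differ noticeably from the 20-day run, which is exactly the kind of convergence check being requested.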
If I understand this correctly, you calculate a unique dilution factor for each field campaign, deriving it empirically to yield the best agreement between measured and modeled HCHO. What is the physical basis for why one field campaign would have different dilution rates than another? Without further explanation, this feels like an arbitrary “correction factor” to game the model for better agreement with observations – which does nothing to tell you how well the model represents the underlying chemistry. Can you explain?
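The kind of dilution term under discussion is commonly written as a first-order relaxation toward a background concentration; a minimal sketch (rates and background value illustrative, not the paper's values):

```python
def step(c, production=1.0, loss=0.5, k_dil=0.2, c_bg=0.1, dt=0.01):
    """One forward-Euler step of dC/dt = P - k*C - k_dil*(C - C_bg)."""
    return c + dt * (production - loss * c - k_dil * (c - c_bg))

c = 2.0
for _ in range(20000):   # integrate long enough to converge
    c = step(c)

# The steady state shifts from P/k to (P + k_dil*C_bg) / (k + k_dil),
# so tuning k_dil directly moves modeled concentrations toward observations.
print(c)
```

This makes the reviewer's concern visible: because k_dil shifts the steady state, an empirically fitted dilution rate can force model/measurement agreement regardless of whether the chemistry (P and k) is right.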
Model Validation: As written, the model validation section does not give me confidence in the model’s ability to represent the ambient radical environment (especially given the simplistic treatment of VOCs).
If one of the model parameters (the dilution rate) is based on empirical model/measurement agreement, then comparing simulated values to observations is circular. If I am understanding this correctly, then Section 3.1 is incredibly problematic. Based on the description given in line 220, campaign-average simulated HCHO is not allowed to be >5% off from campaign-average observed HCHO – it's part of the model setup with an empirically derived dilution factor. How can you evaluate the model's representation of the chemical environment with such a setup? For example, in line 254 you state that HCHO had a mean bias of less than 5%, which is meaningless because you've coded the model to do exactly that.
In practice, your dilution factor acts as a quasi-constraint on HCHO, which greatly influences calculated radical budgets. This eliminates your ability to truly test whether the model is capable of representing the radical budget from first principles. Furthermore, this does not allow you to test if your simplistic treatment of VOCs is adequate. You mention that the bias in PAN changes if you ignore dilution, but PAN can have a large impact on modeled radical and NO2 concentrations. What happens to other test-species if you ignore dilution?
I don't buy the idea that this model, as currently set up, has proven itself sufficient for representing ozone chemistry.
-----------------------------------------------------
In short, you have questionable VOC inputs and a questionable model setup which prevents you from truly testing model performance. I’d suggest re-running the model with a fixed dilution factor based on reasonable physics. This will enable “true” unconstrained model runs, providing a testbed for evaluating model performance (and your VOC inputs). Or, you could run another, established model in parallel on a subset of data and compare the two. Or, you could use the freely-available model inputs/outputs from published studies from the same field campaigns, rather than re-invent the wheel.
----------------------------------------------
Purpose of the paper: Finally, I would challenge the authors to include paragraphs in the Introduction and Summary sections describing the motivation for doing this work. Why are incremental improvements in our understanding of FNRs and ozone chemistry necessary? Martin et al first published their paper about satellite FNRs more than twenty years ago – yet, to the best of my knowledge, no regulator or policymaker has ever used satellite FNRs in their ozone planning strategies. Clearly FNRs were first developed as a potential tool for policymakers to fine-tune ozone mitigation strategies, but if policymakers have shown no interest in using these tools, why continue refining them? Is there a pathway for FNRs to be used by anyone outside of academia? What does the author think is preventing policymakers from using this tool - or is this simply a tool for academics?
Citation: https://doi.org/10.5194/acp-2022-410-RC2
AC2: 'Reply on RC2', Amir Souri, 15 Nov 2022
The comment was uploaded in the form of a supplement: https://acp.copernicus.org/preprints/acp-2022-410/acp-2022-410-AC2-supplement.pdf