The authors of "Greenhouse gas network design using backward Lagrangian particle dispersion modelling – Part 2: Sensitivity analyses and South African test case" have made some significant corrections to account for the reviews of their first manuscript. Regarding this, we can note the inclusion of the aggregation errors in the experiments and some critical change of point of view when analysing the results, and a better introduction to the sensitivity tests. However, there is still some critical improvement needed.
In my view, the text is still not suitable for a second submission (see below), even though the main comment of my first review pointed out its low quality. Other comments seem to have been skipped by the authors, as illustrated by their somewhat selective way of answering the reviews (see below).
Acknowledging the efforts made by the authors to revise the study (conducting new experiments), my recommendation is thus a major revision, with the warning that, without a serious improvement of the text, the manuscript should be rejected once again.
1) Regarding the quality of the text, I list below some examples that illustrate my general assessment, even though, again, it is impossible to give an exhaustive list of the problematic sentences or paragraphs:
- Some sentences that should not have survived serious proofreading: "The carbon assessment product produced monthly outputs for all the products. These products..." (l. 305); "By basing the metric to be optimised during the optimisation procedure on the result of the posterior covariance matrix of the fluxes under a given network, this score can be optimised so that the uncertainty in the estimated fluxes is reduced." (l77-79); "without the need to [...] make unnecessary assumptions about the measurements" (l149-150).
- Examples of sentences that do not work, taken from a single page (page 3): l56, l61-63, l68-69, l77-79, l89-91.
- The confusion regarding the terminology for covariance matrices was highlighted by both reviewers during the first review, and the authors tried to address it. However, it still does not work, despite the explicit correction given by the second reviewer. The covariance matrices and correlations relate to errors in the fluxes, not to the fluxes themselves. Mathematically, the covariance of the estimate of the actual flux given the prior/posterior flux is equivalent to the covariance of the error in the prior/posterior estimate; however, in order to avoid any ambiguity about the meaning of "fluxes" in "flux covariance matrix", one should use the usual terminology "error covariance matrix". It is surprising to see that this mistake has been amplified in the second version of the manuscript, while the same mistake for the observation error covariance has been corrected adequately.
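To make the standard convention explicit (using generic notation, not necessarily the authors': f^t the true fluxes, f^b the prior estimate, f^a the posterior estimate):

B = E[(f^b - f^t)(f^b - f^t)^T] (prior error covariance matrix)
A = E[(f^a - f^t)(f^a - f^t)^T] (posterior error covariance matrix)

Both matrices characterize the statistics of the estimation errors, hence the standard name "error covariance matrix".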
Other examples of what look like mistakes related to basic principles of inverse modelling (but that I interpret as approximative writing and weak proofreading): l357, where the justification for assuming that the ocean fluxes are perfectly known is that the target quantities are the land fluxes; the "can be" at line 257; l127, where an initial condition in the domain would be fine, but not "at the site"; l128-129, where I do not see why the observations would easily constrain the 3D initial condition, but this relates to the issue at line 127.
- The configurations and estimates discussed in the paragraphs at l232-l241 and l277-l286 seem to have nothing (or nearly nothing, in the case of l230-l241) to do with the configuration used in this study. Moreover, the discussion from lines 232 to 245 looks illogical, and at the very least useless, given the very simple set-up used by the authors for the observation error in South Africa. Note the "we assumed a similar standard deviation" at line 242, while the authors have actually doubled this value.
Other examples of confusing and often useless discussions: l.402-407 (hardly understandable); l300 to 311 (monthly values are converted into daily values before being converted into weekly values: why not convert the monthly values into weekly values directly?).
- By regularly writing "in the Australian test case" (e.g. l.244), the study reads as if it considered several test cases.
2) I wrote in the first review: "Additionally, the discussions regarding whether one should minimize the mean uncertainty in fluxes at pixel scale [...] rather than the uncertainty for the mean fluxes which drive to a "sensitivity test" sound absurd. These discussions and the mixing of fossil fuel fluxes and natural fluxes in the corresponding "cost functions" highlight the absence of "physical" target for the network and for this study. Consequently, the analysis of the results is rather poor."
-> These are still critical weaknesses of the paper. The introduction only states that the target is the sources and sinks of CO2. However, changing the spatial resolution of the control vector and/or the metric of the optimisation procedure (i.e. the mean uncertainty at the control resolution, or the uncertainty in the total fluxes over one month and over South Africa) changes the target of the monitoring system (the metrics are written out explicitly under comment 10 below). Targeting the sum of the anthropogenic emissions and biogenic fluxes without attempting to separate these components does not sound sensible.
On this topic, note that l206, which characterizes the control vector, is critical for the paper, but that it is lost among technical details about the LPDM. Equations 2 and 3 are given without any explicit description of the vector f or of the vector c (what kind of time averaging is used for the measurements?).
3) The notation in section 2.1 can be confusing (c vs cmod, instead of cmeas vs c, since the variable is the modelled c, not the vector of measured concentrations), and equation 11 is either wrong or obscured by this notation problem (c_B is not explained, but it likely corresponds to the boundary concentrations, while the text says it is c_b). l136: a shortcut, or simply an error, since the uncertainty from the boundaries will never be projected onto the posterior uncertainty in the inverted fluxes.
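For clarity, and assuming the boundary contribution is not part of the control vector, the decomposition I would expect is (generic notation):

c_mod = H f + c_bg, where c_bg is the contribution of the boundary (and initial) conditions to the simulated concentrations.

The uncertainty in c_bg then belongs to the observation error budget (the matrix R): it can degrade the posterior uncertainty in the fluxes by down-weighting the observations, but it is not directly projected onto the posterior flux uncertainty.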
4) The discussions about the observation errors should be rewritten. The authors seem to assume that the reader knows perfectly what the observation errors correspond to (except when dealing with the aggregation errors) and is fully aware of the issues with observation errors at night; yet they still give awkward details, such as at line 251. The explanations regarding the aggregation errors are confusing: the part at l253-275 is hardly readable unless one already knows what the authors detail. For example, whose "spatial resolution" does line 254 refer to?
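For reference, the standard definition that the explanation should map onto (generic notation: H the observation operator at the transport model resolution, G the aggregation operator from the fine grid to the control resolution, and G* a prolongation back to the fine grid):

eps_agg = H f - H G* (G f)

i.e. the part of the simulated concentration signal that is lost when the fluxes are solved for at a resolution coarser than that of the transport model; its statistics should enter the observation error covariance matrix.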
5) Where do the authors explain that they use a regional model, which is the reason why they need boundary conditions?
6) I still feel that this paper gives too many useless details about the system and its configuration, even though there is less overlap with the Part 1 paper. Some examples of useless details: all the discussions about CCAM (nearly one page); the redundancies between the beginning of section 2.2 and the end of page 6...
7) I do not understand the point about the aggregation error at line 304. Using values from the closest pixel in South Africa does not really sound much more sensible than a "blanket estimate".
8) Around l335: rescaling the uncertainties as a function of the land cover in a grid cell makes sense for the natural fluxes, but I do not think it makes sense for the anthropogenic emissions.
9) l357-363: I do not really understand how the NEP from land ecosystems can be used to derive the uncertainty in the ocean fluxes. Even if only the uncertainties in the ocean productivity were considered, how could land NEP be related to them? Regarding "the nearest land NEP": far from the coast, the resulting map may not look better than if a single value had been used.
10) Equation 7 does not correspond to something consistent across the different experiments: when the resolution of the control vector is changed, it targets a different spatial scale. Therefore, comparing results obtained with such a metric across different resolutions of the control vector may not really make sense. One should rather have selected a metric corresponding to a fixed horizontal resolution of the fluxes, one that could be addressed by any of the control vectors tested in this study.
Again, the discussion at lines 386-387 sounds absurd. It does not make sense to question whether it is better to improve the mean knowledge of the local fluxes or the knowledge of the total flux (which is what the choice between the metrics of eq. 7 and eq. 8 amounts to). See also major comment 2.
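To make explicit why the two metrics answer different questions (generic notation: A the posterior error covariance matrix of the n control fluxes; I assume equations 7 and 8 are of this form):

J_mean = (1/n) sum_i sqrt(A_ii) (average pixel-scale uncertainty)
J_total = sqrt(sum_i sum_j A_ij) (uncertainty in the total flux)

J_mean ignores the off-diagonal terms of A, while J_total can be dominated by them; a network that minimizes one has no reason to minimize the other. The choice between them is a statement about the physical target of the monitoring system, not a sensitivity test.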
11) Section 2.6 starts with redundancies and ends with two sentences saying the same thing.
12) End of section 3.1: the authors have likely misunderstood the comment from reviewer 2 about the correlations due to errors in the boundary conditions. He asked about correlations in the observation error due to errors in the boundary conditions, which definitely increase the weight of this error over long time periods. This could be critical when assessing the budget of the fluxes in South Africa over one month.
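The underlying arithmetic is simple (generic notation: N observations, each with an error standard deviation sigma and a uniform pairwise error correlation rho):

Var(mean error) = (sigma^2 / N) [1 + (N - 1) rho]

With uncorrelated errors (rho = 0) this contribution decays as 1/N, whereas with rho > 0 it saturates at rho sigma^2 however many observations are used: a slowly varying boundary-condition error does not average out over a month.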
13) I do not really see what the plot of the footprints (Figure 4) is supposed to illustrate. Moreover, the beginning of section 3.1 seems redundant with the earlier explanation of the derivation of the sensitivity matrix.
14) Section 4 exploits few of the details from section 3. Therefore, much of the detail in section 3 seems useless, while section 4 itself is relatively short. This highlights a lack of more relevant analysis.
15) Given the small number of sites to be added to the network, I feel that the "DI" diagnostic is somewhat artificial and useless, while section 2.8 is hardly readable. Looking at the maps brings more insight into the similarity between the networks than table 4 does.
16) The new discussion at lines 690-695 does not make sense to me. If the aggregation error is perfectly set up in the inversion system, the inversion will provide the same results for large areas (here, the whole of South Africa) whether the fluxes are solved for at coarse or at high resolution. I assume that this is not the case here because the estimate of the aggregation errors has been simplified, e.g. by ignoring the temporal correlations in this error. l695 sounds strange.
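To spell out the logic (generic linear-Gaussian notation): the fine-resolution inversion uses c = H f + eps, while the coarse inversion uses c = H_c f_c + eps + eps_agg, with f_c = G f and eps_agg as written under comment 4. If the statistics of eps_agg (including its temporal correlations) were correctly specified, both systems would encode the same information about a large-scale aggregate such as the monthly South African total, and the two posteriors for that total should coincide; discrepancies between resolutions therefore measure the approximations made when estimating eps_agg.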
17) Similarly, the new discussion at lines 723-728 does not make sense to me either. The same confidence in the knowledge of the correlations in the prior error is needed whether null or positive correlations are used (and lines 339-340 argue in favour of positive correlations). If null correlations are used while the authors actually have no idea whether this is more realistic than assuming positive correlations, the resulting uncertainty budget over South Africa is not reliable, and the network optimisation procedure can thus be driven by a wrong diagnostic. Therefore, I do not understand why using null correlations is presented as a safety measure. In principle, the stronger constraint obtained when using correlations has no reason to be problematic.
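The impact on the aggregated budget is direct (generic notation: B the prior error covariance matrix, sigma_i = sqrt(B_ii), rho_ij the prior error correlations):

Var(total prior error) = sum_i sum_j B_ij = sum_i sigma_i^2 + sum_{i!=j} rho_ij sigma_i sigma_j

Setting all rho_ij = 0 is therefore not a neutral choice: among non-negative correlation structures it minimizes the aggregated prior uncertainty, and it directly shapes the uncertainty-reduction scores that drive the network optimisation.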
Checking the realism of the budgets of the prior and posterior uncertainties when aggregating over South Africa would have helped gain insight into the set-up of the correlations in this study. But such numbers are never analysed or discussed in this paper: the last sections focus on scores of uncertainty reduction only.