Although I like the introduction of the poor-man’s inversion, the description in the methods section seems incorrect to me, and I find the way it is integrated into the study not very strong. This stems from the choice to use it as an extra inverse solution from the start, and to discuss its flux results alongside those of the other inversions. But the poor-man’s inversion can only be used to look at the global total flux (which it matches by design), and at the distribution of CO₂ mixing ratios and XCO2 values across the globe. The latter it should follow reasonably well, thus setting a benchmark to beat for real inverse solutions. Currently, the label “benchmark” is used throughout the text, including as “benchmark inversion”, which is confusing: the flux result of this poor-man’s method is the one thing one should *not* put much emphasis on, especially not below the global total scale. In my opinion it is therefore also of no use to show its regional flux solution in Table 2 and Fig. 4, or to discuss it in Section 4.2.
It is a bit awkward that the reader first learns a lot about GoSAT-to-OCO-2 flux differences and how their regional budgets differ in great detail in Section 4.2, but only later, in Section 4.3, learns that the OCO-2 inversion is not very trustworthy and is unable to reproduce atmospheric XCO2 and surface CO₂ better than the poor-man’s inversion (which can be called a benchmark in this context). So everything read earlier then becomes, in a sense, irrelevant. Please consider bringing the assessment of the quality of the inversions forward in the manuscript, so that the flux analysis that follows can focus on the relevant part of the study (the GoSAT and in-situ inverse results). OCO-2 can still be discussed, but only to indicate whether the GoSAT satellite results are corroborated by OCO-2 or not.
Abstract: I think the text no longer summarizes the main findings well and should be rewritten. The main message should focus on the posterior fluxes compared to the in-situ inversion, not on comparing the two satellites to the prior. One can then highlight that the main difference at the largest scale is the latitudinal distribution of land sinks, with the satellites suggesting smaller Boreal and Tropical sinks, combined with larger temperate sinks in both the NH and SH. However, OCO-2 and GoSAT generally do not agree on which continent contains the smaller or larger sinks. Also, the comparison of the simulated surface mixing ratios and XCO2 columns shows that only GoSAT and the in-situ inversion perform better than a poor-man’s solution that closes the annual global mass balance of CO₂. This puts the usefulness of the OCO-2 retrieval product used here into question.
List of remarks:
page 1, line 15 “benchmark inversion”: I would refer to the latter as a poor-man’s inversion in which only the global CO₂ growth rate is projected onto the land biosphere, to be used as a benchmark for the simulated atmospheric CO₂ distributions of the real inversions.
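To make the suggested framing concrete, the poor-man’s inversion as I understand it can be sketched in a few lines: the prior land-biosphere fluxes are uniformly rescaled so that the global annual budget matches the observed atmospheric CO₂ growth rate. This is a hypothetical illustration with made-up numbers, not the authors’ code; all names and values are assumptions.

```python
# Hypothetical sketch of a "poor-man's inversion": rescale the prior
# land-biosphere fluxes so that the global annual carbon budget closes
# on the observed atmospheric CO2 growth rate. Illustrative only.
import numpy as np

def poor_mans_inversion(prior_land_flux, other_fluxes, observed_growth):
    """Return land fluxes rescaled so the global mass balance closes.

    prior_land_flux : regional prior land-biosphere fluxes (PgC/yr, sinks < 0)
    other_fluxes    : total of fossil + ocean fluxes (PgC/yr)
    observed_growth : observed global atmospheric CO2 growth (PgC/yr)
    """
    # Total land flux required to close the global mass balance:
    required_land_total = observed_growth - other_fluxes
    scale = required_land_total / prior_land_flux.sum()
    return prior_land_flux * scale

# Example with invented numbers:
prior = np.array([-0.5, -1.0, -0.3])   # regional land fluxes, PgC/yr
scaled = poor_mans_inversion(prior, other_fluxes=7.5, observed_growth=5.1)
# By construction, 7.5 + scaled.sum() equals the observed growth of 5.1 PgC/yr.
```

The key point for the manuscript is visible here: the *global total* is matched by design, while the *regional* pattern merely mirrors the prior (scaled), so regional fluxes from this method carry no information from the observations.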
page 1, line 22: “more consistent with …” simply say that the GoSAT-based inversion seems to best capture the observed global CO₂ growth rate.
Page 2, line 29: it is worth saying explicitly that the OCO-2 retrieval used here seems unfit for inverse modeling, but that later versions seem to perform better (Chevallier et al., 2019, ACPD). I also urge the authors to focus their future efforts on the later OCO-2 retrieval products.
Page 4, line 84: please do not use “benchmark inversion” to label this flux product, but explain the purpose of this approach better.
Page 6, line 128: This is where my main comment comes into play. The Fair Use Statement given in the readme file of the ObsPack you downloaded was:
# ObsPack Fair Use Statement
# This cooperative data product is made freely available to the scientific community and is intended to stimulate and support carbon cycle modeling studies. We rely on the ethics and integrity of the user to assure that each contributing national and university laboratory receives fair credit for their work. Fair credit will depend on the nature of the work and the requirements of the institutions involved.
# Your use of this data product implies an agreement to contact each contributing laboratory for data sets used to discuss the nature of the work and the appropriate level of acknowledgement. If this product is essential to the work, or if an important result or conclusion depends on this product, co-authorship may be appropriate. This should be discussed with the appropriate data providers at an early stage in the work. Contacting the data providers is not optional; if you use this data product, you must contact the applicable data providers. To help you meet your obligation, the data product includes an e-mail distribution list of all data providers.
# This data product must be obtained directly from the ObsPack Data Portal at www.esrl.noaa.gov/gmd/ccgg/obspack/ and may not be re-distributed. In addition to the conditions of fair use as stated above, users must also include the ObsPack product citation in any publication or presentation using the product. The required citation is included in every data product and in the automated e-mail sent to the user during product download.
Page 7, line 147: insert “area” between “shaded” and “shows”
Page 9, line 196: This is yet another label (“CO₂ trend”) for the poor-man’s inversion. Please introduce it better, and use one name for it consistently.
Page 11, line 232: descripted = described
Page 12, line 249: I do not understand this formula and wonder whether a mistake was made. piror = prior (typo). But why do you add a term proportional to the prior flux uncertainty, instead of one proportional to GPP? And why do you need trial and error to determine the scaling factor k? This is not the same approach as taken by Chevallier, whom you cite for it.
Page 12, line 260: “inverted global carbon budgets” please remove “inverted”
Page 12, line 263: “benchmark inversion” please rewrite
Page 16, line 316: In my opinion the benchmark inversion is not very useful here, as its regional flux simply reflects global GPP rather than information derived from the data, as in the actual inversions. I suggest removing it here and in Fig. 4.
Page 17, line 321: “close to the benchmark result”: by writing this, you suggest to the reader that it is a good thing for the inversions to be close to the benchmark. But for continental fluxes this is not true at all, which is why I think this gives the wrong message when put into the figure/text/table.
Page 20, line 380: Why is this section here, and not part of Section 4.3 where once again a comparison to TCCON is presented? And why are the other two results (in situ and benchmark) not shown? I think it would help to group these results together.
Page 21, line 392: space missing and typo in “to0.59 pm”
Page 21, line 409: Please make clear that this is not surprising because part of these evaluation data were used in the inversion in that case
Page 22, line 426: litter = little
Page 23, line 436: “ground XCO2 observations”, please simply write “We use data from 13 TCCON sites to…”
Page 23, line 436: Please make clear that also here the comparison is not fully independent: the TCCON data were used in the bias correction scheme of at least OCO-2 (I don’t know about GoSAT but I suspect the same there).
Page 23, line 437: into = onto
Page 24, line 461: The fact that only the in-situ inversion beats the benchmark on all 4 numbers should be mentioned in the text.
Page 25, Figure 7: There seems to be an error in the figure: the bars for benchmark and in-situ are exactly the same for all sites. Please check and fix this.
Page 26, line 490: I would not say that OCO-2 could improve the modeling of CO₂ concentrations: your poor-man’s inversion shows that you can achieve better results by simply scaling your fluxes to match the global growth rate of CO₂.
Page 26, line 492: “bench inversion” incorrect
Page 26, line 495 “GOAST” typo