General Comments
I appreciate the further analyses as well as the additional global view in Figure 9, and I understand that a complete and comprehensive analysis of the capabilities and shortcomings of the method is hardly possible within the framework of this paper. For that reason I think a publication in AMT, introducing a new and interesting technical method, would be more appropriate, but I do not insist on moving the article if the editor has a different opinion.
However, without a more detailed evaluation of the global capabilities, some of the statements and conclusions have to be weakened (see below). I would also propose to state more clearly, already in the title and/or the abstract, what is actually done: local XCH4 anomalies are detected (by combining satellite measurements and forecasts), and these are due either to actual emission anomalies relative to the CAMS emissions, or to systematic biases in the satellite data, or to a combination of both.
The distinction between the two cases has to be made by (subjective) interpretation and is prone to error. I do not think that the decision should be based solely on the persistence of features, as suggested in the manuscript. Persistent emissions can also lead to persistent features, because there is not always a clear plume structure in satellite GHG data, in particular for non-point sources, when topography causes accumulation, and when a 30-day time window is used (see also the specific comments). How persistent (in space and time) may a pattern be and still be considered real? Additional information is needed to classify patterns as retrieval biases, e.g. checking whether the features are correlated with albedo features (as you do for Figure 13). In principle, the albedo features also have to be checked in the cases where anomalies are expected (e.g. the Permian basin and Turkmenistan in Figures 10 and 11) to avoid expectation bias. On the other hand, emission patterns could indeed be correlated with albedo (e.g. wetlands, or facilities vs. their surroundings), which complicates the interpretation. Please elaborate more on these issues and discuss them in a more balanced way.
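To make this suggestion concrete, the albedo check could be as simple as computing a Pearson correlation between the co-located anomaly and albedo fields of a scene. The following sketch is my own illustration, not the authors' code, and all pixel values in it are hypothetical:

```python
# Illustrative sketch (reviewer's own construction): does a detected XCH4
# anomaly pattern co-vary with surface albedo? In practice one would use the
# gridded anomaly and albedo fields of the scene in question.
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical co-located pixels: XCH4 anomaly (ppb) and surface albedo.
anomaly = [12.0, 9.5, 11.2, 2.1, 1.8, 10.4, 2.5, 1.1]
albedo  = [0.31, 0.29, 0.30, 0.12, 0.11, 0.28, 0.13, 0.10]

r = pearson(anomaly, albedo)
# A high |r| would flag a possible retrieval artefact; a low |r| leaves both
# interpretations (emission anomaly or other bias source) open.
print(f"r = {r:.2f}")
```

Of course, as noted above, a high correlation is only a warning sign, not proof of a retrieval bias, since real emission patterns can also follow albedo features.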
For example, in the case of the new example shown in Figure 14, where the outlier pattern is not correlated with albedo, I would be cautious about classifying this feature as a retrieval bias unless another good explanation for a potential retrieval bias is found.
Specific Comments
Page 2, Lines 58-59: The sentence is a little misleading because the cited papers analyse different regions, which all, however, include the Permian basin. Therefore, I suggest changing it to something like: "... large and extended enhancements in different US oil and gas production regions such as the Permian basin."
Page 3, Lines 86-87: What is the difference between instrument precision and random error?
Page 4, Lines 117-121: Your data assimilation technique only corrects the concentrations, not the emissions. But this is exactly the potential problem, isn't it? As a consequence, H(x) (in ppb) potentially depends on patterns observed by IASI and TANSO. Or am I missing something here? Assume there is an (unknown) source that is observed by both TANSO and TROPOMI. Then the forecast concentrations are corrected upwards due to TANSO, and the difference d to TROPOMI (which also sees an enhancement) becomes smaller because of the assimilation. Conversely, couldn't a potential bias in the assimilated IASI or TANSO data cause an artificial outlier of your method, even though the emission data bases are actually consistent with the TROPOMI measurements? Along these lines, wouldn't it be better to use a model without assimilation of satellite data as the starting point if you want to assess the quality of emission data bases?
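To illustrate my first concern with a toy calculation (my own construction, not taken from the manuscript), consider a scalar optimal-interpolation update with hypothetical XCH4 values and equal background and observation error variances:

```python
# Toy scalar illustration of the concern above: assimilating TANSO, which
# sees the same enhancement as TROPOMI, pulls the forecast upward and
# shrinks the TROPOMI departure d. All values are hypothetical.
def analysis(x_b, y_obs, var_b, var_o):
    """Scalar Kalman/optimal-interpolation update: x_a = x_b + K (y - x_b)."""
    k = var_b / (var_b + var_o)  # Kalman gain
    return x_b + k * (y_obs - x_b)

x_b = 1850.0        # background XCH4 (ppb), unaware of the unknown source
y_tanso = 1870.0    # TANSO sees the enhancement
y_tropomi = 1872.0  # TROPOMI sees it too

d_before = y_tropomi - x_b                        # departure without assimilation
x_a = analysis(x_b, y_tanso, var_b=100.0, var_o=100.0)
d_after = y_tropomi - x_a                         # departure after assimilating TANSO

print(d_before, d_after)  # → 22.0 12.0: part of the anomaly signal is absorbed
```

In this toy case, roughly half of the TROPOMI anomaly signal is absorbed by the analysis, so an emission discrepancy would appear weaker than it is; with a biased assimilated observation, the same mechanism would work in the opposite direction and create an artificial outlier.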
Page 6, Lines 166-168: Is the averaging kernel as a function of pressure really discussed in the cited paper? Moreover, that paper analyses a different algorithm than the one used here. Please cite a paper describing the averaging kernels of the operational TROPOMI algorithm, if possible.
Page 9, Lines 265-267: Please check if there is actually a correlation with surface albedo features (as in the case of Figure 13).
Page 10, Lines 301-303: This statement is too strong. Please write, e.g., "potential retrieval error artefacts". Persistence alone is not sufficient, because plumes are not always visible in daily GHG data and may disappear in multi-day time windows if the wind direction changes. As a consequence, your method may only yield an anomaly right above the source (see also the general comments). Please revise this section accordingly.
Page 10, Lines 305-307: I would be cautious about classifying this feature as a retrieval bias when there is no correlation with albedo features. Are there other potential explanations? (Other features causing biases? Could it be a real signal?)
Page 11, Lines 317-318: Please add a sentence stating that the distinction between over-/under-reported sources and local retrieval errors is challenging and requires correlation analyses with external data sets such as albedo.
Figures 9-14: The colours of the four categories are sometimes hard to distinguish in the maps (in particular with the updated colours of the revised version). Please consider using different colours or additionally encoding the classes in another way (for example by different symbol shapes or hatching).