Review.
I thank the authors for their thoughtful and informative responses to my initial review comments. They have clarified the micrometeorological issues I raised to my satisfaction, and I enjoyed reading the enhanced discussion of these measurement methods and the associated possible choices for treating atmospheric dispersion. I acknowledge that the choices they have made are reasonable and defensible.
I also appreciate the addition of Table 7, and the effort to bring together the uncertainty assessment in an organized and coherent fashion. I do, however, have some remaining suggestions related to the uncertainty assessment and its presentation.
Major comments.
1. The authors note that they discard cases whose plume maximum mole fraction is less than 50 ppb because this is below their detection limit. This raises a serious methodological concern. Many industrial sites may have small or no emissions, and measurements of very small emissions are very important, even if the relative uncertainty (% of emission rate) is very large. Since the authors survey nearly 1000 sites, I do not understand the logic here, and they must explain it more clearly in the text. Perhaps they do not want to quantify a fractional uncertainty for these cases, meaning their uncertainty estimates simply do not apply to them. If so, they should state this limitation of their uncertainty estimates clearly, including in the abstract, and include this caveat in their comparison to other measurements. It would be very unfortunate, however, if they are arguing that the IGM approach cannot be applied to cases with very small emission rates, since this would hamstring our ability to sample a broad distribution of emitters and non-emitters.
2. It is unfortunate that the authors do not address the overall accuracy of their mean or median emission rates. This is an important problem: the authors have some indications of potential sources of bias in their measurements, and addressing it would aid the interpretation of the nearly 1000 emission rate measurements they have collected. I understand, however, that this is a difficult problem that may be beyond the scope of their study.
3. The link between the results presented and Tables 6 and 7 is murky and should be clarified.
For example, the text on page 18, line 13, describing Table 6, reads:
These include the uncertainty in the Gaussian diffusion constant by comparing to LES calculated diffusion, uncertainty due to source location and height and uncertainty due to wind speed and stability class. In addition, the LES was used to observe bias in the Gaussian derived concentration distributions and the controlled release was used to evaluate bias in both the Gaussian and LES results. Finally, the LES was used to determine the optimum sampling pattern to constrain actual atmospheric variability.
This text should note the portions of the results that are the bases for these statements. E.g. “…the LES was used (section X) to observe bias in … (Figure Y) …and the controlled release (section Z)…”
In addition, the experiments conducted from section 3.1 up to this point are difficult to follow in that their objective is not clear. They appear to me to be constructing the basis for Table 6, but it is hard to match the elements of Table 6 (and their propagation into Table 7) with the contents of these sections. Please state clearly, for each section, the objective and the primary result, so that their realization in Table 6 is evident. I would also recommend more logical section numbering and names to make this progression easier to follow.
Details:
1. Page 8, line 25. This paragraph has some minor English grammar problems.
2. Page 9, line 8. "larger," not "large."
3. Page 11, lines 15-20. Please define these statistics when they are first presented. RSD of what? The mean deviation is explained only the second time it is presented.
4. Same spot. These results, and the comparison to the LES experiment, might be clearer in a table. Authors' discretion.
5. Page 11, lines 13-15. Replicate sampling sites were chosen to test the sensitivity of the emission estimates to changing atmospheric conditions? Where is this presented in the manuscript?
6. The ordering of sections 3 and 4 is difficult to follow. What is the point of section 3.2? Why is the source strength determination strategy (section 4.1) presented after its field implementation (section 3.2)? See major comment 3, above.
7. What is the purpose of section 4.1? What are "LES emissions" (page 13, lines 1-2)? This seems out of place. See major comment 3, above.
8. If section 4.1 is intended to explain how the authors solve for an emission rate, it would help to have one of the equations solved for the emission rate.
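For illustration, a sketch of what such a rearrangement could look like, assuming the standard single-source Gaussian plume form (the authors' equation 2 and notation may differ):

```latex
% Standard Gaussian plume concentration at (x, y, z) for a source of
% strength Q at height h (u: wind speed; \sigma_y, \sigma_z: dispersion
% coefficients at downwind distance x):
C = \frac{Q}{2\pi u \sigma_y \sigma_z}
    \exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
    \left[\exp\!\left(-\frac{(z-h)^2}{2\sigma_z^2}\right)
        + \exp\!\left(-\frac{(z+h)^2}{2\sigma_z^2}\right)\right]

% Solved for the emission rate, given a measured concentration C:
Q = \frac{2\pi u \sigma_y \sigma_z\, C}
         {\exp\!\left(-\frac{y^2}{2\sigma_y^2}\right)
          \left[\exp\!\left(-\frac{(z-h)^2}{2\sigma_z^2}\right)
              + \exp\!\left(-\frac{(z+h)^2}{2\sigma_z^2}\right)\right]}
```

Even one explicit inversion of this kind would make clear which measured and assumed quantities the emission rate depends on.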
9. Page 13, "Meandering" discussion. This discussion leaves implicit the distance downwind of the source at which dispersion is measured. The rule of thumb about the maximum relevant eddy size applies only to dispersion relatively close to the source compared with the size of the large eddies; far enough downwind, the largest eddies in the boundary layer will mix and disperse the plume. Since the authors' measurements are all (I believe) close to the source on the scale of these large eddies, I am comfortable with their rule-of-thumb arguments about the eddy sizes they find relevant. Question: are the dispersion coefficients employed in the Gaussian model specific to this "near-field" dispersion argument? In any case, I am satisfied with this discussion and the tests conducted.
10. Page 15, line 1. "SS Gaussian" is not defined.
11. Table 3. Please define the error bars for wind speed and wind direction.
12. Table 4. What are the "NOAA winds" vs. the "tower winds"? It would help if these were explained. And which mean wind was imposed on the LES? My apologies if I am missing something simple here. (I believe I found that the tower winds are used, but perhaps this could be made easier to find. And don't you need to impose a wind profile, not just a point value?)
13. Section 4.3, controlled releases. What is the conclusion? The IGM does not converge on the true flux, the LES is biased for the test site, and the NOAA winds are biased relative to the tower-measured winds. How is this reflected in the conclusions? What is learned? See major comment 3, above.
14. Section 4.4. Is the point that the number of sources is not known? If so, is MS vs. SS roughly a factor-of-two source of uncertainty in the emission estimates unless the true number of sources can be determined? The experiment being conducted here is not clear. See major comment 3, above.
15. Page 17, lines 10-15. Background uncertainty. As I understand it, the authors 1) used the minimum value measured on a transect as the background and 2) compared this to using the lowest 2% of measurements on a transect as the background. By some method that I cannot determine, an uncertainty of 5 ppb across all ~1000 sample sites was then derived. But sites with a plume maximum of less than 50 ppb were eliminated from the measurements, since this was said to be below the detection limit of the system. This paragraph raises questions. First, I agree that if the plume size is large compared to fluctuations in the background, this problem is minimized. Identifying the background from the lowest 2% of measurements on the transect, however, is not sound: fluctuations in the background may have nothing to do with the plume being measured. I therefore question this approach for quantifying the uncertainty in the background. A value based on deviations from the Gaussian shape away from the plume center would be more defensible. I expect, as the authors suggest, that this will remain a relatively small source of uncertainty for large-magnitude plumes. More worrisome is the note that plumes with a maximum mole fraction enhancement of less than 50 ppb are discarded as below the detection limit of the system. First, I believe the instrumental precision is much better than 50 ppb, so I am puzzled by this value. And if the background uncertainty is only 5 ppb, what makes the detection limit 50 ppb? Most importantly, in a survey of this many sites, some or many may have very small emissions. The uncertainty in the determined flux may be large, but that does not make the measurement invalid. Retaining small emission rate measurements, even when the relative uncertainty is large, is important for surveying emissions from industrial facilities. How many of the ~1000 sites fall into this category?
Would the results of this work (uncertainty in emissions measurements) change significantly if these sites were included?
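To illustrate why a shape-based background would be more defensible than a lowest-fraction rule, consider the following synthetic sketch. All numbers (background level, noise, plume amplitude, transect geometry) are invented for illustration and are not the authors' data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic transect: ~1900 ppb CH4 background with ~5 ppb fluctuations,
# plus a Gaussian plume enhancement (all values hypothetical).
y = np.linspace(-200, 200, 401)               # cross-wind distance, m
background_true = 1900.0                       # ppb
noise = rng.normal(0.0, 5.0, y.size)           # background fluctuations, ppb
plume = 150.0 * np.exp(-y**2 / (2 * 40.0**2))  # 150 ppb peak enhancement
obs = background_true + noise + plume

# Estimator 1: minimum value on the transect (biased low by noise).
bg_min = obs.min()

# Estimator 2: mean of the lowest 2% of measurements (also biased low,
# and unrelated to the plume itself).
n_low = max(1, int(0.02 * obs.size))
bg_p2 = np.sort(obs)[:n_low].mean()

# Estimator 3: mean of the transect edges, far from plume center
# (closer in spirit to using deviations from the Gaussian shape).
edges = np.concatenate([obs[:40], obs[-40:]])
bg_edge = edges.mean()

for name, bg in [("minimum", bg_min), ("lowest 2%", bg_p2), ("edges", bg_edge)]:
    print(f"{name:10s} background = {bg:7.1f} ppb (error {bg - background_true:+.1f} ppb)")
```

The order-statistic estimators select the extremes of the noise and so sit systematically below the true background, while averaging away from the plume center recovers it to within the noise.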
16. Page 19, lines 4-5. The inputs in Table 7 all assume a Gaussian distribution of errors, do they not? This is not clear, and the text itself notes that errors may not be Gaussian in nature.
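A Monte Carlo propagation would sidestep the Gaussian assumption entirely. A minimal sketch, with invented error components that are not the authors' Table 7 entries:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Hypothetical multiplicative error terms for an emission-rate estimate
# (component names and magnitudes are illustrative only).
wind_err = rng.normal(1.0, 0.10, n)        # Gaussian: 10% wind-speed uncertainty
sigma_err = rng.lognormal(0.0, 0.20, n)    # skewed: dispersion-coefficient uncertainty
bg_err = 1.0 + rng.normal(0.0, 0.05, n)    # background subtraction, 5%

q_true = 10.0                              # kg/h, hypothetical true emission rate
q_est = q_true * wind_err * sigma_err * bg_err

lo, med, hi = np.percentile(q_est, [2.5, 50, 97.5])
print(f"median {med:.2f} kg/h, 95% interval [{lo:.2f}, {hi:.2f}] kg/h")
print(f"interval is asymmetric about the median: -{med - lo:.2f} / +{hi - med:.2f}")
```

With even one skewed input, the resulting confidence interval is asymmetric, which a quadrature sum of Gaussian terms cannot represent.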
17. Page 19, line 5. If the emission rate from equation 2 is less than zero, is that a valid outcome that should be retained for a proper description of the measurement statistics? This seems linked to the practice of eliminating plume maxima below 50 ppb. Mean values may be biased if valid small or negative plume concentrations are eliminated, and a negative emission rate may be a valid result from a noisy measurement system.
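A quick synthetic illustration of this censoring bias (all rates, noise levels, and the emitter distribution are invented, not the authors' data):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical survey: many sites with zero or small true emissions,
# measured with additive noise.
n_sites = 1000
true_rates = rng.exponential(2.0, n_sites)     # kg/h; many small emitters
true_rates[rng.random(n_sites) < 0.3] = 0.0    # 30% of sites emit nothing
noise = rng.normal(0.0, 1.0, n_sites)          # measurement noise, kg/h
measured = true_rates + noise                  # can legitimately be negative

mean_all = measured.mean()                       # retain every retrieval
mean_censored = measured[measured > 0.5].mean()  # discard small/negative values

print(f"true mean      : {true_rates.mean():.2f} kg/h")
print(f"mean, all kept : {mean_all:.2f} kg/h")
print(f"mean, censored : {mean_censored:.2f} kg/h (biased high)")
```

Retaining every retrieval, negative values included, leaves the survey mean unbiased; discarding retrievals below a threshold inflates it.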
18. Page 19, line 16. The computational demand of LES relative to a Gaussian plume model is not a new finding and is not worthy of much text.
19. Page 20, lines 10-11. Reanalysis winds differ by source, and this single comparison is limited in its locations and times of day. A note of caution is warranted.
20. Page 21, line 5. The conclusion that the approach can identify emissions orders of magnitude larger than the mean is relatively weak and not a new finding. It is disappointing that the manuscript does not address biases in mean values when accumulating data over tens of transects and hundreds of sites, which seems to be the focus of this overall research effort.
21. Data availability. I understand that the data will become available upon publication and that this text will be edited. That seems acceptable to me.