Re-review of “Comparison of the CMAM30 data set with ACE-FTS and OSIRIS:
Polar regions” by Pendlebury et al.
Overall, I feel that the authors have done a reasonable job of responding to the comments from both referees. However, a few minor issues remain – or have been introduced through the editing process – in the revised manuscript. In addition, there are a few instances in which the authors apparently misconstrued the meaning of my comments on the original manuscript. I apologize for the misunderstanding in those cases. I have rewritten those comments here in an attempt to be clearer. Thus, in my opinion some further minor modifications are needed to finalize the manuscript for publication.
Specific comments (page and line numbers refer to revised manuscript):
P4, L107: My comment on this sentence in the original manuscript (P11184, L12) was grammatical in nature, not substantive. In my opinion, “denitrification/dehydration” should be followed by “do” (not “does”). The authors stated that they added the qualifier “in the stratosphere” to make things clearer, but I cannot find that addition, as the sentence is unchanged from the original. In any case, the qualifier is not needed.
P8, L230: In response to my comment on this sentence in the previous draft, the authors have changed the wording to “where OSIRIS temperatures are known to be problematic”. I agree that this wording is an improvement. However, it seems to cry out for a reference. Is there an appropriate reference to cite (maybe one of the Sheese et al. papers)?
P8, L249: Typo: “stratosphere *is* within”.
P9, L283-285: Awkward wording. It would flow better to write this as “… ACE-FTS observations. Such a kink is also seen in HALOE…”.
P10, L304: “Descent” is not a term often applied to PSCs. I think that “sedimentation” would be better here.
P10, L330: The results for Section 4.2 are referred to in this sentence. But we are just reading Section 4.2 at this point. Perhaps Section 4.1 was meant? In addition, the wording of this sentence is awkward. I suggest something along the lines of “… the results for 2006 from Section 4.(1?) are similar to those for other years”.
P11, L340-342: Sorry for the very unclear comment on this sentence in the original manuscript. I was not necessarily looking for a detailed breakdown of the different shades of grey corresponding to the different sPV intervals, but rather suggesting that it would be helpful to the reader to define which grey shades identify the surf zone and which the inside of the polar vortex. I myself have used both sPV=1.2 and 1.4 PVU to locate the vortex edge, depending on the application, and I would think that 1.8 PVU would be deep in the vortex core, but it's not at all clear what the authors intend here. In addition, it is not necessary to repeat that sPV values greater than –1 PVU outside the vortex have no shading or are white (i.e., the same information is conveyed twice in this sentence). Finally, if this list of sPV values is retained, then a comma is needed after “darker gray” in L342. (Note that this color is referred to as both “grey” and “gray” in the manuscript – it might be better to be consistent.)
P11, L344-345: Are the locations of the tangent heights still marked in Figures 11 and 12? I don't see them in the figures in the revised manuscript.
P11, L359-360: Sorry for not catching this earlier, but since it is Type II PSCs that are being discussed here, isn’t the more appropriate threshold 188 K, not 196 K?
P11-12, L371-378: I still find the discussion of the calculation of the error bars unclear. For one thing, I am not sure what is meant by “cumulative errors” – this term should be defined. Perhaps it is the root mean square of the accuracy and precision/sqrt(N) values, but that is not typically referred to as “cumulative error”. Alternatively, it’s possible that the authors mean “accuracy”, in which case “cumulative error” is a misnomer. The statement “For Aura MLS, the reported errors, which indicate the precision of the instrument” may give the wrong impression to non-expert readers. As explained in the MLS Data Quality Document, both accuracy and precision values are documented for each product. Accuracy reflects systematic uncertainties (which arise from a variety of sources, including instrumental issues, spectroscopic uncertainty, approximations in the forward model or retrieval process, etc) and is typically quantified through comparison with correlative data sets and bottom-up sensitivity modeling. This uncertainty does not “average down”. Precision reflects radiance noise and represents the statistical repeatability of the measurements. Precision is estimated both empirically and by the Level 2 data processing system, and the latter value is what is reported for each data point in the MLS data files. As I mentioned in the first review, since it is really the relative changes throughout the season, which indicate how well the instruments can track day-to-day variations, and how those compare to the modeled values that are most relevant here (rather than the absolute mixing ratio values), the accuracy can be ignored in the error bars. Thus, in my opinion the error bars for both instruments should be calculated by including only the precision term. Moreover, the precision estimate that is reported for each MLS product is the value for a single profile. Precision is improved by averaging. 
Thus for each vortex average, the proper error bar is computed by dividing the single-profile precision value by the square root of the number of data points contributing to that average. Judging from the size of the MLS error bars in Fig. 13, such a division by sqrt(N) may in fact have been performed, but the text suggests otherwise: L374 talks about taking the square root of the sum of the squared errors, but does not mention the necessity of then dividing by sqrt(N). So, either the error bars for this figure need to be recomputed or the manuscript text and figure caption need to be revised to make this point clear. Finally, what does “fall” mean in L376?
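The prescription in the two paragraphs above can be sketched as follows. This is a minimal illustration only; the function name and the 0.5 ppbv single-profile precision used in the example are hypothetical, not taken from the manuscript or the MLS files.

```python
import math

def vortex_average_error(precisions):
    """Precision of the mean of N single-profile measurements.

    For uncorrelated radiance noise, combining the reported
    single-profile precisions sigma_i in quadrature and dividing
    by N gives
        sigma_mean = sqrt(sum(sigma_i**2)) / N,
    which reduces to sigma / sqrt(N) when all sigma_i are equal.
    Accuracy (systematic) terms are deliberately excluded, since
    only day-to-day relative changes are at issue here.
    """
    n = len(precisions)
    return math.sqrt(sum(s * s for s in precisions)) / n

# Example (hypothetical numbers): 25 profiles in the vortex average,
# each with a reported single-profile precision of 0.5 ppbv.
sigmas = [0.5] * 25
err = vortex_average_error(sigmas)  # 0.5 / sqrt(25) = 0.1 ppbv
```

The key point is the final division by N (equivalently, by sqrt(N) for equal precisions): summing squared errors and taking the square root alone, as L374 describes, yields the uncertainty of the *sum*, not of the *average*.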
P12, L386: Typo: “Livesey et al. (2013)” should be “(Livesey et al., 2013)”.
P12-13, L404-424: I appreciate that the authors have added a reference to Santee et al. [2008]. However, they appear to have missed part of the point of the comment on the original manuscript that motivated mentioning that paper in the first place. In the revised manuscript, the 2008 paper is now cited in the middle of the HNO3 discussion to make a general point about the effect of the ACE orbit on the calculated vortex averages (with an emphasis on explaining the variations from early winter to August). That’s fine as far as it goes, but a critical aspect of the ACE sampling that the authors fail to note is its transit through the collar region in mid to late September, which strongly affects both HNO3 and ClONO2 measurements (as seen clearly in the rapid increases in both species at this time in Fig. 13). Thus the sampling point is also highly relevant for the comparisons of modeled and observed chlorine partitioning in late winter. The authors state that: “ClONO2 in the CMAM30 data is also high compared to ACE-FTS from mid-August to late September”. This statement could be interpreted to suggest that modeled and measured ClONO2 are in better agreement in or after late September. However, I believe that the apparent agreement in late September arises largely because of the ACE sampling. The ACE occultations sweep rapidly through the ClONO2 collar region in mid to late September, imposing an increase in observed “vortex-averaged” ClONO2 on top of that caused by chlorine deactivation. In other words, some of the increase in observed ClONO2 comes about from conversion of active chlorine back into the reservoir form, but much of it is “artificial” – aliasing from the change in the region being observed, and that must be taken into account in interpreting the comparisons with the modeled behavior. Santee et al. [2008] briefly discussed this feature, and that’s why I mentioned it in my earlier review.
P13, L438-442: In reviewing the previous draft, I had inquired why, since the model does include a treatment of STS PSCs, it shows no sign of PSC formation in 2005, apart from a brief interlude in late January. 2005 was a cold winter (as Fig. 13 shows), with extensive PSC formation observed, and the model does calculate some solid-phase HNO3 in late January, as the manuscript points out – so why no hint of PSCs after that time? In response to this comment, the authors added to this sentence “(i.e. solid phase H2O is absent)”. I am confused by this addition, which I do not feel addresses my comment. What does solid phase H2O have to do with STS formation or its lack in the model? Moreover, Fig. 13 does not even show H2O in the solid phase (just gas-phase H2O), so, while no doubt true, this statement is unsupported. Even if this was a typo (and “HNO3” was meant), I still do not understand how it is responsive to my question. Looking more closely at Fig. 13 now, I see that the MLS data also do not indicate significant PSC activity over much of the rest of the winter. I suspect that this may be because a lot more PSC formation occurred at lower altitudes than at 500 K (the level represented in Fig. 13), and furthermore the signatures of localized PSCs tend to be smeared out in vortex averages. Thus it is probably not too surprising that there is very little indication of PSC activity in Fig. 13.
P14, L452-453: In my earlier review, I had commented on the statement “… there is a lot of variability in the ACE-FTS data despite averaging over the polar vortex”. On some days there may be only a few ACE occultations inside the vortex, and therefore I suggested that some minimum number of measurements (e.g., 5) be required to define a “vortex average”, and that eliminating points that are not actually representative of vortex averages might reduce the day-to-day variability. The authors have chosen not to apply such a criterion for discarding averages, and I accept their decision. But I still feel that the wording “despite averaging over the polar vortex” is misleading. The authors argued in their response that because of the “zonal nature of the vortex” removing some points would be unlikely to change their results. But the NH vortex is NOT zonally symmetric – it is frequently elongated and shifted off the pole. In particular, although it was cold in 2005, the vortex was very dynamically active and distorted throughout much of that winter [Manney et al., GRL 33, 2006]. Thus ACE averages based on only a handful of measurements at a fixed latitude could have sampled portions of the vortex with vastly different characteristics as it sloshed around from one day to the next. However, after having gone back and re-read this passage in the revised manuscript, I now find myself questioning why this statement, which is much more relevant for the SH, was even included in reference to the bottom right panel of Fig. 13. It seems to me that the ACE-FTS ClONO2 data display far less variability in the NH than they did in the SH – in fact ACE ClONO2 has nearly constant low values in early January and is essentially zero when coverage resumes after mid-March. Indeed, the statement that “CMAM30 ClONO2 compares reasonably well with ACE” can only be made about early January – they do not match at all after that time. This should be clarified.
P15, L499-502: Typo in L500: “conditions *are* appropriate”. In addition, this sentence is awkwardly worded and should be rewritten, and since H2O is mentioned along with HNO3, dehydration should be added. I suggest something along the lines of “… they persist. Thus the model does not allow for denitrification/dehydration …”.
P15, L502-505: I think that it is confusing to use the phrase “temporary dehydration”, because (like denitrification) the term “dehydration” is often taken to mean the irreversible removal of water vapor from the lower stratosphere. In addition, the grammar of this sentence is awkward. Thus I suggest rewriting it along the lines of: “In addition, the model does not seem to sequester enough water vapour in PSCs in the winter lower stratospheric vortex, because CMAM30 water vapour in this region can be ~20-30% too high despite the pervasive ~10-25% low bias in this field.”
P16, L531-532: In my previous review I noted that it was not surprising that “even during a cold year without an SSW, water vapour shows very little change from the consistent low bias”. I assumed that the authors were drawing a contrast with the SH, where CMAM water vapour wound up being high despite the low bias because it fails to simulate dehydration. That’s why I pointed out that, although it was a moderately cold year, there was essentially no dehydration in the 2005 NH winter, and thus no possibility that CMAM30 would show anything other than the expected low bias. I cited Jimenez et al. [2006] for the fact that there was only a single occurrence of short-lived localized dehydration in the 2005 Arctic winter. I do not understand the authors’ response to this comment. They indicate that they have added the reference, but it does not appear in either the body of the manuscript or the references. They also assert that Jimenez et al. discuss the dehydration that occurred and how it affected the stratosphere several months later, but that is not true for the 2005 Arctic winter (Jimenez et al. also include two Antarctic winters, and perhaps the authors were looking at those results instead).
P16, L536: Typo: “in during”. Delete one.
P32, Fig. 11 caption: In my original comment I did not mean to suggest that the T contours needed to be labeled on the plot itself. I was merely trying to say that it is better if all elements of a figure are described in its caption. I understand that the white contours are mentioned in the body of the manuscript, but they should be defined in the figure caption as well.
P33, Fig. 13: There is a stray ClO label (in green, barely visible) on the right-hand y-axis in the bottom left (SH) panel. |