I recommend this paper be rejected. It does not present interesting new science, and there are still flaws in the figures. The analysis of differences between climate change simulations in which the global average temperature does not change, using a scaling that depends on temperature differences, does not make sense to me. I do not understand why that part is in the paper.
I am very annoyed that the authors did not respond to one of the items in my previous review. The maps are still not plotted correctly. The longitude labels are still in the wrong place, and there is a border at the top, bottom, and right edges of the maps with no shading. This gives me no confidence that the results are plotted correctly. You do not have to use GrADS, which would not have this problem; many other graphics programs, such as NCL, Ferret, and even Matlab, can do this. This refusal to fix the maps alone leads me to recommend to the Editor that this paper be rejected, and that, if it is resubmitted, the Editor ensure the maps are of acceptable quality.
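For the authors' benefit: one common cause of exactly this kind of unshaded border is passing cell-center coordinates to a pcolormesh-style plotting routine, which silently drops the last row and column of data. This is a sketch of the standard fix, not a claim about what the authors' plotting code actually does; the function name `centers_to_edges` is mine.

```python
import numpy as np

def centers_to_edges(centers):
    """Convert N cell-center coordinates to N+1 cell-edge coordinates.

    Pcolormesh-style routines shade the cell *between* consecutive
    coordinates, so passing N centers with an N-point data array drops
    the last row/column, leaving an unshaded border at the map edges.
    """
    centers = np.asarray(centers, dtype=float)
    mid = 0.5 * (centers[:-1] + centers[1:])        # interior edges
    first = centers[0] - (mid[0] - centers[0])       # extrapolate outward
    last = centers[-1] + (centers[-1] - mid[-1])
    return np.concatenate([[first], mid, [last]])

# Example: 2.5-degree longitude grid with centers at 0, 2.5, ..., 357.5
lon_centers = np.arange(0.0, 360.0, 2.5)   # 144 centers
lon_edges = centers_to_edges(lon_centers)  # 145 edges, -1.25 to 358.75
```

Passing `lon_edges` (and the analogous latitude edges) to the shading routine, with the tick labels still placed at the centers, would shade the full domain and put the longitude labels where they belong.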
Fig. 4 has no significance measures or error bars. How different from zero would the values have to be to merit consideration?
p. 4, last paragraph. No: the small differences between the simple and extended scalings, and the large disagreement between both of them and the actual results, mean that this is not an appropriate way to analyze the results. First, statistical tests are needed to show how different the scalings must be from each other to even merit consideration. To say that relative humidity (RH) plays a modest role is incorrect, and that claim certainly should not be in the abstract. What is correct is that you cannot tell how important RH is.
In various places in the paper, the authors say data are not available. But did they write to the modelers to request the data? Just because data are not posted on the websites the authors consulted does not mean they do not exist. In my experience, modelers are happy to send data in response to a request.
If the authors are going to analyze RH and ITCZ location, and their changes under geoengineering, it is incumbent on them first to analyze the piControl runs to see whether the models simulate these fields well in the first place. If not, how can we trust small changes? In such a comparison, it is traditional to throw out models whose current climate differs substantially from observations, not models whose output you could not obtain.
The ITCZ shifts found here are very small (<1°) and completely expected given the N-S temperature change differences. Since the models differ so much in their simulation of the ITCZ, this does not seem an important result.
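For reference, a common simple index of ITCZ position is the area-weighted tropical precipitation centroid; shifts of under 1° in this index are well within the inter-model spread. This is a generic sketch of that index, not the metric the authors actually used (the function name `itcz_latitude` and the 20S-20N window are my assumptions).

```python
import numpy as np

def itcz_latitude(lat, precip):
    """Precipitation-weighted latitude centroid between 20S and 20N,
    a common simple index of zonal-mean ITCZ position.

    lat    : 1-D array of latitudes in degrees
    precip : 1-D array of zonal-mean precipitation on the same grid
    """
    lat = np.asarray(lat, dtype=float)
    precip = np.asarray(precip, dtype=float)
    mask = (lat >= -20.0) & (lat <= 20.0)
    # cos(lat) accounts for the shrinking area of latitude bands
    w = precip[mask] * np.cos(np.deg2rad(lat[mask]))
    return np.sum(lat[mask] * w) / np.sum(w)

# Example on a 1-degree grid: precipitation concentrated at 10N
lat = np.arange(-30.0, 31.0, 1.0)
precip = (lat == 10.0).astype(float)
```

Differencing this index between the geoengineering and control runs, model by model, would show whether the reported sub-degree shifts exceed the noise.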
For the seasonal analysis, why did the authors choose the unconventional JFM and JAS for the seasons rather than the traditional DJF and JJA? Without a specific justification this was the wrong decision, and it prevents comparison with the results of others.
There are 15 more comments in the attached annotated manuscript that would need to be addressed.