Thanks to the authors for their replies and for making some minor changes to the
previous manuscript version. However, the manuscript still does not sufficiently
convey the very clear qualitative and quantitative differences between the
model's stratospheric response to 11-year solar forcing and the response
derived from the available observational data. It is not noted anywhere that
at least a few of the CMIP-5 models (some of those with coupled oceans as well
as interactive ozone chemistry) do a much better job of simulating the observationally
estimated response, including the zonal wind response in the upper stratosphere
that is found in both the northern and southern hemispheres. The results in sections
5 and 6 regarding the role of analysis method and interannual variability in
the detectability of the solar cycle signal in observations are not really
new, although I agree with the authors that more discussion of these issues
is needed in the literature. The results in section 6 do not take into consideration
the lack of realism of the model and the possibility that interannual variability
in the model may be greater than that in the actual atmosphere. In view of the
minor changes made to the revision, I went back and forth between recommending
rejection and major revisions and finally settled on major revisions.
(1) The abstract and summary (section 7) still make little mention of the strong
disagreements between the model-simulated responses of stratospheric ozone,
temperature, and zonal wind to 11-year solar forcing and that derived
from observations (here, ERA-Interim and SAGE II). The summary section is virtually
unchanged. The only admission of model deficiencies added to the manuscript
is (abstract): ``... there are some differences in magnitude, spatial
structure and timing of the signals in ozone, temperature and zonal winds.'' There is no
acknowledgment of the very clear qualitative differences between the model response
and that derived from observations in the abstract or summary sections.
In the reply to this criticism, it is argued that, in Fig. 2, ``the
uncertainties in the best estimates of temperature and ozone responses in the
tropics are overlapping throughout most of the stratosphere.'' This is not
a convincing answer for three reasons. First, there is a qualitative difference in
the altitude structure of the ozone and temperature responses estimated from
observations and that estimated in the model. The observations (both ERA-Interim
and SAGE II) indicate maximum responses in both the tropical upper and lower
stratosphere with a minimum near 30-35 km. In contrast, the model tropical
temperature response declines monotonically with decreasing altitude while
the model ozone response has only a broad maximum centred near 35-38 km. The model
has no lower stratospheric response and the response in the upper stratosphere
is significantly weaker than that derived from SAGE II data. Second,
the fact that the model error bars overlap with the observational error bars is not a
reason to accept the model results, because it is the model's best estimate (the mean
across the ensemble members) that should be compared to the observational
error bars. If an infinite number of ensemble members were available, the standard error
of the mean would shrink to zero. The mean profile based on 3 ensemble members falls
outside of the observational error bars at 2 hPa for temperature and near 30 km for ozone.
Third, Figure 2 is a very smoothed (latitudinally averaged) comparison that makes the model
results look better than they really are. The actual latitudinal structure is shown in
Figures 3-5 where it is seen that there is a clear qualitative difference between the
tropical stratospheric solar response of ozone or temperature simulated in the model
and that estimated observationally. In the reply, it is argued that observational
uncertainties nevertheless allow use of the model results to ``provide a valuable insight into
the role of detection method and interannual variability for the detected
solar cycle signal/response that has not been widely acknowledged in the previous
literature.'' However, the presented model results do not give any significant
new insights beyond what is already well known to data analysts and modellers
(see comments 5 and 6 below).
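To make the ensemble-mean point concrete, the following sketch uses entirely invented numbers (a notional response of 1.2 units with a member-to-member spread of 0.6, unrelated to the manuscript's values) to show that the standard error of the ensemble mean shrinks as 1/sqrt(N), which is why the mean of the 3 realisations, not its error bar, is the quantity to compare against the observational uncertainty:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented illustration: each ensemble member's regressed solar response is
# the model's "true" response plus noise from internal variability.
true_response = 1.2   # notional units (e.g. % ozone change per solar cycle)
member_spread = 0.6   # notional member-to-member standard deviation

sems = {}
for n_members in (3, 30, 3000):
    members = true_response + member_spread * rng.standard_normal(n_members)
    # Standard error of the ensemble mean shrinks as 1/sqrt(N)
    sems[n_members] = members.std(ddof=1) / np.sqrt(n_members)
    print(f"N = {n_members:4d}: mean = {members.mean():5.2f}, "
          f"standard error of mean = {sems[n_members]:.3f}")
```

With enough members the model's best estimate converges to a fixed value, so overlap of the wide 3-member error bars with the observational error bars says little about model fidelity.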
(2) The weight of the evidence still indicates an underestimation by the
model of the upper stratospheric ozone and temperature responses. Accepting
the argument that it is the MLR-derived model ozone response of 1.5% at 45 km in
Figure 5b (rather than 1.0%) that is relevant, the ozone response derived from SAGE II
data within 10 degrees of the equator is still more than 3% and extends up to 50
km, whereas the model mean value is between 1% and 1.5% between 45 and 50 km.
The smoothed comparison shown in Figure 2c is unconvincing for reasons given in comment
(1). The ERA-estimated tropical temperature response in Figure 3c exceeds 1.1 K at
altitudes as low as 43 km while the model temperature response in Figure 3b maximises
at 0.9 K near 53 km and is less than 0.6 K at 43 km. Accepting the reply that the
6 spectral bands apply only to the model's shortwave radiative transfer scheme,
there remains a concern that the assumed irradiance variation may be too small.
On p. 5, line 31 of the new manuscript version, it is noted that the assumed irradiance in
the main UV band is 20% less than that recommended in the CMIP-5 SSI specifications.
Several CMIP-5 models with interactive ozone chemistry have simulated stronger upper
stratospheric ozone responses (about 2% at 45 km) and temperature responses
(about 1 K at 45 km), suggesting that the adopted SSI variation may be too weak.
But even the CMIP-5 recommendations could be too small because, as reviewed by Ermolli
et al. (ACP, v. 13, p. 3945, 2013), there remain significant differences between proxy
solar spectral irradiance models. If the SSI variation is larger than the CMIP-5
recommendations at wavelengths that affect O2 photolysis and radiative heating near
45 km, then the existing model code would produce larger ozone and temperature responses
consistent with those derived observationally. This possibility is not acknowledged
in the manuscript.
(3) The lack of a tropical lower stratospheric response to 11-year solar forcing
is still a deficiency of the model. Thanks for adding the sentence to the
Introduction noting that coupling between the lower stratosphere and tropical
tropospheric convection could be important for producing or amplifying the tropical
lower stratospheric response (lines 18 and 19 on p. 3 of the revised version). However,
the authors' reply that using prescribed SSTs accounts sufficiently for this coupling
is unconvincing. There is no tropical lower stratospheric response in the model.
Prescribed SSTs will not account for any coupling via the MJO, for example. Several
CMIP-5 models with coupled oceans produce a tropical lower stratospheric response even
when time periods without significant volcanic eruptions are analysed.
(4) The lack of a zonal wind response at northern midlatitudes in December,
January, and February remains a major shortcoming of the model. Without such
a response, the model is incapable of simulating the solar dynamical signal.
There is no admission of this important fact in the manuscript.
While the manuscript notes the too-early timing of the modelled response in
November, there is still no mention of the latitudinal bias of the response
such that it is found at 60N in the model but at 30-40N
in the observations. While the reply emphasises uncertainties in the observations,
it is important to note that the strong positive zonal wind response in the midlatitude
upper stratosphere and lower mesosphere that is found in DJF is also found with even
larger amplitude at southern midlatitudes (30-40S) in JJA (see, e.g.,
Figure 4 of Crooks and Gray, J. Climate, v. 18, p. 996, 2005). The existence of a
corresponding zonal wind
response during austral winter provides empirical evidence that the response is real
and must be simulated realistically in a model if it is to be used to evaluate
the detectability of the solar cycle signal. The comparison shown in Figure R1 of the
reply does not alleviate this concern because the comparison is done at 60N
where the observationally estimated response is weak or non-existent. The error bars
on the modelled zonal wind response are also quite large because the ensemble includes
only 3 members. Again, the requirement should be that the model's best estimate
(the mean of the three realisations) fall within the observational error bars.
But the comparison should be made at the latitude where the observationally estimated solar
signal is found.
(5) The results in section 5 regarding the role of analysis method (compositing
versus MLR) are well known and do not provide any new insights regarding
the detectability of a given forcing signal. It is well known that MLR
results for any quasi-periodic signal (ENSO, QBO, or solar cycle) will differ from
compositing results because the MLR method accounts approximately for other sources
of interannual variability. For this reason, nearly all observational analyses use MLR.
It is also well known that the differences will be larger in regions where interannual
variability is larger (i.e., in the tropical troposphere and at high latitudes).
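The MLR-versus-compositing point is textbook material and can be reproduced on synthetic data in a few lines (all series below are invented sinusoids and noise standing in for solar, QBO, and ENSO variability; the amplitudes carry no geophysical meaning):

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.arange(480)  # 40 years of monthly data (invented)

# Stand-in proxies: 11-year solar cycle, ~28-month QBO, ENSO-like red noise
solar = np.sin(2 * np.pi * t / 132)
qbo = np.sin(2 * np.pi * t / 28)
enso = np.convolve(rng.standard_normal(t.size), np.ones(9) / 9, mode="same")

# Synthetic "observations": a 1.0-unit solar signal buried under larger
# QBO and ENSO variability plus white noise
y = 1.0 * solar + 2.0 * qbo + 2.0 * enso + 0.5 * rng.standard_normal(t.size)

# Compositing: half the solar-max-minus-solar-min difference; the other
# signals contaminate the composite because they do not average out exactly
composite = 0.5 * (y[solar > 0.8].mean() - y[solar < -0.8].mean())

# MLR: regress on all proxies at once, which accounts for the other signals
X = np.column_stack([np.ones(t.size), solar, qbo, enso])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

print("true solar amplitude: 1.00")
print(f"composite estimate:   {composite:.2f}")
print(f"MLR estimate:         {coef[1]:.2f}")
```

In any single draw the composite is exposed to contamination from the QBO and ENSO terms, while the MLR coefficient stays close to the prescribed amplitude; the discrepancy grows wherever the non-solar variability is larger, exactly as stated above.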
(6) The results in section 6 (comparison of yearly mean temperature, ozone, and zonal
wind results derived by MLR for the three ensemble members; Figures 10-12) are of
interest but do not take into consideration the possibility that interannual variability
in the model may be greater than that in the atmosphere. There is no mention of
this possibility in the manuscript. It is well known that a
large number of ensemble members is needed to extract with high confidence a given
geophysical signal from model data. The number of members and total record length
considered here (three
45-year simulations representing a total of 135 years subjected to MLR analysis)
is too small. The sentence added to section 6 (lines 14-17 on p. 19 of version 3),
``From a modelling perspective, it is therefore crucial that the impact of the
solar cycle forcing on climate is studied with sufficiently long simulations and
that the current observations and reanalysis records for the stratosphere are
interpreted carefully, bearing in mind the relatively small number of degrees of
freedom they represent'', is certainly true, but is there anything new here?
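The degrees-of-freedom concern can be quantified with a simple Monte Carlo sketch (all amplitudes invented): regressing a 0.5-unit 11-year signal out of AR(1) interannual noise, the spread of the estimated solar coefficient across realisations is substantial for 45-year records and shrinks only slowly with record length:

```python
import numpy as np

rng = np.random.default_rng(2)

def solar_coefficient(n_years):
    """Regress one synthetic annual-mean record onto an 11-year cycle.

    Invented setup: a 0.5-unit solar signal plus AR(1) interannual noise
    (lag-1 autocorrelation 0.6). Returns the fitted solar coefficient.
    """
    t = np.arange(n_years)
    solar = np.sin(2 * np.pi * t / 11)
    eps = rng.standard_normal(n_years)
    noise = np.empty(n_years)
    noise[0] = eps[0]
    for i in range(1, n_years):
        noise[i] = 0.6 * noise[i - 1] + eps[i]
    y = 0.5 * solar + noise
    X = np.column_stack([np.ones(n_years), solar])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

spreads = {}
for n_years in (45, 135, 1350):
    estimates = [solar_coefficient(n_years) for _ in range(500)]
    spreads[n_years] = float(np.std(estimates))
    print(f"{n_years:5d}-year records: std of fitted solar coefficient = "
          f"{spreads[n_years]:.2f}")
```

In this invented setup, with only ~4 solar cycles per 45-year record the realisation-to-realisation spread of the coefficient is comparable to the signal itself, so three 45-year members constrain the response only weakly.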
(7) There is no discussion in the manuscript about the need to validate the model
using observational estimates of the atmospheric response to 27-day solar forcing
and the stratospheric QBO before applying it to evaluate the effects of 11-year
solar forcing. The observational uncertainties for these shorter-term forcings
are much smaller than for the 11-year solar forcing problem. Although 27-day
solar UV variability has been weaker during the last few solar cycles, there is
ample satellite and reanalysis data available for the 1979-1993 period when
27-day variability was stronger and atmospheric effects were easier to measure.
It would be straightforward to add daily resolution of the SSI forcing to the model
and investigate its ability to simulate observed atmospheric responses
on this time scale. Probably only 3 ensemble members would be sufficient to
accurately determine the model responses with very small error bars. Similarly, if
the model simulates a QBO, it should be straightforward to determine whether there
are any tropospheric consequences and whether these agree with observational
constraints (e.g., Yoo and Son, 2016).