Volume 25, issue 22
https://doi.org/10.5194/acp-25-16969-2025
© Author(s) 2025. This work is distributed under the Creative Commons Attribution 4.0 License.
Applying deep learning to a chemistry-climate model for improved ozone prediction
Download
- Final revised paper (published on 27 Nov 2025)
- Preprint (discussion started on 10 Jun 2025)
Interactive discussion
Status: closed
Comment types: AC – author | RC – referee | CC – community | EC – editor | CEC – chief editor
- RC1: 'Comment on egusphere-2025-1250', Anonymous Referee #2, 21 Jul 2025
- RC2: 'Comment on egusphere-2025-1250', Anonymous Referee #1, 12 Aug 2025
- AC1: 'Response to reviewers comments', Zhenze Liu, 23 Sep 2025
Peer review completion
AR – Author's response | RR – Referee report | ED – Editor decision | EF – Editorial file upload
AR by Zhenze Liu on behalf of the Authors (23 Sep 2025)
Author's response
Author's tracked changes
Manuscript
ED: Referee Nomination & Report Request started (30 Sep 2025) by Pedro Jimenez-Guerrero
RR by Anonymous Referee #2 (16 Oct 2025)
ED: Publish as is (16 Oct 2025) by Pedro Jimenez-Guerrero
AR by Zhenze Liu on behalf of the Authors (24 Oct 2025)
Liu et al. (2025) use six different statistical models to bias-correct surface UKESM1 ozone against the CAMS reanalysis. A weighted approach is shown to improve performance over any single model. This bias correction is then applied to future scenarios, specifically SSP3-7.0 and SSP3-7.0-lowNTCF. The manuscript is well written and the figures are broadly of good quality.
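To make the summary above concrete, a minimal sketch of one plausible weighting scheme is given below; it is not taken from the manuscript. Each statistical model supplies an estimate of the present-day UKESM1-minus-CAMS bias, the estimates are blended with (hypothetically) inverse-RMSE weights, and the blended bias is subtracted from the raw model field. All function names, weights, and numbers are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def weighted_bias_correction(ukesm_o3, member_bias, member_rmse):
    """Blend per-member bias estimates with inverse-RMSE weights and subtract them.

    ukesm_o3    : (n,) raw UKESM1 surface O3 (e.g. ppb)
    member_bias : (k, n) UKESM1-minus-CAMS bias predicted by each of k statistical models
    member_rmse : (k,) validation RMSE of each member against CAMS (same units)
    """
    w = 1.0 / np.asarray(member_rmse, dtype=float)   # better-performing members get larger weight
    w /= w.sum()                                     # normalise so the weights sum to 1
    blended_bias = w @ np.asarray(member_bias)       # (n,) weighted-mean bias estimate
    return ukesm_o3 - blended_bias, w                # corrected O3 field and the member weights

# Illustrative use with synthetic numbers only
rng = np.random.default_rng(0)
raw_o3 = 40.0 + 5.0 * rng.standard_normal(100)                         # stand-in "UKESM1" O3
biases = np.stack([4.0 + rng.standard_normal(100) for _ in range(6)])  # six members' bias estimates
rmses = np.array([2.1, 1.8, 2.5, 1.9, 2.2, 2.0])
corrected_o3, weights = weighted_bias_correction(raw_o3, biases, rmses)
```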
Major concerns
My major concern with this study is the assumptions behind it and the validity of the bias correction for future scenarios. The authors state "We assume that UKESM1 exhibits systematic biases that are associated with other self-generated variables" (L97), and UKESM1 biases are then corrected by comparing UKESM1 to CAMS. However, there is only a limited discussion of CAMS, and while CAMS shows reduced biases compared to TOAR observations, there is no detailed comment on whether CAMS uses the same emissions as UKESM1, or on how emissions, and the way they are represented in the models, could influence biases against observations. How much is CAMS constrained by data assimilation compared to a free-running model such as UKESM1, and is correcting ozone for, e.g., biases in temperature a valid and fair approach? I feel that more reasoning is required here, along with clear caveats on the approach taken.
I am also unconvinced that bias-correcting to CAMS for the present day and then applying this bias correction to future scenarios is valid and fair. How sure are we that both UKESM1 and CAMS capture the correct internal relationships in the present-day simulations, such that projecting this bias correction into the future, with a different climate state and different emissions, would leave the correction still valid? Again, I feel that a greater discussion of the validity of this method and its caveats should be given, especially as quite strong statements are made assuming that this is an entirely valid approach. The authors state that "This indicates that the UKESM1 has a greater sensitivity of seasonal O3 changes due to unknown reasons" (L149) in the discussion of future surface O3 changes, but these unknown reasons may make this approach less certain to succeed. How sure are the authors that they have made the correct assumptions in the calculation of the present-day biases, and is UKESM1 the right model for considering regional air pollution in the context of their wider questions?
Although, at the end of the paper, the authors do state that "we acknowledge that uncertainties remain, particularly regarding the use of CAMS data as a reference for model training" (L262-3), I would have preferred a much more detailed discussion of the assumptions and limitations of this study, as from the current manuscript I am not convinced that this is a valid approach. As a technical piece of work it is well formulated and presented, but scientifically I am nervous about the strength of the scientific statements that the authors make, particularly around the size of the biases UKESM1 may simulate when considering future climates.
Specific issues
Figure 3 - Why do there seem to be discontinuities at sigma values of 0.01, 0.1, and 1? The behaviour of the curve appears to jump at each of these values.
Figure 4 - I needed to zoom in quite a lot to be able to see the detail of the hatching. I would recommend making this plot bigger, perhaps a full-page 6x2 rather than the 3x4 currently presented.
Figure 6 - Given that the error bars (only one standard deviation) in winter nearly all straddle zero, can it be said that this method identifies any biases in that season at all?