the Creative Commons Attribution 4.0 License.
A survey of radiative and physical properties of North Atlantic mesoscale cloud morphologies from multiple identification methodologies
Ryan Eastman
Isabel L. McCoy
Hauke Schulz
Robert Wood
- Final revised paper (published on 06 Jun 2024)
- Preprint (discussion started on 26 Sep 2023)
Interactive discussion
Status: closed
RC1: 'Comment on egusphere-2023-2118', Anonymous Referee #1, 21 Oct 2023
Summary:
The authors compare three different supervised neural network classifications of low cloud morphologies in the North Atlantic. The geographic distributions, the overlap statistics and the radiative and physical properties of the different morphologies are discussed in detail. The authors find that the all-sky albedo is more strongly correlated to cloud albedo than cloud amount for nearly all morphologies, and that each morphology displays a distinct set of physical characteristics.
I find the paper to be very well-written and a suitable contribution to ACP. The analyses are carefully done and clearly explained. I only have some minor comments that I detail in the following.
Main comments:
- Some more comparison of MIDAS and MEASURES ‘shared’ categories: I was expecting that most differences between MIDAS and MEASURES are in the disorganized Cu type, which is distributed over more classes in MEASURES. However, there are very pronounced differences in the Open MCC class, for example. I think the authors should analyse and explain these differences in a bit more detail. Fig. 3d, for example, shows that MEASURES hardly identifies Open MCCs. Do you understand why this is the case?
Fig. 5 shows the overlap of MEASURES with MIDAS Open MCCs, but are there also cases where MIDAS detects Open MCCs but MEASURES doesn’t detect anything? And if so, what are the conditions / regions where this occurs? Also in e.g. Fig. 14, Open MCC from the MIDAS and MEASURES classifiers seem to be the furthest apart compared to e.g. the closed and disorganized morphologies. Any ideas why this is so?
- Seasonality & diurnality: As the authors have data for an entire year, I’d find it very interesting to see the seasonal cycle of morphology occurrence, e.g. in the subdomains shown in Figure 13. Likewise, if nighttime morphologies were available, a few words on the diurnal cycles of the different classifiers would be very interesting.
- Rain rate data: I’d like to see 1-2 sentences near L166 regarding how well this routine for deriving rain rates works for the low clouds considered in the study. Especially since the authors find a ‘curious difference’ when comparing mean and peak OD versus rain rate (L305), I wonder whether this isn’t related to the way the rain rates are derived.
More specific comments:
- The SGFF morphologies are mostly written in italics in the manuscript, but not the other morphologies. I’d suggest to also write e.g. Suppressed Cu in italics.
- L33: Maybe talk about Stratocumulus and Cumulus as archetypal cloud types rather than cloud organizations?
- Paragraphs starting in L45 and L52: I’d suggest to switch the sequence of the two paragraphs, as the three routines are only introduced in L52 but already discussed in L45.
- L158: I didn’t fully understand that ‘spaced 333m apart along the satellite ground track’ refers to a horizontal spatial resolution of 333 m. Maybe rewrite.
- L254f: I am a bit surprised that SGFF doesn’t show a lot of within-routine overlap. Previous studies like Vial et al. (2021, https://doi.org/10.1002/qj.4103) mentioned a lot of overlap among SGFF morphologies. What is different here?
- Refer to some literature already in the results section: I’d suggest to add a reference to Mieslinger et al. 2022 in L274; a reference to the statement in L279 that “cloud amount as a proxy ....”; and a reference to McCoy et al. (2023) in L298.
- L280ff: I find the conclusion in L286 regarding the complex picture of morphology and location interesting, but wonder whether it needs Fig. 7-9 in the main text for this. Maybe a selection of the most important subplots is enough? There are already a lot of Figures with many panels and I find it hard to digest all the information. So this could be a good point to reduce information.
- L418ff: I don’t really see what you mean here, e.g. how we can see the change from stratiform types to Flowers from Fig. 5, and what suppressed Cu evolves into. Please clarify.
- L425f: Maybe good to mention cold pools as a potential driving process in this context.
- L428: This summary of Leahy et al. (2012) is confusing and seems to contradict what is written in the next sentence. Please rewrite.
- L453: perterbations --> perturbations
Comments on Figures:
- 1: some colors in panel a) differ from the colors of the three categories. Please explain. Also, please enlarge the axis labels and morphology legends.
- 2-4: I’d suggest to combine all of them in one figure, such that they don’t spread over different pages. I’d also suggest to use a different color scale – for Fish it is not easy to see whether we’re at the lower or upper end of the range.
- 14 and 15: I find the comparison in these figures very interesting! Suggestions: Change MCC to MIDAS in the figure legends. And add in the caption what filled vs. hollow symbols refer to.
Citation: https://doi.org/10.5194/egusphere-2023-2118-RC1
RC2: 'Comment on egusphere-2023-2118', Anonymous Referee #2, 13 Dec 2023
Review of "A Survey of Radiative and Physical Properties of North Atlantic Mesoscale Cloud Morphologies from Multiple Identification Methodologies" by R. Eastman, I. L. McCoy, H. Schulz, and R. Wood (egusphere-2023-2118)
Several recent machine learning methods have been developed to identify different types of mesoscale patterns of low-level clouds. This study compares the pattern types identified by three methods, their spatial distributions, and their radiative and microphysical properties. As such, it provides a useful bridge between newer, more qualitative approaches to studying cloud phenomena and older, more quantitative approaches. For this reason, I think it is worth publishing, but I have several ideas for improvement.
Major comments:
1) The underlying source data for the pattern identification methods is MODIS imagery, which I believe relies at least in part on a threshold method for detection. Also, I believe some MODIS pixels are labeled as partly cloudy, rendering plane-parallel retrievals of cloud properties questionable. It would be helpful to have a little discussion about how limitations and assumptions going into the MODIS imagery might affect the identification of cloud patterns and characterization of their properties.
Subcomment A: The first MODIS issue that comes to mind is that grid box cloud fraction and average cloud optical thickness are highly dependent on whether pixels near the threshold of detection or partly cloudy pixels are identified as cloudy or not. If they are included, cloud fraction will be greater but average cloud optical thickness (or cloud albedo) will be smaller. If they are not included, then cloud fraction will be smaller but average cloud optical thickness (or cloud albedo) will be larger. Because pixels near the threshold of detection or partly cloudy pixels have little impact on radiation flux, whether they are included or not has little impact on the total radiative impact from clouds in the grid box, but it can substantially affect whether differences in radiation flux from clouds are attributed to differences in cloud fraction or differences in cloud optical thickness (or cloud albedo). Since some of the main results concern whether cloud albedo or cloud fraction is more important for all-sky albedo variability, I think it is important to clarify how this depends on assumptions and decisions made about the MODIS source data.
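To make this attribution sensitivity concrete, here is a minimal sketch with invented pixel counts and albedos (none of these values come from the manuscript or from MODIS): the all-sky albedo of the grid box is fixed by the pixels themselves, but whether marginal/partly-cloudy pixels are labeled cloudy or clear shifts the partition between cloud fraction and cloud albedo.

```python
# Hypothetical 1x1 grid box with 100 pixels: 50 optically thick cloudy
# pixels (albedo 0.5), 10 marginal/partly-cloudy pixels (albedo 0.1),
# and 40 clear pixels (surface albedo 0.06). All numbers illustrative.
a_thick, a_thin, a_clear = 0.5, 0.1, 0.06
n_thick, n_thin, n_clear = 50, 10, 40
n_tot = n_thick + n_thin + n_clear

# All-sky albedo is set by the pixels, independent of labeling choices:
a_allsky = (n_thick * a_thick + n_thin * a_thin + n_clear * a_clear) / n_tot

# Case 1: marginal pixels counted as cloudy -> higher CF, lower cloud albedo
cf_incl = (n_thick + n_thin) / n_tot
a_cloud_incl = (n_thick * a_thick + n_thin * a_thin) / (n_thick + n_thin)

# Case 2: marginal pixels counted as clear -> lower CF, higher cloud albedo
cf_excl = n_thick / n_tot
a_cloud_excl = a_thick

print(cf_incl, round(a_cloud_incl, 3))  # 0.6 0.433
print(cf_excl, a_cloud_excl)            # 0.5 0.5
```

Both labeling choices describe the same scene and the same all-sky albedo, yet they attribute its variability differently between cloud fraction and cloud albedo, which is the crux of this subcomment.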
Subcomment B: The second MODIS issue that comes to mind is that retrievals of cloud droplet number and cloud droplet effective radius are most accurate in areas of extensive homogeneous cloud layers and biased in areas of broken cloud and partly cloudy pixels. For this reason, studies often limit characterization of droplet number and size to areas where retrievals are most accurate, but there is reason to believe that these areas are not representative of the scene as a whole. Since some of the main results concern cloud droplet size and implied rain rate, I think it is important to clarify how this depends on assumptions and decisions made about the MODIS source data.
2) The fact that the morphology data are projected onto a 1x1 latitude-longitude grid and that a 1x1 grid box can contain multiple pattern types raises the issue that two pattern types from different methodologies might be identified in the same grid box yet actually be only geographically adjacent with no geographical overlap. How much this happens would depend on the size of the scene classified into pattern types by the various methods and the spatial autocorrelation of pattern types. Additionally, there would be a sampling bias introduced by the fact that latitude-longitude grid boxes are smaller at higher latitudes so that it is less likely that the grid box would contain multiple pattern types. Although consolidation to a 1x1 grid makes comparison with level-3 MODIS and CERES data simpler, it muddles the interpretation of co-occurrence of various pattern types since it is not known for sure whether there is true geographical overlap between pattern types or whether pattern types are geographically adjacent. Also, there is ambiguity in matching pattern types to level-3 MODIS and CERES grid box properties when there are multiple pattern types in a grid box.
Subcomment A: I think it might be better to use equal-area grid boxes to avoid the sampling bias with latitude, although this makes comparison with level-3 MODIS and CERES data difficult. It may also be useful to investigate the sensitivity of the results to grid box size.
Subcomment B: Although perhaps not feasible, I think a better approach would be to go to a much smaller equal area grid box resolution that would in almost all cases be associated with a single pattern type. Then the true geographical co-occurrence of pattern types from two different methods would be known. The co-occurrence of two different types from the same methodology in adjacent grid boxes could also be determined. This approach would result in a less muddled interpretation.
Minor Comments:
1) The first paragraph of the abstract is awkward. It would probably be better to split it up into several more conventional sentences.
2) Although it would add another figure set, I think it would be helpful to show a representative scene for each of the pattern types rather than require the reader to go back to three papers to see what each of the pattern types looks like.
3) The text size in Fig. 1 and Fig. 5 is very small and is barely readable without zooming in a bit.
4) The blank areas in Fig. 1 suggest that some scenes are not classified into any type of low cloud pattern. Or were they left out purposefully? Note that non-classification is itself a type.
5) I think number of observations is not a good unit to use in Figs. 2-4 since it is difficult to directly interpret. I would prefer instead frequency of occurrence of each pattern type, including perhaps the frequency of non-identification of a type, in which the frequency of each pattern type plus non-identification adds up to 100%. This would enable the reader to know the frequency at which a certain cloud type pattern is identified at a particular location over the North Atlantic.
6) Figs. 2-4 do not have a color scale that is friendly to people with color-impaired vision.
7) I am not sure that “fraction of maximum overlap” is the best way to characterize how often two pattern types from two different methods are co-identified. It might be more insightful to calculate the frequency of occurrence that type B is identified when type A is already identified, or the frequency of occurrence that type A is identified when type B is already identified. These numbers may not be the same. For example, let’s say that one method has stricter criteria for identifying open cell stratocumulus compared to another method. At a particular location, method 1 open cell Sc might occur on 50 out of 100 days and method 2 open cell Sc might occur on 25 out of 100 days, but always on days on which method 1 open cell Sc occurs. In this case, method 2 open cell Sc occurs 50% of the time when method 1 open cell Sc is already identified, but method 1 open cell Sc occurs 100% of the time when method 2 open cell Sc is already identified. The method of “fraction of maximum overlap” would yield a value of 1 for the above scenario, which is consistent with method 2 open cell Sc always occurring when method 1 open cell Sc is already identified, but it does not inform the reader about how much method 1 open cell Sc occurs given that method 2 open cell Sc is already identified. It seems that this might be useful information.
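A minimal sketch of the asymmetry described above, using the example numbers from this comment (hypothetical detections at one location, not real data; the "fraction of maximum overlap" definition is assumed to be overlap divided by the smaller occurrence count):

```python
# Hypothetical daily detections of open-cell Sc from two methods over 100 days.
method1_days = set(range(50))   # method 1 detects on days 0-49
method2_days = set(range(25))   # method 2 detects on days 0-24, a subset

overlap = len(method1_days & method2_days)  # 25 co-identified days

# Assumed definition of "fraction of maximum overlap":
frac_max_overlap = overlap / min(len(method1_days), len(method2_days))

# The two conditional frequencies proposed in this comment:
p_2_given_1 = overlap / len(method1_days)  # how often method 2 agrees with method 1
p_1_given_2 = overlap / len(method2_days)  # how often method 1 agrees with method 2

print(frac_max_overlap, p_2_given_1, p_1_given_2)  # 1.0 0.5 1.0
```

The single symmetric statistic equals 1.0 here, while the two conditional frequencies (0.5 and 1.0) reveal the directional disagreement between the methods.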
8) I suppose the large values of fraction of maximum overlap seen between the MIDAS types in Fig. 5 are due to the 50% overlap between neighboring boxes employed by that cloud pattern identification method. In this case, no physical insight can be drawn from that fact since it arises by construction.
9) I suppose the small values of fraction of maximum overlap seen between the SGFF types in Fig. 5 are due to the very large spatial scale of clouds identified as a single type that is suggested by Fig. 1. In that case, it is less likely that two different types would occur in the same 1x1 grid box and thus possibly overlap, and it is difficult to draw physical insight from the values of fraction of maximum overlap since they appear to be substantially driven by the large spatial scale of SGFF type identification (assuming that Fig. 1 is representative).
10) If I understand correctly, the overlap in Fig. 5 between pattern types from different methodologies can arise because they both occur in the same exact area within a 1x1 grid box or because they occur in different areas within the same 1x1 grid box. It seems undesirable that “overlap” does not have a unique physical meaning. There is ambiguity about whether different methods are identifying types that are co-located or adjacent.
11) It is not clear to me from the method explanation how cloud albedo and cloud amount are matched to types in 1x1 grid boxes. Since a single 1x1 grid box on one day could contain more than one cloud type from the same method, there is not a unique matching between type and cloud albedo and cloud amount associated with that type. If type A was only a small fraction of the 1x1 grid box, the cloud albedo and cloud amount would primarily be caused by type B but nevertheless get averaged into type A, thus muddling the results. It would be better to calculate cloud albedo and cloud amount only from those 1x1 grid boxes and days in which only one type was identified for a particular method.
12) Does the relative importance of cloud albedo and cloud amount in explaining all-sky albedo depend on thresholds for cloud identification or inclusion of partial cloud pixels in MODIS?
13) In Fig. 6 the thickness of the lines represents the uncertainty of the mean. By this metric, there is not much overlap between cloud types, especially for the cloud albedo vs. cloud amount plot. But how much overlap is there between the distributions? Does the statement “Taken together, these figures show how radiative properties for each cloud morphology are a unique function of cloud cover and cloud albedo” apply to individual scenes or only to the mean?
14) It might be useful to show the annual climatology of low cloud albedo in order to put the albedo anomaly plots in context. One thing that is confusing about the albedo anomaly plots is that they do not appear to add up to zero across all the pattern types for a particular methodology. For example, the SGFF albedo anomaly is negative for all types in the midlatitude North Atlantic. Isn’t the climatology of low cloud albedo constructed from the four SGFF types? If so, how can they all have an anomaly less than the climatology? Shouldn’t some have an anomaly greater than the climatology to balance out?
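The balance argument can be checked with a toy calculation (all numbers invented): if the climatological low cloud albedo is the occurrence-weighted mean over the same set of types, then the occurrence-weighted type anomalies must sum to zero by construction, so they cannot all be negative unless the climatology includes contributions beyond those types.

```python
# Toy check: occurrence-weighted anomalies about an occurrence-weighted
# climatology sum to zero by construction. Numbers are invented.
means = [0.45, 0.30, 0.25, 0.40]  # per-type mean albedo (hypothetical)
freqs = [0.20, 0.40, 0.10, 0.30]  # occurrence frequencies, summing to 1

clim = sum(f * m for f, m in zip(freqs, means))       # weighted climatology
anoms = [m - clim for m in means]                     # per-type anomalies
weighted_sum = sum(f * a for f, a in zip(freqs, anoms))

print(abs(weighted_sum) < 1e-9)  # True: anomalies balance out
```

So if all SGFF types show negative anomalies in a region, the climatology there presumably cannot be built solely from those four types with their regional occurrence weights, which is the apparent inconsistency this comment asks about.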
15) I wonder how accurate some of the relationships in Fig. 10 are. With broken and scattered cloud fields, is it really possible to accurately retrieve LWP and droplet number concentration?
16) “This section analyses ‘cloudy’ retrievals in 30m height bins in the lowest 4km of CALIOP LIDAR profiles in classified boxes.” How is it handled if there are clouds above 4 km elevation? These clouds may attenuate the signal and cause misidentification of optical thickness of lower clouds if the signal does not reach the surface.
17) “A 1:1 line is also shown, where the area-wide North Atlantic mean values for each morphology are shown as a hollow symbol”. This information should be in the caption.
18) Possibly the information in Figs. 14-15 could be more simply presented in a table or two.
19) Line 453: perterbations -> perturbations
Citation: https://doi.org/10.5194/egusphere-2023-2118-RC2
AC1: 'Comment on egusphere-2023-2118', Ryan Eastman, 17 Jan 2024
The comment was uploaded in the form of a supplement: https://egusphere.copernicus.org/preprints/2023/egusphere-2023-2118/egusphere-2023-2118-AC1-supplement.pdf