Atmospheric inverse modelling has become an increasingly useful tool for
evaluating emissions of greenhouse gases including methane, nitrous oxide, and
synthetic gases such as hydrofluorocarbons (HFCs). Atmospheric inversions for
emissions of fossil fuel CO2 (ffCO2), however, are comparatively less well
established.
The US state of California currently emits roughly 100 Tg C of fossil fuel
CO2 (ffCO2) per year.
Previous research has shown that inferring ffCO2 emissions from atmospheric
observations is challenging.
Recent studies with both real atmospheric measurements and pseudo-data have
begun to quantify these uncertainties.
Although atmospheric inversions may provide a method for estimating emissions
that is useful for evaluating emission reduction policies, such as AB-32,
systematic errors can arise from the atmospheric transport and prior emission
models (e.g. Nassar et al., 2014; Liu et al., 2014; Hungershoefer et al.,
2010; Chevallier et al., 2009, Gerbig et al., 2003). Comparisons of
The objective of this paper is to examine the sensitivity of a regional
inversion for Californian ffCO2 emissions to errors in the prior emission
estimate and in modelled atmospheric transport.
Our approach is to use simulation experiments to quantify representation and transport error using the inversion set-up and the observation network from Graven et al. (2018) as a test case. Specifically we test whether the inversion can estimate the “true” emissions that were used to produce the pseudo-data within the uncertainties when the prior emission estimate includes spatial and temporal representation errors within the scope of current emission estimates (Vulcan v2.2 and EDGAR v4.2 FT2010). We further test whether the inversion can estimate true emissions within the uncertainties when the transport model used for the prior simulation is different from the transport model used to produce the pseudo-data, emulating transport error.
The analysis approach applies a Bayesian inversion developed from previous
work that combines atmospheric observations, atmospheric transport modelling,
prior flux models, and an uncertainty specification (Jeong et al., 2013;
Fischer et al., 2017). Here, the inversion scales prior emission estimates
for 15 regions (Fig. 1a, Table 1) termed “air basins”, classified by the
California Air Resources Board for air-quality control.
The 15 air basins of California with respective emissions as estimated by Vulcan and EDGAR models. Also shown are the SD prior uncertainty estimate (Fischer et al., 2017) and difference in magnitude between Vulcan and EDGAR for each air basin. Air basin numbers correspond to those marked in Fig. 1.
As a test case for exploring uncertainties in ffCO2 inversions, we use the
inversion set-up and observation network from Graven et al. (2018).
The observed ffCO2 data are not used directly in this study; instead, we
generate pseudo-observations at the same sites and times.
The two prior emission estimates used here are gridded products produced by
EDGAR (version FT2010; EDGAR, 2011) for the year 2008 and Vulcan (version 2.2)
for 2002 (Gurney et al., 2009). EDGAR is produced at an annual
resolution, whilst Vulcan has an hourly resolution. The two models use
different emission data and different methods to spatially allocate
emissions, with annually averaged statewide emissions differing by 17.8 Tg C
(Table 1).
We estimate prior uncertainty in the same way as in Fischer et al. (2017),
using a comparison of four gridded emission estimates in California (Vulcan version 2.2,
EDGAR FT2010, ODIAC version 2013, and FFDAS version 2) as well as a comparison across
an ensemble of emission estimates for one model (FFDAS version 2; Asefi-Najafabady
et al., 2014). Prior uncertainty is specified for the whole air basin. The
relative 1σ prior uncertainty for each air basin is given in Table 1.
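A simple way to derive such a spread-based uncertainty is to take the standard deviation of the emission products relative to their mean, per air basin. The sketch below uses made-up basin totals purely for illustration, not values from the actual products:

```python
import numpy as np

# Hypothetical annual emission totals (Tg C) for one air basin from four
# gridded products (order: Vulcan, EDGAR, ODIAC, FFDAS); illustrative only.
basin_estimates = np.array([14.2, 16.9, 15.1, 13.5])

# Relative 1-sigma prior uncertainty: standard deviation across products,
# normalized by the mean estimate for the basin.
mean_emission = basin_estimates.mean()
rel_sigma = basin_estimates.std(ddof=1) / mean_emission
print(f"relative 1-sigma prior uncertainty: {rel_sigma:.2f}")
```

Repeating this across basins yields a per-region relative uncertainty of the kind listed in Table 1.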
Comparison of the three atmospheric transport models used in this study.
We simulate ffCO2 concentrations at the observation sites using three
atmospheric transport models, described below.
The first WRF-STILT model is run at Lawrence Berkeley National Laboratory
(WS-LBL; Fischer et al., 2017; Jeong et al., 2016; Bagley et al., 2017) and uses
WRF version 3.5.1 (Lin et al., 2003; Nehrkorn et al., 2010). Estimates for the planetary
boundary layer height (PBLH) are based on the
Mellor–Yamada–Nakanishi–Niino version 2 (MYNN2) parameterization
(Nakanishi and Niino, 2004, 2006). As in Jeong et al. (2016), Fischer et al. (2017)
and Bagley et al. (2017), two land surface models (LSMs) are used
depending on the location of the observation site. A five-layer thermal
diffusion land surface model is used in the Central Valley for the May
campaign, whilst the Noah LSM (Chen and Dudhia, 2001) is used in the remaining
campaigns and regions of California. We implement multiple nested domains,
with the outermost domain spanning 16–59° N.
The second WRF-STILT model is from the CarbonTracker-Lagrange (WS-CTL), an effort
led by the NOAA to produce standard footprints for greenhouse gas observation
sites in North America.
The third model, UM-NAME, is the UK Met Office's NAME model, version 3.6.5
(Jones et al., 2007), driven by meteorology from the Met Office's global
numerical weather prediction model, the UM (Cullen,
1993). The UM has a horizontal resolution of
Simulated ffCO2 concentrations at each site are obtained by convolving the
site footprints from each transport model with the gridded emission
estimates.
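In Lagrangian frameworks such as STILT and NAME, this convolution amounts to multiplying the footprint (the sensitivity of the receptor concentration to surface fluxes) by the emission field and summing over grid cells. A minimal sketch, with hypothetical grids, units, and values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical footprint: sensitivity of the receptor concentration to
# surface fluxes on a small illustrative grid (ppm per umol m-2 s-1).
footprint = rng.random((10, 10)) * 1e-2

# Hypothetical ffCO2 emission field on the same grid (umol m-2 s-1).
emissions = rng.random((10, 10)) * 5.0

# Simulated ffCO2 enhancement at the receptor: element-wise product of
# footprint and flux, summed over all grid cells.
ffco2_signal = np.sum(footprint * emissions)
print(f"simulated ffCO2 enhancement: {ffco2_signal:.2f} ppm")
```

In practice the footprints are time-resolved and the sum also runs over back-trajectory time steps, but the operation remains a linear map from fluxes to concentrations.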
Our inversion method is a Bayesian synthesis inversion to scale emissions in
separate regions of California. We follow the same approach as Fischer et al. (2017)
to solve for a vector of scaling factors, one for each air basin.
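A Bayesian synthesis inversion of this kind has a closed-form solution. The sketch below illustrates the update for the scaling-factor vector on a small synthetic problem; the dimensions, Jacobian, uncertainty values, and "true" factors are all hypothetical, not those of the actual study:

```python
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_regions = 50, 15            # pseudo-observations and air basins

# Hypothetical Jacobian: simulated ffCO2 signal (ppm) at each observation
# per unit scaling factor for each air basin.
K = rng.random((n_obs, n_regions))

lam_prior = np.ones(n_regions)                # prior scaling factors
Q = np.diag(np.full(n_regions, 0.5 ** 2))     # prior covariance (50 % 1-sigma)
R = np.diag(np.full(n_obs, 0.1 ** 2))         # model-data mismatch covariance

# "True" scaling factors and the noise-free pseudo-observations they produce.
lam_true = rng.uniform(0.7, 1.3, n_regions)
y = K @ lam_true

# Closed-form Bayesian (Kalman-type) update for the scaling factors.
G = Q @ K.T @ np.linalg.inv(K @ Q @ K.T + R)  # gain matrix
lam_post = lam_prior + G @ (y - K @ lam_prior)
Q_post = Q - G @ K @ Q                        # posterior covariance
```

With noise-free pseudo-data and more observations than regions, the posterior scaling factors move from the prior towards the true values, and the posterior variances shrink relative to the prior.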
We conduct a series of experiments to test the performance of the inversion in estimating the true emissions when the emission estimates or transport models used to produce pseudo-observations are different to those used to produce the prior simulations. The tests explore the effect differences in the magnitude, spatial distribution, and temporal variation of prior emissions have on posterior emissions. We also examine the effect of using different transport models to simulate pseudo-observations and to simulate prior concentrations.
As part of these experiments, we evaluate the impact of outlier removal on
the simulation experiments. Outlier removal is generally used in atmospheric
inversions when there is an issue with the ability of the model to simulate a
particular observation. We use the outlier removal method outlined in Graven
et al. (2018) and compare it with inversion results where no outliers are
removed. In this outlier removal method, an observation (here, a
pseudo-observation) is designated as an outlier if the absolute difference
between the ffCO2 pseudo-observation and the prior simulated concentration
exceeds a specified multiple of the model–data mismatch uncertainty.
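A simplified stand-in for this kind of threshold criterion (the exact thresholds of Graven et al. (2018) are not reproduced here, and the values below are illustrative) can be sketched as:

```python
import numpy as np

def flag_outliers(obs, sim, sigma, k=3.0):
    """Flag observations whose model-data difference exceeds k * sigma.

    Simplified stand-in for an outlier criterion: obs and sim are ffCO2
    concentrations (ppm), sigma is the model-data mismatch uncertainty,
    and k is an illustrative threshold multiple.
    """
    return np.abs(obs - sim) > k * sigma

obs = np.array([2.1, 5.0, 1.2, 9.8])   # hypothetical pseudo-observations
sim = np.array([2.0, 4.5, 1.0, 2.0])   # hypothetical prior simulation
mask = flag_outliers(obs, sim, sigma=1.0)
print(mask)  # only the 9.8 vs 2.0 pair exceeds 3 * sigma
```

Flagged points are then excluded from the inversion, and the results with and without this screening can be compared as in the experiments below.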
First we test how well the inversion estimates the true emissions if the
prior emissions have a systematic error in magnitude but have no error in the
spatial or temporal distribution of emission and no error in atmospheric
transport. In this experiment, the prior emission estimate is given by EDGAR,
and the true ffCO2 pseudo-observations are generated with EDGAR emissions
scaled in each air basin to match the annually averaged Vulcan totals, so that
the prior and true emissions differ in magnitude but not in spatial
distribution.
To investigate the bias in the posterior emission estimate that could result from errors in the spatial distribution of prior emissions within each air basin, we now use annually averaged Vulcan emissions as the true emissions and EDGAR emissions scaled in each air basin to match the annually averaged Vulcan emissions in that region as the prior estimate of emissions. In this experiment, the prior estimate of the total emissions in each air basin is unbiased, and we assess how differences in the spatial distribution of emissions between Vulcan and EDGAR in each air basin may lead to a bias in the posterior emission estimate. As shown in Fig. 1c, the most significant discrepancies in spatial distribution are in the major urban areas of Los Angeles and the San Francisco Bay. This experiment is also run for all the transport models using the same transport model for both the true and prior simulation and including no temporal variation in emissions.
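The rescaling used to construct this prior, in which each air basin total is forced to match the truth while the sub-basin spatial pattern is preserved, can be sketched with hypothetical 2 x 2 emission grids and basin labels:

```python
import numpy as np

# Hypothetical gridded annual emissions (arbitrary units) and an air-basin
# index map assigning each grid cell to one of three illustrative basins.
edgar = np.array([[4.0, 1.0], [2.0, 3.0]])
vulcan = np.array([[2.0, 2.0], [3.0, 1.0]])
basins = np.array([[0, 0], [1, 2]])

scaled = edgar.copy()
for b in np.unique(basins):
    mask = basins == b
    # Scale EDGAR so its basin total equals the Vulcan basin total,
    # keeping EDGAR's spatial distribution within the basin.
    scaled[mask] *= vulcan[mask].sum() / edgar[mask].sum()

# Basin totals now match Vulcan, but the pattern within each basin
# still follows EDGAR.
print(scaled)
```

Any residual posterior bias in this experiment can then be attributed to the within-basin spatial differences between the two inventories rather than to their totals.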
To assess the impact of temporally varying emissions on the inversion result,
we generated true ffCO2 pseudo-observations using annually averaged
(temporally invariant) Vulcan emissions, while the prior simulation used
Vulcan emissions with their hourly temporal variation (Fig. 1b).
To test the effect of differences in the simulated atmospheric transport of
emissions, the same emission estimate (annually averaged Vulcan) is coupled
with two different transport models to generate prior and true
ffCO2 concentrations: the true simulation uses WS-LBL, while the prior
simulation uses WS-CTL or UM-NAME.
The average ffCO2 signals at the observation sites, and the contributions of
each air basin to them, are shown in Fig. 2.
Before presenting the results of the inversion experiments, we first examine
simulated ffCO2 signals at the observation sites.
In our simulation experiments, signals from outside California are generally
small compared to the total signal for most sites.
Figure 3a shows the statewide inversion result for the experiment testing
the effect of a bias in magnitude in regional emissions in the prior
simulation. In this figure, and in similar figures that follow for the other
experiments, prior estimates are represented by black markers and posterior
estimates are represented by coloured markers, with error bars showing the 2σ
uncertainties.
For all transport models and campaigns, the inversion is able to reduce prior
bias and scale posterior emissions towards the truth.
To determine what is driving the statewide results, we examine the individual
air basin inversion results. Figure 3b shows the inversion results for the
six main emission regions of California, with the San Joaquin Valley (8.SJV)
and South Coast (14.SC) having the largest prior biases. In these two
regions, the biases are reduced in most cases; however, only the posterior
estimates
from the 70 % prior uncertainty experiment overlap the true emissions. The
posterior estimates for SD prior uncertainty do not overlap with the truth,
indicating that the 2σ posterior uncertainty is underestimated when the SD
prior uncertainty is used.
The bias in the posterior estimate of statewide emissions is larger in May than in October–November and January–February (Fig. 3a, triangles). This poorer performance of the inversion in May can be largely attributed to the San Joaquin Valley (8.SJV), where the posterior emissions are largely unchanged from the prior in May. There is no observation site in the San Joaquin Valley, and as shown in Fig. 2, emissions in the San Joaquin Valley do not reach observation sites in neighbouring air basins in May, but they do reach these sites in October–November and January–February. In contrast, the South Coast (14.SC) influences the two observation sites, CIT and SBC, located in the region as well as several other sites (Fig. 2). Both CIT and SBC show that prior signals are too high compared to true signals for all campaigns and models (Fig. 3c), reflecting the positive bias in prior emissions in the South Coast region, which is reduced in the posterior. Changing the uncertainty parameter from 0.5 to 0.3 or 0.8 decreased the ability of the inversion to scale statewide emissions towards true emissions by 1 %–4 % and increased posterior uncertainty by a similar percentage.
The statewide inversion results for the experiment, including errors in the
spatial distribution of emissions, are shown in Fig. 4a. In this case the
magnitude of prior emissions in each air basin is equal to true emissions, and
we aim to quantify how errors in the spatial distribution of emissions (EDGAR
as prior and Vulcan as true distribution) lead to bias in posterior emission
estimates. Posterior emissions are negatively biased, apart from WS-LBL in
January–February. Posterior bias was between
Posterior emission results in the two largest emitting air basins (the San
Francisco Bay and South Coast) are also negatively biased in most cases
(Fig. 4b). In several cases, posterior biases are larger than the associated
posterior uncertainties, for example in the South Coast for WS-LBL in all
cases. Considering Fig. 4c, prior ffCO2 signals differ from the true signals
at the sites sampling these regions, consistent with the posterior biases.
Since the prior emissions from EDGAR have been scaled to have the same total as Vulcan (the true emissions) in each region, the pattern of more negative posterior emissions is only caused by the subregional spatial distribution of emissions. Comparing Vulcan and EDGAR native grid cell emissions in Figs. 1c and S2, EDGAR tends to have greater emissions in high-emission grid cells. In other words, the emissions are more concentrated in EDGAR and more dispersed in Vulcan. This pattern explains the negative bias in posterior emissions for the urban South Coast air basin. The opposite effect does not appear to hold for rural observation sites and regions, perhaps because rural emissions are already rather dispersed and have less of an influence on the observations.
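One simple way to quantify this concentration versus dispersion contrast is the fraction of total emissions contained in the highest-emitting grid cells. The fields below are synthetic illustrations (a heavy-tailed versus a uniform distribution), not the actual inventories:

```python
import numpy as np

def top_cell_fraction(field, frac=0.1):
    """Fraction of total emissions contained in the highest-emitting
    `frac` of grid cells - a simple measure of spatial concentration."""
    flat = np.sort(field.ravel())[::-1]
    n_top = max(1, int(frac * flat.size))
    return flat[:n_top].sum() / flat.sum()

rng = np.random.default_rng(2)
# Hypothetical fields: a "concentrated" inventory (lognormal, heavy tail)
# versus a "dispersed" one (uniform), rescaled to the same total.
concentrated = rng.lognormal(mean=0.0, sigma=2.0, size=1000)
dispersed = rng.uniform(0.5, 1.5, size=1000)
dispersed *= concentrated.sum() / dispersed.sum()

print(top_cell_fraction(concentrated), top_cell_fraction(dispersed))
```

Applied to the native grids of two inventories, a metric like this would make the "more concentrated in EDGAR, more dispersed in Vulcan" pattern explicit.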
In these experiments, 0 %–3 % of observations were identified as outliers,
but excluding outliers did not change the statewide result significantly.
Figure 5a shows the statewide inversion result for the experiment where
the emissions are Vulcan temporally varying in the prior simulation (see
Fig. 1b) but are Vulcan temporally invariant in the true simulation. Posterior bias
was between
The posterior estimate for WS-LBL in May with SD prior uncertainty has a
significant negative bias of
The statewide inversion results for the experiment where the atmospheric transport in the prior simulation uses WS-CTL or UM-NAME but the atmospheric transport in the true simulation uses WS-LBL are shown in Fig. 6a. Outliers were identified in these experiments, and we present results for inversions including all data and for inversions where outliers were removed.
Inversion results for the experiment where the atmospheric transport
in the prior simulation uses WS-CTL or UM-NAME, but the atmospheric transport
in the true simulation uses WS-LBL. Posterior statewide emissions are shown
for inversions including all data and for inversions with outliers removed.
When all data are included, differences in the atmospheric transport model
introduce a bias in statewide posterior emissions of between
Removing outliers significantly improved the inversion results (Fig. 6b);
the mean bias was between
All simulated ffCO2 concentrations for these experiments are provided in the
Supplement.
While the statewide posterior emission estimate is significantly biased in
only one case (WS-CTL in October–November) when outliers are not removed, the
posterior emission estimates for the main emission regions are
significantly biased in several cases (Fig. 6c). The largest bias is in the
South Coast region, where posterior estimates are biased by more than
To investigate the differences in simulated ffCO2 concentrations between the
transport models, we examined the models' footprints and meteorology.
We also examined whether differences in simulated ffCO2 concentrations were
associated with particular sites or campaigns.
Our results show that atmospheric inversions can reduce a hypothetical bias
in the magnitude of prior ffCO2 emission estimates.
The largest bias in statewide posterior estimates was found to be caused by
errors in the temporal variation in emissions. This highlights the necessity
for temporally varying emissions to be estimated and included in prior
emission estimates, particularly for urban regions. Similar results have
been found in other regions including Indianapolis (Turnbull et al., 2015) and
Europe (Peylin et al., 2011) and, more generally, for high-emission regions
around the globe (Zhang et al., 2016). Although the afternoon sampling is
near to the diurnal maximum in emissions in California (Fig. 1c, Gurney et
al., 2009), which might be expected to lead to higher simulated
ffCO2 signals in the temporally varying prior than in the annually averaged
true simulation, and hence to negatively biased posterior emissions.
Errors in model transport, as represented in our experiments by using
different transport models, were shown to bias posterior ffCO2 emission
estimates; removing outliers substantially reduced this bias.
The fraction of pseudo-observations we identified as outliers in these
transport error experiments (10.5 %, range of 6.9 %–20.6 %) was similar to
Graven et al. (2018), where 8 % of all observations were removed as outliers
using the same method. The outliers in our experiments were primarily high
ffCO2 signals that were poorly reproduced by the prior transport model.
Attributing differences in simulated ffCO2 concentrations to specific
features of the transport models is difficult.
The results of these experiments suggest that the choice of a prior emission
estimate and transport model (among those considered here and currently used
in the community) used in our ffCO2 inversion framework can significantly
affect posterior emission estimates.
In our results, emissions from many small or rural air basins did not have a
significant contribution to the local enhancement of ffCO2 at the observation
sites, so their posterior emissions remained close to the prior.
We have shown that atmospheric inversions for the US state of California
can reduce a hypothetical bias in the magnitude of prior emission estimates
of ffCO2.
Data and code related to the Bayesian inversion procedure can be made available upon request.
The supplement related to this article is available online at:
Prior and simulated concentrations of fossil fuel CO2 are included in the
Supplement.
The authors declare that they have no conflict of interest.
This article is part of the special issue “The 10th International Carbon Dioxide Conference (ICDC10) and the 19th WMO/IAEA Meeting on Carbon Dioxide, other Greenhouse Gases and Related Measurement Techniques (GGMT-2017; AMT/ACP/BG/CP/ESD inter-journal SI)”. It is a result of the 10th International Carbon Dioxide Conference, Interlaken, Switzerland, 21–25 August 2017.
This project was funded by the Grantham Institute – Climate Change and the Environment Science and Solutions for a Changing Planet DTP (NE/L002515/1), the Natural Environment Research Council (NERC, UK), the UK Met Office, the NASA Carbon Monitoring System (NNX13AP33G and NNH13AW56I), and the European Commission through a Marie Curie Career Integration Grant. The authors thank Arlyn E. Andrews and the CarbonTracker-Lagrange team for providing footprints. Support for CarbonTracker-Lagrange has been provided by the NOAA Climate Program Office's Atmospheric Chemistry, Carbon Cycle, and Climate (AC4) program and the NASA Carbon Monitoring System. Edited by: Nicolas Gruber Reviewed by: Sourish Basu and one anonymous referee