Yonatan Zunger (zunger) wrote in climatepapers:

Journal presentation: "Dangerous human-made interference with climate: a GISS modelE study" (Part 1)

Hansen et al., "Dangerous human-made interference with climate: A GISS modelE study." J. Geophys. Res., submitted.

This paper models climate from 1880 to 2100 using GISS model E, the current "gold standard" model being used by the IPCC. For 1880-2003, this is a calibration and sanity check; for 2003-2100, they run with five alternative scenarios about greenhouse gas emissions. It talks about all sorts of things, so it looks like a good introduction to climate modelling.



There are three other papers that I'll refer to a lot below, all of which are good candidates for future journal club entries:

[Short] Hansen et al., "Earth's energy imbalance: Confirmation and implications." Science 308, 1431-1435, doi:10.1126/science.1110252. (A short paper which was this group's previous major project)

[Efficacy] Hansen et al., "Efficacy of climate forcings." J. Geophys. Res. 110, D18104, doi:10.1029/2005JD005776. (This is the paper where a lot of the input parameters used in this paper are calculated)

[GISS-E] Schmidt et al., "Present day atmospheric simulations using GISS ModelE: Comparison to in-situ, satellite and reanalysis data." J. Climate 19, 153-192, doi:10.1175/JCLI3612.1. (This paper documents the model code they actually use)

BTW, just as an introduction: The lead author of this paper (Hansen) is the guy from NASA who went public with allegations that party operatives were ordering him to keep quiet about implications of his work, etc. This is the paper they were talking about. Also, there was a recent federal study that concluded that the lower atmosphere is indeed growing warmer; this paper had no small part in that, too. The main purpose of this paper was actually submission to the IPCC (Intergovernmental Panel on Climate Change) for use as their latest gold-standard model.

This paper is pretty long, although half of it is figures, so I'm going to split this post in two. Today I'll post about sections 1-5, which cover the model itself, how they got their input data, and their runs for 1880-2003. I'll do a separate post later about sections 6-8, which talk about 2003-2100.

And since this is our first journal club entry, I'd like to encourage people to participate, to read along with the paper in hand, and ask lots of questions. If I can't answer them, hopefully someone else staring at the same thing will. :)

So, let's begin.

Before jumping into this, here are some term definitions that I had to look up while reading this paper. Wikipedia, as usual, is very handy.

Layers of the atmosphere: The troposphere is the lowest layer, from the surface up to the tropopause; that's a height of about 16km in the tropics and about 8km at the poles. The tropopause is the region where the derivative of temperature with respect to altitude is roughly zero; it's negative in the troposphere (due, I'm assuming, to surface warming of the air getting weaker with height) and positive in the stratosphere (which is heated by UV absorption from the Sun). The stratosphere is structurally fairly different and extends up to about 50km. The Wikipedia article about the atmosphere explains all of this much more clearly.
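If you want to see what "derivative roughly zero" means in practice, here's a toy sketch (in Python) that picks the tropopause out of a temperature profile by finding where the lapse rate stops being strongly negative. The -2C/km cutoff is a crude stand-in for the usual lapse-rate criterion, and the profile below is made up for illustration, not real sounding data:

    def find_tropopause(z_km, t_c, lapse_cutoff=-2.0):
        """Return the lowest altitude (km) where dT/dz rises above the cutoff
        (in C per km), i.e. where the tropospheric cooling-with-height stops."""
        for i in range(len(z_km) - 1):
            lapse = (t_c[i + 1] - t_c[i]) / (z_km[i + 1] - z_km[i])
            if lapse > lapse_cutoff:
                return z_km[i]
        return None

    # Idealized profile: cooling at ~6.5C/km up to ~11km, then roughly isothermal.
    z = [0, 2, 4, 6, 8, 10, 11, 13, 16, 20]
    t = [15, 2, -11, -24, -37, -50, -56, -56, -56, -52]
    print("tropopause near %s km" % find_tropopause(z, t))   # -> 11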

Climate forcing: A forcing is any external driving force that acts on the climate. For the purposes of these models, typical forcings are things like addition of greenhouse gases or changes in the Sun's output. For any constant forcing, the Earth's climate will reach an equilibrium configuration; analyses referenced in the introduction to [Short] say that it typically takes 25-50 years to reach equilibrium after a change. (The [Short] paper is basically trying to estimate how much forcing has been added to the Earth which hasn't reached equilibrium yet)

Even though each forcing is really fairly different (the way in which solar changes inject energy into the system is very different from the way gas emissions do, and they need to be treated separately in models), it's useful to define an overall forcing intensity by comparing everything to solar heating. A forcing of strength 1W/m2 is therefore defined to be one which, if you turn it on alone and allow the Earth's climate to evolve to equilibrium while holding the state of the surface and the troposphere fixed, would cause the same change in tropopause temperature as an increase of 1W/m2 in the Sun's radiance. (The wiki page gives the official definition. The real definition isn't tropopause temperature but tropopause irradiance; since we're in thermal equilibrium, though, that amounts to the same thing)

For a sense of scale, an additional forcing of 1W/m2 is believed to lead to an overall global mean temperature change of roughly 0.75C. ([Short] discusses this in its introduction)
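To make those two numbers concrete, here's a toy zero-dimensional energy-balance sketch in Python (emphatically not GISS modelE; the 150m effective ocean mixed-layer depth is my own illustrative guess). It just integrates C dT/dt = F - T/lambda with lambda = 0.75C per W/m2, and shows a constant 1W/m2 forcing relaxing toward its lambda*F = 0.75C equilibrium over a few decades, consistent with the 25-50 year figure above:

    lam = 0.75           # K per (W/m^2): the ~0.75C per W/m^2 sensitivity quoted above
    depth = 150.0        # m: assumed effective ocean mixed-layer depth (illustrative guess)
    C = 1000 * 4200 * depth / 3.15e7   # heat capacity, converted to W*yr per (m^2*K); ~20
    F = 1.0              # W/m^2: a constant forcing switched on at year 0

    T = 0.0
    for year in range(1, 101):
        T += (F - T / lam) / C         # one-year explicit Euler step of C dT/dt = F - T/lam
        if year in (10, 25, 50, 100):
            print("year %3d: dT = %.2f C   (equilibrium = %.2f C)" % (year, T, lam * F))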




Now, on to the show. Briefly, section 2 of this paper is the model; section 3 is the input data, and section 4 talks a bit more about how that data was calibrated. Section 5 presents the results from 1880-2003, to show that the model is basically correct; section 6 runs the simulation from 1880-2100, using five different scenarios for the future. Section 7 talks about how realistic these scenarios are, and section 8 tries to give a "how screwed are we?" analysis.

Part 1: (Sections 2-4) The Model

Section 2, the short version: The model is GISS model E [GISS-E] with all the latest refinements. One big issue in GISS-E is the ocean model: they use three of them. Ocean A is experimental data (not modelled), so it's very accurate but doesn't exactly continue past 2003. Ocean B is the Q-flux model, which is well-tested; Ocean C is the Russell model, which is more sophisticated but not as tested as Ocean B. The biggest known bug in oceans B and C is that neither of them model El Niño (aka ENSO); this is why it's important to sanity-check B/C results against A.

Section 2 also summarizes known bugs (specifically, their model of the upper atmosphere doesn't include gravity waves, which could screw up mixing there) and prediction errors it still makes. (e.g., cloud cover on the west coasts of all continents is low by about 25%) However, the point of section 5 is that these errors don't seem to seriously compromise the model's ability to describe overall climate.

(But I'll add: their sigmas, especially close to the poles, are often a good deal higher than I'd like. I'll point out specific examples below. It's mostly a sea ice problem)

Next, sections 3 and 4 talk about the inputs to the model. The summary of the inputs is given in section 3.4, table 1, and figure 5, which give plots of the global mean climate forcing for each input as a function of time.

It's easiest to walk through the figures, but I'll go out-of-order for a moment. First of all, there's the question of which measurements of global temperature they should calibrate the model against. (Section 4.1) Going back in time, weather station data gets sparser. 1880 was chosen as a lower bound because that's where the data gets detailed enough to give good estimates of global temperature [Hansen & Lebedeff], despite poor coverage in the S Hem. Ocean data remains sparse until somewhat later, and sea ice is practically unobserved in this period. Figure 6 walks through four ways of getting the data:
(a) using only land station measurements;
(b) adding sea surface temperatures from ship data; this makes good sense, but the data seems to miss the cold years after Krakatau. (cf the notes on aerosols below; that's probably a bug in the experimental dataset, since the ship data is fairly sparse pre-1900)
(c) using land station data and sampling the outputs from the model at the same points (good for comparing data with the model, but bad because the result isn't a true global mean); and
(d) using only land stations for 1880-1900, then land + ship for 1900-2003, when the ship data gets better. (The problem being the arbitrariness of the 1900 cutoff)

Figure 6 shows comparisons of their model simulations to the data for each of these four approaches; in all cases the model works fairly well, except that the model predicts post-Krakatau cooling which (b) misses. Nonetheless, (b) seems the most rigorous, so it's used as the reference point in the rest of the paper.
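If you want to play with this yourself, the core trick behind approaches like (a) and (b) -- turning a sparse set of station anomalies into a "global mean" -- is an area-weighted average over whatever grid cells actually have data. Here's a minimal sketch; this is my own toy gridding, not GISTEMP's actual method (which does smarter spatial interpolation), and the station numbers at the bottom are made up:

    import math

    def global_mean_anomaly(stations, grid_deg=10.0):
        """Area-weighted global mean from sparse station temperature anomalies.
        stations: list of (lat, lon, anomaly_C) tuples. Grid cells with no
        stations simply drop out of the average, which is why sparse early
        coverage (e.g. the S. Hemisphere before 1900) makes the global
        mean shaky."""
        cells = {}
        for lat, lon, anom in stations:
            key = (math.floor(lat / grid_deg), math.floor(lon / grid_deg))
            cells.setdefault(key, []).append(anom)

        num = den = 0.0
        for (ilat, _), anoms in cells.items():
            cell_lat = (ilat + 0.5) * grid_deg       # cell-center latitude
            w = math.cos(math.radians(cell_lat))     # cells shrink toward the poles
            num += w * sum(anoms) / len(anoms)
            den += w
        return num / den

    # Three made-up stations for one year:
    print(global_mean_anomaly([(52.0, 13.0, 0.4), (40.0, -74.0, 0.2), (-34.0, 151.0, 0.1)]))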

Now, to the inputs to the model. Basically they need a model of the time and space variation of all the different forcings. Most of section 3 summarizes these in detail; section 3.4, table 1 and figure 5 give a summary of world averages as a function of time, and it's handy to refer to this while looking at the rest.

  • Well-mixed greenhouse gases (GHG's): CO2, CH4, N2O, and CFC's. [Figure 1, section 3.1.1] These are a very large positive forcing. (+2.71W/m2 in 2003) They are called "well-mixed" because the gases are long-lived and end up uniformly distributed in the atmosphere. Data comes from in situ and ice core measurements.

  • Inhomogeneously mixed GHG's: O3 and stratospheric H2O. [Figure 2, section 3.1.2] These gases are short-lived in the atmosphere and so they have a nontrivial spatial distribution. Historical levels of these can be calculated using climate chemistry models (e.g., the rate at which methane leads to ozone and water formation in the troposphere is well-known, and methane levels can be found from the well-mixed GHG measurements). For recent years, there are direct measurements of quantities like stratospheric ozone which can check this.

  • Aerosols, i.e. fine particles in the air: sea salt, soil dust, sulfates, nitrates, black carbon [i.e., soot] and organic carbon [i.e., particles from biomass and diesel burning, which aren't nearly as absorptive as soot]. [Figure 3, section 3.2] These have both a direct effect, mostly by reflecting sunlight back to space, and an indirect effect on cloud cover, of similar magnitude.

    The geographic distribution of carbon and sulfates comes from things like fuel use statistics, including changes in fuel use technology over time, as well as information about biomass burning; other papers apparently collected this data in bulk. They also make some assumption about the mean particle size and optical properties of the various aerosols. The relationship of the indirect and direct effects was worked out in the Efficacy paper.

    The big wildcard here, which is also the biggest source of error in the calculation, is volcanos. If you look at the summary plot of figure 5, Krakatau (1883) and Pinatubo (1991) are clearly visible as huge, negative spikes in the forcing. Pinatubo happened in the satellite era, so observations of the actual temperature response to it give a strong constraint on its aerosol forcing, but there are big uncertainties earlier in time. (Before satellites, many volcanos may have gone off unrecorded!) Overall, these contribute an uncertainty of roughly 0.5W/m2 in the pre-satellite era.

    In section 4.2 and figure 7, they return to this in more detail, looking at the eruptions of Krakatau and Pinatubo to test how good their models of aerosol emissions after volcanos are. They ran an extra set of detailed simulations for the years after Krakatau and Pinatubo, and verified that their model reproduces the observed global mean temperatures fairly well. (Exception: if you use land + ocean temperature data for Krakatau, that dataset sees no temperature drop. The L+O dataset for Pinatubo sees a temperature drop that matches the land dataset and the model, and the years post-Krakatau were famously cold, so it's most likely that the ocean dataset in the 1880's is just corrupt) Based on this and some calculations they did based on satellite observations of Pinatubo in [Efficacy] fig. 11, they estimate about 20% error bars for Pinatubo and about 50% for Krakatau.

    (So yes, it's these errors that dominate noise in the whole calculation.)

  • Land use: (Section 3.3.1) This is a large effect on regional climates, but a small effect on global climate. There are big uncertainties, but the overall number (0.5 W/m2) is fairly small. The geographic pattern is a bigger deal, and that's covered in [Efficacy] fig. 7.

  • Soot's effect on albedo: (Section 3.3.2) They parametrized the effect of soot on ice's albedo (the fraction of sunlight it reflects) as a function of local black carbon density, local ice cover, and a conservative overall multiplier. There are a lot of uncertainties here, but the effect is very small; turning it off completely had negligible effects.

  • Solar radiance (Section 3.3.3, figure 4): The standard ways of estimating the Sun's output in the past involve geomagnetic proxies (how solar activity affects mineral deposition, etc.), and there's some uncertainty about how good those proxies are, so they did test runs both with the full reconstruction and with just the 11-year solar cycle.



So overall, global climate properties are dominated by GHG's and stratospheric aerosols, and the latter dominate uncertainties, especially in the pre-satellite past.
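For a sense of where a number like the +2.71W/m2 GHG forcing comes from: for the CO2 piece alone there's a standard simplified fit, F ≈ 5.35 ln(C/C0) W/m2. (This is the usual textbook expression, not something taken from this paper, and the paper's forcings are measured relative to 1880 rather than pre-industrial, so the numbers won't line up exactly.)

    import math

    def co2_forcing(c_ppm, c0_ppm=280.0):
        """Simplified CO2-only radiative forcing in W/m^2: the standard
        5.35*ln(C/C0) fit, relative to an assumed ~280 ppm baseline."""
        return 5.35 * math.log(c_ppm / c0_ppm)

    # CO2 was roughly 375 ppm around 2003 (approximate, not a number from this paper):
    print("CO2-only forcing: about %.2f W/m^2" % co2_forcing(375.0))   # ~1.6
    # CH4, N2O, and the CFC's make up the rest of the well-mixed GHG total.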

Part 2: (Section 5) Modelling the Past

Now we start on the real meat. Section 5 runs the model for the period 1880-2003 and compares it against actual data, including runs where they turn on only a single forcing to see what it does. The first main result is global mean temperature, shown in figure 8 and discussed in section 5.1. Note that the lower stratosphere temperature is spot-on, including the Pinatubo eruption; as you go to lower altitudes the fit is less precise, but the year-to-year oscillation in the observational data gets bigger as well. Surface temperatures oscillate broadly year-to-year, but the model's accuracy on the average trend is unmistakable. (Including the Krakatau cold years, which the observational data misses) There is a warm period around 1940 (in the observational data) which remains unexplained; they conjecture that there was likely no forcing that triggered it, and that this was a natural oscillation uncaptured by the model. (cf the last full paragraph on p. 13; they summarize it very neatly)

However, I'd note that the part of the graphs where everything is smooth (look at the general lack of bumps in figure 8a-e from 1920-1960) is the same period where there's a general lack of bumps in the forcing function. (cf figure 5b) The fact that the overall structural features track the forcing function so closely makes me suspect that there really is either a missing forcing in that time period (related to WWII?) or that the data is really off.

The one plot that looks a bit odd is 8f, ocean ice cover. They suspect bugs in their control runs and mark this as "under investigation." (This is, after all, a preprint... hopefully it will be fixed before final publication) Note also that sea ice observational data gets increasingly sparse as you go into the past, and the data before 1900 is basically nonexistent.

Section 5.1.2 and figure 9 show sanity checks of ocean A (experimental) vs ocean B (Q-flux). Ocean B has less jitter in it (which you'd expect for a model versus live data), but looks qualitatively very similar.

However, I think there's an error in figure 9: you'll notice that the observed surface temperatures are different in the ocean A and ocean B runs. If anyone can figure out what's going on here, I'd be interested. A hint may be a note in section 5.1.2 that ocean A predicted a larger surface temperature rise than observed, which they attribute to trouble in calculating air temperature at an altitude of 10m over the ocean. They have some possible bugfixes in mind (appendix A3 and figure 26), but suggest that the right fix is to calculate "surface air temperatures" at 2m rather than 10m. (If we can't figure this out, maybe ask them?)

Figure 10/section 5.1.3 shows how the model responds to each forcing individually; no surprises here.

Figure 11/section 5.1.4 is more fun: it shows surface temperature change on a map, observed, modelled, and for each individual forcing. Overall, I'd call the model surprisingly good, with a few exceptions. Note that the observed sharp heating from 1979-2003 (top right) has a bright red splotch around the Larsen B ice shelf, where the model predicted slight cooling. The model similarly underpredicted heating in the Arctic. It's true that the standard deviations here are wider, but this makes me suspect that their ice model is really inadequate. (Which is important, because ice is a huge source of positive-feedback loops)

They do better on the longer runs than on the shorter ones. The ones with endpoints in 1940 seem a bit sketchy to me because of that unexplained warming peak right around then; the observed warming over 1880-1940 and cooling over 1940-1979 are probably completely artifacts of that, and if the model doesn't capture that particular spike I wouldn't expect anything intelligible there.

One interesting paragraph is from 5.2.1: "Almost all land areas warm more than 1C while most ocean areas warm between 0.5 and 1C. However, the Arctic warms more than 2C, while the circum-Antarctic ocean warms only about 0.2C. Large Arctic warming is an expected result of the positive ice/snow albedo feedback. The small response of the circum-Antarctic ocean surface is mainly a result of the inertia due to deep ocean mixing in that region, although deficient sea ice in the control run may contribute." (This refers to the 1880-2003 run, I think. The positive feedback loop they refer to is the fact that water absorbs more light than ice, so if you melt the top layer of an ice sheet, it gets a nice, heat-absorbing layer on top, which melts more of it, etc., ending with basically a bolt of warm water sinking its way to the bottom of the ice. I recently saw some film footage of these things opening up in Greenland -- they go down to the bedrock, they're about 10m across, and there are rivers of liquid water flowing into them. In the middle of Greenland. Alarming. Can't remember the name for those things, though.)

Anyway, that passage seems very reasonable to me, and explains something I've always wondered: why we hear so much more about warming activity in the Arctic than the Antarctic. Having a big open ocean seems like a great way to absorb local heating.
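Since that ice/snow albedo feedback is going to keep coming up, here's a toy one-box illustration (in Python) of why it amplifies warming: if the planet's albedo drops as the box warms past some threshold, the same forcing buys you a bigger equilibrium temperature change. All the numbers here are made up for illustration; this is nothing like the paper's actual sea ice scheme:

    def equilibrium_temp(forcing_wm2, ice_feedback=True):
        """Toy one-box climate: find T with (S/4)*(1 - albedo) + F = sigma*T^4.
        With the feedback on, albedo falls from 0.32 to 0.28 as T warms from
        250K to 260K (a stand-in for ice/snow melting); with it off, albedo
        is pinned at 0.30."""
        S, sigma = 1361.0, 5.67e-8
        T = 255.0
        for _ in range(100):   # fixed-point iteration; converges since the gain < 1
            if ice_feedback:
                melt = min(max((T - 250.0) / 10.0, 0.0), 1.0)
                albedo = 0.32 - 0.04 * melt
            else:
                albedo = 0.30
            T = ((S / 4.0 * (1.0 - albedo) + forcing_wm2) / sigma) ** 0.25
        return T

    for F in (1.0, 2.0, 4.0):
        with_fb = equilibrium_temp(F) - equilibrium_temp(0.0)
        without = equilibrium_temp(F, False) - equilibrium_temp(0.0, False)
        print("F = %.0f W/m^2: warms %.2f K with the ice feedback, %.2f K without" % (F, with_fb, without))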

Figure 12 / section 5.2.2 is another good one. The left-hand column shows temperature change versus time in observations (top two diagrams), the model (next four diagrams), the year-to-year variation of temperature (second from the bottom, left) and the overall sigma (bottom left). The runs using ocean A capture heating at the poles somewhat better than oceans B and C, and the run on ocean A with observed sea ice ["SI" in the figure] is much better than ocean A without it, so that reinforces my hunch that their sea ice model isn't good. Note that the experimental and ocean A graphs have a pattern with ~10-year width which oceans B and C lack; that's El Niño.

The other neat thing on these plots is the top-right graph, where you can very visibly see the contribution of well-mixed GHG's to warming. In bright, really-damned-hard-to-mistake, colors. And then in the bottom right 3, you can see the countereffect of stratospheric aerosols, also in bright colors.

Figure 13 and section 5.2.3 show temperature change vs. latitude and altitude. The big lesson here seems to be that there are significant structural features in the plot all the way up to their peak altitude of 0.1hPa. (This is important because GISS model D only went up to 10hPa or so, and so it missed out on a lot of things -- apparently this is considered the big win of model E) The text has some nice tidbits in it: for instance, "Generally the forcings that warm the troposphere inject more water vapor into the stratosphere, which allows the (optically thin) stratosphere to cool more effectively." That is, when the lower atmosphere heats, it sends more water up to the stratosphere; since the stratosphere is optically thin, the extra water vapor doesn't trap much radiation there, but it does make the stratosphere a more efficient emitter, so it radiates more heat away to space and cools. They give a few other arguments explaining why things that warm the troposphere generally cool the stratosphere. (But not vice-versa)

Figure 14 shows temperature change versus latitude and season in the stratosphere and on the surface. The biggest thing I notice is that the standard deviations at the surface near the poles are absolutely terrible.

Figure 15 (section 5.2.4) compares this against experiment. The main difference they see is that this boundary between warming (troposphere) and cooling (stratosphere) happens at a lower altitude in experiments than in the model, but that may be due to some bugs in the technique used to measure temperature. (They cite a paper) There's also not enough polar warming at low altitudes, but they point to the lousy sigmas there as an explanation.

Section 5.3 and figure 16 show changes in various other climate variables. I'll leave you to leaf through it; no huge surprises. I think the main point of this is "look at what neat data you can download and plot! Come play with it!"

OK, so to summarize what this chapter says: It looks like, overall, the models are pretty good. The big unexplained things are the warming period in the 1940's and the general suckage of anything related to sea ice when they're using a non-experimentally-derived ocean model. A lot of tricky things seem to be modelled correctly on the global scale, especially the effects of volcanos on temperature. However, when you get down to regional behaviors, things are a bit more flaky, and if you spend time staring at the graphs you'll see all sorts of areas where the predictions are off.

The moral of the story: You can trust GISS-E for global-scale predictions except when the sea ice does something wonky. For regional-scale predictions, things are still a little iffy. This may be because the data is less good (note all the hedging about the weakness of regional data in section 3) or because the model has bugs (note the warnings about coastal cloud cover, etc. in section 2).

Next time: the rest of the paper, and predictions for the future! Crystal balls at the ready, gentlemen!