General circulation model

A general circulation model (GCM) is a type of climate model. It employs a mathematical model of the general circulation of a planetary atmosphere or ocean. It uses the Navier–Stokes equations on a rotating sphere with thermodynamic terms for various energy sources (radiation, latent heat). These equations are the basis for computer programs used to simulate the Earth's atmosphere or oceans. Atmospheric and oceanic GCMs (AGCM and OGCM) are key components along with sea ice and land-surface components.
GCMs and global climate models are used for weather forecasting, understanding the climate, and forecasting climate change.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat)[2] combine the two models. The first general circulation climate model that combined both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[3] AOGCMs represent the pinnacle of complexity in climate models and internalise as many processes as possible. However, they are still under development and uncertainties remain. They may be coupled to models of other processes, such as the carbon cycle, so as to better model feedback effects. Such integrated multi-system models are sometimes referred to as either "earth system models" or "global climate models."
Versions designed for decade to century time scale climate applications were created by Syukuro Manabe and Kirk Bryan at the Geophysical Fluid Dynamics Laboratory (GFDL) in Princeton, New Jersey.[1] These models are based on the integration of a variety of fluid dynamical, chemical and sometimes biological equations.
Terminology
The acronym GCM originally stood for General Circulation Model. Recently, a second meaning came into use, namely Global Climate Model. While these do not refer to the same thing, General Circulation Models are typically the tools used for modeling climate, and hence the two terms are sometimes used interchangeably. However, the term "global climate model" is ambiguous and may refer to an integrated framework that incorporates multiple components including a general circulation model, or may refer to the general class of climate models that use a variety of means to represent the climate mathematically.
Atmospheric and oceanic models
Atmospheric (AGCMs) and oceanic GCMs (OGCMs) can be coupled to form an atmosphere-ocean coupled general circulation model (CGCM or AOGCM). With the addition of submodels such as a sea ice model or a model for evapotranspiration over land, AOGCMs become the basis for a full climate model.[4]
Structure
General circulation models (GCMs) discretise the equations for fluid motion and energy transfer and integrate these over time. Unlike simpler models, which make mixing assumptions, GCMs divide the atmosphere and/or oceans into grids of discrete "cells" that represent computational units. Processes internal to a cell, such as convection, that occur on scales too small to be resolved directly are parameterised at the cell level, while other functions govern the interface between cells.
Three-dimensional (more properly four-dimensional, since time is also considered) GCMs apply discrete equations for fluid motion and integrate these forward in time. They contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
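The discretise-and-integrate cycle can be sketched with a toy example. The sketch below is a minimal illustration, not any real GCM's numerics: a 1-D tracer blob advected around a latitude circle with a first-order upwind scheme, with all values (grid size, wind speed, time step) chosen for illustration.

```python
import numpy as np

# One forward-in-time, upwind-in-space step of 1-D advection on a periodic
# ring of grid cells. A full GCM performs the same kind of update for winds,
# temperature and moisture on a 3-D global grid, with parameterised
# sub-grid terms added at each step.

nx = 96                          # cells around a latitude circle
dx = 2 * np.pi * 6.371e6 / nx    # cell width (m) at the equator
u = 10.0                         # constant zonal wind, m/s
dt = 600.0                       # time step (s); u*dt/dx << 1 here (stable)

x = np.arange(nx) * dx
q = np.exp(-((x - x.mean()) / (5 * dx)) ** 2)  # a blob of tracer

def step(q):
    # first-order upwind difference for u > 0, periodic boundary
    return q - u * dt / dx * (q - np.roll(q, 1))

for _ in range(100):
    q = step(q)

# The periodic upwind scheme conserves the tracer total exactly.
print(round(q.sum(), 6))
```

The upwind choice keeps the scheme stable and positive-definite for this flow; real dynamical cores use higher-order or spectral discretisations for accuracy.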
A simple general circulation model (SGCM) consists of a dynamic core that relates properties such as temperature to others such as pressure and velocity. Examples are programs that solve the primitive equations, given energy input and energy dissipation in the form of scale-dependent friction, so that atmospheric waves with the highest wavenumbers are most attenuated. Such models may be used to study atmospheric processes, but are not suitable for climate projections.
Atmospheric GCMs (AGCMs) model the atmosphere (and typically contain a land-surface model as well) using imposed sea surface temperatures (SSTs).[5] They may include atmospheric chemistry.
AGCMs consist of a dynamical core that integrates the equations of fluid motion, typically for:
- surface pressure
- horizontal components of velocity in layers
- temperature and water vapor in layers
- radiation, split into solar/short wave and terrestrial/infrared/long wave
- parameters for:
  - convection
  - land surface processes
  - albedo
  - hydrology
  - cloud cover
A GCM contains prognostic equations that are a function of time (typically winds, temperature, moisture, and surface pressure) together with diagnostic equations that are evaluated from them for a specific time period. As an example, pressure at any height can be diagnosed by applying the hydrostatic equation to the predicted surface pressure and the predicted values of temperature between the surface and the height of interest. Pressure is used to compute the pressure gradient force in the time-dependent equation for the winds.
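The diagnostic step described above can be sketched numerically. The sketch assumes an isothermal layer for simplicity and uses illustrative values, not model output: integrating the hydrostatic equation dp/dz = -ρg with the ideal gas law gives p(z) = p_s · exp(-g·z / (R·T)).

```python
import math

# Diagnose pressure at a height from a predicted surface pressure and a
# predicted layer-mean temperature, via the hydrostatic equation
# (isothermal-layer form). All numbers are illustrative.

g = 9.80665          # gravitational acceleration, m/s^2
R = 287.05           # specific gas constant for dry air, J/(kg K)
p_surface = 1013.25  # predicted surface pressure, hPa
T_mean = 260.0       # predicted layer-mean temperature, K

def pressure_at(z_m):
    """Diagnosed pressure (hPa) at height z_m metres above the surface."""
    return p_surface * math.exp(-g * z_m / (R * T_mean))

print(round(pressure_at(5500.0), 1))  # near the 500 hPa level
```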
OGCMs model the ocean (with fluxes from the atmosphere imposed) and may contain a sea ice model. For example, the standard resolution of HadOM3 is 1.25 degrees in latitude and longitude, with 20 vertical levels, leading to approximately 1,500,000 variables.
AOGCMs (e.g. HadCM3, GFDL CM2.X) combine the two submodels. They remove the need to specify fluxes across the interface of the ocean surface. These models are the basis for model predictions of future climate, such as are discussed by the IPCC. AOGCMs internalise as many processes as possible. They have been used to provide predictions at a regional scale. While the simpler models are generally susceptible to analysis and their results are easier to understand, AOGCMs may be nearly as hard to analyse as the climate itself.
Grid
The fluid equations for AGCMs are made discrete using either the finite difference method or the spectral method. For finite differences, a grid is imposed on the atmosphere. The simplest grid uses constant angular grid spacing (i.e., a latitude/longitude grid). However, non-rectangular grids (e.g., icosahedral) and grids of variable resolution[6] are more often used.[7] The LMDz model can be arranged to give high resolution over any given section of the planet. HadGEM1 (and other ocean models) use an ocean grid with higher resolution in the tropics to help resolve processes believed to be important for the El Niño Southern Oscillation (ENSO). Spectral models generally use a Gaussian grid, because of the mathematics of transformation between spectral and grid-point space. Typical AGCM resolutions are between 1 and 5 degrees in latitude or longitude: HadCM3, for example, uses 3.75 degrees in longitude and 2.5 degrees in latitude, giving a grid of 96 by 73 points (96 x 72 for some variables), and has 19 vertical levels. This results in approximately 500,000 "basic" variables, since each grid point has four variables (u, v, T, Q), though a full count would give more (clouds; soil levels). HadGEM1 uses a grid of 1.875 degrees in longitude and 1.25 in latitude in the atmosphere; HiGEM, a high-resolution variant, uses 1.25 x 0.83 degrees in longitude and latitude respectively.[8] These resolutions are lower than is typically used for weather forecasting.[9] Ocean resolutions tend to be higher, for example, HadCM3 has 6 ocean grid points per atmospheric grid point in the horizontal.
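The HadCM3 grid figures quoted above can be checked with simple arithmetic:

```python
# Back-of-envelope check of the HadCM3 grid figures: 3.75 x 2.5 degree
# spacing, 19 levels, four prognostic variables per grid point.

lon_step, lat_step = 3.75, 2.5          # degrees
n_lon = int(360 / lon_step)             # points around each latitude circle
n_lat = int(180 / lat_step) + 1         # pole to pole, both poles included
n_levels = 19
n_prognostic = 4                        # u, v, T, Q at each point

print(n_lon, n_lat)                     # 96 73
basic_vars = n_lon * n_lat * n_levels * n_prognostic
print(basic_vars)                       # 532608, i.e. roughly 500,000
```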
For a standard finite difference model, uniform gridlines converge towards the poles. This would lead to computational instabilities (see CFL condition) and so the model variables must be filtered along lines of latitude close to the poles. Ocean models suffer from this problem too, unless a rotated grid is used in which the North Pole is shifted onto a nearby landmass. Spectral models do not suffer from this problem. Some experiments use geodesic grids[10] and icosahedral grids, which (being more uniform) do not have pole-problems. Another approach to solving the grid spacing problem is to deform a Cartesian cube such that it covers the surface of a sphere.[11]
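The pole problem can be quantified with the CFL condition. In the sketch below, the 2.5-degree grid spacing and the 300 m/s gravity-wave speed are assumptions for illustration, not any particular model's settings; the point is how the stable time step collapses as the east-west spacing shrinks with cos(latitude).

```python
import math

# On a uniform longitude grid, east-west spacing dx shrinks as
# cos(latitude), and the 1-D CFL condition dt <= dx/c shrinks the largest
# stable time step with it.

EARTH_CIRCUMFERENCE_M = 4.0075e7
DLON_DEG = 2.5            # longitude spacing, degrees (assumed)
WAVE_SPEED = 300.0        # fast gravity-wave speed, m/s (assumed)

def max_timestep(lat_deg):
    """Largest stable time step (s) at a given latitude."""
    dx = EARTH_CIRCUMFERENCE_M * (DLON_DEG / 360.0) * math.cos(math.radians(lat_deg))
    return dx / WAVE_SPEED

for lat in (0, 60, 85):
    print(lat, round(max_timestep(lat)))
```

By 85 degrees latitude the allowed step is an order of magnitude smaller than at the equator, which is why models filter near-pole variables or move to quasi-uniform grids.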
Flux correction
Some early versions of AOGCMs required an ad hoc process of "flux correction" to achieve a stable climate. This resulted from separately prepared ocean and atmospheric models each using an implicit flux from the other component that differed from what that component could actually produce. Such a model failed to match observations. However, if the fluxes were 'corrected', the factors that led to these unrealistic fluxes might go unrecognised, which could affect model sensitivity. As a result, the vast majority of models used in the current round of IPCC reports do not use them. The model improvements that now make flux corrections unnecessary include improved ocean physics, improved resolution in both atmosphere and ocean, and more physically consistent coupling between the atmosphere and ocean submodels. Improved models now maintain stable, multi-century simulations of surface climate that are considered to be of sufficient quality to allow their use for climate projections.[12]
Convection
Moist convection releases latent heat and is important to the Earth's energy budget. Convection occurs on too small a scale to be resolved by climate models, and hence it must be handled via parameterisation. This has been done since the 1950s. Akio Arakawa did much of the early work, and variants of his scheme are still used,[13] although a variety of different schemes are now in use.[14][15][16] Clouds are also typically parameterised, for a similar lack of resolvable scale. Limited understanding of clouds has limited the success of this strategy, but not because of inherent shortcomings of the method.[17]
Software
Most models include software to diagnose a wide range of variables for comparison with observations or study of atmospheric processes. An example is the 2-metre temperature, which is the standard height for near-surface observations of air temperature. This temperature is not directly predicted from the model but is deduced from surface and lowest-model-layer temperatures. Other software is used for creating plots and animations.
Projections
Coupled AOGCMs use transient climate simulations to project and predict climate change under various scenarios. These can be idealised scenarios (most commonly, CO2 emissions increasing at 1%/yr) or based on recent history (usually the "IS92a" or, more recently, the SRES scenarios). Which scenarios are most realistic remains uncertain.
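The idealised 1%/yr scenario has a convenient property: compounding growth at rate r doubles in log(2)/log(1+r) years, so CO2 doubles after about 70 years, which is why such runs are often reported at the time of doubling.

```python
import math

# Doubling time for compounding growth at 1% per year (the "rule of 70").

rate = 0.01
years_to_double = math.log(2) / math.log(1 + rate)
print(round(years_to_double, 1))   # 69.7
```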
The 2001 IPCC Third Assessment Report Figure 9.3 shows the global mean response of 19 different coupled models to an idealised experiment in which emissions increased at 1% per year.[19] Figure 9.5 shows the response of a smaller number of models to more recent trends. For the 7 climate models shown there, the temperature change to 2100 varies from 2 to 4.5 °C with a median of about 3 °C.
Future scenarios do not include unknown events – for example, volcanic eruptions or changes in solar forcing. These effects are believed to be small in comparison to greenhouse gas (GHG) forcing in the long term, but large volcanic eruptions, for example, can exert a substantial temporary cooling effect.
Human GHG emissions are a model input, although it is possible to include an economic/technological submodel to provide these as well. Atmospheric GHG levels are usually supplied as an input, though it is possible to include a carbon cycle model that reflects vegetation and oceanic processes to calculate such levels.
Emissions scenarios
For the six SRES marker scenarios, IPCC (2007:7–8) gave a "best estimate" of global mean temperature increase (2090–2099 relative to the period 1980–1999) of 1.8 °C to 4.0 °C.[20] Over the same time period, the "likely" range (greater than 66% probability, based on expert judgement) for these scenarios was for a global mean temperature increase of 1.1 to 6.4 °C.[20]
In 2008 a study made climate projections using several emission scenarios.[21] In a scenario where global emissions start to decrease by 2010 and then decline at a sustained rate of 3% per year, the likely global average temperature increase was predicted to be 1.7 °C above pre-industrial levels by 2050, rising to around 2 °C by 2100. In a projection designed to simulate a future where no efforts are made to reduce global emissions, the likely rise in global average temperature was predicted to be 5.5 °C by 2100. A rise as high as 7 °C was thought possible, although less likely.
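The shape of the mitigation scenario described above (peak in 2010, then a sustained 3%/yr decline) can be sketched as a simple exponential. This is an illustrative reconstruction of the trajectory's shape in normalised units, not the study's actual emissions data.

```python
# Emissions flat up to a 2010 peak, then a sustained 3% per year decline,
# relative to the peak level.

PEAK_YEAR = 2010
DECLINE = 0.03

def emissions(year):
    """Emissions relative to the 2010 peak level."""
    if year <= PEAK_YEAR:
        return 1.0
    return (1 - DECLINE) ** (year - PEAK_YEAR)

for year in (2010, 2050, 2100):
    print(year, round(emissions(year), 3))
```

By 2050 emissions have fallen to roughly 30% of the peak, and by 2100 to under 7%, illustrating how a modest-sounding annual rate compounds over decades.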
Another no-reduction scenario resulted in a median warming over land (2090–99 relative to the period 1980–99) of 5.1 °C. Under the same emissions scenario but with a different model, the predicted median warming was 4.1 °C.[22]
Model accuracy
AOGCMs internalise as many processes as are sufficiently understood. However, they are still under development and significant uncertainties remain. They may be coupled to models of other processes in Earth system models, such as the carbon cycle, so as to better model feedback. Most recent simulations show "plausible" agreement with the measured temperature anomalies over the past 150 years, when driven by observed changes in greenhouse gases and aerosols. Agreement improves by including both natural and anthropogenic forcings.[23][24][25]
Imperfect models may nevertheless produce useful results. GCMs are capable of reproducing the general features of the observed global temperature over the past century.[23]
A debate over how to reconcile climate model predictions of upper-air (tropospheric) warming greater than observed surface warming with measurements that appeared to show otherwise[26] was resolved in favour of the models, following data revisions.
Cloud effects are a significant area of uncertainty in climate models. Clouds have competing effects on climate. They cool the surface by reflecting sunlight into space; they warm it by increasing the amount of infrared radiation transmitted from the atmosphere to the surface.[27] In the 2001 IPCC report possible changes in cloud cover were highlighted as a major uncertainty in predicting climate.[28][29]
Climate researchers around the world use climate models to understand the climate system. Thousands of papers have been published about model-based studies. Part of this research is to improve the models.
In 2000, a comparison between measurements and dozens of GCM simulations of ENSO-driven tropical precipitation, water vapor, temperature, and outgoing longwave radiation found similarity between measurements and simulation of most factors. However, the simulated change in precipitation was about one-fourth less than what was observed. Errors in simulated precipitation imply errors in other processes, such as errors in the evaporation rate that provides moisture to create precipitation. The other possibility is that the satellite-based measurements are in error. Either indicates progress is required in order to monitor and predict such changes.[30]
The precise magnitude of future changes in climate is still uncertain;[31] for the end of the 21st century (2071 to 2100), under SRES scenario A2, the projected change in global average surface air temperature (SAT) from AOGCMs relative to 1961 to 1990 is +3.0 °C (5.4 °F), with a range of +1.3 to +4.5 °C (+2.3 to +8.1 °F).
The IPCC's Fifth Assessment Report asserted "very high confidence that models reproduce the general features of the global-scale annual mean surface temperature increase over the historical period". However, the report also observed that the rate of warming over the period 1998–2012 was lower than that predicted by 111 out of 114 Coupled Model Intercomparison Project climate models.[32]
Relation to weather forecasting
The global climate models used for climate projections are similar in structure to (and often share computer code with) numerical models for weather prediction, but are nonetheless logically distinct.
Most weather forecasting is done on the basis of interpreting numerical model results. Since forecast periods are typically a few days or a week and sea surface temperatures change relatively slowly, such models do not usually contain an ocean model but rely on imposed SSTs. They also require accurate initial conditions to begin the forecast; typically these are taken from the output of a previous forecast, blended with observations. Weather predictions are required at higher temporal resolutions than climate projections, often sub-hourly compared to monthly or yearly averages for climate. However, because weather forecasts only cover around 10 days, the models can also be run at higher vertical and horizontal resolutions than climate models. Currently the ECMWF runs at 9 km (5.6 mi) resolution[33] as opposed to the 100-to-200 km (62-to-124 mi) scale used by typical climate model runs. Often local models are run using global model results for boundary conditions, to achieve higher local resolution: for example, the Met Office runs a mesoscale model with an 11 km (6.8 mi) resolution[34] covering the UK, and various agencies in the US employ models such as the NGM and NAM models. Like most global numerical weather prediction models such as the GFS, global climate models are often spectral models[35] instead of grid models. Spectral models are often used for global models because some computations in modeling can be performed faster, thus reducing run times.
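The resolution gap translates directly into cost. Halving the horizontal grid spacing roughly quadruples the number of columns and, via the CFL limit, halves the stable time step, so cost grows roughly as (1/dx)^3. A rough scaling comparison of the resolutions quoted above (a scaling argument only, ignoring vertical resolution and other factors):

```python
# Rough cost scaling between a 9 km weather resolution and a typical
# 100 km climate resolution, assuming cost ~ (1/dx)^3 (columns x time steps).

weather_dx_km = 9.0
climate_dx_km = 100.0
cost_ratio = (climate_dx_km / weather_dx_km) ** 3
print(round(cost_ratio))   # roughly 1.4e3
```

A factor of over a thousand per simulated day is manageable for a 10-day forecast but prohibitive for the century-scale runs climate projections require, which is why climate models accept the coarser grid.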
Computations
Climate models use quantitative methods to simulate the interactions of the atmosphere, oceans, land surface and ice.
All climate models take account of incoming energy as short wave electromagnetic radiation, chiefly visible and short-wave (near) infrared, as well as outgoing energy as long wave (far) infrared electromagnetic radiation from the earth. Any imbalance results in a change in temperature.
The most talked-about models of recent years relate temperature to emissions of greenhouse gases. These models project an upward trend in the surface temperature record, as well as a more rapid increase in temperature at higher altitudes.[36]
Three-dimensional (or, more properly, four-dimensional, since time is also considered) GCMs discretise the equations for fluid motion and energy transfer and integrate these over time. They also contain parameterisations for processes such as convection that occur on scales too small to be resolved directly.
Atmospheric GCMs (AGCMs) model the atmosphere and impose sea surface temperatures as boundary conditions. Coupled atmosphere-ocean GCMs (AOGCMs, e.g. HadCM3, EdGCM, GFDL CM2.X, ARPEGE-Climat[37]) combine the two models.
Models range in complexity:
- A simple radiant heat transfer model treats the earth as a single point and averages outgoing energy
- This can be expanded vertically (radiative-convective models), or horizontally
- Box models treat flows across and within ocean basins.
- Finally, (coupled) atmosphere–ocean–sea ice global climate models discretise and solve the full equations for mass and energy transfer and radiant exchange.
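The simplest member of this hierarchy can be written in a few lines. The sketch below is a zero-dimensional energy balance with standard textbook values assumed for the solar constant and planetary albedo; it illustrates the model class, not any published model.

```python
# Zero-dimensional energy balance: Earth as a single point. Balance
# absorbed shortwave, S0/4 * (1 - albedo), against emitted longwave,
# sigma * T^4, and solve for the effective emission temperature.

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)
S0 = 1361.0              # solar constant, W/m^2 (assumed standard value)
ALBEDO = 0.3             # planetary albedo (assumed standard value)

T_effective = (S0 * (1 - ALBEDO) / (4 * SIGMA)) ** 0.25
print(round(T_effective, 1))   # about 255 K
```

The roughly 33 K gap between this effective temperature and the observed mean surface temperature is the greenhouse effect, which the vertically expanded (radiative-convective) models in the list begin to resolve.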
Other submodels can be interlinked, such as land use, allowing researchers to predict the interaction between climate and ecosystems.
Comparison with other climate models
Earth-system models of intermediate complexity (EMICs)
The Climber-3 model uses a 2.5-dimensional statistical-dynamical model with 7.5° × 22.5° resolution and a time step of half a day. Its oceanic submodel is MOM-3 (Modular Ocean Model) with a 3.75° × 3.75° grid and 24 vertical levels.[38]
Radiative-convective models (RCM)
One-dimensional, radiative-convective models were used to verify basic climate assumptions in the 1980s and 1990s.[39]
Earth system models
GCMs can form part of Earth system models, e.g. by coupling ice sheet models for the dynamics of the Greenland and Antarctic ice sheets, and one or more chemical transport models (CTMs) for species important to climate. Thus a carbon chemistry transport model may allow a GCM to better predict anthropogenic changes in carbon dioxide concentrations. In addition, this approach allows accounting for inter-system feedback: e.g. chemistry-climate models allow the effects of climate change on the ozone hole to be studied.[40]
History
In 1956, Norman Phillips developed a mathematical model that could realistically depict monthly and seasonal patterns in the troposphere. It became the first successful climate model.[41][42] Following Phillips's work, several groups began working to create GCMs.[43] The first to combine both oceanic and atmospheric processes was developed in the late 1960s at the NOAA Geophysical Fluid Dynamics Laboratory.[1] By the early 1980s, the United States' National Center for Atmospheric Research had developed the Community Atmosphere Model; this model has been continuously refined.[44] In 1996, efforts began to model soil and vegetation types.[45] Later the Hadley Centre for Climate Prediction and Research's HadCM3 model coupled ocean-atmosphere elements.[43] The role of gravity waves was added in the mid-1980s. Gravity waves are required to simulate regional and global scale circulations accurately.[46]
References
- ^ a b c "The First Climate Model". NOAA 200th Celebration. 2007.
- ^ [1] Archived 27 September 2007 at the Wayback Machine
- ^ "NOAA 200th Top Tens: Breakthroughs: The First Climate Model". noaa.gov.
- ^ "Pubs.GISS: Sun and Hansen 2003: Climate simulations for 1951-2050 with a coupled atmosphere-ocean model". pubs.giss.nasa.gov. 2003. Retrieved 25 August 2015.
- ^ "Atmospheric Model Intercomparison Project". The Program for Climate Model Diagnosis and Intercomparison, Lawrence Livermore National Laboratory. Archived from the original on 22 August 2017. Retrieved 21 April 2010.
- ^ Jablonowski, Christiane; Herzog, M; Penner, JE; Oehmke, RC; Stout, QF; van Leer, B (2004). Adaptive grids for weather and climate models (Report). Boulder, Colorado, United States: National Center for Atmospheric Research (NCAR). Retrieved 13 October 2024. PDF create date 2004-10-28. See also Jablonowski, Christiane. "Adaptive Mesh Refinement (AMR) for Weather and Climate Models". Archived from the original on 28 August 2016. Retrieved 24 July 2010.
- ^ NCAR Command Language documentation: Non-uniform grids that NCL can contour Archived 3 March 2016 at the Wayback Machine (Retrieved 24 July 2010)
- ^ "High Resolution Global Environmental Modelling (HiGEM) home page". Natural Environment Research Council and Met Office. 18 May 2004. Archived from the original on 13 August 2010. Retrieved 5 October 2010.
- ^ "Mesoscale modelling". Archived from the original on 29 December 2010. Retrieved 5 October 2010.
- ^ "Climate Model Will Be First To Use A Geodesic Grid". Daly University Science News. 24 September 2001.
- ^ "Gridding the sphere". MIT GCM. Retrieved 9 September 2010.
- ^ "IPCC Third Assessment Report - Climate Change 2001 - Complete online versions". IPCC. Archived from the original on 12 January 2014. Retrieved 12 January 2014.
- ^ "Arakawa's Computation Device". Aip.org. Archived from the original on 15 June 2006. Retrieved 18 February 2012.
- ^ "COLA Report 27". Grads.iges.org. 1 July 1996. Archived from the original on 6 February 2012. Retrieved 18 February 2012.
- ^ "Table 2-10". Pcmdi.llnl.gov. Archived from the original on 13 June 2006. Retrieved 18 February 2012.
- ^ "Table of Rudimentary CMIP Model Features". Rainbow.llnl.gov. 2 December 2004. Archived from the original on 15 May 2006. Retrieved 18 February 2012.
- ^ "General Circulation Models of the Atmosphere". Aip.org. Archived from the original on 30 July 2012. Retrieved 18 February 2012.
- ^ a b NOAA Geophysical Fluid Dynamics Laboratory (GFDL) (9 October 2012), NOAA GFDL Climate Research Highlights Image Gallery: Patterns of Greenhouse Warming, NOAA GFDL
- ^ "Climate Change 2001: The Scientific Basis". Grida.no. Archived from the original on 18 February 2012. Retrieved 18 February 2012.
- ^ a b "Chapter 3: Projected climate change and its impacts". IPCC Fourth Assessment Report: Climate Change 2007: Synthesis Report: Synthesis Report Summary for Policymakers. Archived from the original on 9 March 2013. Retrieved 3 December 2013., in IPCC AR4 SYR 2007
- ^ Pope, V. (2008). "Met Office: The scientific evidence for early action on climate change". Met Office website. Archived from the original on 29 December 2010.
- ^ Sokolov, A.P.; et al. (2009). "Probabilistic Forecast for 21st century Climate Based on Uncertainties in Emissions (without Policy) and Climate Parameters" (PDF). Journal of Climate. 22 (19): 5175–5204. Bibcode:2009JCli...22.5175S. doi:10.1175/2009JCLI2863.1. hdl:1721.1/54833. S2CID 17270176.
- ^ a b IPCC, Summary for Policy Makers Archived 7 March 2016 at the Wayback Machine, Figure 4 Archived 21 October 2016 at the Wayback Machine, in IPCC TAR WG1 (2001), Houghton, J. T.; Ding, Y.; Griggs, D. J.; Noguer, M.; van der Linden, P. J.; Dai, X.; Maskell, K.; Johnson, C. A. (eds.), Climate Change 2001: The Scientific Basis, Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-80767-8, archived from the original on 15 December 2019 (pb: 0-521-01495-6).
- ^ "Simulated global warming 1860–2000". Archived from the original on 27 May 2006.
- ^ "Decadal Forecast 2013". Met Office. January 2014.
- ^ The National Academies Press website press release, 12 Jan. 2000: Reconciling Observations of Global Temperature Change.
- ^ Nasa Liftoff to Space Exploration Website: Greenhouse Effect. Archive.com. Recovered 1 October 2012.
- ^ "Climate Change 2001: The Scientific Basis" (PDF). IPCC. p. 90.
- ^ Soden, Brian J.; Held, Isaac M. (2006). "An Assessment of Climate Feedbacks in Coupled Ocean–Atmosphere Models". J. Climate. 19 (14): 3354–3360. Bibcode:2006JCli...19.3354S. doi:10.1175/JCLI3799.1.
- ^ Soden, Brian J. (February 2000). "The Sensitivity of the Tropical Hydrological Cycle to ENSO". Journal of Climate. 13 (3): 538–549. Bibcode:2000JCli...13..538S. doi:10.1175/1520-0442(2000)013<0538:TSOTTH>2.0.CO;2. S2CID 14615540.
- ^ Cubasch et al., Chapter 9: Projections of Future Climate Change Archived 16 April 2016 at the Wayback Machine, Executive Summary [dead link], in IPCC TAR WG1 (2001), Houghton, J. T.; Ding, Y.; Griggs, D. J.; Noguer, M.; van der Linden, P. J.; Dai, X.; Maskell, K.; Johnson, C. A. (eds.), Climate Change 2001: The Scientific Basis, Contribution of Working Group I to the Third Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-80767-8, archived from the original on 15 December 2019 (pb: 0-521-01495-6).
- ^ Flato, Gregory (2013). "Evaluation of Climate Models" (PDF). IPCC. pp. 768–769.
- ^ "ECMWF". Archived from the original on 3 May 2008. Retrieved 7 February 2016. ECMWF-Newsletter spring 2016
- ^ "Operational Numerical Modelling". Met Office. Archived from the original on 7 March 2005. Retrieved 28 March 2005.
- ^ "What are general circulation models (GCM)?". Das.uwyo.edu. Archived from the original on 26 December 2019. Retrieved 18 February 2012.
- ^ Meehl et al., Climate Change 2007 Chapter 10: Global Climate Projections Archived 15 April 2016 at the Wayback Machine,[page needed] in IPCC AR4 WG1 (2007), Solomon, S.; Qin, D.; Manning, M.; Chen, Z.; Marquis, M.; Averyt, K.B.; Tignor, M.; Miller, H.L. (eds.), Climate Change 2007: The Physical Science Basis, Contribution of Working Group I to the Fourth Assessment Report of the Intergovernmental Panel on Climate Change, Cambridge University Press, ISBN 978-0-521-88009-1 (pb: 978-0-521-70596-7)
- ^ ARPEGE-Climat homepage, Version 5.1 Archived 4 January 2016 at the Wayback Machine, 3 Sep 2009. Retrieved 1 October 2012. ARPEGE-Climat homepage Archived 19 February 2014 at the Wayback Machine, 6 August 2009. Retrieved 1 Oct 2012.
- ^ "emics1". www.pik-potsdam.de. Retrieved 25 August 2015.
- ^ Wang, W.C.; P.H. Stone (1980). "Effect of ice-albedo feedback on global sensitivity in a one-dimensional radiative-convective climate model". J. Atmos. Sci. 37 (3): 545–52. Bibcode:1980JAtS...37..545W. doi:10.1175/1520-0469(1980)037<0545:EOIAFO>2.0.CO;2.
- ^ Allen, Jeannie (February 2004). "Tango in the Atmosphere: Ozone and Climate Change". NASA Earth Observatory. Archived from the original on 11 October 2019. Retrieved 1 September 2005.
- ^ Phillips, Norman A. (April 1956). "The general circulation of the atmosphere: a numerical experiment". Quarterly Journal of the Royal Meteorological Society. 82 (352): 123–154. Bibcode:1956QJRMS..82..123P. doi:10.1002/qj.49708235202.
- ^ Cox, John D. (2002). Storm Watchers. John Wiley & Sons, Inc. p. 210. ISBN 978-0-471-38108-2.
- ^ a b Lynch, Peter (2006). "The ENIAC Integrations". The Emergence of Numerical Weather Prediction. Cambridge University Press. pp. 206–208. ISBN 978-0-521-85729-1.
- ^ Collins, William D.; et al. (June 2004). "Description of the NCAR Community Atmosphere Model (CAM 3.0)" (PDF). University Corporation for Atmospheric Research.
- ^ Xue, Yongkang & Michael J. Fennessey (20 March 1996). "Impact of vegetation properties on U.S. summer weather prediction". Journal of Geophysical Research. 101 (D3). American Geophysical Union: 7419. Bibcode:1996JGR...101.7419X. CiteSeerX 10.1.1.453.551. doi:10.1029/95JD02169.
- ^ McGuffie, K. & A. Henderson-Sellers (2005). A climate modelling primer. John Wiley and Sons. p. 188. ISBN 978-0-470-85751-9.
- IPCC AR4 SYR (2007), Core Writing Team; Pachauri, R.K; Reisinger, A. (eds.), Climate Change 2007: Synthesis Report (SYR), Contribution of Working Groups I, II and III to the Fourth Assessment Report (AR4) of the Intergovernmental Panel on Climate Change, Geneva, Switzerland: IPCC, ISBN 978-92-9169-122-7.
Further reading
- Ian Roulstone & John Norbury (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton University Press. ISBN 978-0691152721.
External links
- IPCC AR5, Evaluation of Climate Models
- "High Resolution Climate Modeling". – with media including videos, animations, podcasts and transcripts on climate models
- "Flexible Modeling System (FMS)". Geophysical Fluid Dynamics Laboratory. – GFDL's Flexible Modeling System containing code for the climate models
- Program for climate model diagnosis and intercomparison (PCMDI/CMIP)
- National Operational Model Archive and Distribution System (NOMADS) Archived 30 January 2016 at the Wayback Machine
- Hadley Centre for Climate Prediction and Research – model info
- NCAR/UCAR Community Climate System Model (CESM)
- Climate prediction, community modeling
- NASA/GISS, primary research GCM model
- EDGCM/NASA: Educational Global Climate Modeling Archived 23 March 2015 at the Wayback Machine
- NOAA/GFDL Archived 4 March 2016 at the Wayback Machine
- MAOAM: Martian Atmosphere Observation and Modeling / MPI & MIPT
Definition and Fundamentals
Terminology and Scope
A general circulation model (GCM) is a numerical representation that approximates the three-dimensional, time-dependent solutions to the equations governing fluid motion in planetary atmospheres or oceans, discretized on a global grid to compute variables such as temperature, velocity components, pressure, and precipitation.[12] These models incorporate physical laws derived from thermodynamics, fluid dynamics, and radiative transfer, driven primarily by spatial gradients in solar insolation, planetary rotation via the Coriolis effect, and surface boundary conditions like topography and land-ocean contrasts.[13] The terminology "general circulation" specifically denotes the simulation of large-scale, statistically steady patterns of mass, momentum, and energy transport, as opposed to localized or transient phenomena.[12] In scope, GCMs encompass global domains spanning from the surface to the upper atmosphere or ocean depths, resolving explicit dynamics for grid-scale processes while parameterizing unresolved subgrid-scale phenomena such as turbulence, convection, and cloud microphysics.[13] Atmospheric GCMs (AGCMs) focus solely on tropospheric and stratospheric circulation, often coupled to prescribed sea surface temperatures for climate studies; oceanic GCMs (OGCMs) analogously simulate currents, upwelling, and thermohaline circulation; and coupled atmosphere-ocean GCMs integrate these with land surface and sea ice components to capture feedbacks in the full climate system, emphasizing Earth's energy balance over multi-year to centennial timescales.[13] Unlike numerical weather prediction models, which apply similar dynamical cores but prioritize high-resolution initial-value forecasts over days using real-time observations, GCMs generate ensemble statistics for long-term means, variability, and projections under forcing scenarios, such as altered greenhouse gas concentrations.[13] This distinction arises from computational constraints and the chaotic nature of 
atmospheric flows, where GCMs average over initial-condition ensembles to isolate forced responses from internal variability.[14] The foundational coupled GCM, developed at the Geophysical Fluid Dynamics Laboratory in the 1960s, marked the shift toward comprehensive Earth system simulation, enabling attribution of observed climate changes to natural versus anthropogenic drivers.[13] Modern GCMs, as used in assessments like those from the Intergovernmental Panel on Climate Change, typically feature horizontal resolutions of 50–250 km and 20–100 vertical layers, balancing fidelity to observations with feasible computation on supercomputers.[1]

Governing Physical Principles
General circulation models (GCMs) derive their foundational dynamics from the conservation laws of physics, including mass, momentum, and energy, applied to fluid motion on a rotating sphere. These principles are encapsulated in the primitive equations, a set of partial differential equations that approximate the compressible Navier-Stokes equations under the hydrostatic balance assumption, which holds for large-scale flows where vertical accelerations are negligible compared to gravitational forces.[7][15] The primitive equations thus prioritize horizontal momentum balance influenced by Coriolis forces, pressure gradients, and frictional effects, while treating vertical structure through hydrostatic equilibrium: ∂p/∂z = −ρg, where p is pressure, ρ is density, g is gravity, and z is height.[16] The horizontal momentum equation in the primitive set is Dv/Dt + f k × v = −∇_p Φ + F, where v is the horizontal velocity vector, D/Dt is the material derivative, f = 2Ω sin φ is the Coriolis parameter (Ω being Earth's rotation rate and φ latitude), k is the local vertical unit vector, ∇_p is the horizontal gradient on pressure surfaces, Φ is geopotential, and F represents viscous and other forces.[16][17] The continuity equation ensures mass conservation: ∇_p · v + ∂ω/∂p = 0, with ω as vertical velocity in pressure coordinates.
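The Coriolis parameter and the balance between the Coriolis and pressure-gradient terms in the momentum equation can be sketched numerically. The following is a minimal illustration in Python, not code from any GCM; the function names and the single-component geostrophic balance are simplifying assumptions.

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (rad/s)

def coriolis_parameter(lat_deg):
    """Coriolis parameter f = 2 * Omega * sin(latitude),
    as it appears in the horizontal momentum equation."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_wind(dphi_dy, lat_deg):
    """Zonal wind u_g = -(1/f) * dPhi/dy from the balance of the
    Coriolis and pressure-gradient terms (illustrative; breaks down
    near the equator where f approaches zero)."""
    f = coriolis_parameter(lat_deg)
    return -dphi_dy / f

f45 = coriolis_parameter(45.0)        # ~1.03e-4 s^-1 in midlatitudes
u = geostrophic_wind(-1.0e-3, 45.0)   # geopotential decreasing poleward -> westerly wind
```

With geopotential decreasing toward the pole (negative meridional gradient in the Northern Hemisphere), the balance yields a positive (westerly) zonal wind, consistent with the observed midlatitude jets.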
The thermodynamic equation governs energy: Dθ/Dt = Q, where θ is potential temperature and Q includes heating terms like latent heat release and radiation, linked via the equation of state (ideal gas law).[16] These equations neglect sound waves through the anelastic or hydrostatic approximations, enabling efficient computation of synoptic-to-global scales without resolving acoustic timescales.[18] For oceanic GCMs, analogous primitive equations apply, incorporating the Boussinesq approximation to filter sound waves and treat density variations primarily through buoyancy: ∂φ/∂z = b, where b = −g(ρ − ρ₀)/ρ₀ is buoyancy and φ is pressure divided by the constant reference density ρ₀, alongside incompressibility and temperature-salinity equations for density evolution.[19] Radiation and phase changes enter as source terms, but their explicit resolution is limited by grid scales, necessitating parameterizations elsewhere; the primitive framework ensures dynamical consistency with observed circulations like Hadley cells or gyres when forced by realistic boundary conditions. Empirical validations, such as numerical convergence studies to resolutions below 10 km, confirm that solutions approach physical limits under dry adiabatic conditions, underscoring the robustness of these principles despite computational constraints.[18][7]

Model Architecture
Spatial Discretization and Grids
Spatial discretization in general circulation models (GCMs) involves approximating the continuous partial differential equations of atmospheric and oceanic dynamics on a discrete set of points, transforming the spherical domain of Earth into a computational grid. This process is essential for numerical integration, as it enables finite difference, finite volume, or spectral methods to solve the governing equations while preserving key properties like mass and energy conservation where possible. Horizontal discretization typically occurs on quasi-uniform or structured grids to handle the sphere's curvature, while vertical discretization uses coordinate transformations such as terrain-following sigma levels or hybrid pressure levels to resolve atmospheric layers from the surface upward.[20] The most traditional horizontal grid is the latitude-longitude (lat-lon) system, where points are spaced uniformly in longitude (e.g., 1° to 2.5° intervals) and at fixed latitudes, resulting in rectangular cells that converge toward the poles. This grid simplifies implementation for spectral transform methods but introduces the "pole problem": grid cells shrink toward zero size at the poles, forcing prohibitively short time steps to satisfy the Courant-Friedrichs-Lewy (CFL) stability criterion and causing numerical noise from the grid-point singularities. To mitigate this, models apply semi-Lagrangian advection, polar filtering, or reduced Gaussian grids that omit points near the poles, allowing resolutions like T159 (approximately 125 km) in operational GCMs.[21][22] Gaussian grids address some lat-lon limitations by selecting latitude points from the roots of Legendre polynomials, enabling exact quadrature for spectral expansions in global GCMs and avoiding interpolation errors in transform methods.
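Gaussian latitudes of the kind just described can be generated directly from the roots of a Legendre polynomial. A brief NumPy sketch (illustrative only; the helper name and grid size are arbitrary):

```python
import numpy as np

def gaussian_latitudes(n):
    """Latitudes (degrees) of an n-point Gaussian grid: the roots of
    the degree-n Legendre polynomial in sin(latitude), with quadrature
    weights that integrate spherical harmonics exactly."""
    nodes, weights = np.polynomial.legendre.leggauss(n)
    lats = np.degrees(np.arcsin(nodes))
    return lats, weights

lats, w = gaussian_latitudes(64)
# The weights sum to 2, the integral of 1 over sin(lat) in [-1, 1],
# and no point falls exactly on a pole, avoiding the singularity there.
```

Note that the resulting latitudes are unevenly spaced and exclude the poles, which is precisely what makes the spectral transform exact and sidesteps the lat-lon pole problem.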
These grids pair with spherical harmonic basis functions for horizontal representation, computing derivatives analytically in spectral space before transforming to grid space for nonlinear terms, which enhances accuracy for smooth large-scale flows but can suffer from aliasing that requires dealiasing techniques. Spectral methods on Gaussian grids have been foundational in models like those from ECMWF, supporting resolutions up to T799 (about 25 km) while maintaining computational efficiency through fast Fourier transforms.[23][20] To overcome uniformity issues in lat-lon grids, quasi-uniform alternatives like icosahedral and cubed-sphere grids have gained adoption. Icosahedral grids subdivide the faces of a regular icosahedron projected onto the sphere, yielding hexagonal or triangular cells with nearly equal areas (e.g., spacing of 100 km), which eliminate pole singularities and support scalable parallel computing on Voronoi tessellations for finite-volume schemes. Cubed-sphere grids tile the sphere with six quadrilateral faces from a cube, providing quasi-uniform resolution (e.g., 0.25° effective spacing) and orthogonality benefits for advection, as used in NASA's GEOS model and CESM, though they introduce seams requiring careful flux reconstruction. These grids improve conservation and reduce anisotropy compared to lat-lon systems, particularly for high-resolution (sub-10 km) simulations, but demand more complex coding and higher memory for unstructured data.[24][25][26]

Parameterizations for Subgrid-Scale Processes
In general circulation models (GCMs), spatial resolutions of approximately 50–250 km horizontally preclude explicit resolution of subgrid-scale processes, necessitating parameterizations to approximate their aggregate effects on resolved variables such as momentum, heat, and moisture fluxes. These processes, including deep convection, boundary-layer turbulence, and cloud formation, operate on scales of 1–10 km or smaller and exert critical influences on large-scale dynamics, yet their representation relies on empirical or heuristic closures rather than direct simulation. Traditional parameterizations introduce structural uncertainties, as evidenced by inter-model spreads in precipitation and cloud feedbacks, often requiring tuning to observational datasets for realism.[27][7] Convection parameterizations predominantly adopt the mass-flux approach, decomposing subgrid updrafts and downdrafts into organized transports with prescribed entrainment, detrainment, and closure assumptions like convective quasi-equilibrium, where instability is rapidly relieved. Schemes such as the original Arakawa-Schubert formulation or its derivatives, including Tiedtke's bulk mass-flux variant, compute cloud-base mass flux based on convective available potential energy and inhibition, thereby simulating vertical redistribution of heat and moisture. These methods capture essential features of organized convection but struggle with scale transitions in higher-resolution "gray-zone" simulations (around 10 km), where partial resolution of plumes leads to double-counting or underestimation of transports, prompting scale-aware modifications that reduce mass flux as grid spacing decreases.[28][29][30] Turbulence in the planetary boundary layer and free troposphere is parameterized via diffusion closures, with first-order K-theory schemes applying eddy viscosities for vertical mixing, often augmented by nonlocal terms for convective boundary layers. 
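In its simplest form, the first-order K-theory closure just mentioned reduces to vertical diffusion of a resolved profile. The following toy explicit step in Python is a sketch only; the uniform grid, constant eddy diffusivity, and zero-flux boundaries are illustrative assumptions, far simpler than operational boundary-layer schemes:

```python
import numpy as np

def k_theory_step(theta, K, dz, dt):
    """One explicit step of d(theta)/dt = d/dz (K * d(theta)/dz):
    first-order K-theory vertical mixing on a uniform grid with
    zero-flux top and bottom boundaries."""
    # Downgradient eddy flux at interior cell faces: -K * d(theta)/dz
    flux = -K * np.diff(theta) / dz
    # Zero flux through the bottom and top boundaries
    flux = np.concatenate(([0.0], flux, [0.0]))
    # Flux divergence updates each layer; column-integrated heat is conserved
    return theta - dt * np.diff(flux) / dz

theta = np.array([300.0, 301.0, 305.0, 310.0])  # potential temperature (K), bottom to top
mixed = k_theory_step(theta, K=10.0, dz=100.0, dt=60.0)
```

Because each layer changes only by the difference of the fluxes at its faces, the column total is conserved while gradients are smoothed, which is the essential behavior the closure imposes on the resolved state.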
Higher-order closures, such as those that carry prognostic turbulent kinetic energy or use probability density functions (PDFs) for subgrid variability, provide more comprehensive representations; for instance, the Cloud Layers Unified By Binormals (CLUBB) scheme unifies the treatment of turbulence, shallow convection, and boundary-layer clouds by modeling joint PDFs of velocity and buoyancy. These approaches address non-local mixing but remain computationally intensive and sensitive to stability functions, contributing to biases in surface fluxes and low-level winds when validated against large-eddy simulations.[31][32] Cloud and microphysics parameterizations handle subgrid condensate formation, often diagnostically linking cloud fraction to relative-humidity exceedance or convectively detrained moisture, with overlap assumptions (e.g., random or maximum) affecting radiative transfer. Prognostic schemes track cloud water and ice paths, incorporating autoconversion and sedimentation for precipitation, but their coupling to convection and turbulence schemes frequently underpredicts low-cloud cover and optical depth, exacerbating shortwave radiation biases in midlatitudes. Overall, the heuristic foundations of these parameterizations, which rely on bulk assumptions rather than scale-invariant physics, underscore persistent challenges in faithfully reproducing observed variability, with ongoing refinements targeting improved process interactions for coupled atmosphere-ocean GCMs.[27][33]

Numerical Methods and Flux Conservation
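The local conservation that flux-form (finite volume) discretizations provide can be seen in one dimension: when each cell is updated by the difference of the fluxes at its faces, the face fluxes telescope and the domain total is preserved to round-off. A deliberately minimal periodic upwind sketch in Python (illustrative only, and far simpler than any operational dynamical core):

```python
import numpy as np

def upwind_step(q, u, dx, dt):
    """One flux-form step of dq/dt + u * dq/dx = 0 (u > 0, periodic).
    Each cell changes only by the flux difference across its faces,
    so the fluxes telescope and sum(q) * dx is conserved to round-off."""
    flux = u * q  # first-order upwind flux leaving each cell to the right
    return q - (dt / dx) * (flux - np.roll(flux, 1))

x = np.linspace(0.0, 1.0, 100, endpoint=False)
q = np.exp(-((x - 0.5) ** 2) / 0.01)        # smooth initial tracer blob
q1 = upwind_step(q, u=1.0, dx=0.01, dt=0.005)  # CFL number 0.5
```

The same telescoping argument is what higher-order finite-volume reconstructions (e.g., van Leer or PPM limiters) preserve while reducing the diffusion of this first-order scheme.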
Finite difference methods, pioneered in early atmospheric models such as those developed by Phillips in 1956, approximate spatial derivatives via Taylor series expansions on structured grids like latitude-longitude or cubed-sphere configurations, enabling straightforward implementation but prone to issues like the pole problem in polar regions where grid points converge.[34] Finite volume methods, as implemented in dynamical cores like GFDL's, integrate the governing equations over discrete control volumes, computing fluxes across cell faces to inherently enforce local conservation of mass, momentum, and energy, which is essential for long-term stability in climate simulations.[34] Spectral methods transform variables into global basis functions, such as spherical harmonics or Fourier series, offering high accuracy for smooth flows and efficient handling of spherical geometry but requiring dealiasing techniques to mitigate Gibbs oscillations and ensure numerical stability.[35] Flux conservation in GCMs prevents artificial accumulation or depletion of conserved quantities, such as dry mass and total energy, which could otherwise induce spurious trends over multi-decadal runs; for instance, non-conservative schemes have been shown to cause energy drifts exceeding observational uncertainties in uncoupled atmospheric models.[36] In finite volume and finite difference approaches, conservation is achieved by designing monotonic, positivity-preserving flux limiters (e.g., van Leer or PPM schemes) that reconstruct variables at interfaces while satisfying the telescoping property of integrated fluxes, as demonstrated in operational models like ECMWF's IFS.[37] Spectral models enforce global conservation through quadrature rules that integrate exactly over the sphere and post-processing adjustments, though they may violate local conservation, necessitating hybrid schemes for coupled systems where ocean-atmosphere interfaces demand precise flux matching.[36] Advanced 
techniques, including discontinuous Galerkin methods, further enhance flux conservation by using flux integrals along element boundaries, reducing diffusion errors in high-resolution simulations.[38] Time-stepping schemes, typically explicit or semi-implicit, must couple with spatial discretization to maintain overall conservation; for example, leapfrog schemes with Asselin filters control computational modes in finite difference GCMs, while implicit treatments of gravity waves in spectral models (e.g., via the ECMWF semi-implicit scheme since 1975) allow larger time steps without violating flux balances.[35] Validation of these methods against benchmarks, such as Held-Suarez tests, confirms that conservative formulations yield statistically steady circulations with minimal drift, whereas non-conservative variants exhibit unphysical warming or cooling rates.[36] In coupled GCMs, interfacial flux conservation is often enforced via adjustments like those in OASIS coupling software, mitigating biases from mismatched grids and ensuring consistency with empirical energy budgets derived from satellite observations.[37]

Types and Configurations
Atmospheric-Only GCMs
Atmospheric-only general circulation models (AGCMs) simulate the dynamics and physics of the Earth's atmosphere by numerically solving the Navier-Stokes equations in spherical coordinates, along with equations for thermodynamics, water vapor continuity, and radiative transfer, while prescribing time-varying lower boundary conditions such as observed or modeled sea surface temperatures (SSTs) and sea ice concentrations.[39] These models typically operate on global grids with horizontal resolutions ranging from 50 to 250 km and vertical levels extending from the surface to the mesosphere or lower thermosphere, incorporating parameterizations for sub-grid processes like convection, cloud formation, and turbulence.[40] By excluding interactive ocean and land components, AGCMs enable controlled experiments to isolate atmospheric responses to specified forcings, such as SST anomalies associated with the El Niño-Southern Oscillation (ENSO).[41] AGCMs trace their origins to early numerical weather prediction models developed in the 1950s, evolving into comprehensive atmospheric simulations by the 1960s through efforts at institutions like the National Center for Atmospheric Research and the Geophysical Fluid Dynamics Laboratory (GFDL).[5] Notable early examples include the GFDL spectral models, which advanced from barotropic to primitive equation formulations, enabling the first multi-year integrations of global atmospheric circulation in the late 1960s.[42] Modern implementations, such as NASA's GEOS-5 AGCM, build on these foundations with enhanced resolution and physics, supporting configurations for both free-running and nudged simulations aligned to reanalysis data.[43] Key examples of operational AGCMs include the Australian Community Climate and Earth-System Simulator (ACCESS) version 1.0 atmosphere-only configuration, which uses prescribed SSTs to constrain the ocean-covered portion (roughly 70%) of the surface temperature field to observations, and the UCLA AGCM, employed in coupled and uncoupled modes
for ENSO prediction experiments since the 1990s.[44][45] These models often employ finite-volume or spectral dynamical cores to ensure conservation of mass, momentum, and energy, with horizontal resolutions as fine as 25 km in high-resolution variants for studying phenomena like tropical cyclones.[40] AGCMs are applied in seasonal-to-interannual forecasting by forcing ensembles with predicted or observed SSTs, revealing atmospheric teleconnections such as the Pacific-North American pattern during ENSO events, and in paleoclimate studies by imposing proxy-reconstructed SSTs to assess atmospheric circulation shifts.[41] They also facilitate attribution studies, such as evaluating the atmospheric impact of volcanic aerosols or greenhouse gas forcings under fixed oceanic boundaries.[46] Despite their utility, AGCMs exhibit limitations due to the absence of ocean-atmosphere coupling, resulting in unrealistic surface energy flux biases in midlatitudes and inadequate representation of coupled modes like the Madden-Julian Oscillation's full variability.[41] For instance, AGCM predictions of midlatitude oceanic fluxes diverge from coupled general circulation models (CGCMs) by up to 20 W/m² in seasonal means, underscoring the need for coupled systems in long-term climate projections.[41] Validation against satellite-derived cloud fields and reanalyses often highlights systematic errors in tropical precipitation and stratospheric circulation, attributable to parameterization uncertainties.[46]

Oceanic GCMs
Oceanic general circulation models (OGCMs) numerically simulate the three-dimensional movement of seawater, including velocity fields, temperature, and salinity distributions, to represent basin-scale to global ocean dynamics.[47] These models solve the primitive equations of motion, comprising prognostic equations for horizontal momentum, tracer conservation (temperature and salinity), and a diagnostic equation for hydrostatic pressure, typically under the Boussinesq approximation that treats seawater density as constant except in buoyancy terms.[48] The hydrostatic approximation assumes vertical accelerations are negligible compared to gravity, simplifying the vertical momentum equation to a balance between pressure gradient and weight.[49] OGCMs discretize the ocean domain on structured grids, such as latitude-longitude or curvilinear coordinates, with vertical levels using z-coordinates (fixed depth), terrain-following sigma coordinates, or hybrid schemes to resolve topography and stratification.[50] Sub-grid-scale processes, including turbulent mixing, mesoscale eddies, and air-sea fluxes, are parameterized due to resolution limits that prevent explicit simulation; for instance, eddy viscosities and diffusivities are applied to mimic unresolved lateral and vertical transports.[47] Initial spin-up integrates the model from rest under climatological forcing to achieve quasi-equilibrium circulation, often requiring decades of simulated time.[50] Key implementations include the Modular Ocean Model (MOM), a flexible hydrostatic primitive equation code supporting generalized vertical coordinates and mass-conserving formulations, developed at NOAA's Geophysical Fluid Dynamics Laboratory for process to planetary-scale studies.[51][52] The Parallel Ocean Program (POP) version 2 uses a z-level grid with an implicit free surface, optimized for high-performance computing in global simulations.[53] NEMO (Nucleus for European Modelling of the Ocean) provides a primitive 
equation framework configurable for regional or global domains, incorporating advanced options for biogeochemical tracers and sea-ice coupling.[54] These models have evolved since early global efforts in the late 1970s, with refinements in resolution and physics enabling hindcasts of observed circulations like the thermohaline conveyor.[55]
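Under the Boussinesq treatment described above, density enters the dynamics only through buoyancy. A minimal Python sketch using a linear equation of state illustrates this; the reference values and expansion coefficients below are round illustrative numbers, not those of any particular OGCM (operational models use far more accurate nonlinear equations of state):

```python
RHO0 = 1025.0        # reference seawater density (kg/m^3)
G = 9.81             # gravitational acceleration (m/s^2)
ALPHA = 2.0e-4       # thermal expansion coefficient (1/K)
BETA = 7.5e-4        # haline contraction coefficient (1/psu)
T0, S0 = 10.0, 35.0  # reference temperature (C) and salinity (psu)

def density_linear(T, S):
    """Linear equation of state:
    rho = rho0 * (1 - alpha*(T - T0) + beta*(S - S0))."""
    return RHO0 * (1.0 - ALPHA * (T - T0) + BETA * (S - S0))

def buoyancy(T, S):
    """Boussinesq buoyancy b = -g * (rho - rho0) / rho0: warm or fresh
    water is positively buoyant, cold or salty water negatively buoyant."""
    return -G * (density_linear(T, S) - RHO0) / RHO0

b_warm = buoyancy(15.0, 35.0)   # warmer than reference -> b > 0 (rises)
b_salty = buoyancy(10.0, 36.0)  # saltier than reference -> b < 0 (sinks)
```

The sign competition between the temperature and salinity terms is exactly what drives the thermohaline circulation that these models hindcast: surface cooling and brine rejection at high latitudes make water negatively buoyant, initiating deep convection.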
