Atmospheric model
In atmospheric science, an atmospheric model is a mathematical model constructed around the full set of primitive, dynamical equations which govern atmospheric motions. It can supplement these equations with parameterizations for turbulent diffusion, radiation, moist processes (clouds and precipitation), heat exchange, soil, vegetation, surface water, the kinematic effects of terrain, and convection. Most atmospheric models are numerical, i.e. they discretize the equations of motion. They can predict microscale phenomena such as tornadoes and boundary layer eddies, sub-microscale turbulent flow over buildings, as well as synoptic and global flows. The horizontal domain of a model is either global, covering the entire Earth (or other planetary body), or regional (limited-area), covering only part of the Earth. Atmospheric models also differ in how they compute vertical fluid motions; some types of models are thermotropic,[1] barotropic, hydrostatic, and non-hydrostatic. These model types are differentiated by their assumptions about the atmosphere, which must balance computational speed with the model's fidelity to the atmosphere it is simulating.
Forecasts are computed using mathematical equations for the physics and dynamics of the atmosphere. These equations are nonlinear and are impossible to solve exactly. Therefore, numerical methods obtain approximate solutions. Different models use different solution methods. Global models often use spectral methods for the horizontal dimensions and finite-difference methods for the vertical dimension, while regional models usually use finite-difference methods in all three dimensions. For specific locations, model output statistics use climate information, output from numerical weather prediction, and current surface weather observations to develop statistical relationships which account for model bias and resolution issues.
Types
Thermotropic
The main assumption made by the thermotropic model is that while the magnitude of the thermal wind may change, its direction does not change with respect to height; thus the baroclinicity in the atmosphere can be simulated using the 500 mb (15 inHg) and 1,000 mb (30 inHg) geopotential height surfaces and the average thermal wind between them.[2][3]
Barotropic
Barotropic models assume the atmosphere is nearly barotropic, which means that the direction and speed of the geostrophic wind are independent of height; in other words, there is no vertical wind shear of the geostrophic wind. It also implies that thickness contours (a proxy for temperature) are parallel to upper-level height contours. In this type of atmosphere, high and low pressure areas are centers of warm and cold temperature anomalies. Warm-core highs (such as the subtropical ridge and Bermuda-Azores high) and cold-core lows have strengthening winds with height, with the reverse true for cold-core highs (shallow arctic highs) and warm-core lows (such as tropical cyclones).[4] A barotropic model tries to solve a simplified form of atmospheric dynamics based on the assumption that the atmosphere is in geostrophic balance; that is, that the Rossby number of the air in the atmosphere is small.[5] If the assumption is made that the atmosphere is divergence-free, the curl of the Euler equations reduces to the barotropic vorticity equation. This latter equation can be solved over a single layer of the atmosphere. Since the atmosphere at a height of approximately 5.5 kilometres (3.4 mi) is mostly divergence-free, the barotropic model best approximates the state of the atmosphere at a geopotential height corresponding to that altitude, which corresponds to the atmosphere's 500 mb (15 inHg) pressure surface.[6]
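For reference, the single-layer equation such a model integrates can be written in its standard textbook form (stated here generically, not as the formulation of any particular operational model):

```latex
\frac{D}{Dt}\left(\zeta + f\right) = 0, \qquad \zeta = \nabla^{2}\psi, \qquad \mathbf{v} = \mathbf{k}\times\nabla\psi,
```

where \zeta is the relative vorticity, f the Coriolis parameter, \psi the streamfunction of the non-divergent wind \mathbf{v}, and D/Dt the derivative following the flow.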
Hydrostatic
Hydrostatic models filter out vertically propagating acoustic waves from the vertical momentum equation, which significantly increases the time step that can be used within the model's run. This is known as the hydrostatic approximation. Hydrostatic models use either pressure or sigma-pressure vertical coordinates. Pressure coordinates intersect topography while sigma coordinates follow the contour of the land. The hydrostatic assumption is reasonable as long as the horizontal grid spacing is not small; at fine horizontal scales, where vertical accelerations become significant, the assumption fails.
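Under this approximation, the full vertical momentum equation is replaced by the hydrostatic balance between the vertical pressure gradient force and gravity, in its standard form:

```latex
\frac{\partial p}{\partial z} = -\rho g,
```

where p is pressure, \rho density, z height, and g the gravitational acceleration.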
Nonhydrostatic
Models which use the entire vertical momentum equation are known as nonhydrostatic. A nonhydrostatic model can be solved anelastically, meaning it solves the complete continuity equation for air assuming it is incompressible, or elastically, meaning it solves the complete continuity equation for air and is fully compressible. Nonhydrostatic models use altitude or sigma-altitude for their vertical coordinates. Altitude coordinates can intersect land while sigma-altitude coordinates follow the contours of the land.[7]
History
The history of numerical weather prediction began in the 1920s through the efforts of Lewis Fry Richardson, who utilized procedures developed by Vilhelm Bjerknes.[8][9] It was not until the advent of the computer and computer simulation that computation time was reduced to less than the forecast period itself. ENIAC created the first computer forecasts in 1950,[6][10] and more powerful computers later increased the size of initial datasets and included more complicated versions of the equations of motion.[11] In 1966, West Germany and the United States began producing operational forecasts based on primitive-equation models, followed by the United Kingdom in 1972 and Australia in 1977.[8][12] The development of global forecasting models led to the first climate models.[13][14] The development of limited area (regional) models facilitated advances in forecasting the tracks of tropical cyclones as well as air quality in the 1970s and 1980s.[15][16]
Because the output of forecast models based on atmospheric dynamics requires corrections near ground level, model output statistics (MOS) were developed in the 1970s and 1980s for individual forecast points (locations).[17][18] Even with the increasing power of supercomputers, the forecast skill of numerical weather models only extends to about two weeks into the future, since the density and quality of observations—together with the chaotic nature of the partial differential equations used to calculate the forecast—introduce errors which double every five days.[19][20] The use of model ensemble forecasts since the 1990s helps to define the forecast uncertainty and extend weather forecasting farther into the future than otherwise possible.[21][22][23]
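The quoted doubling time corresponds to exponential error growth; under that assumption, an initial-condition error e_0 grows as

```latex
e(t) = e_0 \, 2^{\,t/\tau_d}, \qquad \tau_d \approx 5\ \text{days},
```

so an error roughly quadruples over 10 days and grows by a factor of about 7 over two weeks (a rough illustration of the quoted figure, not a result taken from the cited sources).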
Initialization
The atmosphere is a fluid. As such, the idea of numerical weather prediction is to sample the state of the fluid at a given time and use the equations of fluid dynamics and thermodynamics to estimate the state of the fluid at some time in the future. The process of entering observation data into the model to generate initial conditions is called initialization. On land, terrain maps available at resolutions down to 1 kilometer (0.6 mi) globally are used to help model atmospheric circulations within regions of rugged topography, in order to better depict features such as downslope winds, mountain waves and related cloudiness that affects incoming solar radiation.[24] One main source of input is observations from devices called radiosondes, carried by weather balloons, which rise through the troposphere and well into the stratosphere, measuring various atmospheric parameters and transmitting them to a fixed receiver.[25] Another main input is data from weather satellites. The World Meteorological Organization acts to standardize the instrumentation, observing practices and timing of these observations worldwide. Stations either report hourly in METAR reports,[26] or every six hours in SYNOP reports.[27] These observations are irregularly spaced, so they are processed by data assimilation and objective analysis methods, which perform quality control and obtain values at locations usable by the model's mathematical algorithms.[28] The data are then used in the model as the starting point for a forecast.[29]
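One of the simplest objective analysis ideas, successive correction in the style of Cressman, can be sketched as follows. This is only an illustration of spreading irregular observations onto a regular grid; the grid, station values, and influence radius are invented, and operational centres use far more sophisticated variational and ensemble assimilation schemes.

```python
import numpy as np

def cressman_analysis(grid_x, grid_y, obs_x, obs_y, obs_val, background, radius):
    """One pass of a Cressman-style successive-correction analysis.

    Each grid point is nudged toward nearby observations, with weights that
    decrease with distance and vanish beyond the influence radius.
    """
    analysis = background.copy()
    for j in range(grid_y.size):
        for i in range(grid_x.size):
            d2 = (obs_x - grid_x[i]) ** 2 + (obs_y - grid_y[j]) ** 2
            w = (radius**2 - d2) / (radius**2 + d2)
            w = np.where(d2 < radius**2, w, 0.0)   # zero weight outside the radius
            if w.sum() > 0:
                # Innovation: observation minus the background at this grid point
                # (a simplification; a real scheme interpolates the background
                # to the observation site first).
                innovation = obs_val - background[j, i]
                analysis[j, i] = background[j, i] + (w * innovation).sum() / w.sum()
    return analysis

# Toy example: three temperature observations analysed onto a small grid.
gx = np.linspace(0.0, 10.0, 5)
gy = np.linspace(0.0, 10.0, 5)
bg = np.full((5, 5), 15.0)                       # first-guess field (deg C)
ox = np.array([2.0, 5.0, 8.0])
oy = np.array([3.0, 5.0, 7.0])
ov = np.array([17.0, 14.0, 16.0])
field = cressman_analysis(gx, gy, ox, oy, ov, bg, radius=4.0)
print(field.round(2))
```

Each pass nudges the first-guess field toward nearby observations, with closer observations receiving larger weights.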
Commercial aircraft provide pilot reports along travel routes[30] and ship reports along shipping routes.[31] Commercial aircraft also submit automatic reports via the WMO's Aircraft Meteorological Data Relay (AMDAR) system, using VHF radio to ground stations or satellites. Research projects use reconnaissance aircraft to fly in and around weather systems of interest, such as tropical cyclones.[32][33] Reconnaissance aircraft are also flown over the open oceans during the cold season into systems which cause significant uncertainty in forecast guidance, or are expected to be of high impact from three to seven days into the future over the downstream continent.[34] Sea ice began to be initialized in forecast models in 1971.[35] Efforts to involve sea surface temperature in model initialization began in 1972 due to its role in modulating weather in higher latitudes of the Pacific.[36]
Computation
A model is a computer program that produces meteorological information for future times at given locations and altitudes. Within any model is a set of equations, known as the primitive equations, used to predict the future state of the atmosphere.[37] These equations are initialized from the analysis data and rates of change are determined. These rates of change predict the state of the atmosphere a short time into the future, with each time increment known as a time step. The equations are then applied to this new atmospheric state to find new rates of change, and these new rates of change predict the atmosphere at a yet further time into the future. Time stepping is repeated until the solution reaches the desired forecast time. The length of the time step chosen within the model is related to the distance between the points on the computational grid, and is chosen to maintain numerical stability.[38] Time steps for global models are on the order of tens of minutes,[39] while time steps for regional models are between one and four minutes.[40] The global models are run at varying times into the future. The Met Office Unified Model is run six days into the future,[41] the European Centre for Medium-Range Weather Forecasts model is run out to 10 days into the future,[42] while the Global Forecast System model run by the Environmental Modeling Center is run 16 days into the future.[43]
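The link between grid spacing, time step, and numerical stability can be illustrated with a one-dimensional advection problem standing in for the full primitive-equation system; the grid spacing, wind speed, and Courant number below are arbitrary example values, not any model's actual settings.

```python
import numpy as np

# Illustrative time stepping for 1-D advection du/dt = -c du/dx.
nx, dx = 200, 25_000.0          # 200 points, 25 km spacing (metres)
c = 50.0                        # advecting wind speed (m/s)
courant = 0.8                   # keep the Courant number below 1 for stability
dt = courant * dx / c           # CFL-limited time step (seconds), here 400 s

x = np.arange(nx) * dx
u = np.exp(-((x - 2.5e6) / 2.0e5) ** 2)   # initial bump to be advected

forecast_length = 6 * 3600.0    # integrate 6 hours ahead
nsteps = int(forecast_length / dt)
for _ in range(nsteps):
    # First-order upwind finite difference: simple and stable for c > 0.
    u[1:] = u[1:] - c * dt / dx * (u[1:] - u[:-1])
print(f"dt = {dt:.0f} s over {nsteps} steps")
```

Halving the grid spacing roughly halves the stable time step, which is one reason finer-resolution models are so much more expensive to run.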
The equations used are nonlinear partial differential equations which are impossible to solve exactly through analytical methods,[44] with the exception of a few idealized cases.[45] Therefore, numerical methods obtain approximate solutions. Different models use different solution methods: some global models use spectral methods for the horizontal dimensions and finite difference methods for the vertical dimension, while regional models and other global models usually use finite-difference methods in all three dimensions.[44] The visual output produced by a model solution is known as a prognostic chart, or prog.[46]
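The difference between the two horizontal discretizations mentioned above can be seen in a toy comparison of a finite-difference and a spectral (Fourier) estimate of a derivative on a periodic domain; the test field and domain length are made up for illustration.

```python
import numpy as np

# Two ways of estimating d/dx of a periodic field.
n, length = 64, 2 * np.pi
x = np.linspace(0.0, length, n, endpoint=False)
f = np.sin(3 * x)

# Centered finite difference (second-order accurate).
df_fd = (np.roll(f, -1) - np.roll(f, 1)) / (2 * length / n)

# Spectral derivative: multiply Fourier coefficients by i*k.
k = np.fft.fftfreq(n, d=length / n) * 2 * np.pi
df_sp = np.real(np.fft.ifft(1j * k * np.fft.fft(f)))

exact = 3 * np.cos(3 * x)
print(np.abs(df_fd - exact).max(), np.abs(df_sp - exact).max())
```

For a smooth field the spectral estimate is accurate to machine precision, while the finite-difference estimate carries a truncation error that shrinks as the grid is refined.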
Parameterization
Weather and climate model gridboxes have sides of between 5 kilometres (3.1 mi) and 300 kilometres (190 mi). A typical cumulus cloud has a scale of less than 1 kilometre (0.62 mi), and would require a grid even finer than this to be represented physically by the equations of fluid motion. Therefore, the processes that such clouds represent are parameterized, by schemes of varying sophistication. In the earliest models, if a column of air in a model gridbox was unstable (i.e., the bottom warmer than the top) then it would be overturned, and the air in that vertical column mixed. More sophisticated schemes add enhancements, recognizing that only some portions of the box might convect and that entrainment and other processes occur. Weather models that have gridboxes with sides between 5 kilometres (3.1 mi) and 25 kilometres (16 mi) can explicitly represent convective clouds, although they still need to parameterize cloud microphysics.[47] The formation of large-scale (stratus-type) clouds is more physically based: they form when the relative humidity reaches some prescribed value. Still, sub-grid-scale processes need to be taken into account. Rather than assuming that clouds form at 100% relative humidity, the cloud fraction can be related to a critical relative humidity of 70% for stratus-type clouds, and 80% or above for cumuliform clouds,[48] reflecting the sub-grid-scale variation that would occur in the real world.
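A minimal sketch of such a critical-relative-humidity cloud diagnostic is shown below, using the thresholds quoted above; the function name is invented and the quadratic shape of the ramp is just one common choice, with real schemes differing in the exact functional form.

```python
def cloud_fraction(rel_humidity, rh_crit):
    """Diagnose a sub-grid cloud fraction from grid-box relative humidity.

    Below the critical value the box is clear; the fraction then grows toward 1
    as relative humidity approaches saturation (one common diagnostic form).
    """
    if rel_humidity <= rh_crit:
        return 0.0
    return min(1.0, ((rel_humidity - rh_crit) / (1.0 - rh_crit)) ** 2)

# Example thresholds mentioned above: ~0.70 for stratus-type, ~0.80 for cumuliform.
print(cloud_fraction(0.85, rh_crit=0.70))   # partial stratus cover
print(cloud_fraction(0.85, rh_crit=0.80))   # smaller cumuliform fraction
```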
The amount of solar radiation reaching ground level in rugged terrain, or due to variable cloudiness, is parameterized as this process occurs on the molecular scale.[49] Also, the grid size of the models is large when compared to the actual size and roughness of clouds and topography. Sun angle as well as the impact of multiple cloud layers is taken into account.[50] Soil type, vegetation type, and soil moisture all determine how much radiation goes into warming and how much moisture is drawn up into the adjacent atmosphere. Thus, they are important to parameterize.[51]
Domains
The horizontal domain of a model is either global, covering the entire Earth, or regional, covering only part of the Earth. Regional models are also known as limited-area models, or LAMs. Regional models use finer grid spacing to explicitly resolve smaller-scale meteorological phenomena, since their smaller domain decreases computational demands. Regional models use a compatible global model to supply conditions at the edges of their domain. Uncertainty and errors within LAMs are introduced both by the global model used for the boundary conditions at the edge of the regional model and by the creation of the boundary conditions for the LAM itself.[52]
The vertical coordinate is handled in various ways. Some models, such as Richardson's 1922 model, use geometric height (z) as the vertical coordinate. Later models substituted the geometric coordinate with a pressure coordinate system, in which the geopotential heights of constant-pressure surfaces become dependent variables, greatly simplifying the primitive equations.[53] This follows since pressure decreases with height through the Earth's atmosphere.[54] The first model used for operational forecasts, the single-layer barotropic model, used a single pressure coordinate at the 500-millibar (15 inHg) level,[6] and thus was essentially two-dimensional. High-resolution models, also called mesoscale models, such as the Weather Research and Forecasting model tend to use normalized pressure coordinates referred to as sigma coordinates.[55]
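In their simplest form, sigma coordinates normalize pressure by the local surface pressure, so that the lowest coordinate surface follows the terrain (the basic definition; operational models typically use hybrid variants that relax toward pressure surfaces aloft):

```latex
\sigma = \frac{p}{p_s}, \qquad \sigma = 1 \ \text{at the surface}, \qquad \sigma \to 0 \ \text{at the model top},
```

where p_s is the surface pressure.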
Global versions
Some of the better known global numerical models are:
- GFS Global Forecast System (previously AVN) – developed by NOAA
- NOGAPS – developed by the US Navy to compare with the GFS
- GEM Global Environmental Multiscale Model – developed by the Meteorological Service of Canada (MSC)
- IFS Integrated Forecast System developed by the European Centre for Medium-Range Weather Forecasts
- UM – Unified Model developed by the UK Met Office
- ICON – developed by the German Weather Service (DWD) jointly with the Max Planck Institute (MPI) for Meteorology, Hamburg; DWD's global NWP model
- ARPEGE developed by the French Weather Service, Météo-France
- IGCM Intermediate General Circulation Model[41]
- PLAV Vorticity-divergence semi-Lagrangian global atmospheric model – developed by Hydrometeorological Centre of Russia
Regional versions
Some of the better known regional numerical models are:
- WRF The Weather Research and Forecasting model was developed cooperatively by NCEP, NCAR, and the meteorological research community. WRF has several configurations, including:
- WRF-NMM The WRF Nonhydrostatic Mesoscale Model is the primary short-term weather forecast model for the U.S., replacing the Eta model.
- WRF-ARW Advanced Research WRF developed primarily at the U.S. National Center for Atmospheric Research (NCAR)
- HARMONIE-Climate (HCLIM) is a limited-area climate model based on the HARMONIE model developed by a large consortium of European weather forecasting and research institutes. It is a model system that, like WRF, can be run in many configurations, including at high resolution with the non-hydrostatic AROME physics or at lower resolutions with hydrostatic physics based on the ALADIN physical schemes. It has mostly been used in Europe and the Arctic for climate studies, including 3 km downscaling over Scandinavia and studies of extreme weather events.
- RACMO was developed at the Netherlands Meteorological Institute, KNMI and is based on the dynamics of the HIRLAM model with physical schemes from the IFS
- RACMO2.3p2, developed at Utrecht University, is a polar version of the model that has been used in many studies to provide the surface mass balance of the polar ice sheets
- MAR (Modèle Atmosphérique Régional) is a regional climate model developed at the University of Grenoble in France and the University of Liège in Belgium.
- HIRHAM5 is a regional climate model developed at the Danish Meteorological Institute and the Alfred Wegener Institute in Potsdam. It is also based on the HIRLAM dynamics with physical schemes based on those in the ECHAM model. Like the RACMO model, HIRHAM has been used widely in many different parts of the world under the CORDEX scheme to provide regional climate projections. It also has a polar mode that has been used for polar ice sheet studies in Greenland and Antarctica.
- NAM The term North American Mesoscale model refers to whatever regional model NCEP operates over the North American domain. NCEP began using this designation system in January 2005. Between January 2005 and May 2006 the Eta model used this designation. Beginning in May 2006, NCEP began to use the WRF-NMM as the operational NAM.
- RAMS the Regional Atmospheric Modeling System developed at Colorado State University for numerical simulations of atmospheric meteorology and other environmental phenomena on scales from meters to hundreds of kilometers – now supported in the public domain
- MM5 The Fifth Generation Penn State/NCAR Mesoscale Model
- ARPS the Advanced Regional Prediction System, developed at the University of Oklahoma, is a comprehensive multi-scale nonhydrostatic simulation and prediction system that can be used for regional-scale weather prediction down to tornado-scale simulation and prediction. Advanced radar data assimilation for thunderstorm prediction is a key part of the system.
- HIRLAM High Resolution Limited Area Model, developed by a European NWP research consortium[56] co-funded by 10 European weather services. The mesoscale version of HIRLAM is known as HARMONIE and is developed in collaboration with Météo-France and the ALADIN consortium.
- GEM-LAM Global Environmental Multiscale Limited Area Model, the high resolution 2.5 km (1.6 mi) GEM by the Meteorological Service of Canada (MSC)
- ALADIN The high-resolution limited-area hydrostatic and non-hydrostatic model developed and operated by several European and North African countries under the leadership of Météo-France[41]
- COSMO The COSMO Model, formerly known as LM, aLMo or LAMI, is a limited-area non-hydrostatic model developed within the framework of the Consortium for Small-Scale Modelling (Germany, Switzerland, Italy, Greece, Poland, Romania, and Russia).[57]
- Meso-NH The Meso-NH Model[58] is a limited-area non-hydrostatic model developed jointly by the Centre National de Recherches Météorologiques and the Laboratoire d'Aérologie (France, Toulouse) since 1998.[59] It is applied to weather simulations from the mesoscale down to centimetric scales.
Model output statistics
Because forecast models based upon the equations for atmospheric dynamics do not perfectly determine weather conditions near the ground, statistical corrections were developed to attempt to resolve this problem. Statistical models were created based upon the three-dimensional fields produced by numerical weather models, surface observations, and the climatological conditions for specific locations. These statistical models are collectively referred to as model output statistics (MOS),[60] and were developed by the National Weather Service for their suite of weather forecasting models.[17] The United States Air Force developed its own set of MOS based upon their dynamical weather model by 1983.[18]
Model output statistics differ from the perfect prog technique, which assumes that the output of numerical weather prediction guidance is perfect.[61] MOS can correct for local effects that cannot be resolved by the model due to insufficient grid resolution, as well as model biases. Forecast parameters within MOS include maximum and minimum temperatures, percentage chance of rain within a several hour period, precipitation amount expected, chance that the precipitation will be frozen in nature, chance for thunderstorms, cloudiness, and surface winds.[62]
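The core of a MOS-style correction is a regression fitted between past model output and the observed weather at a station, which is then applied to new forecasts. The sketch below uses an ordinary least-squares fit with a handful of invented predictors and synthetic data purely for illustration; operational MOS equations use many more predictors and long multi-season training records.

```python
import numpy as np

# Toy MOS-style regression: fit observed 2 m temperature at one station as a
# linear function of model predictors, then apply it to a new model forecast.
rng = np.random.default_rng(0)
n = 200
model_t2m   = rng.normal(15.0, 8.0, n)     # raw model 2 m temperature
model_wind  = rng.normal(5.0, 2.0, n)      # model 10 m wind speed
model_cloud = rng.uniform(0.0, 1.0, n)     # model cloud cover fraction

# Synthetic "observations" with a bias and predictor effects, plus noise.
obs = 0.9 * model_t2m - 1.5 + 0.3 * model_wind - 2.0 * model_cloud \
      + rng.normal(0.0, 1.0, n)

# Least-squares fit: obs ~ a0 + a1*T + a2*wind + a3*cloud
X = np.column_stack([np.ones(n), model_t2m, model_wind, model_cloud])
coeffs, *_ = np.linalg.lstsq(X, obs, rcond=None)

# Apply the statistical correction to a new raw forecast.
new_forecast = np.array([1.0, 20.0, 3.0, 0.6])   # [1, T, wind, cloud]
print("MOS-corrected temperature:", new_forecast @ coeffs)
```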
Applications
Climate modeling
In 1956, Norman Phillips developed a mathematical model that realistically depicted monthly and seasonal patterns in the troposphere. This was the first successful climate model.[13][14] Several groups then began working to create general circulation models.[63] The first general circulation climate model combined oceanic and atmospheric processes and was developed in the late 1960s at the Geophysical Fluid Dynamics Laboratory, a component of the U.S. National Oceanic and Atmospheric Administration.[64]
By 1975, Manabe and Wetherald had developed a three-dimensional global climate model that gave a roughly accurate representation of the current climate. Doubling CO2 in the model's atmosphere gave a roughly 2 °C rise in global temperature.[65] Several other kinds of computer models gave similar results: it was impossible to make a model that gave something resembling the actual climate and not have the temperature rise when the CO2 concentration was increased.
By the early 1980s, the U.S. National Center for Atmospheric Research had developed the Community Atmosphere Model (CAM), which can be run by itself or as the atmospheric component of the Community Climate System Model. The latest update (version 3.1) of the standalone CAM was issued on 1 February 2006.[66][67][68] In 1986, efforts began to initialize and model soil and vegetation types, resulting in more realistic forecasts.[69] Coupled ocean-atmosphere climate models, such as the Hadley Centre for Climate Prediction and Research's HadCM3 model, are being used as inputs for climate change studies.[63] Meta-analyses of past climate change models show that they have generally been accurate, albeit conservative, under-predicting levels of warming.[70][71]
Limited area modeling
Air pollution forecasts depend on atmospheric models to provide fluid flow information for tracking the movement of pollutants.[72] In 1970, a private company in the U.S. developed the regional Urban Airshed Model (UAM), which was used to forecast the effects of air pollution and acid rain. In the mid- to late-1970s, the United States Environmental Protection Agency took over the development of the UAM and then used the results from a regional air pollution study to improve it. Although the UAM was developed for California, it was used elsewhere in North America, Europe, and Asia during the 1980s.[16]
The Movable Fine-Mesh model, which began operating in 1978, was the first tropical cyclone forecast model to be based on atmospheric dynamics.[15] Despite the constantly improving dynamical model guidance made possible by increasing computational power, it was not until the 1980s that numerical weather prediction (NWP) showed skill in forecasting the track of tropical cyclones. And it was not until the 1990s that NWP consistently outperformed statistical or simple dynamical models.[73] Predicting the intensity of tropical cyclones using NWP has also been challenging. As of 2009, dynamical guidance remained less skillful than statistical methods.[74]
References
[edit]- ^ "Thermotropic model - Glossary of Meteorology". glossarystaging.ametsoc.net. American Meteorological Society. Retrieved 24 October 2024.
- ^ Gates, W. Lawrence (August 1955). Results Of Numerical Forecasting With The Barotropic And Thermotropic Atmospheric Models. Hanscom Air Force Base: Air Force Cambridge Research Laboratories. Archived from the original on July 22, 2011.
- ^ Thompson, P. D.; W. Lawrence Gates (April 1956). "A Test of Numerical Prediction Methods Based on the Barotropic and Two-Parameter Baroclinic Models". Journal of Meteorology. 13 (2): 127–141. Bibcode:1956JAtS...13..127T. doi:10.1175/1520-0469(1956)013<0127:ATONPM>2.0.CO;2. ISSN 1520-0469.
- ^ Wallace, John M. & Peter V. Hobbs (1977). Atmospheric Science: An Introductory Survey. Academic Press, Inc. pp. 384–385. ISBN 978-0-12-732950-5.
- ^ Marshall, John; Plumb, R. Alan (2008). "Balanced flow". Atmosphere, ocean, and climate dynamics : an introductory text. Amsterdam: Elsevier Academic Press. pp. 109–12. ISBN 978-0-12-558691-7.
- ^ a b c Charney, Jule; Fjörtoft, Ragnar; von Neumann, John (November 1950). "Numerical Integration of the Barotropic Vorticity Equation". Tellus. 2 (4): 237–254. Bibcode:1950Tell....2..237C. doi:10.3402/tellusa.v2i4.8607.
- ^ Jacobson, Mark Zachary (2005). Fundamentals of atmospheric modeling. Cambridge University Press. pp. 138–143. ISBN 978-0-521-83970-9.
- ^ a b Lynch, Peter (2008-03-20). "The origins of computer weather prediction and climate modeling" (PDF). Journal of Computational Physics. 227 (7): 3431–44. Bibcode:2008JCoPh.227.3431L. doi:10.1016/j.jcp.2007.02.034. Archived from the original (PDF) on 2010-07-08. Retrieved 2010-12-23.
- ^ Lynch, Peter (2006). "Weather Prediction by Numerical Process". The Emergence of Numerical Weather Prediction. Cambridge University Press. pp. 1–27. ISBN 978-0-521-85729-1.
- ^ Cox, John D. (2002). Storm Watchers. John Wiley & Sons, Inc. p. 208. ISBN 978-0-471-38108-2.
- ^ Harper, Kristine; Uccellini, Louis W.; Kalnay, Eugenia; Carey, Kenneth; Morone, Lauren (May 2007). "2007: 50th Anniversary of Operational Numerical Weather Prediction". Bulletin of the American Meteorological Society. 88 (5): 639–650. Bibcode:2007BAMS...88..639H. doi:10.1175/BAMS-88-5-639.
- ^ Leslie, L.M.; Dietachmeyer, G.S. (December 1992). "Real-time limited area numerical weather prediction in Australia: a historical perspective" (PDF). Australian Meteorological Magazine. 41 (SP). Bureau of Meteorology: 61–77. Retrieved 2011-01-03.
- ^ a b Norman A. Phillips (April 1956). "The general circulation of the atmosphere: a numerical experiment" (PDF). Quarterly Journal of the Royal Meteorological Society. 82 (352): 123–154. Bibcode:1956QJRMS..82..123P. doi:10.1002/qj.49708235202.
- ^ a b John D. Cox (2002). Storm Watchers. John Wiley & Sons, Inc. p. 210. ISBN 978-0-471-38108-2.
- ^ a b Shuman, Frederick G. (September 1989). "History of Numerical Weather Prediction at the National Meteorological Center". Weather and Forecasting. 4 (3): 286–296. Bibcode:1989WtFor...4..286S. doi:10.1175/1520-0434(1989)004<0286:HONWPA>2.0.CO;2. ISSN 1520-0434.
- ^ a b Steyn, D. G. (1991). Air pollution modeling and its application VIII, Volume 8. Birkhäuser. pp. 241–242. ISBN 978-0-306-43828-8.
- ^ a b Harry Hughes (1976). Model output statistics forecast guidance. United States Air Force Environmental Technical Applications Center. pp. 1–16.
- ^ a b L. Best, D. L. & S. P. Pryor (1983). Air Weather Service Model Output Statistics Systems. Air Force Global Weather Central. pp. 1–90.
- ^ Cox, John D. (2002). Storm Watchers. John Wiley & Sons, Inc. pp. 222–224. ISBN 978-0-471-38108-2.
- ^ Weickmann, Klaus, Jeff Whitaker, Andres Roubicek and Catherine Smith (2001-12-01). The Use of Ensemble Forecasts to Produce Improved Medium Range (3–15 days) Weather Forecasts. Climate Diagnostics Center. Retrieved 2007-02-16.
- ^ Toth, Zoltan; Kalnay, Eugenia (December 1997). "Ensemble Forecasting at NCEP and the Breeding Method". Monthly Weather Review. 125 (12): 3297–3319. Bibcode:1997MWRv..125.3297T. CiteSeerX 10.1.1.324.3941. doi:10.1175/1520-0493(1997)125<3297:EFANAT>2.0.CO;2. ISSN 1520-0493.
- ^ "The Ensemble Prediction System (EPS)". ECMWF. Archived from the original on 25 January 2011. Retrieved 2011-01-05.
- ^ Molteni, F.; Buizza, R.; Palmer, T.N.; Petroliagis, T. (January 1996). "The ECMWF Ensemble Prediction System: Methodology and validation". Quarterly Journal of the Royal Meteorological Society. 122 (529): 73–119. Bibcode:1996QJRMS.122...73M. doi:10.1002/qj.49712252905.
- ^ Stensrud, David J. (2007). Parameterization schemes: keys to understanding numerical weather prediction models. Cambridge University Press. p. 56. ISBN 978-0-521-86540-1.
- ^ Gaffen, Dian J. (2007-06-07). "Radiosonde Observations and Their Use in SPARC-Related Investigations". Archived from the original on 2007-06-07.
- ^ National Climatic Data Center (2008-08-20). "Key to METAR Surface Weather Observations". National Oceanic and Atmospheric Administration. Archived from the original on 2002-11-01. Retrieved 2011-02-11.
- ^ "SYNOP Data Format (FM-12): Surface Synoptic Observations". UNISYS. 2008-05-25. Archived from the original on 2007-12-30.
- ^ Krishnamurti, T. N. (January 1995). "Numerical Weather Prediction". Annual Review of Fluid Mechanics. 27 (1): 195–225. Bibcode:1995AnRFM..27..195K. doi:10.1146/annurev.fl.27.010195.001211. S2CID 122230747.
- ^ "The WRF Variational Data Assimilation System (WRF-Var)". University Corporation for Atmospheric Research. 2007-08-14. Archived from the original on 2007-08-14.
- ^ Ballish, Bradley A.; V. Krishna Kumar (November 2008). "Systematic Differences in Aircraft and Radiosonde Temperatures" (PDF). Bulletin of the American Meteorological Society. 89 (11): 1689–1708. Bibcode:2008BAMS...89.1689B. doi:10.1175/2008BAMS2332.1. Retrieved 2011-02-16.
- ^ National Data Buoy Center (2009-01-28). "The WMO Voluntary Observing Ships (VOS) Scheme". National Oceanic and Atmospheric Administration. Retrieved 2011-02-15.
- ^ 403rd Wing (2011). "The Hurricane Hunters". 53rd Weather Reconnaissance Squadron. Archived from the original on 2012-05-30. Retrieved 2006-03-30.
- ^ Lee, Christopher (2007-10-08). "Drone, Sensors May Open Path Into Eye of Storm". The Washington Post. Retrieved 2008-02-22.
- ^ National Oceanic and Atmospheric Administration (2010-11-12). "NOAA Dispatches High-Tech Research Plane to Improve Winter Storm Forecasts". Retrieved 2010-12-22.
- ^ Stensrud, David J. (2007). Parameterization schemes: keys to understanding numerical weather prediction models. Cambridge University Press. p. 137. ISBN 978-0-521-86540-1.
- ^ Houghton, John Theodore (1985). The Global Climate. Cambridge University Press archive. pp. 49–50. ISBN 978-0-521-31256-1.
- ^ Pielke, Roger A. (2002). Mesoscale Meteorological Modeling. Academic Press. pp. 48–49. ISBN 978-0-12-554766-6.
- ^ Pielke, Roger A. (2002). Mesoscale Meteorological Modeling. Academic Press. pp. 285–287. ISBN 978-0-12-554766-6.
- ^ Sunderam, V. S.; G. Dick van Albada; Peter M. A. Sloot; J. J. Dongarra (2005). Computational Science – ICCS 2005: 5th International Conference, Atlanta, GA, USA, May 22–25, 2005, Proceedings, Part 1. Springer. p. 132. ISBN 978-3-540-26032-5.
- ^ Zwieflhofer, Walter; Norbert Kreitz; European Centre for Medium Range Weather Forecasts (2001). Developments in teracomputing: proceedings of the ninth ECMWF Workshop on the Use of High Performance Computing in Meteorology. World Scientific. p. 276. ISBN 978-981-02-4761-4.
- ^ a b c Chan, Johnny C. L. & Jeffrey D. Kepert (2010). Global Perspectives on Tropical Cyclones: From Science to Mitigation. World Scientific. pp. 295–301. ISBN 978-981-4293-47-1.
- ^ Holton, James R. (2004). An introduction to dynamic meteorology, Volume 1. Academic Press. p. 480. ISBN 978-0-12-354015-7.
- ^ Brown, Molly E. (2008). Famine early warning systems and remote sensing data. Springer. p. 121. ISBN 978-3-540-75367-4.
- ^ a b Strikwerda, John C. (2004). Finite difference schemes and partial differential equations. SIAM. pp. 165–170. ISBN 978-0-89871-567-5.
- ^ Pielke, Roger A. (2002). Mesoscale Meteorological Modeling. Academic Press. p. 65. ISBN 978-0-12-554766-6.
- ^ Ahrens, C. Donald (2008). Essentials of meteorology: an invitation to the atmosphere. Cengage Learning. p. 244. ISBN 978-0-495-11558-8.
- ^ Narita, Masami & Shiro Ohmori (2007-08-06). "3.7 Improving Precipitation Forecasts by the Operational Nonhydrostatic Mesoscale Model with the Kain-Fritsch Convective Parameterization and Cloud Microphysics" (PDF). 12th Conference on Mesoscale Processes. American Meteorological Society. Retrieved 2011-02-15.
- ^ Frierson, Dargan (2000-09-14). "The Diagnostic Cloud Parameterization Scheme" (PDF). University of Washington. pp. 4–5. Archived from the original (PDF) on 1 April 2011. Retrieved 2011-02-15.
- ^ Stensrud, David J. (2007). Parameterization schemes: keys to understanding numerical weather prediction models. Cambridge University Press. p. 6. ISBN 978-0-521-86540-1.
- ^ Melʹnikova, Irina N. & Alexander V. Vasilyev (2005). Short-wave solar radiation in the earth's atmosphere: calculation, observation, interpretation. Springer. pp. 226–228. ISBN 978-3-540-21452-6.
- ^ Stensrud, David J. (2007). Parameterization schemes: keys to understanding numerical weather prediction models. Cambridge University Press. pp. 12–14. ISBN 978-0-521-86540-1.
- ^ Warner, Thomas Tomkins (2010). Numerical Weather and Climate Prediction. Cambridge University Press. p. 259. ISBN 978-0-521-51389-0.
- ^ Lynch, Peter (2006). "The Fundamental Equations". The Emergence of Numerical Weather Prediction. Cambridge University Press. pp. 45–46. ISBN 978-0-521-85729-1.
- ^ Ahrens, C. Donald (2008). Essentials of meteorology: an invitation to the atmosphere. Cengage Learning. p. 10. ISBN 978-0-495-11558-8.
- ^ Janjic, Zavisa; Gall, Robert; Pyle, Matthew E. (February 2010). "Scientific Documentation for the NMM Solver" (PDF). National Center for Atmospheric Research. pp. 12–13. Archived from the original (PDF) on 2011-08-23. Retrieved 2011-01-03.
- ^ "HIRLAM". Archived from the original on April 30, 2018.
- ^ Consortium on Small Scale Modelling. Consortium for Small-scale Modeling. Retrieved on 2008-01-13.
- ^ Lac, C., Chaboureau, P., Masson, V., Pinty, P., Tulet, P., Escobar, J., ... & Aumond, P. (2018). Overview of the Meso-NH model version 5.4 and its applications. Geoscientific Model Development, 11, 1929-1969.
- ^ Lafore, Jean Philippe, et al. "The Meso-NH atmospheric simulation system. Part I: Adiabatic formulation and control simulations." Annales geophysicae. Vol. 16. No. 1. Copernicus GmbH, 1998.
- ^ Baum, Marsha L. (2007). When nature strikes: weather disasters and the law. Greenwood Publishing Group. p. 189. ISBN 978-0-275-22129-4.
- ^ Gultepe, Ismail (2007). Fog and boundary layer clouds: fog visibility and forecasting. Springer. p. 1144. ISBN 978-3-7643-8418-0.
- ^ Barry, Roger Graham & Richard J. Chorley (2003). Atmosphere, weather, and climate. Psychology Press. p. 172. ISBN 978-0-415-27171-4.
- ^ a b Peter Lynch (2006). "The ENIAC Integrations". The Emergence of Numerical Weather Prediction: Richardson's Dream. Cambridge University Press. p. 208. ISBN 978-0-521-85729-1. Retrieved 6 February 2018.
- ^ National Oceanic and Atmospheric Administration (22 May 2008). "The First Climate Model". Retrieved 8 January 2011.
- ^ Manabe S.; Wetherald R. T. (1975). "The Effects of Doubling the CO2 Concentration on the Climate of a General Circulation Model". Journal of the Atmospheric Sciences. 32 (3): 3–15. Bibcode:1975JAtS...32....3M. doi:10.1175/1520-0469(1975)032<0003:teodtc>2.0.co;2.
- ^ "CAM 3.1 Download". www.cesm.ucar.edu. Retrieved 2019-06-25.
- ^ William D. Collins; et al. (June 2004). "Description of the NCAR Community Atmosphere Model (CAM 3.0)" (PDF). University Corporation for Atmospheric Research. Archived from the original (PDF) on 26 September 2019. Retrieved 3 January 2011.
- ^ "CAM3.0 COMMUNITY ATMOSPHERE MODEL". University Corporation for Atmospheric Research. Retrieved 6 February 2018.
- ^ Yongkang Xue & Michael J. Fennessey (20 March 1996). "Impact of vegetation properties on U. S. summer weather prediction" (PDF). Journal of Geophysical Research. 101 (D3): 7419. Bibcode:1996JGR...101.7419X. CiteSeerX 10.1.1.453.551. doi:10.1029/95JD02169. Archived from the original (PDF) on 10 July 2010. Retrieved 6 January 2011.
- ^ Hausfather, Zeke; Drake, Henri F.; Abbott, Tristan; Schmidt, Gavin A. (16 January 2020). "Evaluating the Performance of Past Climate Model Projections". Geophysical Research Letters. 47 (1). doi:10.1029/2019GL085378. ISSN 0094-8276. Retrieved 28 September 2025.
- ^ Carvalho, D.; Rafael, S.; Monteiro, A.; Rodrigues, V.; Lopes, M.; Rocha, A. (14 July 2022). "How well have CMIP3, CMIP5 and CMIP6 future climate projections portrayed the recently observed warming" (PDF). Scientific Reports. 12 (1). Springer Science and Business Media LLC. doi:10.1038/s41598-022-16264-6. ISSN 2045-2322. Retrieved 28 September 2025.
- ^ Alexander Baklanov; Alix Rasmussen; Barbara Fay; Erik Berge; Sandro Finardi (September 2002). "Potential and Shortcomings of Numerical Weather Prediction Models in Providing Meteorological Data for Urban Air Pollution Forecasting". Water, Air, & Soil Pollution: Focus. 2 (5): 43–60. doi:10.1023/A:1021394126149. S2CID 94747027.
- ^ James Franklin (20 April 2010). "National Hurricane Center Forecast Verification". National Hurricane Center. Archived from the original on 2 January 2011. Retrieved 2 January 2011.
- ^ Edward N. Rappaport; James L. Franklin; Lixion A. Avila; Stephen R. Baig; John L. Beven II; Eric S. Blake; Christopher A. Burr; Jiann-Gwo Jiing; Christopher A. Juckins; Richard D. Knabb; Christopher W. Landsea; Michelle Mainelli; Max Mayfield; Colin J. McAdie; Richard J. Pasch; Christopher Sisko; Stacy R. Stewart; Ahsha N. Tribble (April 2009). "Advances and Challenges at the National Hurricane Center". Weather and Forecasting. 24 (2): 395–419. Bibcode:2009WtFor..24..395R. CiteSeerX 10.1.1.207.4667. doi:10.1175/2008WAF2222128.1.
Further reading
- Roulstone, Ian; Norbury, John (2013). Invisible in the Storm: the role of mathematics in understanding weather. Princeton: Princeton University Press. ISBN 978-0-691-15272-1.
Definition and Fundamentals
Governing Equations and Physical Principles
Atmospheric models derive their governing equations from the conservation laws of mass, momentum, and energy, treating the atmosphere as a stratified, rotating, compressible fluid subject to gravitational forces and thermodynamic processes. These laws, rooted in Newtonian mechanics and the first law of thermodynamics, yield the primitive equations, a set of nonlinear partial differential equations that approximate large-scale atmospheric dynamics without deriving secondary variables like vorticity or geopotential. The primitive equations consist of prognostic equations for horizontal velocities, temperature (or potential temperature), and moisture, alongside diagnostic relations for pressure and density, enabling time-dependent simulations of atmospheric evolution.[10][11]
The momentum conservation equations, analogous to the Navier-Stokes equations for fluids, describe the rate of change of velocity components following an air parcel. In spherical coordinates on a rotating Earth, the horizontal momentum equations for zonal (u) and meridional (v) winds include terms for local acceleration, advection, the pressure gradient force (-\frac{1}{\rho} \nabla p), the Coriolis force (f \times \mathbf{v}, where f = 2 \Omega \sin \phi is the Coriolis parameter), and viscous dissipation, while gravity is balanced vertically. Vertical momentum is often neglected under the hydrostatic approximation, valid for synoptic scales where vertical accelerations (order 10^{-2} m/s²) are dwarfed by buoyancy and gravity (order 10 m/s²), yielding \frac{\partial p}{\partial z} = -\rho g. This approximation holds for horizontal scales exceeding 10 km and reduces computational demands in global models, though it fails for convective scales below 1 km where nonhydrostatic terms like w \frac{\partial w}{\partial z} become significant.[12][13][14]
Mass conservation is expressed via the continuity equation, \frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{v}) = 0, which in pressure coordinates (common for models due to fixed upper boundaries) becomes \nabla \cdot \mathbf{v} + \frac{\partial \omega}{\partial p} = 0, where \omega = \frac{dp}{dt} approximates vertical motion. Energy conservation follows the thermodynamic equation, \frac{d \theta}{dt} = \frac{\theta}{T} \left( Q + \frac{\kappa T}{p} \nabla \cdot \mathbf{v} \right), where \theta is potential temperature, Q represents diabatic heating (e.g., from radiation and latent heat release), and \kappa = R/c_p \approx 0.286 for dry air; this couples dynamics to thermodynamics via buoyancy. The ideal gas law, p = \rho R T (with R the specific gas constant), closes the system, while a water vapor continuity equation, \frac{d q}{dt} = C - P (q specific humidity, C condensation, P precipitation), accounts for moist processes essential for cloud and precipitation simulation. Friction and subgrid turbulence are parameterized, as direct resolution exceeds current computational limits.[15][16][10]
Scales, Resolutions, and Approximations
Atmospheric phenomena encompass a vast range of spatial scales, from planetary circulations exceeding 10,000 km to microscale turbulence below 1 km, with temporal scales varying from seconds to years. Global models primarily resolve synoptic scales (1,000–5,000 km horizontally) and larger, capturing mid-latitude cyclones and jet streams, while regional and convection-permitting models target mesoscales (10–1,000 km) to simulate thunderstorms and fronts. Microscale processes, such as individual cloud droplets or boundary-layer eddies, remain unresolved in operational models and require parameterization.[17]
Model resolution refers to the grid spacing that discretizes the governing equations, determining the smallest explicitly resolvable features. Horizontal resolutions in global numerical weather prediction models have improved to 5–25 km by 2023, with the European Centre for Medium-Range Weather Forecasts (ECMWF) operational high-resolution forecast using a 9 km grid on a cubic octahedral projection, and its ensemble at 18 km. The U.S. Global Forecast System (GFS) employs a 13 km grid for forecasts up to 0.25° latitude-longitude equivalents in some products. Effective resolution, accounting for numerical smoothing, is typically 3–7 times coarser than nominal grid spacing, limiting fidelity for sub-grid phenomena. Vertical resolution involves 40–137 hybrid levels, with finer spacing near the surface (e.g., 50–100 m in the boundary layer) and coarser aloft up to 0.1 hPa. Regional models achieve 1–5 km horizontal spacing for mesoscale forecasting, enabling explicit resolution of deep convection without cumulus parameterizations.[18][19][20]
To manage computational demands, atmospheric models employ approximations that filter or simplify dynamics across scales. The hydrostatic approximation assumes negligible vertical momentum acceleration, enforcing local balance between gravity and pressure gradients (∂p/∂z = -ρg), which holds for shallow flows where horizontal scales greatly exceed vertical ones (aspect ratio ≪ 1, typically valid above ~10 km horizontal resolution). This reduces the prognostic equations from three to two effective dimensions, enabling efficient global simulations but introducing errors in vertically accelerated flows like deep convection or orographic gravity waves. Nonhydrostatic models relax this by retaining full vertical momentum, essential for kilometer-scale resolutions where vertical velocities approach 10–50 m/s.[21][22]
Additional approximations include the anelastic formulation, which filters acoustic waves by assuming density variations are small relative to perturbations, and semi-implicit time-stepping schemes that treat fast linear terms implicitly for stability with time steps of 10–30 minutes. Subgrid-scale processes, unresolved due to finite resolution, are parameterized via empirical closures for turbulence (e.g., eddy diffusivities), cloud microphysics, and radiation, introducing uncertainties that scale inversely with resolution. Scale-selective filtering, such as divergence damping, suppresses computational modes and small-scale noise. These approximations preserve causal fidelity for resolved dynamics but necessitate validation against observations, as coarser resolutions amplify parameterization errors in energy-containing scales.[23][24]
Model Types and Classifications
Barotropic Models
Barotropic models constitute a foundational simplification in atmospheric dynamics, assuming that atmospheric density varies solely as a function of pressure, such that isobaric surfaces coincide with isosteric surfaces and baroclinicity, arising from horizontal temperature gradients, is neglected. This approximation reduces the three-dimensional primitive equations to a two-dimensional framework, often focusing on the evolution of streamfunction or geopotential height at a single mid-tropospheric level, such as 500 hPa, where geostrophic balance predominates. Under these conditions, the models solve the barotropic vorticity equation (BVE), which describes the conservation and advection of absolute vorticity (relative vorticity \zeta plus planetary vorticity f) by the non-divergent wind field, \frac{D}{Dt}(\zeta + f) = 0, where \frac{D}{Dt} is the material derivative. This equation captures large-scale, quasi-geostrophic motions like Rossby waves but excludes vertical variations in wind shear or thermodynamic processes.[25][26]
The BVE underpins equivalent barotropic models, which approximate the atmosphere's horizontal flow as uniform with height, effectively representing motions at a characteristic level while incorporating \beta-effects (the meridional gradient of planetary vorticity, \beta = \partial f / \partial y) for wave propagation. These models employ spectral or finite-difference methods for numerical integration on a sphere or beta-plane, enabling simulations of planetary-scale instabilities and teleconnections without resolving baroclinic energy conversion. Limitations include the inability to generate new vorticity sources or convert potential to kinetic energy, rendering them unsuitable for phenomena driven by latent heat release or frontal dynamics; forecast skill degrades beyond 24-48 hours due to unrepresented baroclinic amplification of errors.[27][28][25]
Historically, barotropic models marked the inception of numerical weather prediction (NWP). In April 1950, Jule Charney, Ragnar Fjörtoft, and John von Neumann executed the first viable 24-hour forecast of a North Atlantic cyclone using the quasi-geostrophic BVE on the ENIAC computer, initializing with hand-analyzed 500 hPa height fields and achieving errors comparable to human prognoses after filtering small-scale noise. This success validated computational NWP, transitioning from empirical synoptic methods to dynamical integration, though early implementations assumed non-divergence and ignored diabatic effects, restricting applicability to extratropical mid-latitude flows. By the mid-1950s, barotropic integrations informed operational guidance at centers like the Joint Numerical Weather Prediction Unit, but were supplanted by multilevel baroclinic models as computing power grew.[26][29][30]
In contemporary research, barotropic frameworks persist as idealized tools for isolating dynamical mechanisms, such as eddy-momentum fluxes or annular mode variability, often implemented in spectral codes for global simulations on spherical geometry. For instance, GFDL's barotropic model evolves non-divergent flow under forced-dissipative conditions to study upscale energy cascades, while empirical variants predict streamfunction tendencies from lagged fields to probe low-frequency atmospheric variability. These applications underscore their utility in causal analysis of barotropic decay phases in life cycles, free from confounding subgrid parameterizations.[27][31][32]
Hydrostatic Models
Hydrostatic models approximate atmospheric dynamics by assuming hydrostatic balance in the vertical momentum equation, where the pressure gradient force exactly counters gravitational force, expressed as \frac{\partial p}{\partial z} = -\rho g. This neglects vertical accelerations (\frac{Dw}{Dt}), valid for flows where horizontal scales greatly exceed vertical scales, with aspect ratios typically below 0.1, as in synoptic-scale weather systems spanning hundreds of kilometers horizontally but only kilometers vertically. The approximation simplifies the primitive equations by diagnosing vertical velocity from the continuity equation rather than prognosticating it, reducing computational demands and enabling efficient simulations on coarser grids.[13][33][21]
In practice, hydrostatic models solve the horizontal momentum, thermodynamic, and continuity equations alongside the hydrostatic relation, often using sigma or hybrid vertical coordinates to handle terrain. They excel in global circulation models (GCMs) and medium-resolution numerical weather prediction (NWP), such as early versions of the ECMWF Integrated Forecasting System (IFS) or NOAA's Global Forecast System (GFS), where vertical motions remain sub-grid and of order centimeters per second. For instance, these models accurately capture mid-latitude cyclones and jet stream evolution, with errors in pressure fields minimized under hydrostatic conditions, as vertical accelerations contribute less than 1% to the force balance in large-scale flows. Parameterizations for convection and turbulence compensate for unresolved vertical processes, maintaining realism in forecasts up to 10-day ranges.[34][35][21]
Limitations arise in regimes with significant vertical accelerations, such as deep moist convection or orographic flows, where nonhydrostatic effects generate gravity waves with wavelengths under 10 km. Hydrostatic models filter these, potentially underestimating precipitation in thunderstorms or downslope winds, prompting transitions to nonhydrostatic frameworks in high-resolution applications below 5 km grid spacing. Despite this, hydrostatic cores persist in operational models for computational efficiency; for example, MPAS hydrostatic configurations match nonhydrostatic performance in baroclinic wave tests at resolutions above 10 km, with runtime savings of 20-30%. Ongoing assessments confirm their adequacy for climate simulations, where global mean circulations dominate over fine-scale transients.[36][21][37]
Nonhydrostatic Models
Nonhydrostatic models explicitly account for vertical accelerations in the momentum equations, rejecting the hydrostatic approximation that vertical motion derivatives are negligible relative to buoyancy forces. This formulation derives from the full primitive equations in compressible or anelastic forms, enabling resolution of three-dimensional dynamical processes where the flow departs significantly from balance, such as in convective updrafts exceeding 10 m/s or orographic flows with Froude numbers greater than 1.[38][39]
Such models prove indispensable for mesoscale and cloud-resolving simulations at horizontal grid spacings of 1-5 km, where hydrostatic assumptions fail to capture vertical propagation of gravity waves or explicit moist convection without parameterization. For instance, in supercell thunderstorms, nonhydrostatic dynamics permit simulation of rotating updrafts and downdrafts with realistic aspect ratios, improving precipitation forecasts by up to 20-30% in verification studies compared to hydrostatic counterparts.[40][41]
Development traces to mesoscale efforts in the 1970s, evolving into operational systems by the 1990s; the Penn State/NCAR Mesoscale Model version 5 (MM5), released in 1993, introduced nonhydrostatic options for limited-area domains, while the Weather Research and Forecasting (WRF) model, operationalized around 2000 by NCAR and NCEP, employs a conservative, time-split Arakawa-C grid for fully compressible flows.[42][40] The Japan Meteorological Agency implemented its nonhydrostatic mesoscale model in 2004 for 5-km resolution forecasts, enhancing typhoon track predictions.[43]
Numerical frameworks address the computational burden of acoustic waves, which propagate at roughly 300 m/s, via split-explicit schemes that advance slow modes (advection, buoyancy) with larger time steps (e.g., 10-20 s) and treat fast modes implicitly, or via anelastic filtering to eliminate sound waves entirely for sub-10-km scales. Global extensions, like NICAM since 2005, leverage icosahedral grids for uniform resolution up to 3.5 km, tested on supercomputers for convection-permitting climate simulations.[44][45]
Despite superior fidelity for vertical vorticity and divergence fields, nonhydrostatic models demand 2-5 times more computation than hydrostatic equivalents due to finer vertical levels (often 50+ eta levels) and stability constraints, restricting them primarily to regional domains with lateral boundary nesting. They also require robust initialization to mitigate initial imbalances, as unfiltered acoustic noise can amplify errors exponentially without damping.[46][47]
Operational examples include the U.S. NAM system, using the WRF-NMM core for 12-km forecasts since 2006, and ECMWF's ongoing transition to nonhydrostatic Integrated Forecasting System upgrades by 2025 for kilometer-scale global runs.[48][34]
Historical Development
Pre-Numerical Era Foundations (Pre-1950)
The foundations of atmospheric modeling prior to 1950 were rooted in the recognition that atmospheric phenomena could be described deterministically through the laws of physics, particularly fluid dynamics, thermodynamics, and hydrostatics, rather than empirical pattern-matching or qualitative analogies. In the late 19th century, meteorologists such as Cleveland Abbe emphasized that weather prediction required solving the differential equations governing air motion, heat transfer, and mass conservation, drawing from advances in continuum mechanics.[49] This shifted meteorology toward mathematical formalization, though practical implementation lagged due to computational limitations.
Vilhelm Bjerknes, a Norwegian physicist, formalized these ideas in his 1904 paper "On the Application of Hydrodynamics to the Theory of the Elementary Parts of Meteorology," proposing a systematic framework for weather prediction via numerical integration of governing equations.[50][51] Bjerknes outlined a model requiring simultaneous solution of seven partial differential equations: three for horizontal and vertical momentum (derived from Newton's laws adapted to rotating spherical coordinates with Coriolis effects), one for mass continuity, one for energy conservation (including adiabatic processes and latent heat), one for water vapor continuity, and the ideal gas law for state relations.[52] These equations, collectively known as the primitive equations, formed the core of later atmospheric models, emphasizing causal chains from initial conditions to future states without ad hoc parameterizations. Bjerknes advocated for observational networks to provide initial data and iterative graphical or numerical integration, though he focused on theoretical validation over computation.[53]
The first practical attempt to apply this framework came from Lewis Fry Richardson in his 1922 monograph Weather Prediction by Numerical Process, where he manually computed a six-hour forecast for a region in western Europe using data from April 20, 1910.[54][55] Richardson discretized Bjerknes' equations on a finite-difference grid, incorporating approximations for pressure tendency via the barotropic vorticity equation and hydrostatic balance, but his results produced unrealistically rapid pressure changes (e.g., a 145 hPa rise in six hours at one grid point), attributed to errors in initial divergence fields and the inherent sensitivity of nonlinear equations to small inaccuracies, foreshadowing chaos theory.[56] To address scalability, Richardson envisioned a "forecast factory" employing thousands of human "computers" for parallel arithmetic, highlighting the era's reliance on manual methods over automated ones.[57] Despite the failure, his work validated the conceptual soundness of integrating hydrodynamic equations forward in time, provided initial conditions were accurate and computations swift.[49]
Between the 1920s and 1940s, theoretical refinements built on these foundations without widespread numerical implementation, focusing on simplified analytical models amenable to pencil-and-paper solutions.
Researchers such as Carl-Gustaf Rossby developed single-layer barotropic frameworks in the late 1930s, in which absolute vorticity is conserved following the flow, enabling predictions of planetary-scale wave propagation (Rossby waves) observed in upper-air charts; the related equivalent-barotropic model treated the troposphere as a single representative level.[50] These quasi-geostrophic approximations, balancing geostrophic winds with ageostrophic corrections, laid the groundwork for later barotropic and baroclinic models by reducing the full primitive equations to tractable forms under assumptions of hydrostatic equilibrium and small Rossby number.[58] Efforts during World War II, such as upper-air modeling at the University of Chicago under Rossby, emphasized empirical validation of theoretical constructs against radiosonde data, though predictions remained qualitative or short-range in the absence of high-speed computation.[58] By 1949, frameworks such as Eady's baroclinic instability model analytically explained cyclone development from infinitesimal perturbations, confirming the predictive power of the linearized equations for synoptic scales.[59] These pre-numerical efforts established the physical principles (conservation laws, scale separations, and boundary conditions) that numerical models would later operationalize, underscoring the gap between theoretical determinism and practical feasibility.[4]
Emergence of Numerical Weather Prediction (1950s-1960s)
The emergence of numerical weather prediction (NWP) in the 1950s marked a pivotal shift from subjective manual forecasting to computational methods grounded in the governing equations of atmospheric dynamics. Following World War II, John von Neumann initiated the Meteorology Project at the Institute for Advanced Study in Princeton, aiming to leverage electronic computers for weather simulation. Jule Charney, leading the theoretical effort, developed a filtered quasi-geostrophic framework that simplified the nonlinear hydrodynamic equations, focusing on large-scale mid-tropospheric flow while filtering out computationally infeasible high-frequency gravity waves.[50][60] In April 1950, Charney's team executed the first successful numerical forecasts on the ENIAC computer, solving the barotropic vorticity equation for 24-hour predictions over North America. These retrospective forecasts, initialized from observed data, demonstrated skill comparable to or exceeding that of human forecasters, validating the approach despite requiring about 24 hours of computation per forecast because of ENIAC's limited speed and the need for manual setup with punched cards. The results were published in November 1950, confirming that digital computation could effectively integrate the atmospheric equations forward in time.[61][62][60]
By 1954, advances in computing enabled operational NWP. Sweden pioneered the first routine operational forecasts using the BESK computer at the Swedish Meteorological and Hydrological Institute, applying a barotropic model derived from Charney's work to predict large-scale flow over the North Atlantic three times weekly. In the United States, the Joint Numerical Weather Prediction Unit (JNWPU) was established in 1955 by the Weather Bureau, Navy, and Air Force, using the IBM 701 to produce multi-level baroclinic forecasts with the quasi-geostrophic model and achieving real-time 24-hour predictions by the late 1950s.[50][63][64] During the 1960s, NWP expanded with faster computers such as the CDC 6600, enabling primitive-equation models that relaxed the quasi-geostrophic approximations for better handling of frontal systems and jet streams. The U.S. transitioned to an operational six-layer primitive-equation model in 1966, improving forecast accuracy for synoptic-scale features, while international efforts, including in Japan and the UK, adopted similar baroclinic frameworks. These developments laid the groundwork for global circulation models, emphasizing empirical verification against observations to refine parameterizations of friction and heating.[50][65]
Development of General Circulation Models (1970s-1990s)
In the 1970s, general circulation models (GCMs) advanced from rudimentary simulations to more comprehensive representations of atmospheric dynamics, incorporating hydrologic cycles, radiation schemes, and rudimentary ocean coupling. At the Geophysical Fluid Dynamics Laboratory (GFDL), Syukuro Manabe and colleagues published a GCM simulation including a hydrologic cycle in 1970, demonstrating realistic precipitation patterns driven by moist convection and large-scale circulation.[66] By 1975, Manabe and Richard Wetherald extended this work to assess equilibrium climate sensitivity, using a nine-level atmospheric model with a simplified ocean and predicting a global surface warming of approximately 2.3°C for doubled CO2 concentrations, alongside amplified Arctic warming and increased tropospheric humidity.[67] Concurrently, the UK Met Office implemented its first GCM in 1972, building on developments since 1963, which emphasized synoptic-scale features and marked a shift toward operational climate simulations.[68]
These models typically operated on coarse grids (e.g., 1000 km horizontal spacing) with finite-difference schemes, relying on parameterizations for subgrid processes such as cumulus convection, since direct resolution of small-scale phenomena remained computationally infeasible.[4] The late 1970s saw validation of GCM utility through expert assessments, such as the 1979 Charney Report by the National Academy of Sciences, which analyzed multiple models and affirmed their capability to simulate observed climate features while estimating CO2-doubling sensitivity in the 1.5–4.5°C range, attributing the uncertainty primarily to cloud feedbacks.[67]
Into the 1980s, computational advances enabled spectral transform methods, which represent variables via spherical harmonics for efficient global integration and reduce the polar filtering issues inherent in grid-point models; GFDL adopted this approach, enhancing the representation of planetary waves and topography.[4] The National Center for Atmospheric Research (NCAR) released the first version of its Community Climate Model (CCM) in 1983, a spectral model with improved radiation and boundary layer parameterizations, distributed openly to foster community-wide refinement.[68] Coupling efforts progressed, as in Manabe and Kirk Bryan's 1975 work simulating centuries-long ocean-atmosphere interactions without ad hoc flux corrections, revealing emergent phenomena such as mid-latitude deserts and thermohaline circulation stability.[67] James Hansen's NASA Goddard GCM, in use by the mid-1980s, incorporated transient simulations with ocean heat diffusion, projecting delayed warming due to thermal inertia.[67] By the 1990s, GCMs emphasized intermodel comparison and refined subgrid parameterizations amid growing IPCC assessments.
The IPCC's First Assessment Report in 1990 synthesized output from several GCMs, including GFDL and NCAR variants, to evaluate radiative forcings and regional patterns, though it noted persistent biases in tropical precipitation and cloud representation.[69] NCAR's CCM3 (1996) introduced advanced moist physics and aerosol schemes at T42 and T63 resolutions, improving simulation of the annual cycle and El Niño variability.[70] The Coupled Model Intercomparison Project (CMIP), initiated in 1995, standardized experiments across global modeling centers, revealing systematic errors such as excessive trade-wind biases while confirming robust signals in zonal-mean temperature responses.[68] Innovations included reduced reliance on flux adjustments in coupled systems and the incorporation of volcanic aerosols, validated against post-Pinatubo cooling observations.[67] Horizontal resolutions reached roughly 200 km equivalents, with vertical levels expanding to 19–30, enabling better representation of stratosphere-troposphere interactions, though computational limits precluded explicit treatment of convection until later decades.[4] These developments solidified GCMs as tools for attributing and projecting climate responses, grounded in the primitive equations and tuned empirically against reanalyses.
Modern Computational Advances (2000s-Present)
Since the early 2000s, exponential increases in computational power have enabled atmospheric models to achieve finer spatial resolutions and incorporate more explicit physical processes, reducing reliance on parameterizations for subgrid phenomena. For instance, the European Centre for Medium-Range Weather Forecasts (ECMWF) upgraded its Integrated Forecasting System (IFS) high-resolution deterministic forecasts from 25 km grid spacing in 2006 to roughly 16 km by 2010, and further to 9 km in March 2016 with Cycle 41r2, which introduced a cubic octahedral grid that reduced costs by about 25% relative to equivalent spectral truncations.[18][71] These enhancements improved the representation of mesoscale features such as fronts and cyclones, with ensemble forecasts similarly refined from 50 km to 18 km over the same period.[71] NOAA's Global Forecast System (GFS) paralleled these developments by adopting the Finite-Volume Cubed-Sphere (FV3) dynamical core in operational use from 2019 onward, replacing the prior spectral core to better handle variable-resolution grids and nonhydrostatic dynamics on modern parallel architectures.[72][73] This transition supported horizontal resolutions of approximately 13-22 km and vertical levels expanding from 64 to 127 in upgrades by 2021, coupled with wave models for enhanced surface interactions.[74] Such scalable finite-volume methods facilitated simulations on petascale supercomputers, improving forecast skill for tropical cyclones and mid-latitude storms.[75]
The 2010s introduced global cloud-resolving models (GCRMs) operating at 1-5 km resolutions to explicitly resolve convective clouds without parameterization, as demonstrated by systems such as NICAM (Nonhydrostatic Icosahedral Atmospheric Model) and MPAS (Model for Prediction Across Scales), which solve nonhydrostatic equations on icosahedral or unstructured Voronoi grids.[76] Projects such as DYAMOND (DYnamics of the Atmospheric general circulation Modeled On Non-hydrostatic Domains) in the late 2010s compared these models' fidelity in reproducing observed tropical variability using high-performance computing resources exceeding 10 petaflops.[77] Emerging in the 2020s, machine learning (ML) integrations have accelerated computations by emulating complex parameterizations; for example, neural networks trained on cloud-resolving simulations have parameterized subgrid convection in community atmosphere models, achieving stable multi-year integrations with reduced error in radiative fluxes compared to traditional schemes.[78] These data-driven approaches, combined with exascale systems explored since around 2020, promise further scalability for probabilistic ensemble predictions and coupled Earth system modeling.[79]
Core Components and Methods
Initialization and Data Assimilation
Initialization in atmospheric models involves establishing the starting state of variables such as temperature, pressure, humidity, and wind so that it approximates the observed atmosphere while maintaining dynamical balance, minimizing spurious oscillations such as gravity waves during forecast integration.[80] Imbalanced initial conditions historically led to rapid error growth in early numerical weather prediction (NWP) systems, prompting the development of techniques such as nonlinear normal mode initialization (NNMI), which removes high-frequency modes by iteratively adjusting fields to satisfy the model equations.[81] Dynamic initialization, introduced in the 1970s, assimilates observations by adding forcing terms to the model equations, gradually nudging the model state toward the data without abrupt shocks.[82]
Data assimilation refines these initial conditions by statistically combining sparse, noisy observations (from satellites, radiosondes, radar, and surface stations) with short-range model forecasts (background states) to produce an optimal analysis that minimizes estimation error under uncertainty.[83] The process exploits consistency constraints from the model dynamics and observation operators, addressing the ill-posed inverse problem of inferring a high-dimensional atmospheric state from limited measurements.[84] Key challenges include handling observation errors, model biases, and non-Gaussian distributions, with methods evolving from optimal interpolation in the 1970s to advanced variational and ensemble approaches.[85]
Three-dimensional variational (3D-Var) data assimilation minimizes a cost function penalizing departures of the analysis from both the background and the observations at a single analysis time, assuming stationary background error covariances derived from statistics or ensembles; it has been foundational in operational systems such as those at the National Centers for Environmental Prediction (NCEP).[86] Four-dimensional variational (4D-Var) extends this over a 6-12 hour assimilation window, incorporating time-dependent error evolution via adjoint models to better capture mesoscale features and improve forecast skill, as implemented operationally at the European Centre for Medium-Range Weather Forecasts (ECMWF) since 1997.[87][88] Ensemble Kalman filter (EnKF) methods use ensembles of model states to estimate flow-dependent error covariances without adjoints, enabling parallel computation and hybrid variants that blend with variational techniques for enhanced stability in convective-scale prediction.[86][89] In coupled atmosphere-ocean models, initialization shocks arise from mismatches between uncoupled analyses and the coupled forecast, often manifesting as rapid surface temperature drifts that degrade predictions; mitigation strategies include weakly coupled assimilation cycles.[90] Operational systems assimilate diverse data types, with satellite radiances contributing over 90% of inputs in global models, though cloud contamination remains a limitation requiring bias correction.[91] Advances such as 4D-EnVar hybridize ensemble information with variational minimization to reduce computational cost while preserving much of the benefit of 4D-Var, reflecting ongoing efforts to balance accuracy and efficiency in high-resolution NWP.[92]
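As a concrete illustration of the variational approach described above, the sketch below minimizes the standard 3D-Var cost function, the sum of a background term weighted by the background error covariance B and an observation term weighted by the observation error covariance R, for a toy three-variable state. The state size, covariances, observation operator, and numbers are all illustrative assumptions, not any operational configuration.

```python
import numpy as np
from scipy.optimize import minimize

# Toy 3D-Var: blend a background state x_b with observations y by minimizing
# J(x) = 0.5*(x - x_b)^T B^-1 (x - x_b) + 0.5*(y - Hx)^T R^-1 (y - Hx).
# All numbers are illustrative.

x_b = np.array([285.0, 290.0, 295.0])        # background temperatures (K)
B = np.diag([1.0, 1.0, 1.0])                 # background error covariance
H = np.array([[1.0, 0.0, 0.0],               # observation operator: the 1st
              [0.0, 0.0, 1.0]])              # and 3rd variables are observed
y = np.array([286.5, 293.0])                 # observations (K)
R = np.diag([0.5, 0.5])                      # observation error covariance

B_inv, R_inv = np.linalg.inv(B), np.linalg.inv(R)

def cost(x):
    """Quadratic 3D-Var cost: background term plus observation term."""
    dxb = x - x_b
    dy = y - H @ x
    return 0.5 * dxb @ B_inv @ dxb + 0.5 * dy @ R_inv @ dy

analysis = minimize(cost, x_b).x
print("analysis:", analysis)   # lies between the background and the observations
```

The same quadratic form underlies 4D-Var, except that the observation operator then includes the forecast model integrated across the assimilation window, which is why adjoint models are needed to compute its gradient efficiently.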
Parameterization of Subgrid-Scale Processes
Subgrid-scale processes in atmospheric models encompass physical phenomena, such as turbulent eddies, convective updrafts, cloud formation, and radiative transfer through unresolved inhomogeneities, that operate on spatial and temporal scales smaller than the model's computational grid, typically ranging from tens of meters to kilometers.[93] These processes cannot be explicitly simulated owing to computational constraints, necessitating parameterization schemes that approximate their statistical effects on resolved larger-scale variables such as temperature, humidity, and momentum.[94] The goal of such parameterizations is to represent the net energy, momentum, and moisture fluxes induced by subgrid activity, ensuring conservation properties where possible and consistency with observed climatological behavior.[95]
Parameterization schemes for moist convection, a primary subgrid process, often employ mass-flux approaches that model updrafts and downdrafts as ensembles of plumes with entrainment and detrainment rates derived from buoyancy sorting or spectral cloud models. The Arakawa-Schubert scheme, introduced in 1974, exemplifies this approach by imposing a quasi-equilibrium closure in which the generation of convective available potential energy (CAPE) is balanced by its consumption across a spectrum of cloud types, and it influenced global circulation models (GCMs) for decades.[96] Modern variants, such as hybrid mass-flux-adjustment methods, incorporate triggers based on low-level moisture convergence or instability measures to initiate convection, reducing biases in simulated precipitation but remaining sensitive to grid resolution, with coarser grids (>100 km) requiring more aggressive closures.[97]
Boundary layer turbulence parameterization addresses vertical mixing in the planetary boundary layer (PBL), typically using first-order K-theory diffusion or higher-order turbulence kinetic energy (TKE) schemes to compute eddy diffusivities for heat, moisture, and momentum. Non-local mixing schemes, such as the Yonsei University PBL scheme, account for counter-gradient transport in unstable conditions driven by surface heating, while Monin-Obukhov similarity theory is applied in the surface layer.[98] Unified schemes such as Cloud Layers Unified By Binormals (CLUBB) integrate turbulence, shallow convection, and stratiform clouds via a probability density function (PDF) approach to subgrid variability, improving the representation of transitions between regimes at the cost of additional computation.[99]
Cloud microphysics and radiation parameterizations handle subgrid hydrometeor distributions and radiative interactions, often via statistical closures assuming beta or gamma distributions of water content to compute fractional cloud cover and optical properties. PDF-based methods parameterize subgrid variability in total water and liquid-water potential temperature, enabling better simulation of boundary layer clouds, though they struggle with multimodal distributions in complex environments.[100] Subgrid orographic effects are represented with drag formulations that estimate form drag and gravity-wave breaking from unresolved mountains, which is crucial for surface wind and precipitation patterns in GCMs with horizontal resolutions of roughly 25-100 km.[98] Emerging approaches incorporate stochastic elements to capture intermittency and uncertainty, perturbing the tendencies from deterministic schemes with noise drawn from autoregressive processes, as reviewed in 2017 analyses of GCM applications.
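To make the eddy-diffusivity (K-theory) approach described above concrete, the sketch below computes a single column's potential-temperature tendency from parameterized turbulent mixing. The constant diffusivity and the idealized profile are assumptions chosen for illustration; operational PBL schemes instead diagnose K from stability, shear, or TKE.

```python
import numpy as np

# Toy K-theory vertical mixing: the subgrid turbulent heat flux is modeled as
# -K d(theta)/dz, and minus its vertical divergence gives the tendency felt by
# the resolved scales. Constant K and the profile are illustrative assumptions.

nz, dz = 20, 100.0                          # levels, spacing (m)
z = (np.arange(nz) + 0.5) * dz
theta = 300.0 + 0.005 * z                   # stably stratified column (K)
theta[:5] = 300.0                           # well-mixed layer near the surface
K = 10.0                                    # eddy diffusivity (m^2/s), assumed

flux = np.zeros(nz + 1)                     # fluxes at layer interfaces
flux[1:-1] = -K * (theta[1:] - theta[:-1]) / dz   # zero flux at top and bottom

dtheta_dt = -(flux[1:] - flux[:-1]) / dz    # tendency = -d(flux)/dz

dt = 60.0                                   # model time step (s)
theta = theta + dt * dtheta_dt              # one forward step for the column
```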
Machine learning-based parameterizations, trained on high-resolution large-eddy simulations (LES), predict subgrid heating and moistening rates, outperforming traditional physics schemes in targeted tests but raising concerns over generalization across climates and physical interpretability.[101][102] Despite these advances, persistent challenges include scale awareness, whereby schemes tuned for one grid spacing degrade when applied at finer or coarser resolutions, and systematic biases such as excessive tropical convection or underestimation of marine stratocumulus, underscoring the empirical tuning often required for operational fidelity.[95][103]
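A minimal sketch of the emulation idea follows, using synthetic stand-in training data rather than actual coarse-grained LES or cloud-resolving output, and an arbitrary network shape. It shows only the workflow: fit a regressor that maps a column's coarse state to subgrid tendencies, then query it where the conventional scheme would be called.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Toy emulator of a subgrid parameterization: learn a mapping from coarse
# column profiles to subgrid heating profiles. In practice the training pairs
# would come from coarse-grained LES or cloud-resolving simulations; the
# synthetic "truth" below is purely illustrative.

rng = np.random.default_rng(0)
n_samples, n_levels = 2000, 30

X = rng.normal(size=(n_samples, 2 * n_levels))            # temperature + humidity inputs
W = rng.normal(scale=0.1, size=(2 * n_levels, n_levels))  # fixed random "truth" map
Y = np.tanh(X @ W) + 0.01 * rng.normal(size=(n_samples, n_levels))

emulator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=300)
emulator.fit(X[:1600], Y[:1600])                           # train on most columns

# Inside a host model, the trained network would stand in for the conventional
# scheme: given a column's state, return its parameterized heating profile.
predicted_heating = emulator.predict(X[1600:1601])
```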
Numerical Discretization and Computational Frameworks
Atmospheric models discretize the continuous primitive equations governing fluid dynamics into solvable algebraic systems via spatial and temporal approximations. Horizontal spatial discretization frequently utilizes spectral methods, expanding variables in spherical harmonics and employing fast spectral transforms for efficient computation of derivatives and nonlinear interactions, as implemented in the European Centre for Medium-Range Weather Forecasts (ECMWF) Integrated Forecasting System (IFS) with triangular truncation at spectral resolution TCo1279 in operational cycles as of 2023.[104] Alternatively, finite-volume methods on quasi-uniform cubed-sphere grids, such as the FV3 dynamical core adopted by the National Oceanic and Atmospheric Administration (NOAA) Global Forecast System (GFS) since 2019, ensure local conservation of mass, momentum, and energy while avoiding the pole singularities of latitude-longitude grids through six interconnected spherical panels.[105]
Vertical discretization typically employs coordinate transformations to hybrid terrain-following levels, with finite-difference or finite-element schemes applied to resolve pressure gradients and buoyancy. The IFS uses a cubic B-spline finite-element method across 137 levels with a model top near 0.01 hPa, enhancing accuracy in the upper atmosphere and reducing Gibbs oscillations compared with traditional finite-difference Lorenz staggering.[104][106] In contrast, FV3 incorporates a generalized vertical coordinate with finite-volume staggering to maintain tracer positivity and handle nonhydrostatic, deep-atmosphere effects.[107]
Temporal integration addresses stability constraints from fast acoustic and gravity waves using semi-implicit schemes, in which the linear terms responsible for the fast waves are treated implicitly to permit time steps of several minutes, far exceeding the explicit limits dictated by the Courant-Friedrichs-Lewy (CFL) condition. Horizontally explicit, vertically implicit (HEVI) splitting, used in nonhydrostatic finite-volume cores such as FV3, advances horizontal terms explicitly while treating vertically propagating acoustic modes implicitly, often via Helmholtz-type equations solved iteratively for scalability; the spectral IFS instead solves its semi-implicit linear system in spectral space.[108][109] Semi-Lagrangian advection, which traces trajectories back to departure points and interpolates variables from them, is combined with semi-implicit corrections in the IFS (and was in the former spectral GFS), further relaxing CFL restrictions and enabling efficient handling of large-scale flows.[110]
Computational frameworks for these discretizations rely on distributed-memory parallel architectures, partitioning grids or spectral modes across thousands of cores via the Message Passing Interface (MPI). Spectral transforms in the IFS demand global communications for Legendre and Fourier operations but achieve high efficiency on massively parallel processors through optimized fast Fourier transforms (FFTs), with performance scaling to over 10,000 processors at T799 resolution.[111] Finite-volume cores such as FV3 emphasize local stencil operations, minimizing inter-processor data exchange on cubed-sphere decompositions and supporting hybrid MPI/OpenMP for shared-memory nodes, which has enabled GFS forecasts at 13 km resolution with reduced wall-clock times on petascale systems.[105] Experimental finite-volume variants of the IFS further reduce communication volumes compared with their spectral counterparts, facilitating scalability toward exascale computing.[112]
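A minimal one-dimensional sketch of the semi-Lagrangian step described above (periodic domain, constant wind, linear interpolation; all values are illustrative): each grid point's new value is taken from the departure point found by tracing the wind backward over the time step, which is why the method remains stable at Courant numbers above one.

```python
import numpy as np

# Toy 1D semi-Lagrangian advection with a constant wind: values are
# interpolated from upstream departure points, so the scheme stays stable
# even when the Courant number exceeds one (here it is 2.5).

nx, dx = 100, 1.0
x = np.arange(nx) * dx
u_wind, dt = 2.5, 1.0                      # Courant number = u*dt/dx = 2.5
q = np.exp(-((x - 30.0) / 5.0) ** 2)       # tracer field

for step in range(20):
    x_dep = (x - u_wind * dt) % (nx * dx)  # departure points (traced backward)
    i = np.floor(x_dep / dx).astype(int)   # index of the grid point just below
    w = (x_dep - i * dx) / dx              # linear interpolation weight
    q = (1.0 - w) * q[i] + w * q[(i + 1) % nx]
```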
Model Domains and Configurations
Global Circulation Models
Global circulation models (GCMs), also known as general circulation models, are comprehensive numerical simulations of the Earth's atmospheric dynamics, solving the primitive equations derived from conservation of momentum, mass, energy, and water vapor on a global scale.[113] These models represent planetary-scale processes such as the Hadley, Ferrel, and polar cells, incorporating Earth's rotation via the Coriolis effect and radiative forcing from solar and terrestrial radiation.[114] Unlike regional models, which require lateral boundary conditions from external data and focus on limited domains to achieve finer resolution for mesoscale features, GCMs operate without lateral boundaries, enabling self-consistent simulation of teleconnections such as the Madden-Julian oscillation and El Niño-Southern Oscillation influences.[115][116]
GCM configurations typically employ spherical geometry with horizontal grids such as latitude-longitude meshes (with pole problems mitigated by filtering) or quasi-uniform icosahedral-hexagonal and cubed-sphere tilings for reduced distortion.[117] Vertical discretization uses hybrid sigma-pressure coordinates, spanning from the surface to the mesosphere in some configurations, with 50-137 levels depending on the application; operational weather GCMs, for instance, prioritize tropospheric resolution for forecast accuracy out to 10-15 days.[76] Horizontal resolutions range from 100-250 km for coupled climate simulations to 5-10 km for high-resolution weather prediction, balancing computational cost against explicit resolution of baroclinic waves and fronts; coarser grids necessitate parameterization of subgrid convection and turbulence, introducing uncertainties tied to empirical tuning.[118] Time steps are on the order of minutes to tens of minutes, advanced with explicit or semi-implicit schemes to maintain numerical stability under the Courant-Friedrichs-Lewy criterion.[113]
Prominent examples include the NOAA Global Forecast System (GFS), which has used the FV3 finite-volume dynamical core since 2019 and supports forecasts out to 16 days at roughly 13 km resolution in its operational cycle as of 2023, and the ECMWF Integrated Forecasting System (IFS), running at 9 km for deterministic forecasts and incorporating coupled ocean-wave components for enhanced medium-range skill.[119] These models are initialized from assimilated observations via 4D-Var or ensemble Kalman filters, and differ from regional counterparts by generating their own large-scale forcing internally, subject to global mass and energy conservation.[120] For climate applications, GCMs extend to Earth system models by coupling the atmospheric component with dynamic ocean, sea ice, and land surface schemes, as in CMIP6 configurations simulating multi-decadal responses to greenhouse gas forcings with equilibrium climate sensitivities averaging around 3°C per CO2 doubling across ensembles.[121] Empirical validation against reanalyses such as ERA5 shows that GCMs reproduce observed zonal-mean winds and precipitation patterns with reasonable fidelity, though systematic biases persist in tropical intraseasonal variability owing to convective parameterization limitations.[122][123]
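As a rough, order-of-magnitude illustration of the CFL constraint mentioned above (the grid spacing and wave speed here are illustrative assumptions rather than any particular model's settings), an explicit scheme on a 13 km grid with fast waves propagating near 300 m/s is limited to a time step of roughly

\[
\Delta t \le \frac{\Delta x}{c} \approx \frac{13{,}000\ \mathrm{m}}{300\ \mathrm{m\,s^{-1}}} \approx 43\ \mathrm{s},
\]

which is why semi-implicit or semi-Lagrangian treatment of the fast waves is used to reach time steps several times longer.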
Regional and Mesoscale Models
Regional atmospheric models, often termed limited-area models (LAMs), focus on simulating weather and climate over specific geographic domains, such as continents or subcontinental regions, with horizontal resolutions typically ranging from 5 to 50 kilometers.[124] These models incorporate detailed representations of local topography, land surface characteristics, and coastlines, which global models with coarser grids of 10-100 kilometers cannot resolve adequately.[124] Unlike global circulation models, which cover the entire sphere and require no lateral boundaries, regional models rely on time-dependent lateral boundary conditions derived from global model output to capture the influence of the surrounding large-scale circulation.[125]
Mesoscale models represent a subset of regional models optimized for scales between roughly 1 and 1,000 kilometers, employing nonhydrostatic dynamics and grid spacings of 1-10 kilometers to explicitly resolve phenomena such as thunderstorms, sea breezes, and orographic precipitation without excessive parameterization of subgrid processes.[126] This finer resolution enables better simulation of convective initiation and mesoscale convective systems, which is critical for short-range forecasting of severe weather.[127] Operational mesoscale models often use nested grid configurations, in which inner domains achieve higher resolution through one-way or two-way coupling with coarser outer grids, improving computational efficiency while maintaining accuracy in the boundary forcing.[128]
Prominent examples include the Weather Research and Forecasting (WRF) model, a community-developed system released in 2000 by the National Center for Atmospheric Research (NCAR) and collaborators, supporting both research and operational applications across domains worldwide.[126] The North American Mesoscale Forecast System (NAM), operated by NOAA's National Centers for Environmental Prediction (NCEP), uses a nonhydrostatic core descended from the WRF-NMM with a 12-kilometer outer domain and nested 3-4 kilometer inner domains for high-impact weather prediction out to 84 hours.[128] Similarly, the High-Resolution Rapid Refresh (HRRR) model provides hourly updated forecasts at 3-kilometer resolution over the contiguous United States, emphasizing rapid cycling of data assimilation for nowcasting convective activity.[127] These models have demonstrated superior skill over global counterparts in regional forecasting, particularly for precipitation and wind patterns influenced by terrain, though they remain sensitive to boundary-condition accuracy and require robust initialization to mitigate spin-up errors in mesoscale features.[129] Advances in computing have enabled convection-permitting configurations at kilometer scales, improving the depiction of extreme events, but persistent challenges include computational demands and the need for high-quality lateral forcing from upstream global predictions.[130]
Verification and Empirical Evaluation
Metrics and Model Output Statistics
Common verification metrics for atmospheric models quantify discrepancies between forecasts and observations, enabling systematic evaluation of model performance. Scalar measures such as the mean bias error (MBE) assess systematic over- or under-prediction by averaging the difference between forecast and observed values, while the root mean square error (RMSE) captures the overall error magnitude, emphasizing larger deviations by squaring before averaging. These metrics are applied to variables such as temperature, pressure, and wind speed, with RMSE often normalized by climatological variability for comparability.[131] For spatial and pattern-based evaluation, particularly in global circulation models, the anomaly correlation coefficient (ACC) measures the similarity between forecast and observed anomalies relative to climatology, commonly applied to 500 hPa geopotential height fields in medium-range forecasts; values above about 0.6 typically indicate useful skill beyond persistence.[131] Correlation coefficients evaluate pattern fidelity, while standard deviation ratios compare how well the forecast reproduces observed variability. In probabilistic contexts, the Brier score evaluates forecast reliability for binary events such as precipitation occurrence, penalizing both overconfidence and incorrect probabilities.[131] The table below summarizes these measures, and a minimal computational sketch follows it.
| Metric | Description | Typical Application |
|---|---|---|
| Mean Bias Error (MBE) | Average (forecast - observation); positive values indicate over-forecasting | Surface temperature, sea-level pressure |
| Root Mean Square Error (RMSE) | Square root of mean squared differences; sensitive to outliers | Wind speed, precipitation totals |
| Anomaly Correlation Coefficient (ACC) | Correlation of anomalies from climatology; ranges -1 to 1 | Upper-air geopotential heights, global patterns |
| Brier Score | Mean squared error for probabilistic forecasts; lower is better | Precipitation probability, extreme events |
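A minimal computational sketch of the scalar metrics in the table, for paired forecast, observation, and climatology arrays (the function names and example numbers are illustrative, not those of any operational verification package):

```python
import numpy as np

# Minimal implementations of common verification metrics for paired
# forecast (f), observation (o), and climatology (c) arrays.

def mean_bias_error(f, o):
    """Average forecast-minus-observation difference; >0 means over-forecasting."""
    return np.mean(f - o)

def rmse(f, o):
    """Root mean square error; squaring penalizes large misses more heavily."""
    return np.sqrt(np.mean((f - o) ** 2))

def anomaly_correlation(f, o, c):
    """Correlation of forecast and observed anomalies from climatology."""
    fa, oa = f - c, o - c
    return np.sum(fa * oa) / np.sqrt(np.sum(fa ** 2) * np.sum(oa ** 2))

def brier_score(prob, occurred):
    """Mean squared error of probabilistic forecasts of a binary event."""
    return np.mean((prob - np.asarray(occurred, dtype=float)) ** 2)

# Example with made-up 500 hPa geopotential height values (m).
forecast = np.array([5520.0, 5560.0, 5600.0, 5580.0])
observed = np.array([5510.0, 5555.0, 5610.0, 5570.0])
climatology = np.array([5500.0, 5540.0, 5580.0, 5560.0])
print(mean_bias_error(forecast, observed),
      rmse(forecast, observed),
      anomaly_correlation(forecast, observed, climatology))
```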