Economic forecasting
Economic forecasting is the process of making predictions about the economy. Forecasts can be made at a high level of aggregation—for example, for GDP, inflation, unemployment, or the fiscal deficit. They can also be made at a more disaggregated level, targeting specific economic sectors or even individual firms. This practice is a fundamental part of economic analysis, providing a measure of a potential investment's future prospects and helping shape policy decisions. Many institutions engage in economic forecasting: national governments, banks and central banks, consultants and private sector entities such as think-tanks, and companies or international organizations such as the International Monetary Fund, World Bank and the OECD. A broad range of forecasts are collected and compiled by "Consensus Economics". Some forecasts are produced annually, but many are updated more frequently.
The economist typically considers risks (i.e., events or conditions that can cause the result to vary from their initial estimates). These risks help illustrate the reasoning process used in arriving at the final forecast numbers. Economists typically use commentary along with data visualization tools such as tables and charts to communicate their forecast.[1] In preparing economic forecasts, forecasters draw on a wide variety of information in an attempt to increase accuracy.
Macroeconomic data,[2] microeconomic data,[3] market data from the future,[4] machine learning (neural networks),[5] and human behavioral studies[6] have all been used to achieve better forecasts. Forecasts are used for a variety of purposes. Governments and businesses use economic forecasts to help them determine their strategy, multi-year plans, and budgets for the upcoming year. Stock market analysts use forecasts to help them estimate the valuation of a company and its stock.
Economists select which variables are important to the subject material under discussion. Economists may use statistical analysis of historical data to determine the apparent relationships between particular independent variables and their relationship to the dependent variable under study. For example, to what extent did changes in housing prices affect the net worth of the population overall in the past? This relationship can then be used to forecast the future. That is, if housing prices are expected to change in a particular way, what effect would that have on the future net worth of the population? Forecasts are generally based on sample data rather than a complete population, which introduces uncertainty. The economist conducts statistical tests and develops statistical models (often using regression analysis) to determine which relationships best describe or predict the behavior of the variables under study. Historical data and assumptions about the future are applied to the model in arriving at a forecast for particular variables.[7]
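The regression step described above can be sketched in a few lines of Python. This is an illustrative toy only: the housing-price and net-worth figures are invented, and a real study would use a large historical sample and proper statistical tests.

```python
# Illustrative only: fit a simple linear relationship between an
# independent variable (annual change in housing prices, %) and a
# dependent variable (annual change in household net worth, %),
# then use it to forecast. All figures are made up for the example.

def ols_fit(x, y):
    """Return (intercept, slope) of an ordinary least-squares line."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return intercept, slope

# Hypothetical historical sample (housing-price change, net-worth change)
housing = [2.0, 4.5, -1.0, 3.0, 6.0, -3.5]
net_worth = [1.5, 3.8, -0.5, 2.6, 5.1, -2.9]

a, b = ols_fit(housing, net_worth)

# Forecast: if housing prices are expected to rise 3% next year,
# the fitted relationship implies this change in net worth:
forecast = a + b * 3.0
print(f"forecast net-worth change: {forecast:.2f}%")
```

The same estimate-then-extrapolate pattern underlies much larger models; the difference is in the number of variables, the care taken with sampling uncertainty, and the diagnostic testing applied to the fitted relationship.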
Sources of forecasts
Global scope
The Economic Outlook is the OECD's twice-yearly analysis of the major economic trends and prospects for the next two years.[8] The IMF publishes the World Economic Outlook report twice annually, which provides comprehensive global coverage.[9] The IMF and World Bank also produce Regional Economic Outlook for various parts of the world.[10]
There are also private companies such as The Conference Board and Lombard Street Research that provide global economic forecasts.[11]
As of April 2024, the World Trade Organization (WTO) projects a rebound in global merchandise trade, forecasting a growth of 2.6% for the year, and an anticipated increase to 3.3% in 2025, following a 1.2% decline in 2023. During 2023, there was a significant reduction in merchandise exports, which fell by 5% to US$ 24.01 trillion, contrasting sharply with the commercial services sector, which saw a 9% increase in exports to US$ 7.54 trillion. The global GDP is expected to stabilize, maintaining a growth rate of 2.6% in 2024 and 2.7% in 2025. From a regional perspective, Africa is forecasted to experience the highest export growth at 5.3% in 2024, closely followed by the CIS region at nearly the same rate. Moderate growth is expected in North America, the Middle East, and Asia, with rates projected at 3.6%, 3.5%, and 3.4%, respectively, while European exports are anticipated to grow by only 1.7%. Import growth will likely be robust in Asia (5.6%) and Africa (4.4%), with Europe showing almost no growth at 0.1%. Digital services trade remains resilient, reaching US$ 4.25 trillion in exports in 2023, and accounting for 13.8% of global exports of goods and services, with significant growth observed in Africa (13%) and South and Central America and the Caribbean (11%). Additionally, the WTO has launched the Global Services Trade Data Hub to provide detailed insights into the evolving landscape of services trade, with a particular focus on digitalization.[12][13]
U.S. forecasts
The U.S. Congressional Budget Office (CBO) publishes a report titled "The Budget and Economic Outlook" annually, which primarily covers the following ten-year period.[14] Members of the U.S. Federal Reserve Board of Governors also give speeches, provide testimony, and issue reports throughout the year that cover the economic outlook.[15][16] Regional Federal Reserve Banks, such as the St. Louis Federal Reserve Bank, also provide forecasts.[17]
Large banks such as Wells Fargo and JP Morgan Chase provide economic reports and newsletters.[18][19]
European forecasts
The European Commission also publishes comprehensive macroeconomic forecasts for its member countries on a quarterly basis: Spring, Summer, Autumn and Winter.[20]
Combining forecasts
Forecasts from multiple sources may be arithmetically combined and the result is often referred to as a consensus forecast. Private firms, central banks, and government agencies publish a large volume of forecast information to meet the strong demand for economic forecast data. Consensus Economics compiles the macroeconomic forecasts prepared by a variety of forecasters, and publishes them on a weekly and monthly basis. The Economist magazine regularly provides such a snapshot as well, for a narrower range of countries and variables.
Econometric studies have demonstrated that using the past errors of each original forecast to set the weights assigned to each forecast in a combined forecast generally yields lower mean-square errors than the individual original forecasts.[21] However, it has been found that the entry and exit of forecasters can have a substantial impact on the real-time effectiveness of conventional combination methods.[22] Because the pool of forecasters changes over time, the choice of combination and weighting technique is therefore not neutral.
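The combination idea can be illustrated with a minimal Python sketch of inverse-MSE weighting in the spirit of Bates and Granger:[21] each forecaster is weighted by the inverse of its past mean-squared error, so historically accurate forecasters dominate the combined forecast. The error histories and current forecasts below are invented for the example.

```python
# A minimal sketch (invented data) of error-based forecast combination:
# weight each forecaster by the inverse of its past mean-squared error.

def inverse_mse_weights(past_errors):
    """past_errors: one list of historical forecast errors per forecaster."""
    mses = [sum(e * e for e in errs) / len(errs) for errs in past_errors]
    inv = [1.0 / m for m in mses]          # smaller MSE -> larger weight
    total = sum(inv)
    return [w / total for w in inv]        # normalize weights to sum to 1

# Hypothetical past forecast errors for three forecasters
errors = [
    [0.5, -0.4, 0.6],   # fairly accurate
    [1.2, -1.0, 1.5],   # less accurate
    [0.2, 0.1, -0.3],   # most accurate
]
weights = inverse_mse_weights(errors)

# Combine the forecasters' current GDP-growth forecasts (illustrative)
forecasts = [2.1, 2.8, 2.3]
combined = sum(w * f for w, f in zip(weights, forecasts))
print(weights, round(combined, 2))
```

Note how the combined forecast is pulled toward the historically most accurate forecaster; when forecasters enter or exit the panel, the error histories (and hence the weights) change, which is the real-time fragility documented in the literature.[22]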
Forecast methods
The process of economic forecasting is similar to data analysis and results in estimated values for key economic variables in the future. An economist applies the techniques of econometrics in their forecasting process. Typical steps may include:
- Scope: Key economic variables and topics for forecast commentary are determined based on the needs of the forecast audience.
- Literature review: Commentary from sources with summary-level perspective, such as the IMF, OECD, U.S. Federal Reserve, and CBO helps with identifying key economic trends, issues and risks. Such commentary can also help the forecaster with their own assumptions while also giving them other forecasts to compare against.
- Obtain data inputs: Historical data is gathered on key economic variables. This data is contained in print as well as electronic sources such as the FRED database or Eurostat, which allow users to query historical values for variables of interest.
- Determine historical relationships: Historical data is used to determine the relationships between one or more independent variables and the dependent variable under study, often by using regression analysis.
- Model: Historical data inputs and assumptions are used to develop an econometric model. Models typically apply a computation to a series of inputs to generate an economic forecast for one or more variables.
- Report: The outputs of the model are included in reports that typically include information graphics and commentary to help the reader understand the forecast.
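Under stated assumptions (an invented data series and a deliberately simple first-order autoregression standing in for a full econometric model), the steps above can be sketched end to end in Python:

```python
# Toy end-to-end sketch of the workflow above: take historical values of a
# single variable, fit an AR(1) model y_t = a + b * y_{t-1} by least
# squares, and report a multi-step forecast. The data are invented for
# illustration; a real exercise would pull series from e.g. FRED.

def fit_ar1(series):
    """Fit y_t = a + b * y_{t-1} by ordinary least squares."""
    x = series[:-1]          # lagged values (regressors)
    y = series[1:]           # current values (dependent variable)
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return a, b

def forecast(series, steps, a, b):
    """Iterate the fitted equation forward from the last observation."""
    out, last = [], series[-1]
    for _ in range(steps):
        last = a + b * last
        out.append(round(last, 2))
    return out

gdp_growth = [2.5, 2.7, 2.4, 2.9, 3.1, 2.8, 2.6, 2.9]  # hypothetical history
a, b = fit_ar1(gdp_growth)
path = forecast(gdp_growth, 3, a, b)
print(path)
```

Real forecasting models replace the single AR(1) equation with systems of estimated equations and judgmental assumptions, but the loop is the same: gather data, estimate historical relationships, and iterate the model forward.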
Forecasters may use computable general equilibrium models or dynamic stochastic general equilibrium models. The latter are often used by central banks.
Methods of forecasting include econometric models, consensus forecasts, economic base analysis, shift-share analysis, input-output models and the Grinold and Kroner model. See also land use forecasting, reference class forecasting, transportation planning and calculating demand forecast accuracy.
The World Bank provides a means for individuals and organizations to run their own simulations and forecasts using its iSimulate platform.[23]
Issues in forecasting
Forecast accuracy
There are many studies on the subject of forecast accuracy. Accuracy is one of the main, if not the main, criteria used to judge forecast quality. Some of the references below relate to academic studies of forecast accuracy. Forecasting performance appears to be time-dependent, with some exogenous events affecting forecast quality. Although expert forecasts are generally better than market-based forecasts, the performance of both depends on several factors: the model used, political-economy events such as terrorism, financial stability, and so on.
In early 2014 the OECD carried out a self-analysis of its projections.[24] "The OECD also found that it was too optimistic for countries that were most open to trade and foreign finance, that had the most tightly regulated markets and weak banking systems" according to the Financial Times.[25]
In 2012, Consensus Economics launched its Forecast Accuracy Award; each year it publishes a list of the forecasters who most accurately predicted the final outcome of GDP and CPI for the prior year in over 40 countries.
In recent years, research has demonstrated that behavioral biases play a significant role in affecting the accuracy of forecasts. The education and working experience of forecasters influence the accuracy and boldness of their predictions.[26] Forecasting accuracy is also impacted by the forecaster's experience with high inflation rates.[27] Additionally, political events such as terrorism have been shown to influence the accuracy of both expert- and market-based forecasts of inflation and exchange rates.[28] This highlights the range of external factors and biases that should be considered when evaluating the accuracy of forecasts and making informed decisions.
Forecasts and the Great Recession
The financial and economic crisis that erupted in 2007—arguably the worst since the Great Depression of the 1930s—was not foreseen by most forecasters, though many analysts had been predicting it for some time (for example, Stephen Roach, Meredith Whitney, Gary Shilling, Peter Schiff, Marc Faber, Nouriel Roubini, Brooksley Born, and Robert Shiller).[29] The failure of the majority of them to forecast the "Great Recession" caused soul-searching in the profession. The UK's Queen Elizabeth herself asked why “nobody” had noticed that the credit crunch was on its way, and a group of economists—experts from business, the City, its regulators, academia, and government—tried to explain in a letter.[30]
It was not just forecasting the Great Recession, but also forecasting its impact, where it was clear that economists struggled.
For example, in Singapore Citi argued the country would experience "the most severe recession in Singapore’s history". The economy grew in 2009 by 3.1% and in 2010, the nation saw a 15.2% growth rate.[31][32] Similarly, Nouriel Roubini predicted in January 2009 that oil prices would stay below $40 for all of 2009. By the end of 2009, however, oil prices were at $80.[33][34] In March 2009, he predicted the S&P 500 would fall below 600 that year, and possibly plummet to 200.[33][35] It closed at over 1,115, up 24%, the largest single year gain since 2003.[36] In 2009, he also predicted that the US government would take over and nationalize a number of large banks; it did not happen.[37][38] In October 2009, he predicted that gold "can go above $1,000, but it can’t move up 20-30%”; he was wrong, as the price of gold rose over the next 18 months, breaking through the $1,000 barrier to over $1,400.[38] Although in May 2010 he predicted a 20% decline in the stock market, the S&P actually rose about 20% over the course of the next year (even excluding returns from dividends).[39]
List of regularly published surveys based on polling economists on their forecasts
| Organization name | Forecast name | Number of individuals surveyed | Number of countries covered | List of countries/regions covered | Frequency | How far ahead are the forecasts made for | Start date |
|---|---|---|---|---|---|---|---|
| Blue Chip Publications division of Aspen Publishers | Blue Chip Economic Indicators[40] | 50+[40] | 1 | United States | Monthly[40] | ? | 1976[40] |
| Consensus Economics | Consensus Forecasts | over 1000[41][42] | 115[41][42] | Member countries of the G-7 industrialized nations, the Eurozone region as well as various economies in Western Europe, the Middle East, Central Asia, Africa, Asia Pacific, Eastern Europe, Latin America and the Nordic countries.[41][42] | Daily, weekly and monthly[41][42] | 1 month to 10 years | 1989[43] |
| Federal Reserve Bank of Philadelphia | Livingston Survey[44] | ? | 1 | United States[44] | Bi-annually (June and December every year)[44] | Two bi-annual periods (6 months and 12 months from now), plus some forecasts for two years | 1946[44] |
| European Central Bank | ECB Survey of Professional Forecasters[45][46] | 55 | ? | Euro zone | Quarterly[45] | Two quarters and six quarters from now, plus the current and next two years | 1999[45][46] |
| RFE | Resources for Economists | ? | ? | Global Economic Outlook | Quarterly | Two quarters and six quarters from now, plus the current and next two years | 1949 |
See also
Footnotes
- ^ Wells Fargo Economics, Multiple Examples of Reports Using Data Visualization. Retrieved July 15, 2015.
- ^ French, J (1 March 2017). "Macroeconomic Forces and Arbitrage Pricing Theory". Journal of Comparative Asian Development. 16 (1): 1–20. doi:10.1080/15339114.2017.1297245. S2CID 157510462.
- ^ French, J (2016). "Economic determinants of wine consumption in Thailand". International Journal of Economics and Business Research. 12 (4): 334. doi:10.1504/IJEBR.2016.081229.
- ^ French, J (11 Dec 2016). "The time traveller's CAPM". Investment Analysts Journal. 46 (2): 81–96. doi:10.1080/10293523.2016.1255469. S2CID 157962452.
- ^ Barkan, Oren; Benchimol, Jonathan; Caspi, Itamar; Cohen, Eliya; Hammer, Allon; Koenigstein, Noam (July 1, 2023). "Forecasting CPI inflation components with Hierarchical Recurrent Neural Networks". International Journal of Forecasting. 39 (3): 1145–1162. doi:10.1016/j.ijforecast.2022.04.009. hdl:10419/323613.
- ^ French, J (December 2017). "Asset pricing with investor sentiment: On the use of investor group behavior to forecast ASEAN markets". Research in International Business and Finance. 42: 124–148. doi:10.1016/j.ribaf.2017.04.037.
- ^ Ramanathan, Ramu (1995). Introductory Econometrics with Applications-Third Edition. The Dryden Press. ISBN 978-0-03-094922-7.
- ^ "Forecasting methods and analytical tools - OECD".
- ^ "IMF World Economic Outlook (WEO), April 2015: Uneven Growth: Short- and Long-Term Factors". IMF.
- ^ "Regional Economic Outlook". IMF. Retrieved 2020-11-22.
- ^ "TS Lombard". Economics Politics Markets. Retrieved 2020-11-22.
- ^ "WTO forecasts rebound in global trade but warns of downside risks". www.wto.org. Retrieved 2024-04-12.
- ^ Global trade outlook and statistics (PDF). Geneva, Switzerland: World Trade Organization. 2024. ISBN 978-92-870-7633-5.
- ^ "The Budget and Economic Outlook: 2015 to 2025 | Congressional Budget Office". www.cbo.gov. January 26, 2015.
- ^ "Speech by Chair Yellen on recent developments and the outlook for the economy". Board of Governors of the Federal Reserve System.
- ^ Federal Reserve, Monetary Policy Report. Retrieved July 2015.
- ^ "Tracking the Recession - St. Louis Fed". research.stlouisfed.org. Retrieved 2020-11-22.
- ^ Wells Fargo Economics. Retrieved July 2015.
- ^ JP Morgan Chase, Guide to the Markets Q3 2015. Retrieved July 2015.
- ^ "Economic forecasts". European Commission - European Commission. Retrieved 2020-11-22.
- ^ Bates, J. M.; Granger, C. W. J. (1969). "The Combination of Forecasts". Journal of the Operational Research Society. 20 (4): 451–468. doi:10.1057/jors.1969.103.
- ^ Capistrán, Carlos; Timmermann, Allan (2009). "Forecast Combination With Entry and Exit of Experts". Journal of Business & Economic Statistics. 27 (4): 428–440. doi:10.1198/jbes.2009.07211. S2CID 116453612.
- ^ "ISimulate @ World Bank".
- ^ OECD forecasts during and after the financial crisis: a post mortem
- ^ Giles, Chris (2014-02-11). "OECD admits to forecasting errors during eurozone crisis". Financial Times. Retrieved 2023-02-12.
- ^ Benchimol, Jonathan; El-Shagi, Makram; Saadon, Yossi (2022). "Do Expert experience and characteristics affect inflation forecasts?". Journal of Economic Behavior & Organization. 201 (C). Elsevier: 205–226. doi:10.1016/j.jebo.2022.06.025. ISSN 0167-2681. S2CID 229274267.
- ^ Malmendier, Ulrike; Nagel, Stefan (2016). "Learning from Inflation Experiences". Quarterly Journal of Economics. 131 (1). Oxford University Press: 53–87. doi:10.1093/qje/qjv037.
- ^ Benchimol, Jonathan; El-Shagi, Makram (2020). "Forecast performance in times of terrorism". Economic Modelling. 91 (C). Elsevier: 386–402. doi:10.1016/j.econmod.2020.05.018. ISSN 0264-9993.
- ^ Levitt, Arthur (2009-10-20), Michael Kirk (ed.), The Warning: Interviews- Arthur Levitt (Broadcast documentary transcript), Frontlines, Public Broadcasting Service
- ^ British Academy, "The Global Financial Crisis: Why Didn't Anybody Notice?". Retrieved July 27, 2015. Archived July 7, 2015, at the Wayback Machine.
- ^ Chen, Xiaoping; Shao, Yuchen (2017-09-11). "Trade policies for a small open economy: The case of Singapore". The World Economy. doi:10.1111/twec.12555. ISSN 0378-5920. S2CID 158182047.
- ^ Subler, Jason (2009-01-02). "Factories slash output, jobs around world". Reuters. Retrieved 2020-09-20.
- ^ a b Joe Keohane (January 9, 2011). "That guy who called the big one? Don’t listen to him." The Boston Globe.
- ^ Eric Tyson (2018). Personal Finance For Dummies
- ^ Maneet Ahuja (2014). The Alpha Masters; Unlocking the Genius of the World's Top Hedge Funds
- ^ "Roubini to Cramer: ‘Just shut up’", The Los Angeles Times, April 8, 2009.
- ^ Joseph Lazzaro (March 26, 2009). "'Dr. Doom' predicts some big banks will be nationalized," AOL.com.
- ^ a b Alice Guy (January 16, 2023). "Seven times the experts got it very wrong on the economy," Interactive Investor.
- ^ Larry Swedroe (May 20, 2011). "Nouriel Roubini Misses Another Prediction," CBS News.
- ^ a b c d Moore, Randell E. "Blue Chip Economic Indicators". Retrieved April 13, 2014.
- ^ a b c d "Consensus Economics". Consensus Economics. Retrieved April 14, 2014.
- ^ a b c d "Consensus Economics (about page)". Consensus Economics. Retrieved April 14, 2014.
- ^ "Data For Institutional Investors". Consensus Economics. Retrieved April 14, 2014.
- ^ a b c d "Livingston Survey". Federal Reserve Bank of Philadelphia. Retrieved April 17, 2014.
- ^ a b c "ECB Survey of Professional Forecasters". European Central Bank. Retrieved April 13, 2014.
- ^ a b Juan Angel Garcia (September 2003). "An Introduction to the ECB's Survey of Professional Forecasters" (PDF). European Central Bank.
Further reading
- Walter A. Friedman, Fortune Tellers: The Story of America's First Economic Forecasters. Princeton, NJ: Princeton University Press, 2013. ISBN 978-0691169194.
- Roy Batchelor (City University Business School), "The IMF and OECD versus Consensus Forecasts", Applied Economics, 33(2), August 2000, pp. 225–235.
- Filip Novotný and Marie Raková (Czech National Bank), "Assessment of Consensus Forecasts Accuracy: The Czech National Bank Perspective", Working Paper Series 14, December 2010.
- IMF forecasts can be found here: IMF World Economic Outlook, World Bank forecasts are here World Bank Global Economic Prospects, and OECD forecasts here: OECD Economic Outlook
- Two of the leading journals in the field of economic forecasting are the Journal of Forecasting and the International Journal of Forecasting
- For a comprehensive but quite technical compendium, see Handbook of Economic Forecasting, North-Holland: Elsevier, 2006
- A more compact and more accessible, but pre-crisis overview is provided in Elements of Forecasting, 4th edn, Cincinnati, OH: South-Western College Publishing, 2007
- A recent, comprehensive and accessible guide to forecasting is Economic Forecasting and Policy, 2nd edn, Palgrave, 2011
Economic forecasting
Definition and Fundamentals
Core Principles and Scope
Economic forecasting involves the systematic projection of future macroeconomic conditions through the integration of theoretical models, historical data patterns, and statistical inference to estimate variables such as gross domestic product (GDP) growth, inflation rates, and unemployment levels.[10] At its foundation, the practice assumes identifiable causal mechanisms—rooted in supply-demand dynamics, monetary transmission, and fiscal impacts—can be quantified to anticipate aggregate outcomes, though these relationships are tested against empirical deviations rather than presumed invariant.[11] Key principles include the use of leading indicators (e.g., stock market trends, consumer confidence indices) to signal directional changes, coincident indicators (e.g., industrial production) for current assessment, and lagging indicators (e.g., unemployment duration) for validation, with forecasts calibrated via time-series analysis to minimize errors like mean squared prediction error.[10] Probabilistic framing is essential, as deterministic predictions ignore stochastic shocks, such as supply disruptions or policy shifts, which empirical studies show amplify forecast errors beyond model assumptions.[12] The scope delineates economic forecasting from narrower financial predictions by emphasizing economy-wide aggregates over asset prices or corporate earnings, encompassing horizons from short-term (1-4 quarters) tactical outlooks for central bank rate decisions to medium-term (2-5 years) strategic views for budgetary planning, with long-term efforts (beyond a decade) rare due to escalating uncertainty from compounding structural changes.[13] Applications span policymaking, where entities like the Federal Reserve employ forecasts to gauge output gaps and inflation pressures for interest rate targeting, and private sector uses for inventory management and investment allocation, though institutional outputs often exhibit systematic optimism or conservatism tied to 
prevailing paradigms.[9] Critically, the field's empirical track record reveals persistent challenges in anticipating turning points, as evidenced by the collective failure of major forecasters to predict the 2008 financial crisis or the 2020 pandemic contraction, underscoring that model-based projections falter when confronted with non-stationarities like regulatory upheavals or technological disruptions not captured in training data.[14] This necessitates meta-principles of iterative model refinement and scenario analysis to hedge against overreliance on historical equilibria.[15]
Economic vs. Financial Forecasting Distinctions
Economic forecasting primarily involves predicting aggregate macroeconomic indicators, such as gross domestic product (GDP) growth, inflation rates, and unemployment levels, to assess overall economic health and inform policy decisions.[16][10] These forecasts rely on low-frequency data, often quarterly or annual aggregates, and employ structural models like dynamic stochastic general equilibrium (DSGE) frameworks to capture causal relationships between policy variables and economic outcomes.[17] In contrast, financial forecasting targets asset-specific variables, including stock returns, bond yields, exchange rates, and volatility measures, which reflect market expectations and pricing dynamics.[3][18] A core distinction lies in scope and objectives: economic forecasts address broad cyclical trends and structural shifts for uses like monetary policy or fiscal planning, where accuracy supports decisions with asymmetric costs (e.g., overpredicting inflation may delay rate hikes).[17] Financial forecasts, however, prioritize investment allocation and risk management, often optimizing under well-defined loss functions tied to investor utility, such as mean-variance portfolio criteria.[3] This leads to financial methods emphasizing high-frequency, high-dimensional data from markets—like intraday prices or realized variances—enabling techniques such as GARCH models for volatility or machine learning ensembles to navigate low signal-to-noise environments.[3] Economic approaches, by comparison, grapple with data revisions and model instability from policy regime changes, favoring out-of-sample validation over real-time trading profitability tests.[17] Methodological differences further highlight divergence: economic forecasting integrates judgmental adjustments and hybrid models to account for rare events or breaks, given the infrequency of observations.[17] Financial forecasting contends with efficient market hypotheses, where predictability is fleeting due to arbitrage, 
prompting reliance on reduced-form regressions or predictive regressions evaluated via economic value metrics like Sharpe ratios rather than pure statistical fit.[3][18] Time horizons also vary, with economic projections spanning years to capture business cycles, while financial ones often focus on shorter windows to exploit transient mispricings in assets like currencies or equities.[19] Despite overlaps in time-series tools, financial forecasts incorporate forward-looking asset prices as leading indicators for economic variables, underscoring their role in bridging the two domains.[20]
Historical Evolution
Precursors and Early 20th-Century Foundations
The systematic study of economic fluctuations emerged in the 19th century as precursors to modern forecasting, driven by observations of recurrent business cycles. French economist Clément Juglar identified commercial crises occurring roughly every 7 to 11 years in his 1860 work Des Crises Commerciales et de Leur Retour Périodique en France, en Angleterre et aux États-Unis, attributing them to credit expansions and contractions rather than external shocks like harvests. Similarly, British economist William Stanley Jevons proposed in 1875 that sunspot cycles influenced agricultural output and thus economic activity, linking solar phenomena to harvest variations and price fluctuations through empirical correlations of sunspot data with corn prices. These early efforts emphasized periodicity but lacked predictive models, relying instead on historical pattern recognition amid industrialization's volatility. In the early 20th century, American economists and statisticians shifted toward quantitative indicators for anticipating cycles, spurred by growing stock markets and corporate needs for investment guidance. Yale economist Irving Fisher advanced foundational tools in 1910 by publishing index charts tracking money supply, prices, and trade balances to signal future economic turns, building on his quantity theory of money to quantify velocity and predict inflation pressures.[21] Roger Babson established one of the first forecasting services in 1904, aggregating disparate data such as bank clearings, immigration figures, commodity prices, and railroad freight ton-miles into composite indices to gauge business conditions, emphasizing leading indicators like stock prices for downturn warnings.[22] A pivotal development occurred in 1919 when Warren M. 
Persons at Harvard University introduced the "Harvard A-B-C Barometer," a precursor to modern leading indicators, categorizing series as A (speculative, e.g., stock prices, leading by months), B (contemporaneous, e.g., bank clearings), and C (lagging, e.g., business failures).[23] This framework, disseminated via the Harvard Economic Service starting in 1921, enabled probabilistic forecasts by diverging trends among groups, such as A declining before B and C, and influenced investment decisions despite mixed accuracy during the 1920s boom.[24] Concurrently, the National Bureau of Economic Research (NBER), founded in 1920, institutionalized empirical analysis under Wesley Clair Mitchell, who prioritized exhaustive data compilation over short-term predictions to delineate cycle phases. Mitchell's Business Cycles: The Problem and Its Setting (1927) cataloged annals and statistics from 1854 onward, identifying expansions and contractions via reference turning points, while his later Measuring Business Cycles (1946, with Arthur F. Burns) refined amplitude, duration, and diffusion metrics using hundreds of series, establishing benchmarks for cycle dating that underpin contemporary forecasting despite initial aversion to numerical prognostication.[25][26] These foundations transitioned economic forecasting from anecdotal cycles to data-driven anticipation, though early practitioners like Fisher and Babson faced skepticism for overreliance on mechanical indices amid the era's speculative excesses.[27]
Post-World War II Formalization
The formalization of economic forecasting after World War II was driven by advances in econometric theory and the availability of systematic national income data, enabling the construction of large-scale structural models for policy analysis and prediction. Trygve Haavelmo's 1944 paper introduced a probabilistic framework for econometrics, emphasizing that economic relationships should be treated as stochastic processes amenable to statistical estimation, which shifted the field from deterministic correlations to inference under uncertainty.[28] This approach, formalized during the war, laid the groundwork for postwar model-building by addressing identification and simultaneity issues in interdependent economic variables.[29] The Cowles Commission for Research in Economics, under Jacob Marschak's direction from 1943, became a central hub for these developments, producing seminal monographs on simultaneous equation estimation methods like limited information maximum likelihood (LIML) and two-stage least squares (2SLS) by the early 1950s.[30] Researchers including Lawrence Klein, Tjalling Koopmans, and Carl Christ advanced practical applications, with Klein constructing his first macroeconometric model of the U.S. 
economy in 1945 to assess postwar transition risks, estimating 16 equations using annual data from 1929 onward.[31] Klein's 1950 publication of Model I and subsequent 1955 book, An Econometric Model of the United States, 1929-1952, demonstrated forecasting capabilities for GDP, consumption, and investment, influencing institutional adoption despite initial overestimation of postwar recession probabilities.[32] In Europe, Jan Tinbergen extended prewar modeling to postwar planning, directing the Netherlands' Central Planning Bureau from 1945 and applying dynamic econometric systems to forecast growth and stabilize employment under reconstruction policies.[33] These efforts aligned with Keynesian frameworks dominant in the immediate postwar era, where governments institutionalized forecasting: the U.S. Employment Act of 1946 established the Council of Economic Advisers, which integrated early Klein-Goldberger models for annual projections, while Scandinavian agencies like Sweden's Economic Planning Commission produced regular official forecasts from the late 1940s.[34] By the mid-1950s, such models typically featured 20-50 equations capturing aggregate demand, supply, and monetary linkages, though they often underperformed in anticipating supply shocks due to rigid structural assumptions.[29] This era marked a convergence of theoretical rigor and empirical application, with the Econometric Society's promotion of standardized estimation techniques fostering replicability, yet critiques emerged over parameter instability and omitted variables, as evidenced by Klein's models revising postwar U.S. growth estimates from 4-5% to actual 2-3% averages in the 1950s.[35] Institutionalization extended to the UK Treasury's quarterly forecasting from 1953, relying on hybrid Keynesian-econometric setups, solidifying forecasting as a tool for fiscal and monetary stabilization amid Bretton Woods stability.[34]
Late 20th-Century Shifts and Critiques
In the 1970s, economic forecasting faced profound challenges from stagflation, where persistent high inflation coexisted with rising unemployment, contradicting the expectations-augmented Phillips curve embedded in prevailing large-scale Keynesian models. These models, such as those used by the Federal Reserve and academic forecasters, generated systematic errors by underpredicting inflation surges following the 1973 oil shock and overestimating output stability, with mean absolute percentage errors for U.S. GDP forecasts exceeding 2% in several years.[36][37] Empirical evaluations revealed that naive extrapolative benchmarks often outperformed structural models during this volatile period, highlighting the instability of historical relationships under supply shocks and policy regime changes.[38] A pivotal shift occurred with Robert Lucas's 1976 critique, which argued that econometric models calibrated on past data are unreliable for evaluating policy changes because they neglect agents' forward-looking rational expectations and resulting behavioral adjustments. Lucas demonstrated using examples from business cycle models that altering policy rules, such as monetary targets, would alter decision rules in ways not captured by reduced-form equations, rendering traditional simulations invalid for counterfactual analysis. This "Lucas critique" spurred the rational expectations revolution, integrating optimizing microfoundations into macroeconomic models and diminishing reliance on adaptive expectations, though it initially complicated short-term forecasting by emphasizing equilibrium dynamics over disequilibrium paths.[39] Further methodological evolution followed Christopher Sims's 1980 paper "Macroeconomics and Reality," which condemned the "incredible identifying restrictions" in structural econometric models—arbitrary zero constraints on coefficients that lacked empirical justification and masked model fragility. 
Sims advocated vector autoregression (VAR) models, treating variables as jointly endogenous without imposing a priori theory-driven exclusions, enabling data-driven impulse response analysis for forecasting and policy identification. VAR approaches gained traction in the 1980s for their flexibility in capturing dynamic interdependencies, improving out-of-sample predictions in stable environments compared to overparameterized Keynesian systems, though they faced criticism for lacking interpretability absent structural assumptions.[40][41] Concurrently, real business cycle (RBC) models, pioneered by Finn Kydland and Edward Prescott in 1982, shifted emphasis from nominal shocks to real technology disturbances as primary cycle drivers, calibrated to match U.S. data moments like volatility and persistence. These dynamic stochastic general equilibrium frameworks enhanced long-run forecasting by prioritizing supply-side fundamentals but underperformed in predicting short-term fluctuations during demand-driven episodes, prompting critiques of their neglect of nominal rigidities and financial frictions. Overall, late-20th-century critiques underscored forecasting's vulnerability to structural breaks, fostering hybrid approaches while revealing persistent accuracy gaps, with professional forecasters' errors for inflation remaining above 1% RMSE in the 1980s.[42][37]
Methodological Approaches
Traditional Econometric and Time-Series Models
Traditional econometric models, rooted in economic theory, construct systems of equations representing causal relationships between variables, such as consumption functions or investment equations derived from frameworks like Keynesian macroeconomics. These structural models are estimated using techniques like ordinary least squares (OLS) or generalized method of moments (GMM) on historical data to forecast aggregates like GDP or inflation, often incorporating policy simulations to assess exogenous shocks. For instance, large-scale models such as the Federal Reserve's FRB/US incorporate hundreds of behavioral equations to project U.S. economic paths under baseline assumptions.[43][44] In contrast, pure time-series models emphasize statistical extrapolation of patterns without explicit theoretical priors, prioritizing univariate or multivariate autoregressive structures. The autoregressive integrated moving average (ARIMA) model, developed by Box and Jenkins in 1970, achieves stationarity through differencing non-stationary series and combines autoregressive (AR) terms—capturing dependence on lagged values—with moving average (MA) components to model residuals, enabling short-term forecasts of variables like unemployment rates. Empirical applications, such as forecasting U.S. unemployment using ARIMA(p,d,q) specifications, demonstrate its utility for trend and cycle decomposition, though performance degrades beyond one-year horizons due to unmodeled structural breaks.[1][45] Vector autoregression (VAR) models extend this to multivariate settings, treating endogenous variables symmetrically to trace impulse responses and forecast jointly, as in Sims' 1980 critique of overidentified structural systems. 
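The Box-Jenkins workflow just described can be illustrated with a pared-down sketch using only NumPy: difference the series once, fit an AR(1) to the differences by ordinary least squares, and re-integrate the forecasts. The function name, the synthetic series, and the restriction to ARIMA(1,1,0) are all illustrative assumptions, not a calibrated model; real applications select the (p, d, q) order via ACF/PACF diagnostics or information criteria.

```python
import numpy as np

def arima_110_forecast(y, steps):
    """Illustrative ARIMA(1,1,0) sketch: difference once (d=1), fit an
    AR(1) on the differences by OLS, then forecast and re-integrate
    back to levels. Not a full Box-Jenkins order search."""
    dy = np.diff(y)                      # d=1: first-difference to stationarity
    x, z = dy[:-1], dy[1:]
    # OLS estimates for z_t = c + phi * z_{t-1} + e_t
    phi = np.cov(x, z, bias=True)[0, 1] / np.var(x)
    c = z.mean() - phi * x.mean()
    level, diff = y[-1], dy[-1]
    path = []
    for _ in range(steps):
        diff = c + phi * diff            # AR(1) step on the differences
        level = level + diff             # integrate back to the level
        path.append(level)
    return np.array(path)

rng = np.random.default_rng(0)
# Synthetic stand-in for a quarterly macro aggregate: random walk with drift.
y = np.cumsum(0.3 + rng.normal(scale=0.5, size=120))
print(arima_110_forecast(y, steps=4))    # four point forecasts, e.g. one year ahead
```

In practice a library such as statsmodels would handle estimation, order selection, and the MA terms; the point here is only the difference-model-integrate structure of the method.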
Traditional VAR implementations, estimated via OLS on lag polynomials, have been applied to predict recessions by extrapolating yield spreads or industrial production indices, often outperforming univariate ARIMA for correlated series like GDP and GNP in emerging economies. However, both econometric and time-series approaches yield comparable one- to two-year accuracy in periods of structural stability, with time-series methods excelling in data-rich environments absent strong causal theory.[46][47]
Judgmental and Hybrid Techniques
Judgmental forecasting techniques rely on the expertise, intuition, and qualitative assessments of individuals or groups to predict economic variables, particularly when historical data is limited, unreliable, or subject to structural breaks that econometric models may overlook.[48] These methods incorporate forecasters' knowledge of special events, market nuances, and causal factors not easily captured quantitatively, such as policy shifts or geopolitical risks.[49] Empirical studies indicate that unaided expert judgments often underperform pure statistical models for stable time-series data but can enhance accuracy by adjusting for anomalies or low-data scenarios.[50] Prominent judgmental approaches include the Delphi method, which involves iterative rounds of anonymous questionnaires among a panel of experts to converge on consensus forecasts while minimizing groupthink and dominance by influential participants.[51] Originating from RAND Corporation research in the 1950s for technological impact assessment, it has been applied to economic projections like inflation trends and GDP growth, with evidence showing improved forecast reliability through feedback loops that refine initial estimates.[52] Scenario planning represents another key technique, wherein forecasters construct multiple plausible future narratives based on critical uncertainties—such as varying interest rate paths or trade policy outcomes—to stress-test economic trajectories rather than pinpointing a single point estimate.[53] This method gained traction post-1970s oil crises and has been used by institutions like central banks to evaluate resilience against recessions, though its qualitative nature demands rigorous assumption vetting to avoid over-speculation.[54] Hybrid techniques integrate judgmental inputs with quantitative models, such as overlaying expert adjustments onto econometric or time-series outputs to address model shortcomings like omitted variables or nonlinear dynamics.[55] 
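One simple way to express such a judgmental overlay is to average several experts' adjustments (a crude stand-in for the debiasing-by-aggregation idea above) and apply a weighted share of that consensus to the model baseline. The function name, weights, and all numbers below are hypothetical.

```python
import statistics

def hybrid_forecast(model_baseline, expert_adjustments, weight=0.5):
    """Blend a quantitative baseline with a judgmental overlay.
    The overlay is the mean of several experts' adjustments
    (aggregating diverse views as a rough debiasing step), and
    `weight` controls how much of it is applied. Purely illustrative."""
    overlay = statistics.mean(expert_adjustments)
    return model_baseline + weight * overlay

# Hypothetical case: a model baseline of 2.4% GDP growth, with three
# experts marking it down for an anticipated policy shock the model
# cannot see.
adjusted = hybrid_forecast(2.4, [-0.5, -0.3, -0.4], weight=0.5)
print(round(adjusted, 2))  # 2.4 + 0.5 * (-0.4) = 2.2
```

Averaging the adjustments before applying them reflects the aggregation protocols mentioned above: it dampens any single expert's overconfidence while retaining the direction of the shared judgment.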
Private-sector forecasters predominantly employ hybrids, blending baseline model predictions (e.g., ARIMA) with qualitative overrides informed by real-time indicators, which empirical comparisons show outperform pure model-based approaches during volatile periods like the 2008 financial crisis.[56] For instance, judgmental corrections to vector autoregression models have reduced mean absolute errors in GDP forecasts by 10-20% in select studies, particularly when experts incorporate forward-looking data like surveys of business sentiment.[57] However, hybrids risk introducing systematic biases if judgments stem from overconfident or correlated expert views, underscoring the need for debiasing protocols like aggregating diverse opinions.[50] Overall, while econometric models excel in data-rich, stationary environments, hybrids leverage judgment's strength in capturing causal disruptions, with meta-analyses affirming their edge in long-horizon macroeconomic predictions amid uncertainty.[58]
Emerging Computational Methods
Machine learning techniques have gained prominence in economic forecasting by leveraging large datasets and capturing nonlinear relationships that traditional linear models often overlook. Algorithms such as random forests, support vector machines, and ridge regression process high-dimensional data, including alternative sources like satellite imagery and text from news, to generate nowcasts and short-term predictions. For instance, in forecasting UK GDP growth, machine learning methods like ridge and support vector regression demonstrated gains over benchmarks, particularly at shorter horizons, by incorporating multiple large-scale predictors.[59] These approaches excel in handling big data volumes, with bibliometric reviews indicating a surge in applications since 2020, driven by improved computational efficiency and access to unstructured data.[60] Deep learning models, including recurrent neural networks (RNNs) and long short-term memory (LSTM) networks, address sequential dependencies in economic time series, offering advantages in volatile environments. LSTM variants with attention mechanisms have been applied to GDP forecasting, adjusting for economic cycles and achieving lower errors than baseline autoregressive models during crises like COVID-19, with reported inaccuracies under 2% in per capita GDP predictions.[61] [62] Ensemble neural network approaches further enhance accuracy; for example, combining dynamic factor models with RNNs reduced one-quarter-ahead GDP forecast errors in multi-country settings.[63] Recent studies on Chinese macroeconomic variables using machine learning reported superior performance in capturing nonlinearities under uncertainty, outperforming econometric benchmarks by integrating features from big data sources.[64] Hybrid methods blending machine learning with causal inference and traditional econometrics address interpretability concerns, enabling counterfactual analysis while maintaining predictive power. 
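The ridge-regression nowcasting idea mentioned above can be sketched in a few lines: shrinkage lets many possibly collinear indicators enter the regression without overfitting. The data below are synthetic, standing in for a high-dimensional indicator panel, and the helper name and penalty value are illustrative assumptions.

```python
import numpy as np

def ridge_fit(X, y, lam=1.0):
    """Closed-form ridge regression: beta = (X'X + lam*I)^(-1) X'y.
    The penalty lam shrinks coefficients toward zero; in practice it
    is chosen by cross-validation."""
    n_features = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

rng = np.random.default_rng(1)
# Synthetic stand-in for a nowcasting panel: 80 quarters, 30 indicators,
# only the first three of which actually drive the target.
X = rng.normal(size=(80, 30))
true_beta = np.zeros(30)
true_beta[:3] = [0.8, -0.5, 0.3]
y = X @ true_beta + rng.normal(scale=0.2, size=80)

beta = ridge_fit(X[:60], y[:60], lam=5.0)    # train on the first 60 quarters
nowcast = X[60:] @ beta                      # pseudo out-of-sample "nowcasts"
rmse = np.sqrt(np.mean((nowcast - y[60:]) ** 2))
print(rmse)                                  # small relative to the spread of y
```

The out-of-sample split mirrors the cross-validation discipline the literature cautions is needed before claiming gains over econometric benchmarks.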
Bi-LSTM models fused with big data feature extraction have predicted economic cycles with high precision, as demonstrated in 2025 analyses incorporating financial frictions.[65] However, empirical evaluations reveal limitations: while machine learning reduces root mean squared errors by 15-19% in some GDP growth forecasts compared to linear counterparts, gains diminish at longer horizons due to structural breaks and overfitting risks.[66] Real-time AI applications, such as generative adversarial networks for Gambian GDP prediction, highlight potential for emerging economies but underscore data quality dependencies, with neural networks outperforming trees in growth trends yet requiring validation against ground-truth metrics.[67] Overall, these methods' efficacy stems from empirical outperformance in data-rich scenarios, though systematic reviews caution against overreliance without rigorous cross-validation.[68]
Institutional Sources
Government and Central Bank Forecasts
Government agencies responsible for fiscal policy and budgeting routinely produce economic forecasts to inform legislative decisions, revenue projections, and expenditure planning. In the United States, the Congressional Budget Office (CBO), a nonpartisan entity established in 1974, generates baseline economic projections as part of its annual Budget and Economic Outlook reports. These include estimates for real GDP growth, inflation (measured by the PCE price index), unemployment rates, interest rates, and fiscal aggregates such as deficits and public debt. For example, CBO's January 2025 report projected real GDP growth averaging 1.8% annually from 2025 to 2035, with the federal budget deficit reaching $1.9 trillion in fiscal year 2025 and federal debt held by the public climbing to 118% of GDP by 2035, driven by rising interest costs and mandatory spending.[69] CBO's methodology integrates macroeconomic models, demographic trends, and legislative assumptions, updating projections semi-annually or in response to new data; however, these forecasts have historically exhibited upward biases in revenue estimates, potentially understating long-term fiscal pressures due to optimistic growth assumptions amid structural challenges like aging populations.[70] Central banks issue economic projections primarily to support monetary policy formulation, communicate policy intentions, and manage inflation and output expectations. The U.S. Federal Reserve publishes the Summary of Economic Projections (SEP) quarterly in conjunction with Federal Open Market Committee (FOMC) meetings, aggregating anonymous forecasts from participants on key variables including real GDP growth, the unemployment rate, core PCE inflation, and the federal funds rate target. 
The June 18, 2025 SEP, for instance, median-projected 2025 real GDP growth at approximately 1.4-2.1% (revised downward from prior estimates), unemployment at 4.2%, and core PCE inflation at 2.6%, with longer-run projections converging to a neutral federal funds rate of around 2.5-3.0%.[71] These projections draw from a suite of econometric models, including dynamic stochastic general equilibrium (DSGE) frameworks and time-series analyses, supplemented by staff judgment to incorporate qualitative factors like geopolitical risks; European Central Bank staff macroeconomic projections follow analogous processes, releasing biannual outlooks using models calibrated to euro area data.[72] While these institutional forecasts enhance policy transparency and market signaling, empirical evaluations reveal limitations in predictive power, particularly for turning points and during crises. A 2015 Federal Reserve Board analysis of FOMC forecasts from 1979-2014 found reasonable accuracy for aggregate GDP growth but larger errors in disaggregated components like residential investment and imports, with root-mean-square errors often exceeding one percentage point for quarterly horizons.[73] Studies of central bank projections during the 2008 global financial crisis documented systematic underestimation of downturn severity, attributing errors to model instabilities and unforeseen shocks rather than inherent bias, though repeated misses can erode credibility if not accompanied by adaptive revisions.[74] Government forecasts similarly face critiques for procyclical tendencies, where assumptions align with prevailing policy narratives, yet they remain benchmarks for accountability, outperforming naive extrapolations in stable periods per historical comparisons.[75]
Private and Academic Providers
Private sector economic forecasting is dominated by specialized consulting firms, investment banks, and research organizations that deliver proprietary macroeconomic projections, often tailored to client needs in finance, business strategy, and risk management. Oxford Economics, founded in 1981, provides global coverage of over 200 countries with quarterly macroeconomic forecasts incorporating scenario analysis and bespoke modeling for corporate and institutional subscribers.[76] S&P Global's Market Intelligence division offers U.S. national, state, and metro-level forecasts updated quarterly, alongside tools for stress testing and industry performance measurement, drawing on integrated datasets for predictive analytics.[77] The Economist Intelligence Unit (EIU), part of The Economist Group, produces detailed country and sector forecasts; in 2024, it secured 41 first-place accuracy rankings across various metrics, outperforming peers in GDP and inflation predictions.[78] Other prominent providers include ITR Economics, which emphasizes practical business intelligence through proprietary cycles-based forecasting, and Beacon Economics, issuing quarterly outlooks on employment and regional growth.[79][80] These entities typically blend econometric models with qualitative judgments, charging fees for access while competing on empirical track records validated by third-party evaluations like Bloomberg rankings. Non-profit organizations like The Conference Board also contribute private-sector style forecasts, such as its monthly Consumer Confidence Index and Leading Economic Index, which aggregate business cycle indicators to signal U.S. expansions or contractions; as of October 2025, its Expectations Index has highlighted recession risks amid softening jobs data.[81] Deloitte, through its Insights practice, releases quarterly U.S. 
economic outlooks projecting variables like employment growth, with September 2025 forecasts anticipating moderated private-sector hiring into 2026.[82] Private forecasters often surpass public benchmarks in flexibility, updating models in real-time response to data releases, though their commercial incentives may prioritize short-term market signals over long-horizon structural analysis. Academic providers, housed within universities and research centers, focus on econometric modeling, regional analyses, and public dissemination of forecasts to advance scholarly understanding and inform policy without direct commercial mandates. The UCLA Anderson School of Management's Forecast program, established in 1946, generates semiannual projections for California and national economies, emphasizing sector-specific drivers like technology and housing over seven decades of operation.[83] Chapman's Gary Anderson Center for Economic Research has produced Southern California forecasts since 1977, claiming superior accuracy in regional GDP and employment predictions based on historical validations against actual outcomes.[84] The University of Central Florida's Institute for Economic Forecasting delivers national, state, and metro-level estimates, integrating time-series models with local data for timely analyses updated multiple times annually.[85] Other university centers include Florida State University's Center for Economic Forecasting and Analysis, which conducts Florida-specific projections using vector autoregression techniques, and Georgia State University's Economic Forecasting Center, providing metro Atlanta outlooks on trade, prices, and interest rates through integrated econometric frameworks.[86][87] The University at Albany offers specialized training and forecasts via its certificate program, emphasizing survey-based and econometric methods for macroeconomic variables.[88] Academic efforts prioritize transparency in methodologies, often publishing model 
specifications and error metrics, but face constraints from grant funding and data access compared to private counterparts; nonetheless, they contribute to baseline benchmarks like the Philadelphia Fed's Survey of Professional Forecasters, which polls academic and private economists quarterly for consensus GDP and inflation views.[89]
International Organizations and Consensus Aggregates
The International Monetary Fund (IMF) produces global economic forecasts through its World Economic Outlook (WEO), published biannually in April and October, with updates in January and July; for instance, the July 2025 update projected global growth at 3.0% for 2025 and 3.1% for 2026.[90] These forecasts cover GDP growth, inflation, and fiscal indicators for 190 countries, relying on a combination of econometric models, scenario analysis, and staff consultations with national authorities to inform policy advice and surveillance.[91] Comparative evaluations indicate that IMF forecasts for G7 countries have historically underperformed those of the OECD in accuracy, with systematic biases observed in emerging markets due to data revisions and external shocks.[92] The World Bank's Global Economic Prospects (GEP) report, released in January and June each year, provides three-year-ahead forecasts emphasizing developing economies, which account for over 60% of global growth; the June 2025 edition forecasted steady 2.3% growth for Latin America and the Caribbean in 2025, rising to 2.5% in 2026-2027.[93][94] These projections incorporate regional commodity price assumptions and vulnerability assessments, but empirical analysis from 1999-2019 across 130 countries reveals average same-year forecast errors of 1.3 percentage points globally between 2010 and 2020, often optimistic in low-income settings due to limited real-time data availability.[95][92] The Organisation for Economic Co-operation and Development (OECD) issues its Economic Outlook twice yearly, with interim updates, projecting variables like GDP and inflation for member states, the euro area, and aggregates; the September 2025 interim report revised global GDP growth to 3.2% for 2025 and 2.9% for 2026, citing policy uncertainty and tariffs as downward risks.[96][97] OECD forecasts emphasize structural reforms and employ multi-country models calibrated to historical data, showing superior short-term accuracy for 
advanced economies compared to IMF counterparts in G7 GDP predictions.[92][98] Consensus aggregates compile predictions from diverse forecasters to mitigate individual errors, with Consensus Economics surveying over 250 economists monthly since 1989 for G7 and Western Europe, yielding mean, high, and low estimates for GDP, inflation, and interest rates.[99][100] These aggregates, disseminated via publications like Consensus Forecasts, aim to reflect market-implied probabilities but exhibit inefficiencies in information aggregation, as forecasters underweight peers' data, leading to persistent biases during cycles like the 2008 crisis.[101][102] Providers like FocusEconomics similarly aggregate hundreds of sources for broader coverage, including emerging markets, though studies from 1996-2006 highlight variable bias reduction, with consensus outperforming individuals in stable periods but converging to herd errors amid uncertainty.[103][102] Such mechanisms serve as benchmarks for central banks and investors, prioritizing breadth over proprietary models.
Empirical Performance and Evaluation
Metrics of Accuracy and Benchmarking
Accuracy in economic forecasting is typically assessed using error metrics that quantify the deviation between predicted and realized values, often applied to variables such as GDP growth, inflation rates, or unemployment. Scale-dependent measures include the mean absolute error (MAE), which calculates the average magnitude of errors without considering direction, and the root mean squared error (RMSE), which squares errors before averaging and taking the square root, thereby penalizing larger deviations more heavily.[104][105] These metrics are scale-dependent, meaning their values vary with the units of the forecasted variable, necessitating comparisons within similar contexts like quarterly macroeconomic aggregates.[104] Percentage-based metrics address scaling issues by expressing errors relative to actual outcomes. The mean absolute percentage error (MAPE) computes the average absolute error as a percentage of the actual value, offering interpretability for variables with stable magnitudes but prone to instability when actuals approach zero, as seen in low-inflation or recessionary periods.[104][106] Alternatives like the mean absolute scaled error (MASE) normalize errors against the MAE of a naive in-sample forecast, providing a scale-independent benchmark suitable for intermittent or volatile economic series.[107] Relative metrics facilitate benchmarking against simple baselines. 
Theil's U statistic compares the RMSE of a forecast to that of a naive no-change model (assuming the future equals the most recent observation), yielding a value less than 1 for superior performance, equal to 1 for equivalence, and greater than 1 for inferiority; it decomposes errors into bias, variance, and covariance components to diagnose sources of inaccuracy.[108][109] In macroeconomic evaluations, such as those by central banks, forecasts are routinely benchmarked against random walk or seasonal naive models, where complex econometric models often fail to consistently outperform these baselines, particularly over short horizons.[110][111]

| Metric | Formula | Interpretation | Common Use in Economics |
|---|---|---|---|
| MAE | $\frac{1}{n}\sum_{t=1}^{n}\lvert f_t - a_t\rvert$ | Average error magnitude; scale-dependent. | Comparing forecasts of the same variable across models.[104] |
| RMSE | $\sqrt{\frac{1}{n}\sum_{t=1}^{n}(f_t - a_t)^2}$ | Emphasizes large errors; scale-dependent. | Penalizing forecast misses in recessions.[105] |
| MAPE | $\frac{100}{n}\sum_{t=1}^{n}\frac{\lvert f_t - a_t\rvert}{\lvert a_t\rvert}$ | Percentage error; unstable when actuals approach zero. | Variables with stable, nonzero magnitudes.[106] |
| Theil's U | $\sqrt{\sum_t (f_t - a_t)^2 \, / \, \sum_t (a_t - a_{t-1})^2}$ | Relative to naive no-change; U<1 indicates improvement. | Benchmarking vs. no-change for policy variables.[108] |
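The metrics defined above translate directly into code. The sketch below implements MAE, RMSE, MAPE, MASE, and Theil's U with NumPy; the growth figures are hypothetical, chosen only to exercise the formulas, and the function names are illustrative.

```python
import numpy as np

def mae(f, a):   return np.mean(np.abs(f - a))
def rmse(f, a):  return np.sqrt(np.mean((f - a) ** 2))
def mape(f, a):  return 100 * np.mean(np.abs((f - a) / a))   # unstable when a is near zero

def mase(f, a, insample):
    # Scale errors by the in-sample MAE of a one-step naive forecast,
    # giving a scale-independent benchmark.
    return mae(f, a) / np.mean(np.abs(np.diff(insample)))

def theil_u(f, a, last_obs):
    # Naive no-change forecast: repeat the most recent observation.
    naive = np.full_like(a, last_obs, dtype=float)
    return rmse(f, a) / rmse(naive, a)

# Hypothetical annual GDP growth rates (percent): actuals vs. forecasts,
# including one badly missed recession year.
actual   = np.array([2.5, 2.9, 2.3, -2.8, 5.9])
forecast = np.array([2.2, 2.6, 2.0,  1.5, 4.0])
history  = np.array([1.8, 2.5, 2.9, 2.3])        # in-sample data for MASE scaling

print(mae(forecast, actual))                      # 1.42
print(rmse(forecast, actual))                     # larger than MAE: recession miss dominates
print(theil_u(forecast, actual, last_obs=history[-1]))  # < 1: beats the no-change benchmark
```

Note how RMSE exceeds MAE on this data because the single large recession error is squared before averaging, which is exactly the penalization property the table describes.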
