Efficient-market hypothesis
from Wikipedia

Stock prices quickly incorporate information from earnings announcements, making it difficult to beat the market by trading on these events. A replication of Martineau (2022).

The efficient-market hypothesis (EMH)[a] is a hypothesis in financial economics that states that asset prices reflect all available information. A direct implication is that it is impossible to "beat the market" consistently on a risk-adjusted basis since market prices should only react to new information.

Because the EMH is formulated in terms of risk adjustment, it only makes testable predictions when coupled with a particular model of risk.[2] As a result, research in financial economics since at least the 1990s has focused on market anomalies, that is, deviations from specific models of risk.[3]

The idea that financial market returns are difficult to predict goes back to Bachelier,[4] Mandelbrot,[5] and Samuelson,[6] but is closely associated with Eugene Fama, in part due to his influential 1970 review of the theoretical and empirical research.[2] The EMH provides the basic logic for modern risk-based theories of asset prices, and frameworks such as consumption-based asset pricing and intermediary asset pricing can be thought of as the combination of a model of risk with the EMH.[7]

Theoretical background


Suppose that a piece of information about the value of a stock (say, about a future merger) is widely available to investors. If the price of the stock does not already reflect that information, then investors can trade on it, thereby moving the price until the information is no longer useful for trading.

Note that this thought experiment does not necessarily imply that stock prices are unpredictable. For example, suppose that the piece of information in question says that a financial crisis is likely to come soon. Investors typically do not like to hold stocks during a financial crisis, and thus investors may sell stocks until the price drops enough so that the expected return compensates for this risk.

How efficient markets are (and are not) linked to the random walk theory can be described through the fundamental theorem of asset pricing. This theorem provides mathematical predictions regarding the price of a stock, assuming that there is no arbitrage, that is, assuming that there is no risk-free way to trade profitably. Formally, if arbitrage is impossible, then the theorem predicts that the price of a stock is the discounted value of its future price and dividend:

P_t = E_t[M_{t+1}(P_{t+1} + D_{t+1})]

where E_t is the expected value given information at time t, M_{t+1} is the stochastic discount factor, and D_{t+1} is the dividend the stock pays next period.

Note that this equation does not generally imply a random walk. However, if we assume the stochastic discount factor is constant and the time interval is short enough so that no dividend is being paid, we have

P_t = M E_t[P_{t+1}].

Taking logs and assuming that the Jensen's inequality term is negligible, we have

log P_t = log M + E_t[log P_{t+1}]

which implies that the log of stock prices follows a random walk (with a drift).

Although the concept of an efficient market is similar to the assumption that stock prices follow

E_t[P_{t+1}] = P_t,

which is a martingale property, the EMH does not always assume that stocks follow a martingale.
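The random-walk-with-drift implication can be checked numerically. Below is a minimal sketch (the drift and volatility values are hypothetical) that simulates log prices under a constant discount factor and verifies that log returns show essentially no serial correlation:

```python
import random
import statistics

random.seed(0)
drift, sigma, n = 0.0003, 0.01, 5000   # hypothetical drift and volatility

# Log prices: random walk with drift, log P_{t+1} = log P_t + drift + noise.
log_p = [0.0]
for _ in range(n):
    log_p.append(log_p[-1] + drift + random.gauss(0.0, sigma))

# Under the random walk, log returns should be serially uncorrelated.
rets = [b - a for a, b in zip(log_p, log_p[1:])]
mean = statistics.fmean(rets)
num = sum((rets[i] - mean) * (rets[i + 1] - mean) for i in range(n - 1))
den = sum((r - mean) ** 2 for r in rets)
rho1 = num / den   # lag-1 autocorrelation, close to zero
```

With 5,000 steps the lag-1 autocorrelation comes out near zero, consistent with the unpredictability the derivation implies.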

Empirical studies


Research by Alfred Cowles in the 1930s and 1940s suggested that professional investors were in general unable to outperform the market. During the 1930s–1950s, empirical studies focused on time-series properties and found that US stock prices and related financial series followed a random walk model in the short term.[8] While there is some predictability over the long term, the extent to which this is due to rational time-varying risk premia as opposed to behavioral reasons is a subject of debate. In their seminal paper, Fama, Fisher, Jensen, and Roll (1969)[9] propose the event study methodology and show that stock prices on average react before a stock split, but show no abnormal movement afterwards.
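The event-study logic can be sketched with simulated data; every number below (event count, pre-event "leak" size, noise level) is invented purely to reproduce the qualitative pattern of pre-event drift and no post-event movement:

```python
import random

random.seed(1)
WINDOW = 10       # days before/after the event
N_EVENTS = 100    # hypothetical number of stock splits

# Simulated abnormal returns: information leaks in before day 0,
# with no systematic movement afterwards.
events = []
for _ in range(N_EVENTS):
    path = [(0.003 if day < 0 else 0.0) + random.gauss(0.0, 0.01)
            for day in range(-WINDOW, WINDOW + 1)]
    events.append(path)

days = range(-WINDOW, WINDOW + 1)
avg = [sum(e[i] for e in events) / N_EVENTS for i in range(len(events[0]))]

pre = sum(a for d, a in zip(days, avg) if d < 0)    # cumulative pre-event drift
post = sum(a for d, a in zip(days, avg) if d > 0)   # roughly zero afterwards
```

Averaging abnormal returns across events cancels the idiosyncratic noise and exposes the pre-event drift, which is the core of the event-study design.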

Weak, semi-strong, and strong-form tests


In Fama's influential 1970 review paper, he categorized empirical tests of efficiency into "weak-form", "semi-strong-form", and "strong-form" tests.[2]

These categories of tests refer to the information set used in the statement "prices reflect all available information." Weak-form tests study the information contained in historical prices. Semi-strong form tests study information (beyond historical prices) which is publicly available. Strong-form tests regard private information.[2]

Historical background


Benoit Mandelbrot claimed the efficient markets theory was first proposed by the French mathematician Louis Bachelier in 1900 in his PhD thesis "The Theory of Speculation", which described how prices of commodities and stocks varied in markets.[10] It has been speculated that Bachelier drew ideas from the random walk model of Jules Regnault, but Bachelier did not cite him,[11] and Bachelier's thesis is now considered pioneering in the field of financial mathematics.[12][11] It is commonly thought that Bachelier's work gained little attention and was forgotten for decades until it was rediscovered in the 1950s by Leonard Savage, and then became more popular after Bachelier's thesis was translated into English in 1964. But the work was never forgotten in the mathematical community: Bachelier published a book in 1912 detailing his ideas,[11] which was cited by mathematicians including Joseph L. Doob, William Feller[11] and Andrey Kolmogorov.[13] The book continued to be cited, but starting in the 1960s, when economists began citing Bachelier's work, his original thesis came to be cited more often than the book.[11]

The concept of market efficiency had been anticipated at the beginning of the century in the dissertation submitted by Bachelier (1900) to the Sorbonne for his PhD in mathematics. In his opening paragraph, Bachelier recognizes that "past, present and even discounted future events are reflected in market price, but often show no apparent relation to price changes".[14]

The efficient markets theory was not popular until the 1960s when the advent of computers made it possible to compare calculations and prices of hundreds of stocks more quickly and effortlessly. In 1945, F.A. Hayek argued in his article The Use of Knowledge in Society that markets were the most effective way of aggregating the pieces of information dispersed among individuals within a society. Given the ability to profit from private information, self-interested traders are motivated to acquire and act on their private information. In doing so, traders contribute to more and more efficient market prices. In the competitive limit, market prices reflect all available information and prices can only move in response to news. Thus there is a very close link between EMH and the random walk hypothesis.[15]

Early theories posited that predicting stock prices is unfeasible, as they depend on fresh information or news rather than existing or historical prices. Therefore, stock prices are thought to fluctuate randomly, and their predictability is believed to be no better than a 50% accuracy rate.[16]

The efficient-market hypothesis emerged as a prominent theory in the mid-1960s. Paul Samuelson had begun to circulate Bachelier's work among economists. In 1964 Bachelier's dissertation, along with the empirical studies mentioned above, was published in an anthology edited by Paul Cootner.[17] In 1965, Eugene Fama published his dissertation arguing for the random walk hypothesis.[18] Also in 1965, Samuelson published a proof showing that if the market is efficient, prices will exhibit random-walk behavior.[19] This is often cited in support of the efficient-market theory by the method of affirming the consequent;[20][21] however, in that same paper, Samuelson warns against such backward reasoning, saying "From a nonempirical base of axioms you never get empirical results."[22] In 1970, Fama published a review of both the theory and the evidence for the hypothesis. The paper extended and refined the theory and included definitions for three forms of financial market efficiency: weak, semi-strong and strong (see above).[23]

Criticism

Price-earnings ratios as a predictor of twenty-year returns, based upon the plot by Robert Shiller (Figure 10.1).[24] The horizontal axis shows the real price-earnings ratio of the S&P Composite Stock Price Index as computed in Irrational Exuberance (inflation-adjusted price divided by the prior ten-year mean of inflation-adjusted earnings). The vertical axis shows the geometric average real annual return on investing in the S&P Composite Stock Price Index, reinvesting dividends, and selling twenty years later. Data from different twenty-year periods are color-coded as shown in the key. Shiller states that this plot "confirms that long-term investors—investors who commit their money to an investment for ten full years—did do well when prices were low relative to earnings at the beginning of the ten years. Long-term investors would be well advised, individually, to lower their exposure to the stock market when it is high, as it has been recently, and get into the market when it is low."[24] Burton Malkiel, a well-known proponent of the general validity of EMH, stated that this correlation may be consistent with an efficient market due to differences in interest rates.[25]
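The vertical-axis quantity (geometric average real annual return over a multi-year holding period with dividends reinvested) is straightforward to compute; in this sketch the annual real returns are made up purely for illustration:

```python
# Hypothetical twenty-year path of annual real total returns
# (dividends reinvested); the values are invented for illustration.
annual_real_returns = [0.07, -0.02, 0.10, 0.05] * 5

# Compound wealth over the holding period, then annualize geometrically.
wealth = 1.0
for r in annual_real_returns:
    wealth *= 1 + r
geo_avg = wealth ** (1 / len(annual_real_returns)) - 1
```

The geometric average correctly accounts for compounding, unlike the arithmetic mean of the annual returns, which overstates realized growth when returns vary.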

Investors such as Warren Buffett[26] and George Soros,[27][28] as well as researchers, have disputed the efficient-market hypothesis both empirically and theoretically. Behavioral economists attribute the imperfections in financial markets to a combination of cognitive biases such as overconfidence, overreaction, representative bias, information bias, and various other predictable human errors in reasoning and information processing. These have been researched by psychologists such as Daniel Kahneman, Amos Tversky and Paul Slovic and economist Richard Thaler.

Empirical evidence has been mixed, but has generally not supported strong forms of the efficient-market hypothesis.[29][30][31] According to Dreman and Berry, in a 1995 paper, low P/E (price-to-earnings) stocks have greater returns.[32] In an earlier paper, Dreman also refuted the assertion by Ray Ball that these higher returns could be attributed to higher beta leading to a failure to correctly risk-adjust returns;[33] Dreman's research had been accepted by efficient market theorists as explaining the anomaly[34] in neat accordance with modern portfolio theory.

Behavioral psychology


Behavioral psychology approaches to stock market trading are among the alternatives to EMH (investment strategies such as momentum trading seek to exploit exactly such inefficiencies).[35] However, Daniel Kahneman, Nobel laureate and a co-founder of the behavioral economics programme, announced his skepticism of investors beating the market: "They're just not going to do it. It's just not going to happen."

Indeed, defenders of EMH maintain that behavioral finance strengthens the case for EMH in that it highlights biases in individuals and committees, not in competitive markets. For example, one prominent finding in behavioral finance is that individuals employ hyperbolic discounting; bonds, mortgages, annuities and other similar obligations subject to competitive market forces demonstrably do not. Any manifestation of hyperbolic discounting in the pricing of these obligations would invite arbitrage, thereby quickly eliminating any vestige of individual biases. Similarly, diversification, derivative securities and other hedging strategies assuage, if not eliminate, potential mispricings from the severe risk intolerance (loss aversion) of individuals underscored by behavioral finance. On the other hand, economists, behavioral psychologists and mutual fund managers are drawn from the human population and are therefore subject to the biases that behavioralists showcase; by contrast, the price signals in markets are far less subject to such individual biases. Richard Thaler has started a fund based on his research on cognitive biases. In a 2008 report he identified complexity and herd behavior as central to the 2008 financial crisis.[36]
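The contrast between individual hyperbolic discounting and the exponential discounting embedded in competitively priced bonds can be made concrete; the rates in this sketch are illustrative, not estimates:

```python
def exponential_discount(t, r=0.05):
    """Constant-rate discounting, as competitive bond pricing implies."""
    return 1.0 / (1.0 + r) ** t

def hyperbolic_discount(t, k=0.05):
    """Hyperbolic discounting, the bias behavioral studies find in individuals."""
    return 1.0 / (1.0 + k * t)

# With these parameters the two agree at t = 1 but diverge at long
# horizons: the hyperbolic discounter applies an ever-lower implied
# annual rate to distant payoffs, a pattern that arbitrage in
# competitively priced bonds and annuities would eliminate.
```

The divergence at long horizons is what produces the preference reversals documented in behavioral experiments, and why such pricing cannot survive in a competitive fixed-income market.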

Further empirical work has highlighted the impact transaction costs have on the concept of market efficiency, with much evidence suggesting that apparent anomalies result from a cost-benefit analysis made by those willing to incur the cost of acquiring valuable information in order to trade on it. Additionally, the concept of liquidity is a critical component of capturing "inefficiencies" in tests for abnormal returns. Any test of this proposition faces the joint hypothesis problem, whereby it is impossible to test market efficiency on its own, since doing so requires a measuring stick against which abnormal returns are compared: one cannot know whether the market is efficient without knowing whether a model correctly stipulates the required rate of return. Consequently, either the asset pricing model is incorrect or the market is inefficient, but one has no way of knowing which is the case.[citation needed]

The performance of stock markets is correlated with the amount of sunshine in the city where the main exchange is located.[37]

EMH anomalies and rejection of the Capital Asset Pricing Model (CAPM)


While event studies of stock splits are consistent with the EMH,[38] other empirical analyses have found problems with the efficient-market hypothesis. Early examples include the observation that small neglected stocks and stocks with high book-to-market (low price-to-book) ratios (value stocks) tended to achieve abnormally high returns relative to what could be explained by the CAPM.[clarification needed][29][30] Further tests of portfolio efficiency by Gibbons, Ross and Shanken (1989) (GRS) led to rejections of the CAPM, although tests of efficiency inevitably run into the joint hypothesis problem (see Roll's critique).

Following GRS's results and mounting empirical evidence of EMH anomalies, academics began to move away from the CAPM towards risk factor models such as the Fama-French three-factor model. These risk factor models are not properly founded on economic theory (whereas CAPM is founded on modern portfolio theory), but rather are constructed with long-short portfolios in response to the observed empirical EMH anomalies. For instance, the "small-minus-big" (SMB) factor in the three-factor model is simply a portfolio that holds long positions in small stocks and short positions in large stocks to mimic the risks small stocks face. These risk factors are said to represent some aspect or dimension of undiversifiable systematic risk which should be compensated with higher expected returns. Additional popular risk factors include the "HML" value factor,[39] the "MOM" momentum factor,[40] and "ILLIQ" liquidity factors.[41] See also Robert Haugen.
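The long-short construction behind a size factor can be sketched with toy data; the tickers, market caps, and returns below are entirely made up, and real factor construction uses breakpoints and value weighting rather than this simple split:

```python
# (market cap in $bn, monthly return) for a hypothetical six-stock universe.
stocks = {
    "AAA": (0.5, 0.040), "BBB": (1.2, 0.030), "CCC": (2.0, 0.025),
    "DDD": (50.0, 0.010), "EEE": (120.0, 0.012), "FFF": (300.0, 0.008),
}

# Sort by market cap; go long the small half, short the big half.
ranked = sorted(stocks.values(), key=lambda cap_ret: cap_ret[0])
half = len(ranked) // 2
small_ret = sum(r for _, r in ranked[:half]) / half
big_ret = sum(r for _, r in ranked[half:]) / half

smb = small_ret - big_ret   # "small minus big" factor return this period
```

A positive smb in a given period means small stocks outperformed large ones; the time series of such long-short returns is what the factor model treats as compensation for a dimension of systematic risk.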

View of some journalists, economists, and investors


Several observers have argued that closed-end funds (CEFs) show evidence of market inefficiency.[42][43][44][45][46] Unlike mutual funds or exchange-traded funds, which can regularly redeem or create new shares and tend to trade very close to the net asset value (NAV) of the assets held within the fund, CEFs raise capital by issuing a fixed number of shares at inception and are thereafter closed to new capital. CEFs often trade at a substantial discount (below) to their NAV, but can also trade at a premium (above NAV), implying that investors pay substantially more or less for the same securities when they are sold as CEFs than when they are sold in other contexts. Owen A. Lamont and Richard H. Thaler argue there are various explanations that might plausibly account for "moderate" discounts or premia for CEFs, but there are also cases of extreme discounts or premia that appear anomalous and seem to violate the "Law of One Price" principle.[47]
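The discount-or-premium measure being discussed is a simple ratio; a minimal sketch with hypothetical figures:

```python
def cef_discount(price, nav):
    """Fractional deviation of market price from net asset value per share:
    negative values are a discount, positive values a premium."""
    return (price - nav) / nav

# Hypothetical example: a fund trading at $8.50 against a $10.00 NAV
# trades at a 15% discount; one trading at $11.00 trades at a premium.
discount = cef_discount(8.50, 10.00)
premium = cef_discount(11.00, 10.00)
```

Under the Law of One Price the ratio should stay near zero, since the fund's shares are a claim on the same securities that trade separately at NAV; persistent large deviations are the anomaly Lamont and Thaler highlight.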

Economists Matthew Bishop and Michael Green claim that full acceptance of the hypothesis goes against the thinking of Adam Smith and John Maynard Keynes, who both believed irrational behavior had a real impact on the markets.[48]

Economist John Quiggin has claimed that "Bitcoin is perhaps the finest example of a pure bubble", and that it provides a conclusive refutation of EMH.[49] While other assets that have been used as currency (such as gold, tobacco) have value or utility independent of people's willingness to accept them as payment, Quiggin argues that "in the case of Bitcoin there is no source of value whatsoever" and thus Bitcoin should be priced at zero or worthless.

Tshilidzi Marwala surmised that artificial intelligence (AI) influences the applicability of the efficient-market hypothesis, in that the greater the number of AI-based market participants, the more efficient markets become.[50][51][52]

Warren Buffett has also argued against EMH, most notably in his 1984 presentation "The Superinvestors of Graham-and-Doddsville". He says the preponderance of value investors among the world's money managers with the highest rates of performance rebuts the claim of EMH proponents that luck is the reason some investors appear more successful than others.[53] Nonetheless, Buffett has recommended index funds that aim to track average market returns for most investors.[54] Buffett's business partner Charlie Munger has stated the EMH is "obviously roughly correct", in that a hypothetical average investor will tend towards average results "and it's quite hard for anybody to [consistently] beat the market by significant margins".[55] However, Munger also believes "extreme" commitment to the EMH is "bonkers", as the theory's originators were seduced by an "intellectually consistent theory that allowed them to do pretty mathematics [yet] the fundamentals did not properly tie to reality."[56]

Burton Malkiel in his A Random Walk Down Wall Street (1973)[57] argues that "the preponderance of statistical evidence" supports EMH, but admits there are enough "gremlins lurking about" in the data to prevent EMH from being conclusively proved.

In his book The Reformation in Economics, economist and financial analyst Philip Pilkington has argued that the EMH is actually a tautology masquerading as a theory.[58] He argues that, taken at face value, the theory makes the banal claim that the average investor will not beat the market average—which is a tautology. When pressed on this point, Pilkington argues that EMH proponents will usually say that any actual investor will converge with the average investor given enough time and so no investor will beat the market average. But Pilkington points out that when proponents of the theory are presented with evidence that a small minority of investors do, in fact, beat the market over the long run, these proponents then say that these investors were simply 'lucky'. Pilkington argues that introducing the idea that anyone who diverges from the theory is simply 'lucky' insulates the theory from falsification and so, drawing on the philosopher of science and critic of neoclassical economics Hans Albert, Pilkington argues that the theory falls back into being a tautology or a pseudoscientific construct.[59]

Nobel Prize-winning economist Paul Samuelson argued that the stock market is "micro efficient" but not "macro efficient": the EMH is much better suited for individual stocks than it is for the aggregate stock market as a whole. Research based on regression and scatter diagrams, published in 2005, has strongly supported Samuelson's dictum.[60]

Mathematician Andrew Odlyzko argued in a 2010 paper that the UK Railway Mania of the 1830s and '40s "provides a convincing demonstration of market inefficiency."[61] When railroads were a new and innovative technology, there was widespread public interest in trading rail-related stocks and large amounts of capital were devoted to building more rail projects than could realistically be used for shipping or passengers. After the mania collapsed in the 1840s, many railroad stocks were worthless and many planned projects abandoned.

Peter Lynch, a mutual fund manager at Fidelity Investments who consistently more than doubled market averages while managing the Magellan Fund, has argued that the EMH is contradictory to the random walk hypothesis—though both concepts are widely taught in business schools without seeming awareness of a contradiction. If asset prices are rational and based on all available data as the efficient market hypothesis proposes, then fluctuations in asset price are not random. But if the random walk hypothesis is valid, then asset prices are not rational.[62]

Joel Tillinghast, also a fund manager at Fidelity with a long history of outperforming a benchmark, has written that the core arguments of the EMH are "more true than not" and he accepts a "sloppy" version of the theory allowing for a margin of error.[63] But he also contends the EMH is not completely accurate or accurate in all cases, given the recurrent existence of economic bubbles (when some assets are dramatically overpriced) and the fact that value investors (who focus on underpriced assets) have tended to outperform the broader market over long periods. Tillinghast also asserts that even staunch EMH proponents will admit weaknesses to the theory when assets are significantly over- or under-priced, such as double or half their value according to fundamental analysis.

In a 2012 book, investor Jack Schwager argues the EMH is "right for the wrong reasons".[64] He agrees it is "very difficult" to consistently beat average market returns, but contends it's not due to how information is distributed more or less instantly to all market participants. Information may be distributed more or less instantly, but Schwager proposes information may not be interpreted or applied in the same way by different people and skill may play a factor in how information is used. Schwager argues markets are difficult to beat because of the unpredictable and sometimes irrational behavior of humans who buy and sell assets in the stock market. Schwager also cites several instances of mispricing that he contends are impossible according to a strict or strong interpretation of the EMH.[65][66]

2008 financial crisis


The 2008 financial crisis led to renewed scrutiny and criticism of the hypothesis.[67] Market strategist Jeremy Grantham said the EMH was responsible for the 2008 financial crisis, claiming that belief in the hypothesis caused financial leaders to have a "chronic underestimation of the dangers of asset bubbles breaking".[68] Financial journalist Roger Lowenstein said "The upside of the current Great Recession is that it could drive a stake through the heart of the academic nostrum known as the efficient-market hypothesis."[69] Former Federal Reserve chairman Paul Volcker said "It should be clear that among the causes of the recent financial crisis was an unjustified faith in rational expectations, market efficiencies, and the techniques of modern finance."[70] In a 2009 article published in the Financial Analysts Journal, Laurence B. Siegel wrote that "By 2007–2009, you had to be a fanatic to believe in the literal truth of the EMH."[71]

At the International Organization of Securities Commissions annual conference, held in June 2009, the hypothesis took center stage. Martin Wolf, the chief economics commentator for the Financial Times, dismissed the hypothesis as being a useless way to examine how markets function in reality.[72] Economist Paul McCulley said the hypothesis had not failed, but was "seriously flawed" in its neglect of human nature.[73][74]

The 2008 financial crisis led economics scholar Richard Posner to back away from the hypothesis. Posner accused some of his Chicago School colleagues of being "asleep at the switch", saying that "the movement to deregulate the financial industry went too far by exaggerating the resilience—the self-healing powers—of laissez-faire capitalism."[75] Others, such as economist and Nobel laureate Eugene Fama, said that the hypothesis held up well during the crisis: "Stock prices typically decline prior to a recession and in a state of recession. This was a particularly severe recession. Prices started to decline in advance of when people recognized that it was a recession and then continued to decline. That was exactly what you would expect if markets are efficient."[75] Despite this, Fama said that "poorly informed investors could theoretically lead the market astray" and that stock prices could become "somewhat irrational" as a result.[76]

Efficient markets applied in securities class action litigation


The theory of efficient markets has been practically applied in the field of securities class action litigation. Efficient-market theory, in conjunction with "fraud-on-the-market" theory, has been used in such litigation both to justify claims and as a mechanism for the calculation of damages.[77] In Halliburton v. Erica P. John Fund (U.S. Supreme Court, No. 13-317), the use of efficient-market theory in supporting securities class action litigation was affirmed. Supreme Court Justice Roberts wrote that "the court's ruling was consistent with the ruling in 'Basic' because it allows 'direct evidence when such evidence is available' instead of relying exclusively on the efficient markets theory."[78]

from Grokipedia
The efficient-market hypothesis (EMH) is a theory in financial economics positing that the prices of securities fully and instantaneously incorporate all available information, thereby eliminating opportunities for investors to achieve superior risk-adjusted returns through trading strategies based on that information. Developed by Eugene Fama in his seminal 1970 review of theory and evidence, the hypothesis asserts that competitive markets aggregate diverse investor analyses into equilibrium prices that serve as unbiased estimators of intrinsic value, assuming rational behavior and no transaction frictions. EMH delineates three forms based on information scope: the weak form, where prices reflect all past market data and technical analysis yields no excess returns; the semi-strong form, incorporating all publicly available information such that fundamental analysis cannot consistently outperform; and the strong form, encompassing even private information, rendering insider trading unprofitable. Empirical tests, including event studies on earnings announcements and dividend changes, have lent considerable support to the semi-strong form in liquid markets, underpinning models like the capital asset pricing model (CAPM) and justifying passive indexing strategies. Nonetheless, persistent anomalies—such as post-earnings announcement drifts, value premiums, and momentum persistence—along with evidence of excess volatility and market bubbles, have prompted critiques from behavioral economists who attribute deviations to irrational investor psychology and limits to arbitrage, though proponents counter that many anomalies fail to replicate robustly or survive risk adjustments and transaction costs.

Core Concepts

Definition and Key Implications

The efficient-market hypothesis (EMH) posits that asset prices in financial markets fully reflect all available information, rendering it impossible to consistently achieve superior risk-adjusted returns by exploiting that information. Formulated by Eugene F. Fama in his 1965 dissertation and elaborated in his 1970 review, the hypothesis defines market efficiency in terms of informational efficiency, where prices adjust instantaneously to new data, incorporating historical prices, public disclosures, and—under stronger variants—private insights. This framework rests on the joint hypothesis problem, wherein tests of efficiency are inseparable from assumptions about asset pricing models, such as the capital asset pricing model (CAPM). A primary implication is the random walk behavior of prices, where successive changes are independent and unpredictable based on prior information, as rational arbitrageurs eliminate any persistent discrepancies. Consequently, strategies like technical analysis, which parses historical price and volume patterns, or fundamental analysis, which evaluates public financial statements and economic indicators, cannot yield abnormal profits net of risk, since such information is already priced in. This challenges the efficacy of active portfolio management, suggesting that transaction costs and fees often erode any purported edges, thereby favoring low-cost passive indexing that mirrors broad market returns. The hypothesis further implies that market efficiency facilitates optimal capital allocation, as prices signal true economic values, guiding resources toward productive uses without systematic mispricings from investor sentiment or incomplete information processing. However, it accommodates risk premiums, where higher expected returns compensate for bearing systematic risks rather than informational advantages, and it does not guarantee fair outcomes but rather a competitive equilibrium in which inefficiencies are fleeting due to informed trading. Empirical validation requires distinguishing true anomalies from risk misspecifications, underscoring the hypothesis's reliance on rigorous testing against alternative models.
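The fee-drag implication behind passive indexing reduces to compounding arithmetic; a toy comparison in which the return and fee figures are hypothetical:

```python
def terminal_wealth(gross_annual_return, annual_fee, years, start=1.0):
    """Compound wealth net of a proportional annual fee."""
    return start * (1.0 + gross_annual_return - annual_fee) ** years

# Hypothetical numbers: a manager with a 0.5-point gross edge but a 1%
# annual fee ends up behind a cheap index fund over 30 years.
active = terminal_wealth(0.080, 0.0100, 30)    # active fund, 1% fee
passive = terminal_wealth(0.075, 0.0005, 30)   # index fund, 5 bp fee
```

Even a genuine gross edge can be more than consumed by fees over long horizons, which is the sense in which costs "erode any purported edges".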

Forms of Market Efficiency

The efficient-market hypothesis posits three distinct forms of market efficiency—weak, semi-strong, and strong—differing in the scope of information assumed to be instantaneously and fully reflected in asset prices. These forms, formalized by Eugene Fama in 1970, provide a hierarchy for evaluating how markets process information, with each stronger form encompassing the assumptions of the weaker ones. The weak form asserts that prices incorporate all historical market data, such as past prices and trading volumes, rendering technical analysis ineffective for achieving risk-adjusted returns above the market average. Empirical support for this form derives from statistical tests like serial correlation analysis and runs tests, which generally fail to reject the null hypothesis of no predictability from historical data in major equity markets. The semi-strong form extends this by assuming prices incorporate all publicly available information almost instantly in liquid assets, including financial statements, economic data, and news announcements, such that anticipated news causes no net price move while only surprises prompt adjustments that fade within minutes to hours as algorithms arbitrage them; neither technical nor fundamental analysis can yield consistent abnormal profits after adjusting for risk. Event studies, such as those examining price reactions to earnings announcements or mergers, typically show rapid price adjustments within minutes or hours, consistent with this form, though post-event drifts observed in some datasets (e.g., small-cap post-earnings surprises) challenge full efficiency. Fama emphasized that semi-strong efficiency implies informationally efficient prices but allows for risk premiums, as deviations must be compensated by higher expected returns rather than representing arbitrage opportunities. The strong form claims prices incorporate all information, public and private (including insider knowledge), implying that no investor—professional or otherwise—can achieve superior returns through any means, as markets preempt even non-public data.

This form is widely regarded as the least tenable, with evidence from fund-manager performance studies showing modest underperformance net of fees, and with insider trading regulations acknowledging exploitable private-information advantages, such as U.S. SEC filings revealing abnormal returns earned by corporate executives trading on undisclosed material events. Fama noted in his original framework that strong-form efficiency serves primarily as a benchmark, unlikely to hold in practice due to incentives for insider trading.
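A weak-form runs test of the kind mentioned above can be sketched on a simulated i.i.d. return series; the data here are synthetic, and the same comparison applied to real price series is what the early studies ran:

```python
import random

random.seed(2)
returns = [random.gauss(0.0, 1.0) for _ in range(1000)]  # synthetic returns

# Count runs of consecutive same-sign returns.
signs = [r > 0 for r in returns]
runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Expected number of runs if signs are independent of one another.
n_pos = sum(signs)
n_neg = len(signs) - n_pos
expected_runs = 1 + 2.0 * n_pos * n_neg / len(signs)

# A large gap between runs and expected_runs would reject independence;
# for an i.i.d. series the two are close.
```

Too few runs would indicate trending (positive serial dependence), too many would indicate reversal; weak-form efficiency predicts neither in real return data.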

Theoretical Foundations

The random walk hypothesis (RWH) asserts that successive changes in asset prices are independent and identically distributed random variables, implying that future price movements cannot be predicted from historical price data alone. This model formalizes stock prices as following a random walk, P_t = P_{t-1} + ε_t, with ε_t representing unpredictable shocks drawn from a distribution with zero mean. The hypothesis originated in Louis Bachelier's 1900 doctoral dissertation Théorie de la spéculation, which applied the mathematics of Brownian motion to model prices on the Paris Bourse, treating deviations from equilibrium as random and proposing that speculation does not influence average prices over time. Empirical groundwork for the RWH emerged in Maurice Kendall's 1953 analysis of economic time series, which examined 22 British and U.S. speculative price series spanning 1928–1938 and 1946–1949, finding negligible serial correlation and concluding that price differences resembled independent drawings from a random distribution rather than deterministic trends. Eugene Fama advanced the framework in his 1965 paper "Random Walks in Stock Market Prices," synthesizing prior work and emphasizing tests for independence in successive price changes, arguing that even modest predictability would erode under competition among informed traders. Fama's review highlighted that while early tests focused on serial correlation, a broader assessment requires additional checks for patterns like runs or variance ratios, with evidence from daily stock returns supporting the model's core implications up to that period. The RWH underpins the weak form of the efficient-market hypothesis (EMH), as the absence of serial dependence in prices means that trading based on past patterns yields no excess returns beyond random chance. If prices incorporate all historical information instantaneously, the innovations ε_t reflect only new shocks, rendering the process unpredictable and aligning with market efficiency, in which profit opportunities from historical data dissipate rapidly.
However, the RWH assumes strict independence, which Fama noted need hold only approximately: efficiency can survive without it if dependencies average out over investors' diverse information sets. Related models extend the RWH by relaxing its assumptions or incorporating risk and the time value of money. The martingale property, central to no-arbitrage pricing, posits that the conditional expectation of the future price equals the current price given available information: E[P_{t+1} | F_t] = P_t, where F_t is the information set up to time t. In EMH contexts, undiscounted prices form a submartingale due to positive expected returns from risk premia, such that E[P_{t+1} | F_t] ≥ (1 + r_f) P_t, with r_f the risk-free rate, ensuring no predictable profits after adjusting for risk. These models generalize the RWH by allowing conditional expectations rather than strict zero-mean increments, accommodating heterogeneous beliefs while preserving unpredictability from public information. Samuelson's 1965 contributions further linked martingales to efficient markets, demonstrating that properly anticipated prices imply martingale processes in which deviations from fundamentals revert without exploitable patterns.
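A minimal simulation (synthetic data, assumed parameters) illustrates the random-walk model above: when prices are generated as P_t = P_{t-1} + ε_t with i.i.d. zero-mean shocks, successive price changes show essentially no lag-1 autocorrelation, which is the property the weak-form tests described later look for.

```python
import numpy as np

rng = np.random.default_rng(0)

# Random walk: P_t = P_{t-1} + eps_t with i.i.d. zero-mean shocks
# (unit variance is an arbitrary illustrative choice).
n = 10_000
eps = rng.normal(loc=0.0, scale=1.0, size=n)
prices = 100.0 + np.cumsum(eps)

# Under the RWH, successive price changes are serially uncorrelated.
changes = np.diff(prices)
lag1 = np.corrcoef(changes[:-1], changes[1:])[0, 1]
print(f"lag-1 autocorrelation of price changes: {lag1:.4f}")  # near zero
```

Any residual correlation here is pure sampling noise of order 1/√n; a genuinely predictable series would show a materially larger value.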

Mechanisms of Price Adjustment

In efficient markets, prices adjust to new information through the trading actions of rational, profit-maximizing investors who compete to exploit informational advantages. Upon the arrival of relevant information—such as earnings announcements, macroeconomic releases, or corporate events—investors revise their estimates of an asset's expected cash flows and discount rates, leading to buy or sell orders that shift the supply-demand balance and alter equilibrium prices. This process ensures that prices rapidly converge to reflect all available information, rendering systematic abnormal returns unattainable after adjusting for risk. Arbitrage plays a central role in accelerating this adjustment, particularly for price deviations across related assets or from fundamental values. Arbitrageurs identify and trade on temporary mispricings, such as those arising from liquidity shocks or slow information diffusion, by simultaneously buying undervalued securities and selling overvalued ones (or their equivalents via derivatives), which restores price alignment without net capital outlay or risk in ideal conditions. Theoretical models posit that unbounded arbitrage capital and low frictions enable near-instantaneous corrections, as any persistent discrepancy would attract unlimited profits until eliminated. Market microstructure elements, including liquidity provision by dealers and high-frequency traders, further enable swift execution of these trades. In liquid markets, order books facilitate immediate matching of bids and asks, minimizing price impact from individual trades while aggregating dispersed information into quoted prices. Advances in trading technology have empirically shortened adjustment times; for instance, post-2000 electronic markets exhibit sub-second reactions to public news in major equities, compared to minutes or hours in earlier eras.
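The adjustment mechanism can be caricatured in a few lines. Under the purely hypothetical assumption that competing traders eliminate a fixed fraction of the remaining mispricing each round, the price converges geometrically to the new fundamental value after an information shock; this is an illustrative sketch, not a calibrated microstructure model.

```python
def adjust_price(price, fundamental, speed=0.5, rounds=10):
    """Each round, trading closes a fraction `speed` of the gap between
    the price and fundamental value (hypothetical dynamics)."""
    path = [price]
    for _ in range(rounds):
        price += speed * (fundamental - price)
        path.append(price)
    return path

# News raises fundamental value from 100 to 110; the mispricing halves
# each round, so the gap decays geometrically toward zero.
path = adjust_price(100.0, 110.0)
print(round(path[-1], 2))  # -> 109.99
```

Raising `speed` toward 1 mimics the near-instantaneous corrections that models with abundant arbitrage capital predict.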

Historical Development

Early Precursors and Influences

The notion of unpredictable price movements in financial markets traces back to the mid-19th century. In 1863, French stockbroker Jules Regnault observed in his book Calcul des chances et philosophie de la Bourse that successive price changes in stocks are independent, suggesting a random process without discernible patterns or memory of prior fluctuations. A foundational mathematical contribution came from Louis Bachelier's 1900 doctoral thesis Théorie de la spéculation, which applied probabilistic methods to model stock prices at the Paris Bourse as a continuous random process in which price increments are independent and identically distributed, rendering prediction from historical data impossible. Bachelier derived the expectation that future prices reflect all available information instantaneously, anticipating key elements of market efficiency, though his work emphasized probabilistic diffusion rather than rational investor behavior and was largely ignored until its rediscovery in the 1950s. Early 20th-century empirical studies further challenged trend-following and technical analysis. Holbrook Working's 1934 analysis of commodity futures prices highlighted biases from time-averaging in sparse trading data, showing that apparent trends often resulted from measurement errors rather than genuine predictability, and provided initial evidence of weak serial correlation in returns. Similarly, Maurice G. Kendall's 1953 examination of 22 British and U.S. price series in The Analysis of Economic Time-Series, Part I: Prices found near-zero serial correlation in first differences of prices, concluding that changes behave like independent draws from a random process and undermining serial dependence as a basis for forecasting. These findings, rooted in statistical scrutiny of price data, influenced later academic work by demonstrating the absence of exploitable patterns in historical prices, setting the stage for formal theories of informational efficiency.

Formalization by Fama and Contemporaries

Eugene Fama provided the seminal formalization of the efficient-market hypothesis (EMH) in his 1970 review article, "Efficient Capital Markets: A Review of Theory and Empirical Work," published in The Journal of Finance. In this work, Fama defined an efficient market as one in which prices "fully reflect" all available information, implying that it is impossible to consistently achieve superior risk-adjusted returns by exploiting that information. He categorized market efficiency into three forms based on the scope of information incorporated: weak form (prices reflect all past market data, rendering technical analysis ineffective); semi-strong form (prices reflect all publicly available information, invalidating fundamental analysis for excess returns); and strong form (prices reflect all information, public and private, making even insider trading unprofitable). This tripartite classification systematized prior informal discussions of market efficiency and provided a framework for empirical testing. Fama's formalization built on his earlier 1965 doctoral dissertation at the University of Chicago, which empirically supported the random walk model of stock prices, positing that successive price changes are independent and thus unpredictable from historical data. Contemporaries at the University of Chicago, including Michael Jensen and Richard Roll, contributed through collaborative empirical work that bolstered the hypothesis's foundations. Notably, the 1969 study by Fama, Lawrence Fisher, Jensen, and Roll examined stock split announcements from 1927 to 1959 across 940 events, finding that prices adjusted rapidly to the new public information, with abnormal returns dissipating quickly thereafter, consistent with semi-strong efficiency. This event-study methodology, pioneered in their paper, became a standard tool for testing information incorporation and demonstrated that markets process earnings announcements and other disclosures with minimal delay.
Jensen, in his 1968 dissertation supervised by Fama, developed performance measures using the capital asset pricing model (CAPM), analyzing 115 mutual funds from 1945 to 1964 and concluding that most underperformed benchmarks after fees, supporting the idea that active management rarely beats efficient markets. Roll's contributions included critiques and extensions of efficiency tests, such as his 1977 paper questioning the joint hypothesis problem in CAPM-EMH evaluations, where apparent inefficiencies might stem from model misspecification rather than true market irrationality. These efforts collectively shifted the EMH from descriptive anecdote to a testable hypothesis grounded in econometric rigor, influencing the development of index funds and shaping regulatory perspectives on market transparency. Despite later anomalies, the 1970 formalization remains the field's cornerstone, emphasizing competition and arbitrage as price-correcting mechanisms.
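Jensen's measure can be sketched as a regression of a fund's excess returns on the market's excess returns, with the intercept ("alpha") serving as the risk-adjusted abnormal return. The data below are entirely synthetic, with a built-in slight underperformance of the kind Jensen reported on average; the point is only the construction of the estimate.

```python
import numpy as np

rng = np.random.default_rng(42)
months = 240                                   # 20 years of monthly data
mkt_excess = rng.normal(0.006, 0.04, months)   # market return minus T-bill
# Synthetic fund: beta 0.9 and a true alpha of -0.1%/month (e.g., fees).
fund_excess = -0.001 + 0.9 * mkt_excess + rng.normal(0.0, 0.01, months)

# OLS fit of: fund_excess = alpha + beta * mkt_excess + error
X = np.column_stack([np.ones(months), mkt_excess])
alpha, beta = np.linalg.lstsq(X, fund_excess, rcond=None)[0]
print(f"alpha = {alpha:.4f} per month, beta = {beta:.2f}")
```

A negative estimated alpha after fees, replicated across most funds, is what Jensen read as evidence for market efficiency rather than against fund managers' diligence.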

Evolution Through the Late 20th Century

In the 1970s, following Eugene Fama's formal articulation of the efficient-market hypothesis (EMH) in 1970, empirical research focused on testing its semi-strong form through event studies, which examined price reactions to public announcements such as earnings reports, mergers, and dividend changes. These studies consistently found that stock prices adjusted rapidly—often within minutes or hours—to new public information, with no persistent abnormal returns available to investors acting on it, thereby supporting the notion that markets incorporate publicly available data efficiently. For instance, analyses of earnings surprises showed unbiased and swift revisions in expectations, aligning with the hypothesis that arbitrageurs exploit mispricings quickly. The 1980s brought initial challenges via documented anomalies, including the small-firm effect (smaller stocks outperforming larger ones on a risk-adjusted basis, as identified by Rolf Banz in 1981) and seasonal patterns like the January effect, where returns were elevated early in the year. Proponents countered that these patterns reflected unaccounted risk premia rather than inefficiency, invoking the joint hypothesis problem: tests of market efficiency are confounded by potentially flawed asset pricing models, such as the capital asset pricing model (CAPM), making it impossible to isolate inefficiency without a correct risk benchmark. Grossman and Stiglitz's 1980 paradox further highlighted theoretical tensions, arguing that perfect efficiency would eliminate incentives for information gathering, yet markets require informed traders to function efficiently. By the 1990s, responses to anomalies evolved through multifactor models, culminating in Fama and French's 1993 three-factor model, which augmented CAPM with size (small-minus-big) and value (high-minus-low book-to-market) factors as compensations for systematic risks, explaining prior puzzles without abandoning the EMH.
This framework posited that apparent inefficiencies were rational premia for bearing distress or illiquidity risks, with empirical backtests showing the model outperforming CAPM in capturing returns. Concurrently, behavioral finance critiques intensified, drawing on prospect theory from Kahneman and Tversky (1979) to argue for persistent irrationality driving overreactions and underreactions, as seen in the excess volatility debates following Shiller's 1981 work. However, EMH advocates maintained that behavioral factors either proxied for risks or failed rigorous out-of-sample validation, preserving the hypothesis's core claim amid growing but inconclusive challenges.

Empirical Evidence

Tests of Weak-Form Efficiency

Autocorrelation tests measure serial dependence in asset returns to determine if past returns predict future ones. In analyses of stock returns, Eugene Fama's 1970 review found autocorrelations near zero and statistically insignificant across various lags for individual securities and portfolios, consistent with weak-form efficiency where price histories do not yield predictive power. Similar results hold for daily and monthly data on major U.S. indices from the 1960s onward, with correlations rarely exceeding 0.05 in absolute value. Runs tests evaluate randomness in sequences of price increases or decreases, counting consecutive runs to test against non-random clustering. Applications to US daily stock price changes from 1897 to 1929 by Cowles and Jones, and later extensions to post-1950 data, showed no significant deviations from expected run lengths under independence, failing to reject the random walk model. Variance ratio tests, developed by Lo and MacKinlay in 1988, assess whether the variance of k-period returns equals k times the one-period variance, as required by a random walk. Their examination of weekly CRSP value-weighted index returns from July 1962 to December 1985 detected positive autocorrelations, yielding variance ratios significantly above 1 for horizons up to 10 weeks (e.g., 1.024 for k=2, rejecting at the 1% level), implying short-term predictability and challenging strict weak-form efficiency in US markets. Filter rule tests simulate technical strategies, buying after price rises exceeding a threshold (e.g., 1-50%) and selling on reversals. Sidney Alexander's 1961 study of stocks from 1946-1959 reported gross excess returns of up to 40% annually for small filters, but Fama's 1970 reassessment, incorporating 0.01% per share transaction costs, eliminated these advantages, yielding net returns indistinguishable from buy-and-hold benchmarks.
Empirical results vary by market maturity; developed exchanges like the NYSE exhibit approximate weak-form efficiency with minimal exploitable predictability after costs, while emerging markets often show significant autocorrelations (e.g., up to 0.15 in daily returns) and profitable filter rules, attributed to thinner trading and slower information diffusion. Despite anomalies like short-horizon return predictability, aggregate evidence from U.S. data supports limited rejection of weak-form tenets, with deviations rarely persisting after adjustment for microstructure effects or transaction costs.
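The Lo–MacKinlay statistic discussed above can be sketched directly: the variance of k-period returns divided by k times the one-period variance should be near 1 under a random walk, and rises above 1 when returns are positively autocorrelated. This is a simplified version (no heteroskedasticity-robust standard errors) run on simulated i.i.d. returns, not on the CRSP data.

```python
import numpy as np

def variance_ratio(returns, k):
    """VR(k) = Var(k-period returns) / (k * Var(1-period returns)).
    Approximately 1 under the random-walk hypothesis."""
    returns = np.asarray(returns)
    k_period = np.convolve(returns, np.ones(k), mode="valid")  # overlapping k-sums
    return k_period.var(ddof=1) / (k * returns.var(ddof=1))

rng = np.random.default_rng(1)
iid = rng.normal(0.0, 0.02, size=5_000)  # synthetic random-walk increments
vr2 = variance_ratio(iid, k=2)
print(f"VR(2) on i.i.d. returns: {vr2:.3f}")  # close to 1
```

For k=2 the ratio is approximately 1 plus the lag-1 autocorrelation, which is why Lo and MacKinlay's values above 1 imply short-term return persistence.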

Tests of Semi-Strong Form Efficiency

Event studies constitute the primary methodology for testing semi-strong form efficiency, examining whether stock prices rapidly incorporate all publicly available information following specific announcements, such as earnings releases, mergers, or dividend declarations, by measuring abnormal returns around the event date. These tests compute abnormal returns as the difference between actual returns and expected returns based on models like the market model, aggregating them into cumulative abnormal returns (CARs) to assess adjustment speed and completeness. A seminal test involved earnings announcements: Ball and Brown (1968) analyzed U.S. firms from 1957 to 1965 and found that approximately 85% of the total price adjustment to annual earnings surprises occurred in the months before the announcement, owing to prior leaks and forecasts, with the remaining adjustment happening rapidly post-announcement, supporting semi-strong efficiency. However, they also documented a post-earnings announcement drift (PEAD), wherein stocks with positive surprises continued to yield average abnormal returns of about 1.5% over the following 60 days, indicating incomplete immediate incorporation of earnings information and posing a challenge to strict semi-strong efficiency. Subsequent studies, including those in emerging markets, have replicated the rapid initial adjustments but also found persistent drifts, with PEAD magnitudes varying by market and information environment. Merger and acquisition announcements provide another key test: target firms typically experience significant positive abnormal returns of 20-30% upon public bid disclosure, reflecting quick incorporation of the premium offered, while acquirers show insignificant or negative returns averaging -1% to -2%, consistent with efficient pricing of public deal terms. Event studies on U.S.
mergers from the 1980s to 2000s, for instance, confirm that abnormal returns materialize almost entirely within minutes to days of announcements on electronic trading systems, aligning with semi-strong predictions, though pre-announcement run-ups suggest partial anticipation from rumors rather than inefficiency. In contrast, some international evidence reveals delayed adjustments to merger news, with abnormal returns accumulating over weeks and calling the universality of the result into question. Dividend announcement tests similarly show U.S. markets reacting swiftly, with positive (negative) surprises yielding CARs of around 1-3% (-1% to -2%) within one to two days, though longer-term drifts in some sectors such as fast-moving consumer goods (FMCG) indicate under-reaction. Fama's (1998) comprehensive review of hundreds of event studies across announcement types concludes that prices adjust within hours to days on average, providing strong aggregate support for semi-strong efficiency, while acknowledging anomalies like PEAD as potentially attributable to mispricing or joint hypothesis issues rather than outright inefficiency. Overall, while early tests bolstered the hypothesis, persistent anomalies highlight its limits, with meta-evidence suggesting semi-strong efficiency holds better in developed, liquid markets.
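The event-study arithmetic described above is simple to sketch: estimate a market model, subtract expected returns to obtain abnormal returns, and cumulate them over the event window. Everything below is synthetic (an assumed 3% announcement-day jump in a five-day window) and serves only to illustrate the CAR construction.

```python
import numpy as np

def cumulative_abnormal_returns(stock_ret, market_ret, alpha, beta):
    """CAR: cumulate (actual - market-model expected) returns."""
    abnormal = stock_ret - (alpha + beta * market_ret)
    return np.cumsum(abnormal)

rng = np.random.default_rng(7)
market = rng.normal(0.0, 0.01, 5)                      # 5-day event window
stock = 0.0002 + 1.1 * market + rng.normal(0.0, 0.002, 5)
stock[2] += 0.03                                       # 3% jump on announcement day

car = cumulative_abnormal_returns(stock, market, alpha=0.0002, beta=1.1)
print(f"CAR over window: {car[-1]:.4f}")  # roughly the 3% event effect
```

Under semi-strong efficiency the CAR should jump at the announcement and then stay flat; a continued upward drift after the event day is the PEAD pattern.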

Tests of Strong-Form Efficiency

Studies of corporate insiders' trading provide the primary empirical tests of strong-form efficiency, as these individuals possess non-public information that should, under the hypothesis, be unable to generate abnormal risk-adjusted returns. Analysis of U.S. Securities and Exchange Commission (SEC) filings reveals consistent outperformance following insider purchases and underperformance after sales, indicating incomplete incorporation of private information into prices. For instance, Seyhun (1986) examined over 150,000 insider transactions from 1975 to 1981, finding that purchases yielded average abnormal returns of approximately 2.5% to 3%, even after accounting for trading costs and risk, while sales preceded negative returns of similar magnitude. Earlier work by Jaffe (1974) on insider transactions from 1962 to 1968 similarly documented significant positive abnormal returns for strategies based on insider buys, averaging around 3-5% over short horizons, with cumulative effects persisting for months. These patterns hold across firm sizes but are more pronounced in smaller, less liquid stocks where information asymmetry is greater. Such results directly contradict strong-form efficiency, as private information confers a trading advantage not neutralized by market prices. Additional evidence from exchange specialists and large block traders, who access quasi-private order flow data, shows analogous profits, further undermining the hypothesis. For example, studies of block trades in the 1970s and 1980s found abnormal returns of 1-2% around transactions, attributable to unrevealed information. While transaction costs and legal restrictions limit exploitation by outsiders, the insiders' own excess returns—estimated in aggregate at billions annually in modern contexts—demonstrate that markets fail to reflect all private information instantaneously or fully. Recent analyses confirm the persistence of these effects, with insiders reported to offload overvalued shares to retail investors for net profits estimated at over $100 billion yearly.

Meta-Analyses and Aggregate Results

A comprehensive survey of empirical tests across the weak, semi-strong, and strong forms of the efficient-market hypothesis (EMH) reveals mixed but predominantly supportive aggregate evidence for market efficiency, particularly in developed equity markets, though with persistent debates over anomalies. Variance-ratio tests, which assess deviations from random walks, applied to daily and monthly U.S. index returns from 1962 to 2010, show ratios close to unity for most horizons, indicating weak-form efficiency holds robustly for large stocks, with deviations largely confined to small-cap or illiquid securities attributable to nonsynchronous trading. Similarly, event studies aggregating hundreds of corporate announcements, such as earnings releases and mergers from the 1970s onward, demonstrate rapid price adjustments—typically within minutes to days—to public information, supporting semi-strong efficiency, though abnormal returns post-event average near zero after risk adjustments. Meta-analyses of specific anomalies highlight challenges to semi-strong efficiency but underscore replication issues and declining profitability. For instance, a meta-study of 97 cross-sectional return predictors identified in the academic literature finds that out-of-sample predictability is significantly lower than in-sample, with post-publication returns declining by an average of 58% across factors like momentum, value, and size, consistent with data-snooping and publication bias inflating initial discoveries. Another examination of 82 anomaly characteristics confirms this pattern, showing that exploitable alphas erode after publication due to arbitrage by practitioners and increased awareness, implying that apparent inefficiencies are often transient rather than structural violations of the EMH. Surveys compiling over 150 anomalies, such as those by Hou, Xue, and Zhang (2020), indicate that many are captured by multifactor models incorporating investment and profitability risks, reducing their status as inefficiencies under the joint hypothesis problem—where tests confound market efficiency with model specifications.
Aggregate results from global studies further temper anomaly critiques, showing that purported inefficiencies like the size effect or value premium weaken or reverse in recent decades (post-1980s) across 64 markets, with long-short strategy returns averaging under 0.5% monthly after transaction costs and shrinking over time as markets adapt. Strong-form tests involving private information yield limited evidence of persistent outperformance by insiders or professionals; mutual fund net returns after fees underperform benchmarks by 1-2% annually over 1962-2020, aligning with EMH predictions that superior skill is rare and diluted by competition. Overall, while anomalies persist in niche samples (e.g., microcaps or emerging markets), meta-evidence supports the EMH as a useful approximation, with deviations explainable by risk premia, behavioral frictions, or methodological artifacts rather than systemic inefficiency.

Challenges and Anomalies

Behavioral Finance Critiques


Behavioral finance critiques the efficient-market hypothesis (EMH) by positing that investor decisions are influenced by cognitive biases and emotional factors, leading to systematic deviations from rational valuation rather than fully efficient incorporation of information. Proponents argue that the EMH's assumption of investor rationality ignores evidence of predictable irrational behaviors, such as overconfidence and loss aversion, which generate exploitable anomalies. These critiques gained prominence through works like Richard Thaler's 2003 survey, which catalogs psychology-based return predictors challenging the EMH's joint hypothesis of efficiency and risk-based pricing.
A foundational element is prospect theory, formulated by Daniel Kahneman and Amos Tversky in 1979, which demonstrates that individuals overweight low-probability events, exhibit loss aversion (valuing losses approximately twice as much as gains), and make choices relative to a reference point rather than absolute outcomes. This framework explains phenomena like the disposition effect, where investors prematurely sell appreciating assets while clinging to depreciating ones to avoid realizing losses; Terrance Odean's 1998 analysis of brokerage records from 10,000 U.S. households confirmed this pattern, with realized gains outnumbering losses by over 50% despite underlying performance. Such behaviors imply underreaction to bad news and overreaction to good news, contradicting the EMH's prediction of unbiased price adjustments. Robert Shiller's 1981 excess volatility puzzle further undermines the EMH by showing that aggregate stock-price variance exceeds what dividend discount models justify—prices fluctuate 5 to 13 times more than fundamentals over horizons from 1926 to 1979—attributable to fads, herding, and feedback loops rather than rational revisions. Behavioral explanations extend to anomalies like long-term reversals (De Bondt and Thaler, 1985), where portfolios of prior losers outperform winners by 25% over three years, and persistent momentum effects (Jegadeesh and Titman, 1993), where past winners continue outperforming. Limits to arbitrage exacerbate these, as rational traders cannot fully correct mispricings due to risks from unpredictable noise traders and short-selling constraints. Critics like Shiller in Irrational Exuberance (2000) attribute bubbles, such as the late-1990s dot-com surge, to amplified investor psychology rather than information efficiency, with price-to-earnings ratios exceeding 40 despite slowing earnings growth. While behavioral models incorporate these insights via noise-trader risk or heterogeneous beliefs, they challenge the EMH's core tenet that arbitrage ensures prices reflect intrinsic value, though the empirical profitability of anomaly-based strategies remains debated after transaction costs.
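The loss-aversion claim can be made concrete with the Kahneman–Tversky value function. The parameter values used here (curvature ≈ 0.88, loss-aversion coefficient ≈ 2.25) come from Tversky and Kahneman's later 1992 calibration; the code sketches only the functional form, not any empirical test.

```python
def pt_value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value function: concave over gains, convex and
    steeper over losses (loss aversion via lam > 1)."""
    if x >= 0:
        return x ** alpha
    return -lam * (-x) ** alpha

# A loss "looms larger" than an equal gain by the factor lam:
gain, loss = pt_value(100), pt_value(-100)
print(round(gain, 1), round(loss, 1))  # -> 57.5 -129.5
```

The asymmetry is what drives the disposition effect: realizing a loss is penalized more than realizing an equal gain is rewarded, so investors hold losers and sell winners.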

Documented Market Anomalies

Market anomalies are empirical patterns in financial asset returns that appear inconsistent with the efficient-market hypothesis, as they suggest predictability and potential for excess returns beyond risk-adjusted benchmarks. These include cross-sectional variations, time-series effects, and event-based drifts, documented across decades of academic research primarily using U.S. and international equity data. While some anomalies have been partially attributed to risk factors or data-mining artifacts, others persist in out-of-sample tests, challenging the notion of fully efficient markets. The size effect posits that stocks of smaller firms outperform those of larger firms on a risk-adjusted basis. Banz (1981) first documented this using U.S. data from 1936 to 1975, finding small-cap stocks yielded average monthly excess returns of approximately 0.4% after controlling for beta. Fama and French (1992) confirmed the pattern in 1963–1990 data, with small stocks generating higher returns unexplained by the CAPM, though the effect weakened post-1980s in some samples. Value anomalies involve high book-to-market (value) stocks outperforming low book-to-market (growth) stocks. Basu (1977) showed low price-earnings ratio stocks earned superior returns from 1957 to 1971, with excess returns persisting after market adjustments. Fama and French (1993) extended this, reporting value stocks' monthly premium of 0.48% over growth stocks in U.S. data from 1963 to 1990, a pattern replicated internationally and persisting in some post-publication periods, particularly outside the U.S. Momentum effects demonstrate that stocks with strong recent performance (winners) continue to outperform recent losers over 3–12 month horizons. Jegadeesh and Titman (1993) found U.S. winners outperforming losers by 1% per month from 1965 to 1989, net of transaction costs in simulations.
This intermediate-term persistence holds across markets but reverses over longer horizons, as De Bondt and Thaler (1985) documented 3–5 year loser outperformance of 0.4% monthly in U.S. data from 1926 to 1982. Calendar anomalies include the January effect, where small stocks exhibit abnormally high returns in January, potentially due to tax-loss selling. Rozeff and Kinney (1976) reported NYSE small-stock January returns averaging 4.37% from 1904 to 1974, exceeding annual averages. The weekend effect shows lower Monday returns, with French (1980) finding U.S. stocks underperforming by 0.31% on Mondays from 1953 to 1977. These patterns have diminished or vanished post-publicity in many markets. Post-earnings announcement drift (PEAD) reveals stocks with positive earnings surprises continuing to rise for months afterward. Ball and Brown (1968) identified this in U.S. data, with surprises predicting 60–70% of subsequent 60-day returns. Recent analyses show the drift persists even after Fama-French adjustments. Many anomalies decay after academic publication, with McLean and Pontiff (2016) estimating a 58% average reduction in 97 U.S. predictors' returns post-publication, attributed to increased investor awareness and arbitrage rather than solely statistical bias. This decay is less pronounced for non-U.S. anomalies, where book-to-price and earnings-to-price effects retain t-statistics above 7 in 2003–2018 data. Such findings suggest markets adapt, but incomplete efficiency in pricing information or limits to arbitrage allow temporary deviations.
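A Jegadeesh–Titman-style sort is easy to sketch on synthetic data: rank stocks on past returns, then compare next-period returns of the winner and loser deciles. The return continuation built into the simulated data below is an assumption for illustration only; the substantive debate is whether such a spread survives risk adjustment and trading costs.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
past_12m = rng.normal(0.10, 0.30, n)            # formation-period returns
# Assumed continuation: next month loads mildly on past performance.
next_month = 0.01 + 0.05 * past_12m + rng.normal(0.0, 0.02, n)

order = np.argsort(past_12m)
losers, winners = order[:10], order[-10:]       # bottom and top deciles
spread = next_month[winners].mean() - next_month[losers].mean()
print(f"winner-minus-loser spread: {spread:.4f} per month")
```

Setting the continuation coefficient to zero makes the expected spread vanish, which is the weak-form null the anomaly literature tests against.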

Insights from Major Crises Including 2008

The 2008 global financial crisis highlighted potential limitations in the efficient-market hypothesis, particularly in the semi-strong form, as prices of mortgage-backed securities and related assets failed to fully incorporate public signals of deteriorating credit quality prior to the downturn. Subprime mortgage default rates began rising in early 2007, with delinquencies reaching 10% by mid-year amid evidence of fraudulent lending practices and over-leveraged institutions, yet credit spreads and housing indices like the Case-Shiller index showed delayed adjustments until the Bear Stearns collapse in March 2008. The Lehman Brothers failure on September 15, 2008, precipitated a credit freeze and equity plunge, with the S&P 500 index declining 38% in the ensuing five months, suggesting that dispersed information on counterparty risks was not promptly aggregated into prices despite availability through regulatory filings and analyst reports. Defenders of the hypothesis, such as Ray Ball, argued that the crisis reflected incomplete risk assessment rather than market inefficiency per se, noting that proprietary trading losses at banks stemmed from unhedged exposures to unpriced tail risks, while post-crisis price corrections—such as the S&P 500's rebound of over 400% from March 2009 lows by 2021—demonstrated eventual incorporation of evolving information. Burton Malkiel reconciled the EMH with behavioral elements, invoking Hyman Minsky's financial instability hypothesis to explain how prolonged stability bred leverage and complacency, yet emphasized that competitive trading still enforced rapid adjustments once shocks materialized, as evidenced by the VIX volatility index spiking to 80 on October 27, 2008, before normalizing. Empirical tests during the period, including variance ratio analyses on daily returns, indicated heightened predictability and serial correlation in asset prices amid evaporating liquidity, rejecting weak-form efficiency temporarily but not disproving the broader framework when accounting for time-varying risk premiums.
Earlier crises provided analogous insights, underscoring short-term deviations amid long-term resilience. The 1987 crash saw the Dow Jones Industrial Average drop 22.6% on October 19, driven by portfolio insurance programs and margin calls, yet intraday trading data revealed bid-ask spreads widening transiently before prices stabilized, consistent with the EMH's prediction of quick information diffusion once order imbalances resolved. The 2000 dot-com bust, with the NASDAQ Composite falling 78% from its March 10, 2000 peak to October 2002, exposed overoptimism in tech valuations despite public earnings misses, but subsequent recoveries in surviving firms aligned with revised growth forecasts, challenging strong-form efficiency in the handling of private information by insiders. Across these events, meta-analyses of return predictability during high-volatility regimes reveal clustered anomalies like momentum reversals, attributable to herding or funding constraints rather than persistent inefficiency, as markets reverted to fundamentals within quarters. These patterns suggest the EMH holds better in equilibrium but strains under exogenous shocks, informing adaptive models that incorporate regime shifts without discarding informational efficiency as a baseline.

Defenses and Reconciliations

The Joint Hypothesis Problem

The joint hypothesis problem arises in empirical tests of the efficient-market hypothesis (EMH), as any assessment of market efficiency requires simultaneously evaluating an equilibrium model of expected returns; apparent inefficiencies may thus reflect model misspecification rather than true informational inefficiency. Fama articulated this challenge, noting that "tests of efficiency are difficult because they require a specification of expected returns," such that rejections of efficiency could stem from incorrect pricing models rather than violations of the EMH. For instance, under the capital asset pricing model (CAPM), strategies like value or momentum investing often generate statistically significant alphas (abnormal returns), suggesting inefficiency; however, multifactor models, such as the Fama-French three-factor model introduced in 1993, pull these alphas toward zero by attributing the premiums to compensation for size and value risks not captured by the CAPM. This inseparability defends the EMH against critiques based on anomalies, as proponents argue that documented patterns—such as the size effect or post-earnings announcement drift—do not conclusively disprove efficiency without a complete, correct risk-adjustment model, which remains elusive. Fama has emphasized that the problem implies "rationality is not established by the existing tests... and the joint-hypothesis problem likely means that it cannot be established," underscoring the non-falsifiability of strict EMH but also its resilience to empirical challenges. Critics, including behavioral finance advocates like Robert Shiller, acknowledge the issue but contend it complicates rather than resolves debates, as iterative model refinements can perpetually "explain" away inefficiencies without advancing understanding of the underlying mechanisms.
In practice, the joint hypothesis problem has shifted research toward refining models; for example, tests using the Carhart four-factor model (which adds a momentum factor) in the 1990s reduced alphas for many anomalies, supporting the view that markets price risks efficiently even if single-factor benchmarks fail. Yet persistent puzzles, such as the low-volatility anomaly, in which high-beta stocks underperform the predictions of models updated through 2020, highlight ongoing tensions, as no consensus model fully eliminates all apparent mispricings. The framework thus reconciles EMH with the evidence by privileging model evolution over outright rejection, though it demands caution in interpreting anomalies as definitive evidence against efficiency.
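
The sensitivity of measured alpha to the chosen benchmark can be sketched numerically. The example below uses synthetic data and a hypothetical value-tilted strategy to show how an intercept that looks like mispricing under a one-factor benchmark shrinks once the omitted factor is included; the factor series are stand-ins, not actual Fama–French data:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 10_000  # periods of synthetic data

# Synthetic factor returns: market, size (SMB), and value (HML) proxies
mkt = rng.normal(0.006, 0.045, T)
smb = rng.normal(0.002, 0.030, T)
hml = rng.normal(0.003, 0.030, T)

# Hypothetical value-tilted strategy: loads on market and HML, zero true alpha
strat = 0.9 * mkt + 0.5 * hml + rng.normal(0.0, 0.01, T)

def alpha(returns, factors):
    """OLS intercept of returns on a constant plus the given factor columns."""
    X = np.column_stack([np.ones(len(returns))] + factors)
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    return coef[0]

a_1f = alpha(strat, [mkt])             # one-factor (CAPM-style) benchmark
a_3f = alpha(strat, [mkt, smb, hml])   # three-factor benchmark

# The one-factor "alpha" absorbs the unpriced HML exposure;
# the three-factor alpha is statistically indistinguishable from zero.
print(f"one-factor alpha: {a_1f:.4f}   three-factor alpha: {a_3f:.4f}")
```

Because the strategy's premium is pure factor exposure by construction, the apparent anomaly disappears under the richer model—precisely the joint-hypothesis ambiguity: an observed alpha indicts either the market or the benchmark.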

Risk Premiums and Rational Explanations

The Fama–French three-factor model provides a rational risk-based framework for explaining patterns such as the size effect and value premium, which might otherwise appear as inefficiencies. By extending the capital asset pricing model (CAPM) to include small-minus-big (SMB) size and high-minus-low (HML) book-to-market factors alongside the market factor, the model accounts for the higher average returns on small-cap and value stocks as compensation for their greater exposure to systematic risks, including financial distress and sensitivity to economic downturns. Fama and French (1993) empirically demonstrate that these factors capture cross-sectional return variation, with the SMB and HML premiums reflecting undiversifiable risks rather than mispricing. The equity risk premium (ERP), the excess return of stocks over risk-free assets, exemplifies another rational compensation mechanism. U.S. historical data from 1928 to 2023 indicate an arithmetic ERP of approximately 8.6% for stocks over Treasury bills, with geometric averages closer to 6%. Under rational asset pricing, this premium arises from investors' required compensation for equities' higher volatility and covariance with consumption, as formalized in the CAPM. The equity premium puzzle—the premium's perceived excessiveness relative to standard utility parameters—has prompted rational extensions, such as rare disaster models incorporating low-probability, high-impact events like wars or depressions, which elevate precautionary savings and thus required returns; Barro (2006) calibrates such a model to match historical U.S. and global ERP observations using data on macroeconomic disasters. Momentum strategies, in which past winners outperform losers, also admit risk-based rationales consistent with EMH, including exposures to dynamic systematic risks, such as countercyclical betas or industry exposures tied to economic states, under which winner stocks hedge downturns less effectively.
Additionally, rational inattention models posit that investors optimally delay processing firm-specific information because of processing costs, leading to gradual risk repricing that generates intermediate-term momentum as compensation for transient underreaction risks. Empirical tests within multifactor frameworks, such as extensions of the Fama–French model, show momentum returns loading positively on such risks, reconciling the pattern with equilibrium pricing. While not all anomalies yield equally robust risk explanations, these frameworks defend EMH by attributing premiums to priced covariances with aggregate wealth or consumption shocks rather than to irrationality, emphasizing that tests of efficiency are joint tests with models of risk.

Adaptive Markets and Bounded Efficiency

The Adaptive Markets Hypothesis (AMH), proposed by Andrew W. Lo in 2004, posits that financial markets function as complex adaptive systems akin to biological ecosystems, in which efficiency emerges from evolutionary processes rather than static rational equilibrium. Under AMH, investors exhibit bounded rationality and satisfice—selecting satisfactory rather than optimal strategies—owing to cognitive limitations and environmental constraints, leading to market behavior that oscillates between periods of relative efficiency and inefficiency. This framework reconciles the efficient-market hypothesis (EMH) with behavioral anomalies by arguing that market efficiency is not absolute or perpetual but contingent on factors such as the number of participants, their computational abilities, and external shocks like economic crises, which drive adaptation through trial-and-error learning, competition, and resource allocation. Bounded efficiency, as integrated into adaptive perspectives, describes markets in which information is incorporated incompletely or with delay because of these evolutionary dynamics and investors' limited cognitive capacities, challenging strict EMH while affirming partial informational efficiency under stable conditions. For instance, during high-competition phases with abundant arbitrageurs, prices may approximate EMH predictions by rapidly reflecting news; in low-competition or high-uncertainty environments—such as a financial crisis—herd behavior, loss aversion, and slow adaptation can sustain mispricings, as evidenced by prolonged deviations in asset valuations after shocks. Empirical support for AMH includes time-varying predictability in returns: Lo's analysis of U.S. equity data from 1962 to 2004 found serial correlation in weekly returns that rejects a constant random walk, consistent with adaptive shifts rather than fixed inefficiency. AMH implies that strategies exploiting anomalies succeed while adaptation lags but erode as competitors replicate them, fostering a dynamic equilibrium of bounded efficiency.
Unlike pure behavioral critiques, which emphasize persistent biases, AMH attributes such patterns to evolutionary fitness trade-offs, in which risk-taking behaviors (e.g., overconfidence) can enhance long-term viability despite short-term deviations from rational pricing. This bounded view cautions against dismissing EMH outright, as markets periodically self-correct through selection pressures that eliminate underperforming agents, though full efficiency remains unattainable given inherent information costs and heterogeneous beliefs. Applications include portfolio designs that adapt to regime changes, with Lo advocating diversification across strategies and asset classes to hedge evolutionary risks.
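
The time-varying predictability that AMH emphasizes can be illustrated with a rolling first-order autocorrelation of returns: near zero in "efficient" regimes, nonzero when adaptation lags. Below is a minimal sketch on synthetic weekly returns with a deliberately autocorrelated middle regime; all parameters are illustrative, not estimates from Lo's data:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1(x):
    """First-order sample autocorrelation of a return series."""
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

# Synthetic weekly returns: efficient regime, momentum-like regime, efficient again
calm1 = rng.normal(0, 0.02, 300)
trend = np.empty(300)
trend[0] = 0.0
for t in range(1, 300):                      # AR(1) regime with rho = 0.3
    trend[t] = 0.3 * trend[t - 1] + rng.normal(0, 0.02)
calm2 = rng.normal(0, 0.02, 300)
r = np.concatenate([calm1, trend, calm2])

# Rolling 100-week autocorrelation, stepped every 50 weeks
window = 100
rho = [ar1(r[i:i + window]) for i in range(0, len(r) - window, 50)]
# Autocorrelation is typically near zero at the ends and elevated in the middle
print(np.round(rho, 2))
```

Under AMH, such drifting statistics are expected: predictability appears when competitive pressure is low and erodes as participants adapt.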

Applications and Implications

In Passive Investing and Portfolio Theory

The efficient-market hypothesis (EMH) posits that in efficient markets, active strategies cannot consistently generate excess returns after adjusting for risk and transaction costs, thereby underpinning the rationale for passive investing through low-cost index funds that replicate broad market benchmarks. This approach gained prominence following Eugene Fama's formalization of EMH in 1970, which argued that prices fully reflect available information, rendering stock selection and market timing futile as routes to superior performance. John Bogle launched the first retail index mutual fund, the Vanguard 500 Index Fund tracking the S&P 500, on May 1, 1976, explicitly drawing on EMH principles to advocate cost-minimizing strategies that capture market returns. Empirical evidence reinforces EMH's support for passive strategies: studies show that the majority of actively managed funds underperform their passive benchmarks over extended periods. For instance, the SPIVA U.S. Scorecard for the 15-year period ending mid-2023 reported that 92% of large-cap active funds failed to outperform the S&P 500 after fees. Burton Malkiel's analysis of mutual fund performance similarly concludes that professional managers rarely beat broad index funds net of expenses, attributing this to the informational efficiency implied by EMH and to the drag of higher active fees, which averaged 0.65% for active equity funds versus 0.05% for passive index funds as of 2023. In portfolio theory, EMH integrates with modern portfolio theory (MPT), developed by Harry Markowitz in 1952, by asserting that the market portfolio—typically proxied by a broad index fund—lies on the efficient frontier, offering optimal risk-adjusted returns through diversification without the need to exploit security mispricing. Under EMH assumptions, MPT's mean-variance optimization implies that investors should hold the market portfolio as a proxy for the tangency portfolio in the capital asset pricing model (CAPM), minimizing unsystematic risk through broad exposure rather than concentrated bets.
This synergy promotes passive allocation across asset classes, such as the classic 60/40 stock-bond split, to achieve diversified beta exposure aligned with equilibrium pricing.
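
The MPT logic above can be sketched directly: given expected excess returns and a covariance matrix, the tangency (maximum-Sharpe) portfolio is proportional to Σ⁻¹μ. All inputs below are hypothetical illustrations, not market estimates:

```python
import numpy as np

# Hypothetical annual expected excess returns for three asset classes
mu = np.array([0.06, 0.03, 0.02])           # stocks, bonds, commodities

# Hypothetical covariance matrix of annual returns (pairwise correlation ~0.2)
cov = np.array([[0.0400, 0.0020, 0.0040],
                [0.0020, 0.0025, 0.0010],
                [0.0040, 0.0010, 0.0100]])

# Tangency (maximum Sharpe ratio) weights: w proportional to inv(cov) @ mu,
# normalized to sum to one
raw = np.linalg.solve(cov, mu)
w = raw / raw.sum()

sharpe = (w @ mu) / np.sqrt(w @ cov @ w)
print("weights:", np.round(w, 3), " Sharpe:", round(sharpe, 2))
```

Under CAPM equilibrium this tangency portfolio coincides with the market portfolio, which is why a capitalization-weighted index fund serves as its practical proxy.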

Regulatory and Policy Considerations

The efficient-market hypothesis (EMH) serves as a foundational rationale for mandatory disclosure requirements in securities regulation, particularly under the U.S. Securities Act of 1933 and Securities Exchange Act of 1934, which compel issuers to provide timely, accurate public information to enable semi-strong form efficiency, in which prices reflect all publicly available data. The framework assumes that without such regulations, information asymmetries could distort prices and lead to inefficient capital allocation; regulators like the SEC therefore prioritize rules ensuring broad dissemination to prevent selective advantages for institutional investors. Regulation Fair Disclosure (Reg FD), adopted by the SEC on August 10, 2000, exemplifies this by prohibiting companies from selectively disclosing material nonpublic information to analysts or select investors before public release, thereby reinforcing EMH's premise that uniform access fosters rapid price adjustment and market integrity. Prohibitions on insider trading, codified in Section 10(b) of the 1934 Act and Rule 10b-5, align with EMH by barring trades on material nonpublic information; empirical tests indicate that insiders achieve abnormal returns that would undermine strong-form efficiency if permitted, prompting policy to prioritize investor protection over potential efficiency gains from broader information flow. Courts and regulators invoke EMH in enforcement, using event studies to measure price impacts from disclosures on the assumption that efficient reactions validate the materiality of violations; for instance, the "truth-on-the-market" defense in SEC actions holds that a misstatement lacks materiality if corrective facts are already reflected in prices through efficient incorporation.
Broader policy implications of EMH favor limited intervention in pricing mechanisms, positing that markets self-correct deviations through arbitrage. This view has influenced deregulatory stances but drew scrutiny after 2008 for underemphasizing systemic risks such as liquidity failures, which regulations like the Dodd-Frank Act provisions (enacted July 21, 2010) aim to mitigate through enhanced oversight. While EMH supports policies curbing manipulation—e.g., against deceptive practices that degrade informational efficiency—critics note that overreliance on efficiency assumptions may insufficiently address behavioral anomalies, prompting calls for adaptive regulations that account for information-processing costs without stifling competition. The fraud-on-the-market doctrine, grounded in the semi-strong form of the efficient-market hypothesis, presumes that investors in an efficient market rely indirectly on the integrity of the market price, which incorporates all publicly available material information. This presumption, articulated by the U.S. Supreme Court in Basic Inc. v. Levinson (485 U.S. 224, 1988), facilitates class certification in Rule 10b-5 securities fraud actions under the Securities Exchange Act of 1934 by eliminating the need for plaintiffs to prove individualized reliance on alleged misrepresentations. Courts assess market efficiency using factors such as trading volume relative to outstanding shares, the stock's listing on a major exchange, market-maker presence, analyst coverage, and historical price reactions to new information, often drawing on the framework outlined in Cammer v. Bloom (711 F. Supp. 1264, D.N.J. 1989). Event study methodology, which relies on the efficient-market hypothesis to attribute statistically significant stock price movements to specific disclosures, is routinely employed to establish loss causation and quantify economic damages in securities class actions.
This approach regresses abnormal returns—deviations from expected returns based on market models—against event dates tied to corrective disclosures or revelations of prior misstatements, assuming rapid price adjustment in efficient markets. Defendants may challenge the presumption of reliance or efficiency by presenting evidence of no price impact from the alleged misrepresentation, as affirmed in Halliburton Co. v. Erica P. John Fund, Inc. (573 U.S. 258, 2014), though empirical tests of efficiency remain contested due to potential confounding factors like clustered events or low statistical power in single-firm analyses. In practice, the hypothesis supports defendants' arguments against excessive damages by highlighting that post-disclosure price declines often reflect the revelation of underlying truths rather than the misrepresentation itself, consistent with market efficiency. However, critiques in litigation note limitations, such as the joint hypothesis problem—where observed inefficiencies could stem from mispriced risk rather than informational failures—and vulnerabilities during market crashes, where standard event studies may fail to isolate fraud-related impacts. Despite these debates, the doctrine endures, with recent rulings like Goldman Sachs Group, Inc. v. Arkansas Teacher Retirement System (594 U.S. ___, 2021) requiring plaintiffs to sufficiently tie price impacts to specific misstatements while upholding the core EMH-based framework.
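
The mechanics can be sketched as follows: fit a market model over an estimation window, then measure the event-day abnormal return against the model's residual volatility. The data, window lengths, and the simulated 8% disclosure-day drop below are all hypothetical:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic daily returns: a market index and a firm with beta 1.2
n_est = 250                       # estimation window (trading days)
mkt = rng.normal(0.0004, 0.01, n_est + 1)
firm = 0.0002 + 1.2 * mkt + rng.normal(0, 0.012, n_est + 1)
firm[-1] -= 0.08                  # simulated 8% drop on the event day
                                  # (e.g., a corrective disclosure)

# Market model estimated on the pre-event window
X = np.column_stack([np.ones(n_est), mkt[:n_est]])
(a, b), *_ = np.linalg.lstsq(X, firm[:n_est], rcond=None)
resid_sd = np.std(firm[:n_est] - (a + b * mkt[:n_est]), ddof=2)

# Event-day abnormal return and its t-statistic
ar = firm[-1] - (a + b * mkt[-1])
t = ar / resid_sd
print(f"abnormal return: {ar:.3%}   t-stat: {t:.1f}")
```

A large, statistically significant abnormal return on the disclosure date is the quantitative core of loss-causation and damages arguments, while confounded or insignificant results support the defense positions discussed above.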

Recent Developments

Technological Advances and High-Frequency Trading

Technological advances in computing power, algorithmic software, and low-latency network infrastructure have profoundly influenced trading dynamics, enabling the rise of high-frequency trading (HFT), which executes large numbers of orders at extremely high speed, often in milliseconds or microseconds, using proprietary algorithms. These developments—including the proliferation of fiber-optic cables, dedicated data feeds, and co-location services that place servers physically near exchange matching engines—reduced latency from seconds in the 1990s to microseconds by the mid-2000s. Regulatory changes such as the U.S. Securities and Exchange Commission's Regulation National Market System (Reg NMS) in 2005 further accelerated HFT adoption by promoting competition among trading venues and requiring best execution prices, and HFT firms accounted for over 50% of U.S. equity trading volume by 2009. In the context of the efficient-market hypothesis (EMH), HFT is argued to enhance market efficiency by facilitating the rapid incorporation of new information into prices, supporting the semi-strong form of EMH, which posits that prices reflect all publicly available information. Empirical studies indicate that HFT improves price discovery, as high-frequency traders contribute significantly to intraday price formation without engaging in random or manipulative strategies. For instance, analysis of U.S. futures markets shows that HFT increases price efficiency by reducing pricing errors and accelerating adjustment toward fundamental value. Similarly, HFT participation has been found to narrow bid-ask spreads and boost liquidity, metrics consistent with more efficient markets in which transaction costs are minimized and prices more accurately reflect underlying asset values. Evidence from specific events further underscores HFT's role in efficiency: following low-attention earnings announcements, HFT activity reduces price inefficiencies by 65% to 100%, as algorithms quickly process and trade on dispersed information that slower participants overlook.
Interruptions in HFT activity, such as during experimental shutdowns, lead to measurable declines in market quality across multiple dimensions, including wider spreads and reduced depth, suggesting HFT's net positive contribution to efficient pricing. Critics contend that HFT can amplify short-term volatility or contribute to events like the 2010 Flash Crash, potentially introducing noise that challenges EMH's assumption of rational pricing; yet subsequent research attributes such incidents more to order imbalances than to inherent HFT flaws, with the overall evidence favoring efficiency gains over instability. Recent integrations of advanced technologies, such as AI-driven algorithms within HFT frameworks, continue to refine these effects, with studies showing sustained improvements in real-time market efficiency from low-latency activity that outpaces traditional trading. While efficiency appears to vary over time—clustering in high-efficiency periods interrupted by lower ones, in line with adaptive market perspectives—the preponderance of peer-reviewed findings supports HFT as a mechanism that aligns prices more closely with fundamentals, bolstering rather than undermining EMH in technologically advanced markets.

Evidence from Emerging Markets and Crises Post-2020

Empirical tests of the efficient-market hypothesis (EMH) in emerging markets often reveal deviations from weak-form efficiency, with returns exhibiting serial correlation and predictability inconsistent with a random walk. Variance ratio tests and runs tests applied to indices in the BRICS nations (Brazil, Russia, India, China, South Africa) and other emerging regions frequently reject the null hypothesis of no serial dependence, attributing this to factors like thin trading volumes, higher volatility, and regulatory weaknesses. For example, a study of 15 emerging equity markets found persistent anomalies and calendar effects, suggesting prices do not fully incorporate historical information. During the COVID-19 crisis from January to July 2020, equity markets displayed heightened inefficiencies, with futures-spot mispricing intensifying; for instance, the basis of China's SSE50 index futures predicted spot prices, indicating incomplete information reflection amid panic selling and disruptions. Ljung-Box and Lo-MacKinlay variance ratio tests confirmed deviations from efficiency across these markets, exacerbated by global uncertainty and capital outflows from emerging economies exceeding $83 billion in March 2020 alone. Similar patterns emerged during the Russia-Ukraine conflict (February to June 2022), when Russia's MXRU index shifted from overpricing to underpricing, and the Middle East crisis (October 2023 to September 2024), which showed persistent arbitrage opportunities, challenging semi-strong EMH as political shocks amplified deviations. Contrasting evidence suggests partial resilience: stock indices in eight emerging markets from January 2020 to February 2022 predicted economic activity and improved forecasts, in line with EMH's informational benchmark. The findings are mixed, however, with crisis-induced volatility often producing overreactions more pronounced in emerging contexts because of limited institutional depth, supporting critiques that EMH assumes away real-world frictions like uneven access to information.
Overall, post-2020 crises underscore that while short-term adjustments occur, emerging markets exhibit bounded efficiency, with anomalies persisting longer than in developed counterparts.
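
The Lo–MacKinlay variance ratio statistic used in these studies compares the variance of q-period returns with q times the one-period variance; a random walk implies a ratio near one, while positive serial correlation pushes it above one. A minimal homoskedastic sketch on synthetic data (illustrative only, not a full test with asymptotic standard errors):

```python
import numpy as np

def variance_ratio(returns, q):
    """Variance ratio VR(q) = Var(q-period returns) / (q * Var(1-period returns))."""
    r = np.asarray(returns, dtype=float)
    n = len(r)
    mu = r.mean()
    var1 = np.sum((r - mu) ** 2) / n
    # Overlapping q-period returns: sums of q consecutive one-period returns
    rq = np.convolve(r, np.ones(q), mode="valid")
    varq = np.sum((rq - q * mu) ** 2) / (n * q)
    return varq / var1

rng = np.random.default_rng(3)
rw = rng.normal(0, 0.01, 2000)               # random-walk increments
mom = np.empty(2000)                         # positively autocorrelated returns
mom[0] = rng.normal(0, 0.01)
for t in range(1, 2000):                     # AR(1) with rho = 0.25
    mom[t] = 0.25 * mom[t - 1] + rng.normal(0, 0.01)

vr_rw = variance_ratio(rw, 5)                # close to 1 under a random walk
vr_mom = variance_ratio(mom, 5)              # above 1 with serial correlation
print(round(vr_rw, 2), round(vr_mom, 2))
```

Rejections of weak-form efficiency in the studies above correspond to ratios that differ significantly from one after accounting for sampling error.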

Integration with AI, Machine Learning, and Cryptocurrencies

The application of artificial intelligence (AI) and machine learning (ML) to financial markets has prompted a reevaluation of the efficient-market hypothesis (EMH), particularly its implications for information processing and price discovery. AI-driven trading accelerates the incorporation of new data into asset prices, potentially enhancing market efficiency by reducing informational asymmetries and enabling rapid arbitrage of mispricings. However, empirical tests using ML models, such as neural networks, have identified short-term predictability in stock returns, challenging the weak form of EMH, which posits that past prices fully reflect historical information. For instance, artificial neural networks have demonstrated superior directional prediction accuracy for indices like the NYSE 100 and FTSE 100, achieving out-of-sample performance that suggests exploitable patterns persist despite transaction costs. Critics argue that ML's ability to detect non-linear patterns in high-dimensional data undermines EMH by revealing inefficiencies overlooked by traditional linear models; proponents counter that any predictive edge erodes as AI adoption diffuses across market participants, restoring equilibrium. A study of ML forecasts of cross-sectional returns found that while models generate statistically significant predictions, their economic value diminishes after accounting for risk adjustment and implementation costs, consistent with semi-strong EMH in which public information is quickly impounded. Furthermore, low-cost AI universal approximators may reshape trading by democratizing advanced prediction tools, but simulations indicate that widespread deployment leads to crowded strategies and amplified volatility during stress periods rather than sustained alpha generation. This dynamic supports adaptive interpretations of EMH, in which efficiency evolves with technological capabilities but remains bounded by behavioral and computational limits.
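
A weak-form check in the spirit of these ML studies can be sketched with a linear direction predictor on lagged returns (a deliberately simple stand-in for richer models such as neural networks); on genuinely unpredictable synthetic returns, the out-of-sample hit rate should hover near the 50% random-walk benchmark:

```python
import numpy as np

rng = np.random.default_rng(4)
r = rng.normal(0, 0.02, 3000)              # synthetic returns with no true structure

# Features: five lagged returns; target: sign of the next return
lags = 5
X = np.column_stack([r[i:len(r) - lags + i] for i in range(lags)])
y = np.sign(r[lags:])

split = 2000
Xtr, Xte, ytr, yte = X[:split], X[split:], y[:split], y[split:]

# Linear predictor fit by least squares on the training window
X1 = np.column_stack([np.ones(len(Xtr)), Xtr])
coef, *_ = np.linalg.lstsq(X1, ytr, rcond=None)
pred = np.sign(np.column_stack([np.ones(len(Xte)), Xte]) @ coef)

hit = np.mean(pred == yte)
print(f"out-of-sample hit rate: {hit:.1%}")   # near 50% when returns are unpredictable
```

On real data, hit rates persistently and materially above chance, net of costs, would count as evidence against weak-form efficiency; on this synthetic random walk, whatever fit the model finds in sample does not survive out of sample.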
In cryptocurrency markets, EMH faces greater scrutiny owing to observed deviations from randomness, such as autocorrelation in returns and speculative bubbles, indicating weak-form inefficiency particularly in early trading periods. Tests on Bitcoin using variance ratio and runs-test methods reject random-walk behavior, with predictability linked to factors like illiquidity and volatility that hinder rapid information diffusion. Meta-analyses of efficiency studies confirm persistent inefficiencies, though time-varying measures show gradual maturation as trading volumes grow and institutional participation increases, consistent with adaptive market hypothesis extensions. The intersection of AI/ML with cryptocurrencies amplifies these tensions: ML models applied to blockchain data—incorporating sentiment from social media and on-chain metrics—have yielded predictive accuracies exceeding random benchmarks, exploiting asymmetries in nascent markets. Yet as AI enhances liquidity provision and market making on crypto exchanges, efficiency metrics improve, suggesting that technological integration may propel these markets toward semi-strong EMH over time. Empirical evidence from top cryptocurrencies indicates that while short-term inefficiencies persist, AI-driven trading narrows them, though the risk of flash crashes from synchronized algorithms underscores limits to full efficiency.
