Partial likelihood methods for panel data

Partial (pooled) likelihood estimation for panel data is a quasi-maximum likelihood method for panel analysis. It assumes that the density of yit given xit is correctly specified for each time period, but allows for misspecification in the joint conditional density of yi = (yi1, ..., yiT) given xi = (xi1, ..., xiT).

Concretely, partial likelihood estimation uses the product of the period-by-period conditional densities as the density of the joint conditional distribution. This generality facilitates maximum likelihood methods in the panel data setting, because fully specifying the joint conditional distribution of yi can be computationally demanding. On the other hand, allowing for misspecification generally violates the information equality, so valid inference requires robust standard error estimators.

In the following exposition, we follow the treatment in Wooldridge. In particular, the asymptotic derivation is done in the fixed-T, growing-N setting.

Writing the conditional density of yit given xit as ft (yit | xit ; θ), the partial maximum likelihood estimator solves:

maxθ∈Θ Σi=1,...,N Σt=1,...,T log ft (yit | xit ; θ)

In this formulation, the joint conditional density of yi given xi is modeled as Πt ft (yit | xit ; θ). We assume that ft (yit | xit ; θ) is correctly specified for each t = 1,...,T and that there exists θ0 ∈ Θ that uniquely maximizes E[log ft (yit | xit ; θ)]. However, the joint conditional density is not assumed to be correctly specified. Under some regularity conditions, the partial MLE is consistent and asymptotically normal.
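As an illustration, the pooled maximization above can be sketched in Python. The per-period density here is chosen to be a probit model, and the simulated panel and all variable names are hypothetical, not from the source:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

# Hypothetical simulated panel: N units, T periods, binary outcome.
rng = np.random.default_rng(0)
N, T = 500, 4
x = rng.normal(size=(N, T))
theta_true = 0.5
y = (theta_true * x + rng.normal(size=(N, T)) > 0).astype(float)

def neg_partial_loglik(theta):
    # Pooled (partial) log likelihood: sum over i and t of
    # log f_t(y_it | x_it; theta), with f_t a probit density:
    # P(y_it = 1 | x_it) = Phi(x_it * theta).
    p = np.clip(norm.cdf(theta[0] * x), 1e-10, 1 - 1e-10)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

# The partial MLE maximizes the pooled log likelihood over theta.
theta_hat = minimize(neg_partial_loglik, x0=[0.0], method="BFGS").x[0]
```

Note that the objective treats the T period densities as if they were independent; no joint density of yi given xi is ever specified.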

By the usual argument for M-estimators (details in Wooldridge), the asymptotic variance of √N (θ̂ − θ0) is A−1BA−1, where A = −E[ Σt ∇²θ log ft (yit | xit ; θ0) ] and B = E[ ( Σt ∇θ log ft (yit | xit ; θ0) ) ( Σt ∇θ log ft (yit | xit ; θ0) )T ]. If the joint conditional density of yi given xi is correctly specified, this formula simplifies because the information equality implies B = A. Yet, except in special circumstances, the joint density modeled by the partial MLE is incorrect, so for valid inference the sandwich formula A−1BA−1 should be used. A sufficient condition for the information equality to hold is that the scores from different time periods are uncorrelated. In dynamically complete models this condition holds, and the simplified asymptotic variance is then valid.
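The sandwich variance A−1BA−1 can be estimated by plugging in sample averages. The sketch below does this for a pooled Poisson model with mean exp(xit θ), where the per-unit score and Hessian have simple closed forms; the simulated data and names are hypothetical:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical panel with a Poisson conditional density, mean exp(x * theta).
rng = np.random.default_rng(1)
N, T = 500, 4
x = rng.normal(size=(N, T))
y = rng.poisson(np.exp(0.5 * x))

# Partial MLE: maximize the pooled log likelihood (constant -log y! dropped).
nll = lambda th: -(y * (th[0] * x) - np.exp(th[0] * x)).sum()
theta_hat = minimize(nll, x0=[0.0], method="BFGS").x[0]

mu = np.exp(theta_hat * x)
# Per-unit score summed over t: s_i = sum_t (y_it - mu_it) x_it
s = ((y - mu) * x).sum(axis=1)
# A-hat: minus the average Hessian, here (1/N) sum_{i,t} mu_it x_it^2
A = (mu * x ** 2).sum() / N
# B-hat: average outer product of per-unit scores; summing the score over t
# first allows arbitrary serial correlation within a unit.
B = (s ** 2).sum() / N
# Robust (sandwich) standard error: sqrt(A^{-1} B A^{-1} / N)
robust_se = np.sqrt(B / A ** 2 / N)
# "Naive" SE obtained if the information equality B = A were imposed
naive_se = np.sqrt(1.0 / (A * N))
```

When the scores across periods really are uncorrelated (as here, with independent draws), the robust and naive standard errors should be close; under serial dependence they diverge and only the robust one is valid.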

Pooled QMLE is a technique for estimating parameters from panel data with Poisson outcomes. For instance, one might have information on the number of patents filed by a number of different firms over time. Pooled QMLE does not explicitly model unobserved effects (either random effects or fixed effects). The computational requirements are less stringent, especially compared to fixed-effects Poisson models, but the trade-off is the possibly strong assumption of no unobserved heterogeneity. "Pooled" refers to pooling the data over the T time periods, while "QMLE" refers to the quasi-maximum likelihood technique.
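The "quasi" in QMLE buys robustness: even if the counts are not truly Poisson, the pooled Poisson QMLE stays consistent as long as the conditional mean is correctly specified. A minimal sketch, with hypothetical overdispersed (negative binomial) data:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: counts are overdispersed (negative binomial), so the
# Poisson density is wrong, but the conditional mean exp(x * beta) is
# correctly specified; the pooled Poisson QMLE remains consistent for beta.
rng = np.random.default_rng(2)
N, T = 1000, 5
x = rng.normal(size=(N, T))
beta_true = 0.3
mean = np.exp(beta_true * x)
r = 2.0  # negative binomial shape; variance = mean + mean**2 / r
y = rng.negative_binomial(n=r, p=r / (r + mean))

# Poisson quasi-log-likelihood, pooled over i and t (constant term dropped)
qll = lambda b: -(y * (b[0] * x) - np.exp(b[0] * x)).sum()
beta_hat = minimize(qll, x0=[0.0], method="BFGS").x[0]
```

Because the Poisson likelihood is misspecified here, the information equality fails and the sandwich standard errors of the previous section are required for inference.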

The Poisson distribution of yit given xit is specified as follows:

f(yit | xit) = exp(−μit) μit^yit / yit!,

where the conditional mean is μit = m(xit, b0), commonly taken to be the exponential form μit = exp(xit b0).
