June 7, 2022, 1:11 a.m. | Shivam Gupta, Jasper C.H. Lee, Eric Price, Paul Valiant

stat.ML updates on arXiv.org

We consider 1-dimensional location estimation, where we estimate a parameter
$\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d.
from a known distribution $f$. For fixed $f$ the maximum-likelihood estimate
(MLE) is well-known to be optimal in the limit as $n \to \infty$: it is
asymptotically normal with variance matching the Cramér-Rao lower bound of
$\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of $f$.
However, this bound does not hold for finite $n$, or when $f$ varies …
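A minimal simulation sketch of the setup the abstract describes, not taken from the paper: it draws samples $\lambda + \eta_i$ with $\eta_i$ from a known noise distribution $f$, computes the MLE by numerically maximizing $\sum_i \log f(x_i - \lambda)$, and compares the empirical variance of the estimator across trials to the Cramér-Rao bound $\frac{1}{n\mathcal{I}}$. The Gaussian choice of $f$, the SciPy-based optimization, and the sample sizes are illustrative assumptions, not the authors' estimator.

```python
# Illustrative sketch (assumptions: Gaussian noise f, sigma known).
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(0)
lam_true = 3.0          # location parameter to be estimated
sigma = 2.0             # known noise scale; Fisher information I = 1/sigma^2
n, trials = 200, 500    # samples per trial, number of repeated trials

def mle(samples):
    """Maximize the log-likelihood sum_i log f(x_i - lam) over lam."""
    neg_loglik = lambda lam: -norm.logpdf(samples - lam, scale=sigma).sum()
    return minimize_scalar(neg_loglik,
                           bounds=(samples.min(), samples.max()),
                           method="bounded").x

# Repeat the experiment many times to estimate the variance of the MLE.
estimates = np.array([
    mle(lam_true + rng.normal(0.0, sigma, size=n)) for _ in range(trials)
])

fisher_info = 1.0 / sigma**2
print("empirical MLE variance :", estimates.var())
print("Cramér-Rao bound 1/(nI):", 1.0 / (n * fisher_info))
```

For Gaussian $f$ the MLE is just the sample mean and attains the bound exactly even at finite $n$; the abstract's point concerns general known $f$, where the asymptotic guarantee need not hold at finite sample sizes.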

arxiv likelihood location math maximum likelihood estimation
