Adaptive estimator

In statistics, an adaptive estimator is an estimator in a parametric or semiparametric model with nuisance parameters such that the presence of these nuisance parameters does not affect the efficiency of estimation.

Adaptive projected subgradient method

The adaptive projected subgradient method (APSM) is an iterative algorithm for minimizing a sequence of cost functions.

Auxiliary particle filter

The auxiliary particle filter is a particle filtering algorithm introduced by Pitt and Shephard in 1999 to improve some deficiencies of the sequential importance resampling (SIR) algorithm when ...

Backcasting

Backcasting starts with defining a desirable future and then works backwards to identify policies and programs that will connect the future to the present.

Backus–Gilbert method

In mathematics, the Backus–Gilbert method, also known as the optimally localized average method, is named for its discoverers, geophysicists George E. Backus and James Freeman Gilbert.

Bayes estimator

In estimation theory and decision theory, a Bayes estimator or a Bayes action is an estimator or decision rule that minimizes the posterior expected value of a loss function (i.e., the posterior expected loss).
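
A minimal sketch (illustrative, not from the source): under squared-error loss the Bayes estimator is the posterior mean. With a Beta prior on a Bernoulli success probability, that mean has a closed form.

```python
# Bayes estimator under squared-error loss: the posterior mean.
# Beta(a, b) prior on a Bernoulli success probability, k successes in
# n trials -> posterior Beta(a + k, b + n - k) with mean (a + k)/(a + b + n).

def bayes_estimate_bernoulli(k, n, a=1.0, b=1.0):
    """Posterior-mean (Bayes) estimate of p under a Beta(a, b) prior."""
    return (a + k) / (a + b + n)

# Uniform Beta(1, 1) prior, 7 successes in 10 trials:
# the raw frequency 0.7 is shrunk toward the prior mean 0.5.
print(bayes_estimate_bernoulli(7, 10))  # 8/12 = 0.666...
```

The shrinkage toward the prior mean is what distinguishes the Bayes estimate from the plain frequency k/n.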

Bayesian spam filtering

Bayesian spam filtering (named after Rev. Thomas Bayes) is a statistical technique of e-mail filtering that uses Bayes' theorem to compute the probability that a message is spam.

Best linear unbiased prediction

In statistics, best linear unbiased prediction (BLUP) is used in linear mixed models for the estimation of random effects.

Blind deconvolution

In electrical engineering and applied mathematics, blind deconvolution refers to deconvolution without explicit knowledge of the impulse response function used in the convolution.

Chapman–Robbins bound

In statistics, the Chapman–Robbins bound or Hammersley–Chapman–Robbins bound is a lower bound on the variance of estimators of a deterministic parameter.

Confidence region

In statistics, a confidence region is a multi-dimensional generalization of a confidence interval.

Consistent estimator

In statistics, a consistent estimator or asymptotically consistent estimator is an estimator—a rule for computing estimates of a parameter θ_{0}—having the property that ...

Cramér–Rao bound

In estimation theory and statistics, the Cramér–Rao bound (CRB) or Cramér–Rao lower bound (CRLB), named in honor of Harald Cramér and Calyampudi Radhakrishna Rao who were among the first t...

Data assimilation

Data assimilation is the process by which observations are incorporated into a computer model of a real system.

Delphi method

The Delphi method is a structured communication technique, originally developed as a systematic, interactive forecasting method which relies on a panel of experts.

Efficient estimator

In statistics, an efficient estimator is an estimator that estimates the quantity of interest in some “best possible” manner.

Empirical likelihood

Empirical likelihood (EL) is an estimation method in statistics.

Empirical probability

The empirical probability, also known as relative frequency, or experimental probability, is the ratio of the number of outcomes in which a specified event occurs to the total number of trials.
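
As a small illustration (not from the source), the relative frequency is just a count divided by the number of observations:

```python
# Empirical probability as a relative frequency: count how often an event
# occurs among observed outcomes and divide by the number of observations.

def empirical_probability(outcomes, event):
    """Fraction of outcomes for which the predicate `event` holds."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

rolls = [1, 4, 6, 6, 2, 6, 3, 5, 6, 6]          # ten die rolls
p_six = empirical_probability(rolls, lambda r: r == 6)
print(p_six)  # 5 sixes out of 10 rolls -> 0.5
```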

Ensemble Kalman filter

The ensemble Kalman filter (EnKF) is a recursive filter suitable for problems with a large number of variables, such as discretizations of partial differential equations in geophysical models.

Estimating equations

In statistics, the method of estimating equations is a way of specifying how the parameters of a statistical model should be estimated.

Estimation

Estimation is the process of finding an estimate, or approximation, which is a value that is usable for some purpose even if input data may be incomplete, uncertain, or unstable.

Estimation of signal parameters via rotational invariance techniques

In estimation theory, estimation of signal parameters via rotational invariance techniques (ESPRIT) is a technique to determine the parameters of a mixture of sinusoids in background noise.

Estimation theory

Estimation theory is a branch of statistics that deals with estimating the values of parameters based on measured/empirical data that has a random component.

Estimator

In statistics, an estimator is a rule for calculating an estimate of a given quantity based on observed data: thus the rule and its result (the estimate) are distinguished.

Expectation–maximization algorithm

In statistics, an expectation–maximization (EM) algorithm is an iterative method for finding maximum likelihood or maximum a posteriori (MAP) estimates of parameters in statistical models, where the model depends on unobserved latent variables.
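
A minimal sketch of the E-step/M-step cycle, under simplifying assumptions (a 1D mixture of two Gaussians with known unit variances; the data and starting values below are invented for illustration):

```python
import math

# Illustrative EM for a two-component 1D Gaussian mixture with known unit
# variances: the E-step computes each point's responsibility, the M-step
# re-estimates the two means and the mixing weight.

def em_two_gaussians(data, mu1, mu2, mix=0.5, n_iter=50):
    def phi(x, mu):  # N(mu, 1) density
        return math.exp(-0.5 * (x - mu) ** 2) / math.sqrt(2 * math.pi)
    for _ in range(n_iter):
        # E-step: responsibility of component 1 for each point
        r = [mix * phi(x, mu1) / (mix * phi(x, mu1) + (1 - mix) * phi(x, mu2))
             for x in data]
        # M-step: responsibility-weighted means and mixing proportion
        s = sum(r)
        mu1 = sum(ri * x for ri, x in zip(r, data)) / s
        mu2 = sum((1 - ri) * x for ri, x in zip(r, data)) / (len(data) - s)
        mix = s / len(data)
    return mu1, mu2, mix

data = [-2.1, -1.9, -2.0, 1.9, 2.1, 2.0, 2.05]
mu1, mu2, mix = em_two_gaussians(data, mu1=-1.0, mu2=1.0)
print(round(mu1, 2), round(mu2, 2))  # recovers means near -2 and 2
```

With well-separated clusters the responsibilities become nearly 0/1 and the M-step reduces to per-cluster sample means.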

Extended Kalman filter

In estimation theory, the extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance.
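
A scalar sketch of one predict/update cycle (the state and measurement models here are hypothetical, chosen only so the Jacobians are simple derivatives):

```python
import math

# One predict/update cycle of a scalar extended Kalman filter for the
# invented model x' = x + sin(x) + w, observed as z = x**2 + v.
# The EKF linearizes f and h at the current estimate via their derivatives.

def ekf_step(x, P, z, Q=0.01, R=0.1):
    # Predict: propagate through f(x) = x + sin(x); Jacobian F = 1 + cos(x)
    F = 1 + math.cos(x)
    x_pred = x + math.sin(x)
    P_pred = F * P * F + Q
    # Update: measurement h(x) = x**2; Jacobian H = 2 * x_pred
    H = 2 * x_pred
    K = P_pred * H / (H * P_pred * H + R)       # Kalman gain
    x_new = x_pred + K * (z - x_pred ** 2)      # innovation correction
    P_new = (1 - K * H) * P_pred
    return x_new, P_new

x, P = ekf_step(x=1.0, P=0.5, z=3.4)
print(x, P)  # updated estimate; variance shrinks after the measurement
```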

Extremum estimator

In statistics and econometrics, extremum estimators are a wide class of estimators for parametric models that are calculated through maximization (or minimization) of a certain objective functi...

Filtering problem (stochastic processes)

In the theory of stochastic processes, the filtering problem is a mathematical model for a number of filtering problems in signal processing and the like.

Fisher information

In mathematical statistics, the Fisher information (sometimes simply called information) is a way of measuring the amount of information that an observable random variable X carries about an unknown parameter θ upon which the probability of X depends.

Fixed effects model

In econometrics and statistics, a fixed effects model is a statistical model that represents the observed quantities in terms of explanatory variables that are treated as if the quantities were ...

Forecast error

In statistics, a forecast error is the difference between the actual or real and the predicted or forecast value of a time series or any other phenomenon of interest.

Fraction of variance unexplained

In statistics, the fraction of variance unexplained (FVU) in the context of a regression task is the fraction of variance of the regressand (dependent variable) Y which cannot be explained, ...

Generalized canonical correlation

In statistics, generalized canonical correlation analysis (gCCA) is a way of making sense of cross-correlation matrices between sets of random variables when there are more than two sets.

Generalized method of moments

In econometrics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models.

Helmert–Wolf blocking

The Helmert–Wolf blocking (HWB) is a least squares solution method for a sparse canonical block-angular (CBA) system of linear equations.

Hodges’ estimator

In statistics, Hodges’ estimator (or the Hodges–Le Cam estimator) is a famous counterexample of an estimator which is "superefficient", i.e. it attains smaller asymptotic variance than regular efficient estim...

Identifiability

In statistics, identifiability is a property which a model must satisfy in order for precise inference to be possible.

Interval estimation

In statistics, interval estimation is the use of sample data to calculate an interval of possible (or probable) values of an unknown population parameter, in contrast to point estimation, which ...
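
A standard example (illustrative, not from the source): the 95% z-interval for a mean when the population standard deviation is assumed known.

```python
import math

# A 95% confidence interval for a population mean with known standard
# deviation sigma: mean +/- z * sigma / sqrt(n), the classic z-interval.

def z_interval(sample, sigma, z=1.96):
    n = len(sample)
    mean = sum(sample) / n
    half = z * sigma / math.sqrt(n)
    return mean - half, mean + half

lo, hi = z_interval([4.9, 5.1, 5.0, 5.2, 4.8], sigma=0.2)
print(lo, hi)  # interval centered on the sample mean 5.0
```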

Invariant estimator

In statistics, the concept of being an invariant estimator is a criterion that can be used to compare the properties of different estimators for the same quantity.

Invariant extended Kalman filter

The invariant extended Kalman filter (IEKF) is a new version of the extended Kalman filter (EKF) for nonlinear systems possessing symmetries (or invariances).

Inverse-variance weighting

In statistics, inverse-variance weighting is a method of aggregating two or more random variables to minimize the variance of the sum.
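
A minimal sketch (not from the source): weight each measurement by the reciprocal of its variance; the combined estimate then has variance equal to the reciprocal of the summed weights.

```python
# Inverse-variance weighting: combine independent measurements of the same
# quantity with weights 1/variance; among weighted averages this choice
# minimizes the variance of the result.

def ivw(values, variances):
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    mean = sum(w * x for w, x in zip(weights, values)) / total
    combined_variance = 1.0 / total
    return mean, combined_variance

mean, var = ivw([10.0, 12.0], [1.0, 4.0])
print(mean, var)  # 10.4 0.8 -- the precise measurement dominates
```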

James–Stein estimator

The James–Stein estimator is a nonlinear estimator which can be shown to dominate, or outperform, the "ordinary" (least squares) technique.
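
A sketch of the basic shrinkage formula (assuming p ≥ 3 independent unit-variance normal observations, one per mean, shrinking toward zero):

```python
# James-Stein shrinkage toward zero for x ~ N(theta, I_p), p >= 3:
# theta_hat = (1 - (p - 2) / ||x||^2) * x.

def james_stein(x):
    p = len(x)
    norm_sq = sum(xi ** 2 for xi in x)
    shrink = 1 - (p - 2) / norm_sq
    return [shrink * xi for xi in x]

est = james_stein([2.0, 0.0, -1.0, 3.0])
print(est)  # every coordinate pulled toward 0 by the factor 6/7
```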

Kaplan–Meier estimator

The Kaplan–Meier estimator (named after Edward L. Kaplan and Paul Meier), also known as the product limit estimator, estimates the survival function from life-time data.
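
A minimal sketch of the product-limit computation (illustrative, not from the source), where each death multiplies the running survival estimate by the fraction of at-risk subjects surviving that time:

```python
# Kaplan-Meier product-limit estimate of the survival function from
# (time, event) pairs, where event=1 is a death and event=0 a censoring.

def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    curve, s = [], 1.0
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = n_here = 0
        while i < len(data) and data[i][0] == t:   # group ties at time t
            deaths += data[i][1]
            n_here += 1
            i += 1
        if deaths:                                 # censorings change no step
            s *= 1 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= n_here
    return curve

curve = kaplan_meier([1, 2, 3, 4, 5], [1, 1, 0, 1, 0])
print(curve)  # steps at the death times 1, 2 and 4
```

Censored subjects leave the risk set without producing a step, which is how the estimator uses partial information from incomplete follow-up.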

Karhunen–Loève theorem

In the theory of stochastic processes, the Karhunen–Loève theorem (named after Kari Karhunen and Michel Loève) is a representation of a stochastic process as an infinite linear combination of o...

Kullback's inequality

In information theory and statistics, Kullback's inequality is a lower bound on the Kullback–Leibler divergence expressed in terms of the large deviations rate function.

Kushner equation

In filtering theory the Kushner equation (after Harold Kushner) is an equation for the conditional probability density of the state of a stochastic non-linear dynamical system, given noisy measu...

L-estimator

In statistics, an L-estimator is an estimator which is an L-statistic – a linear combination of order statistics of the measurements.

Lag windowing

Lag windowing is a technique that consists of windowing the auto-correlation coefficients prior to estimating linear prediction coefficients (LPC).

Least absolute deviations

Least absolute deviations (LAD), also known as Least Absolute Errors (LAE), Least Absolute Value (LAV), Least Absolute Residual (LAR), or the L_{1} norm problem, is a mathematical optimiz...

Lehmann–Scheffé theorem

In statistics, the Lehmann–Scheffé theorem, named after Erich Leo Lehmann and Henry Scheffé, states that any unbiased estimator based only on a complete, sufficient statistic is the unique best ...

Likelihood function

In statistics, a likelihood function (often simply the likelihood) is a function of the parameters of a statistical model.

Likelihood principle

The likelihood principle is a controversial principle of statistical inference which asserts that all of the information in a sample is contained in the likelihood function.

Linear prediction

Linear prediction is a mathematical operation where future values of a discrete-time signal are estimated as a linear function of previous samples.

Linear regression

In statistics, linear regression is an approach for modeling the relationship between a scalar dependent variable y and one or more explanatory variables denoted X.

Location estimation in sensor networks

Location estimation in wireless sensor networks is the problem of estimating the location of an object from a set of noisy measurements, when the measurements are acquired in a distributed...

M-estimator

In statistics, M-estimators are a broad class of estimators, which are obtained as the minima of sums of functions of the data.

Matched filter

In signal processing, a matched filter (originally known as a North filter) is obtained by correlating a known signal, or template, with an unknown signal to detect the presence of the...

Maximum a posteriori estimation

In Bayesian statistics, a maximum a posteriori probability estimate is a mode of the posterior distribution.
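
A closed-form sketch (not from the source): for a Beta prior on a Bernoulli parameter, the posterior mode has an explicit formula.

```python
# MAP estimation: the mode of the posterior. For a Beta(a, b) prior on a
# Bernoulli parameter with k successes in n trials, the posterior is
# Beta(a + k, b + n - k) and its mode is (a + k - 1) / (a + b + n - 2),
# valid when a + k > 1 and b + n - k > 1.

def map_bernoulli(k, n, a=2.0, b=2.0):
    return (a + k - 1) / (a + b + n - 2)

print(map_bernoulli(7, 10))            # 8/12 under a Beta(2, 2) prior
print(map_bernoulli(7, 10, a=1, b=1))  # 0.7: flat prior -> MAP equals MLE
```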

Maximum likelihood

In statistics, maximum-likelihood estimation (MLE) is a method of estimating the parameters of a statistical model.
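
The textbook Bernoulli case, as a minimal sketch (not from the source): the likelihood p^k (1-p)^(n-k) is maximized at the sample frequency.

```python
import math

# Maximum likelihood for a Bernoulli parameter: the log-likelihood
# k*log(p) + (n-k)*log(1-p) is maximized at p_hat = k / n.

def bernoulli_log_likelihood(p, k, n):
    return k * math.log(p) + (n - k) * math.log(1 - p)

k, n = 7, 10
p_hat = k / n
# The MLE beats nearby candidate values of p
for p in (0.5, 0.6, 0.8, 0.9):
    assert bernoulli_log_likelihood(p_hat, k, n) >= bernoulli_log_likelihood(p, k, n)
print(p_hat)  # 0.7
```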

Maximum spacing estimation

In statistics, maximum spacing estimation (MSE or MSP), or maximum product of spacing estimation (MPS), is a method for estimating the parameters of a univariate statistical model.

Mean and predicted response

In linear regression, the mean response and the predicted response are values of the dependent variable calculated from the regression parameters and a given value of the independent variable.

Mean squared error

In statistics, the mean squared error (MSE) of an estimator is one of many ways to quantify the difference between values implied by an estimator and the true values of the quantity being estimated.
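
A small numerical sketch (the estimates below are invented) showing the identity MSE = variance + bias²:

```python
# MSE of a set of estimates of a known true value, together with its
# bias-variance decomposition: MSE = variance + bias**2.

def mse(estimates, true_value):
    return sum((e - true_value) ** 2 for e in estimates) / len(estimates)

def bias(estimates, true_value):
    return sum(estimates) / len(estimates) - true_value

def variance(estimates):
    m = sum(estimates) / len(estimates)
    return sum((e - m) ** 2 for e in estimates) / len(estimates)

estimates = [4.8, 5.1, 5.3, 4.9, 5.4]   # estimates of a true value of 5.0
total = mse(estimates, 5.0)
print(total, variance(estimates) + bias(estimates, 5.0) ** 2)  # equal
```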

Method of moments (statistics)

In statistics, the method of moments is a method of estimation of population parameters such as mean, variance, median, etc.
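
A one-line sketch of the idea (not from the source): equate a sample moment to its theoretical counterpart and solve for the parameter.

```python
# Method of moments for Uniform(0, theta): since E[X] = theta / 2,
# matching the first moment gives theta_hat = 2 * sample mean.

def mom_uniform_theta(sample):
    return 2 * sum(sample) / len(sample)

print(mom_uniform_theta([1.0, 3.0, 2.0, 4.0]))  # 5.0
```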

Method of simulated moments

In econometrics, the method of simulated moments (MSM) (also called simulated method of moments) is a structural estimation technique introduced by Daniel McFadden.

Minimum chi-square estimation

In statistics, minimum chi-square estimation is a method of estimation of unobserved quantities based on observed data.

Minimum distance estimation

Minimum distance estimation (MDE) is a statistical method for fitting a mathematical model to data, usually the empirical distribution.

Minimum mean square error

In statistics and signal processing, a minimum mean square error (MMSE) estimator is an estimation method which minimizes the mean square error (MSE) of the fitted values of a dependent va...

Minimum-variance unbiased estimator

In statistics, a uniformly minimum-variance unbiased estimator or minimum-variance unbiased estimator (UMVUE or MVUE) is an unbiased estimator that has lower variance than any other unbiased estimator for all possible values of the parameter.

MINQUE

In statistics, the theory of minimum norm quadratic unbiased estimation was developed by C.R. Rao.

Motion estimation

Motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence.

Nuisance parameter

In statistics, a nuisance parameter is any parameter which is not of immediate interest but which must be accounted for in the analysis of those parameters which are of interest.

Observed information

In statistics, the observed information, or observed Fisher information, is the negative of the second derivative of the "log-likelihood".

Ordinary least squares

In statistics, ordinary least squares (OLS) or linear least squares is a method for estimating the unknown parameters in a linear regression model.
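
For simple linear regression the OLS solution is closed-form; a minimal sketch (data invented for illustration):

```python
# Ordinary least squares for y = a + b*x: closed-form slope
# b = cov(x, y) / var(x) and intercept a = y_bar - b * x_bar.

def ols_fit(xs, ys):
    n = len(xs)
    xm = sum(xs) / n
    ym = sum(ys) / n
    b = (sum((x - xm) * (y - ym) for x, y in zip(xs, ys))
         / sum((x - xm) ** 2 for x in xs))
    a = ym - b * xm
    return a, b

a, b = ols_fit([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
print(a, b)  # recovers the exact line y = 1 + 2x
```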

Orthogonality principle

In statistics and signal processing, the orthogonality principle is a necessary and sufficient condition for the optimality of a Bayesian estimator.

Particle filter

Particle filters or Sequential Monte Carlo (SMC) methods are a set of on-line posterior density estimation algorithms that estimate the posterior density of the state-space by directly imp...

Pitman closeness criterion

In statistical theory, the Pitman closeness criterion, named after E. J. G. Pitman, is a way of comparing two candidate estimators for the same parameter.

Point estimation

In statistics, point estimation involves the use of sample data to calculate a single value (known as a statistic) which is to serve as a "best guess" or "best estimate" of an unknown (fixed or ...

Pyrrho's lemma

In statistics, Pyrrho's lemma is the result that if one adds just one extra, but specially formulated, variable as a regressor to a linear regression model, one can get any desired outcome in t...

Quasi-maximum likelihood

A quasi-maximum likelihood estimate (QMLE, also known as a "pseudo-likelihood estimate" or a "composite likelihood estimate") is an estimate of a parameter θ in a statistical model that is...

Rao–Blackwell theorem

In statistics, the Rao–Blackwell theorem, sometimes referred to as the Rao–Blackwell–Kolmogorov theorem, is a result which characterizes the transformation of an arbitrarily crude estimato...

Recursive Bayesian estimation

Recursive Bayesian estimation, also known as a Bayes filter, is a general probabilistic approach for estimating an unknown probability density function recursively over time using incoming...
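
A minimal sketch of one predict/update cycle over a discrete state space (the two-state weather model and its probabilities are invented for illustration):

```python
# A discrete Bayes filter over two hidden states ("rain", "sun"):
# predict with the transition model, then update with the likelihood of
# the new observation, renormalizing to keep a proper distribution.

TRANSITION = {"rain": {"rain": 0.7, "sun": 0.3},
              "sun": {"rain": 0.3, "sun": 0.7}}
LIKELIHOOD = {"umbrella": {"rain": 0.9, "sun": 0.2},
              "no_umbrella": {"rain": 0.1, "sun": 0.8}}

def bayes_filter_step(belief, observation):
    # Predict: push the current belief through the transition model
    predicted = {s: sum(belief[p] * TRANSITION[p][s] for p in belief)
                 for s in belief}
    # Update: weight by the observation likelihood and renormalize
    unnorm = {s: LIKELIHOOD[observation][s] * predicted[s] for s in predicted}
    z = sum(unnorm.values())
    return {s: v / z for s, v in unnorm.items()}

belief = {"rain": 0.5, "sun": 0.5}
belief = bayes_filter_step(belief, "umbrella")
print(belief)  # seeing an umbrella raises the probability of rain
```

The Kalman filter and the particle filter are the Gaussian and Monte Carlo instances of this same predict/update recursion.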

Regularization perspectives on support vector machines

Regularization perspectives on support vector machines provide a way of interpreting support vector machines (SVMs) in the context of other machine learning algorithms.

Relaxed intersection

The relaxed intersection of intervals is not necessarily an interval.

Restricted maximum likelihood

In statistics, the restricted (or residual, or reduced) maximum likelihood (REML) approach is a particular form of maximum likelihood estimation which does not base estim...

Richardson–Lucy deconvolution

The Richardson–Lucy algorithm, also known as Lucy–Richardson deconvolution, is an iterative procedure for recovering a latent image that has been blurred by a known point spread function.
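A one-dimensional sketch of the iteration, assuming a known PSF and noiseless data (the function name and the toy PSF are illustrative):

```python
import numpy as np

# Illustrative 1-D Richardson-Lucy iteration: repeatedly reweight the current
# estimate by the blurred-ratio correction, convolved with the mirrored PSF.
def richardson_lucy_1d(observed, psf, iterations=50):
    estimate = np.full(observed.shape, observed.mean())
    psf_mirror = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid division by zero
        estimate *= np.convolve(ratio, psf_mirror, mode="same")
    return estimate

# Blur a single spike with a known PSF, then deconvolve.
truth = np.zeros(21)
truth[10] = 1.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy_1d(observed, psf)
```

On noiseless data the iteration sharpens the blurred spike back toward its original location.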

Score (statistics)

In statistics, the score, score function, efficient score or informant plays an important role in several aspects of inference.

Scoring algorithm

In statistics, Fisher's scoring algorithm is a form of Newton's method used to solve maximum likelihood equations numerically.
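As a hedged illustration (the exponential model and helper name are chosen for the example), Fisher scoring for the rate λ of an exponential distribution, where the score is n/λ − Σx and the Fisher information is n/λ²:

```python
import numpy as np

# Illustrative Fisher scoring for an exponential rate:
# update lam <- lam + U(lam) / I(lam), with U(lam) = n/lam - sum(x)
# and Fisher information I(lam) = n / lam**2.
def fisher_scoring_rate(x, lam=1.0, tol=1e-12, max_iter=200):
    x = np.asarray(x, dtype=float)
    n = x.size
    for _ in range(max_iter):
        score = n / lam - x.sum()
        info = n / lam ** 2
        step = score / info
        lam += step
        if abs(step) < tol:
            break
    return lam

rng = np.random.default_rng(0)
sample = rng.exponential(scale=0.5, size=10_000)  # true rate is 2
lam_hat = fisher_scoring_rate(sample)
```

For this model the iteration converges to the closed-form maximum likelihood estimate, the reciprocal of the sample mean.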

Sequential estimation

In statistics, sequential estimation refers to estimation methods in sequential analysis where the sample size is not fixed in advance.

Set estimation

In statistics, a random vector x is classically represented by a probability density function; in the set-membership approach, x is instead characterized by a set that is guaranteed to contain it.

Shrinkage estimator

In statistics, a shrinkage estimator is an estimator that, either explicitly or implicitly, incorporates the effects of shrinkage.
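A minimal sketch of explicit shrinkage (the helper and its weighting scheme are illustrative): a convex combination pulls a raw estimate toward a fixed target.

```python
# Illustrative shrinkage: weight = 0 keeps the raw estimate unchanged,
# weight = 1 returns the target; values in between trade variance for bias.
def shrink(raw_estimate, target, weight):
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return (1.0 - weight) * raw_estimate + weight * target

shrunk = shrink(10.0, 0.0, 0.3)  # raw estimate 10 pulled 30% toward 0
```

Choosing the weight from the data itself (as in James–Stein-type estimators) is what makes shrinkage reduce overall risk.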

Sieve estimator

In statistics, sieve estimators are a class of non-parametric estimator which use progressively more complex models to estimate an unknown high-dimensional function as more data becomes availabl...

Simple linear regression

In statistics, simple linear regression is the least squares estimator of a linear regression model with a single explanatory variable.
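The closed-form least squares solution can be sketched as follows (the helper name is illustrative):

```python
# Ordinary least squares for y = intercept + slope * x:
# slope = S_xy / S_xx and intercept = ybar - slope * xbar.
def simple_linear_regression(x, y):
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    s_xy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    s_xx = sum((xi - xbar) ** 2 for xi in x)
    slope = s_xy / s_xx
    intercept = ybar - slope * xbar
    return slope, intercept

# Data lying exactly on y = 2x + 1 is recovered exactly.
slope, intercept = simple_linear_regression([0, 1, 2, 3], [1, 3, 5, 7])
```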

Small area estimation

Small area estimation is any of several statistical techniques involving the estimation of parameters for small sub-populations, generally used when the sub-population of interest is included in...

Spectral density estimation

In statistical signal processing, the goal of spectral density estimation (SDE) is to estimate the spectral density (also known as the power spectral density) of a random signal from a sequence ...
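A minimal sketch of the simplest such estimator, the raw periodogram (the function name and the one-sided scaling convention are one common choice, not the only one):

```python
import numpy as np

# Raw one-sided periodogram: |rFFT|^2 scaled by 1/(fs * N), with interior
# bins doubled to fold in the negative frequencies.
def periodogram(x, fs=1.0):
    n = len(x)
    spectrum = np.abs(np.fft.rfft(x)) ** 2 / (fs * n)
    spectrum[1:-1] *= 2.0
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    return freqs, spectrum

# A pure 32 Hz sinusoid sampled at 256 Hz lands exactly on one frequency bin.
fs = 256
t = np.arange(256) / fs
freqs, psd = periodogram(np.sin(2 * np.pi * 32 * t), fs=fs)
```

In practice the raw periodogram is usually smoothed or averaged (e.g., Welch's method) because its variance does not shrink with the record length.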

Stein's unbiased risk estimate

In statistics, Stein's unbiased risk estimate (SURE) is an unbiased estimator of the mean-squared error of "a nearly arbitrary, nonlinear biased estimator."

Stochastic optimization

Stochastic optimization (SO) methods are optimization methods that generate and use random variables.
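As a hedged illustration, one of the simplest stochastic methods is pure random search (all names here are invented for the example): sample candidate solutions at random and keep the best.

```python
import random

# Illustrative pure random search over an interval: draw candidates
# uniformly, track the best objective value seen so far.
def random_search(objective, low, high, n_samples=2000, seed=0):
    rng = random.Random(seed)
    best_x, best_f = None, None
    for _ in range(n_samples):
        x = rng.uniform(low, high)
        f = objective(x)
        if best_f is None or f < best_f:
            best_x, best_f = x, f
    return best_x, best_f

# Minimize (x - 3)^2 over [0, 10].
x_star, f_star = random_search(lambda x: (x - 3.0) ** 2, 0.0, 10.0)
```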

Testimator

A testimator is an estimator whose value depends on the result of a test for statistical significance.

Tikhonov regularization

Tikhonov regularization, named for Andrey Tikhonov, is the most commonly used method of regularization of ill-posed problems.
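A minimal sketch, assuming the standard form min ‖Ax − b‖² + α‖x‖² solved through the regularized normal equations (the helper name is illustrative):

```python
import numpy as np

# Tikhonov/ridge solution: solve (A^T A + alpha I) x = A^T b.
# alpha -> 0 recovers ordinary least squares; large alpha shrinks x toward 0.
def tikhonov_solve(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 1.0, 2.0])        # consistent with x = [1, 1]
x_small = tikhonov_solve(A, b, 1e-10)  # ~ ordinary least squares
x_big = tikhonov_solve(A, b, 1e6)      # heavy shrinkage toward zero
```

The regularization term makes the system solvable and stable even when AᵀA is singular or badly conditioned.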

Trend estimation

Trend estimation is a statistical technique to aid interpretation of data.

Trimmed estimator

Given an estimator, a trimmed estimator is obtained by excluding some of the extreme values.
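A minimal sketch using the trimmed mean (the helper and its trimming rule are illustrative):

```python
# Illustrative trimmed mean: drop the lowest and highest `proportion`
# of the sorted observations before averaging.
def trimmed_mean(data, proportion):
    data = sorted(data)
    k = int(len(data) * proportion)
    kept = data[k:len(data) - k] if k > 0 else data
    return sum(kept) / len(kept)

# The outlier 100 is discarded, so the estimate stays near the bulk of the data.
robust = trimmed_mean([1, 2, 3, 4, 100], 0.2)
```

Trimming trades a little efficiency under clean data for robustness against outliers.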

U-statistic

In statistical theory, U-statistics are a class of statistics especially important in estimation theory; the letter "U" stands for unbiased.
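As a hedged illustration (the helper name is invented), a degree-two U-statistic averages a symmetric kernel over all unordered pairs; with kernel h(x, y) = (x − y)²/2 it reproduces the unbiased sample variance:

```python
from itertools import combinations

# Degree-two U-statistic: average a symmetric kernel over all unordered pairs.
def u_statistic(data, kernel):
    pairs = list(combinations(data, 2))
    return sum(kernel(x, y) for x, y in pairs) / len(pairs)

# Kernel h(x, y) = (x - y)**2 / 2 gives the unbiased sample variance.
var_u = u_statistic([1.0, 2.0, 4.0], lambda x, y: (x - y) ** 2 / 2)
```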

V-statistic

V-statistics are a class of statistics named for Richard von Mises who developed their asymptotic distribution theory in a fundamental paper in 1947.

Wiener deconvolution

In mathematics, Wiener deconvolution is an application of the Wiener filter to the noise problems inherent in deconvolution.
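A minimal frequency-domain sketch, assuming a circular convolution model and a scalar signal-to-noise ratio (the names and the noiseless check are illustrative):

```python
import numpy as np

# Wiener deconvolution in the frequency domain:
# X_hat(f) = Y(f) * conj(H(f)) / (|H(f)|^2 + 1/snr),
# where H is the transfer function of the blur (here the FFT of the PSF).
def wiener_deconvolve(observed, psf, snr):
    n = len(observed)
    H = np.fft.fft(psf, n)
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(observed) * G))

# Noiseless check: circularly blur a signal, then deconvolve with a large SNR.
signal = np.array([0.0, 1.0, 0.0, 0.0, 2.0, 0.0, 0.0, 0.0])
psf = np.array([0.6, 0.3, 0.1])
observed = np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(psf, len(signal))))
recovered = wiener_deconvolve(observed, psf, snr=1e12)
```

The 1/snr term keeps the filter bounded at frequencies where H is small, which is exactly where naive inverse filtering would amplify noise.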

Wiener filter

In signal processing, the Wiener filter is a filter used to produce an estimate of a desired or target random process by linear time-invariant filtering an observed noisy process, assuming known...

Zakai equation

In filtering theory, the Zakai equation is a linear stochastic partial differential equation for the un-normalized density of a hidden state.
