Abstract
Consider a model for n observations, Yᵢ = μᵢ + σξᵢ (i = 1, …, n), where the ξᵢ are n independent unit normal variables and the μᵢ are constrained by p linear inequalities. The maximum-likelihood estimate of {μᵢ} is the set {Y₀ᵢ} minimizing Σ(Yᵢ − Y₀ᵢ)² subject to the linear inequalities; computing {Y₀ᵢ} requires quadratic programming. This paper is concerned with the distribution of the residual sum of squares Σ(Yᵢ − Y₀ᵢ)², which it is natural to use for inference about σ². Using the Kuhn–Tucker conditions, which the Y₀ᵢ must satisfy, upper and lower bounds are obtained for the percentage points of Σ(Yᵢ − Y₀ᵢ)²/σ². The upper bound is Σ(Yᵢ − Y₀ᵢ)²/σ² ≤ χ²_{n−k}, which holds conditionally on exactly n − k independent linear inequalities being satisfied as equations by the Y₀ᵢ. A direct lower bound u_{γ,n,n−k} is given with (n − k) regarded as a random variable. A more satisfactory Bayesian lower bound, ¾χ²_{⅔(n−k)} ≤ Σ(Yᵢ − Y₀ᵢ)²/σ², holds if {μᵢ} is a priori uniformly distributed over its possible values, and under restrictive conditions on the linear inequalities; this bound holds conditionally, given n − k. Some suggestions for further developments are given, and an application to paired comparisons is considered.
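As a concrete illustration (not part of the paper): when the linear inequalities are the monotone ordering μ₁ ≤ μ₂ ≤ … ≤ μₙ, the quadratic program for {Y₀ᵢ} reduces to the pool-adjacent-violators algorithm. The sketch below (function name `pav` is ours) computes the constrained fit, the residual sum of squares, and the number n − k of inequalities active as equations, the quantity on which the abstract's χ²_{n−k} bound is conditioned.

```python
# Hedged sketch: monotone ordering mu_1 <= ... <= mu_n is one simple
# system of linear inequalities; for it the least-squares quadratic
# program is solved by pool-adjacent-violators.

def pav(y):
    """Least-squares fit to y subject to a non-decreasing constraint."""
    blocks = []  # each block is [sum of values, count]
    for v in y:
        blocks.append([v, 1])
        # Merge backwards while adjacent block means violate monotonicity.
        while len(blocks) > 1 and blocks[-2][0] / blocks[-2][1] > blocks[-1][0] / blocks[-1][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    fit = []
    for s, c in blocks:
        fit.extend([s / c] * c)  # each pooled block shares its mean
    return fit

y = [3.0, 1.0, 2.0]
fit = pav(y)                           # [2.0, 2.0, 2.0]
rss = sum((a - b) ** 2 for a, b in zip(y, fit))   # 2.0
n_minus_k = len(y) - len(set(fit))     # 2 inequalities hold as equations
```

Here rss/σ² would be referred to a χ²_{n−k} upper bound conditionally on n − k = 2, in the sense described in the abstract.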
