STAT430: Questions

- Can you please let us know when the term project's 'written submission' is due (specific date)? Thanks.

December 11, the final exam day. (This was stated on the TermProject page.)

saras



Yes.  Confidence intervals are for localizing population parameters (nonrandom quantities that you are trying to estimate).  Prediction intervals are for localizing random variables, about which you presumably know something (e.g., you have estimated their sampling distribution).
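
To make the distinction concrete, here is a small sketch in Python (the data and the 95% level are invented for illustration), comparing the two intervals for a normal sample:

# Compare a 95% confidence interval for the population mean with a
# 95% prediction interval for a new observation (normal model).
import numpy as np
from scipy import stats

x = np.array([4.1, 5.0, 4.7, 5.3, 4.4, 5.1, 4.8, 4.6])  # invented data
n = len(x)
xbar, s = x.mean(), x.std(ddof=1)
t = stats.t.ppf(0.975, df=n - 1)

# Confidence interval: localizes the fixed population mean mu.
ci = (xbar - t * s / np.sqrt(n), xbar + t * s / np.sqrt(n))

# Prediction interval: localizes a new random observation, so it must
# also absorb that observation's own variability (hence the extra 1).
pi = (xbar - t * s * np.sqrt(1 + 1 / n), xbar + t * s * np.sqrt(1 + 1 / n))

print("95% CI for mu:   ", ci)
print("95% PI for new X:", pi)  # always wider than the CI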


SSR is the sum of squared residuals.  SST is the numerator of the standard sample variance, i.e. the sum of squared differences between the observations and their sample mean.  SSR is similar, but the sample mean is replaced with the predicted value, obtained by making assumptions about the relationships of the means across levels.
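
To see the two quantities side by side, here is a small sketch (invented data) for a simple linear regression fit:

# SST measures spread about the sample mean; SSR measures spread about
# the fitted (predicted) values from the regression line.
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
y = np.array([1.9, 4.3, 5.8, 8.1, 9.7, 12.2])

b1, b0 = np.polyfit(x, y, deg=1)   # least-squares slope and intercept
yhat = b0 + b1 * x                 # predicted values

sst = np.sum((y - y.mean()) ** 2)  # deviations from the sample mean
ssr = np.sum((y - yhat) ** 2)      # deviations from the fitted values

print(f"SST = {sst:.3f}, SSR = {ssr:.3f}")  # SSR <= SST for a fit with intercept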


Perhaps much of the data in the field you know best is of this form.  Not all fields are so restricted.  Indeed, people will often ignore this dependence and continue with the analysis as if it were not an issue; one should be cautioned against this approach.  Experimental design does not solve this problem.

For example, suppose you wanted to test lack of fit to some kind of regression model.  To run the test, you need multiple measurements at the same covariate level.  One way is to measure the same individual multiple times, but correlated data could result.  It would be much better to collect data from multiple individuals with the same covariates.  Of course, with continuous covariates, there may be no matching individuals.  Maybe it is time to take a good course on time series analysis to help you model the resulting correlation.
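
For concreteness, here is a sketch of the classical lack-of-fit F test for a straight-line model when replicates are available (the data are invented for illustration):

# Decompose the residual sum of squares into pure error (replicate
# scatter about level means) and lack of fit (level means about the line).
import numpy as np
from scipy import stats

x = np.array([1, 1, 2, 2, 3, 3, 4, 4], dtype=float)  # replicated levels
y = np.array([2.1, 2.4, 4.0, 4.5, 5.2, 5.0, 9.1, 8.7])

b1, b0 = np.polyfit(x, y, deg=1)
sse = np.sum((y - (b0 + b1 * x)) ** 2)   # total residual sum of squares

levels = np.unique(x)
sspe = sum(np.sum((y[x == v] - y[x == v].mean()) ** 2) for v in levels)
sslf = sse - sspe                        # lack-of-fit sum of squares

n, m, p = len(y), len(levels), 2         # p = parameters in the line
F = (sslf / (m - p)) / (sspe / (n - m))
pval = stats.f.sf(F, m - p, n - m)
print(f"F = {F:.2f}, p-value = {pval:.4f}")  # small p: the line does not fit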


Yes, there is a typo in the formula for E[X].  Here is the corrected formula plus a detailed derivation.


Agreed.  The derivation of E[X] in my notes reads more simply.


It is not an iff statement.  One must have all moments (when they exist) match those of a known distribution to conclude that a random variable has that distribution.  See moment generating functions.  Thus, we would also have to check that higher moments, like E[X^3], match those of a Poisson random variable before concluding that X ~ Poisson.
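
To illustrate, here is a small simulation sketch (the seed, sample size, and the two-point comparison law are my own choices).  Both laws below have mean 1 and variance 1, yet their third moments differ (5 for Poisson(1) versus 4 for the two-point law), so matching two moments cannot pin down the distribution:

# Two distributions agreeing in mean and variance but not in E[X^3].
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

poisson = rng.poisson(lam=1.0, size=n)   # Poisson(1): E[X^3] = 5
two_point = rng.choice([0, 2], size=n)   # P(0) = P(2) = 1/2: E[X^3] = 4

for name, z in [("Poisson(1)", poisson), ("two-point", two_point)]:
    z = z.astype(float)
    print(name, z.mean(), z.var(), np.mean(z ** 3))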


Yes, this is an abuse of notation.  Ω is the sample space consisting of all possible outcomes of a random experiment.  A random variable maps Ω to some subset of R.  If we sort of forget about the random experiment and its outcomes, and treat the value of the random variable as the outcome, then we can call this subset of R Ω_X.  Proper, careful notation would probably use something other than Ω for this purpose.
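
Written out carefully (a minimal LaTeX sketch; the induced measure P_X and the set B are my labels, not notation from the course):

% X pushes the probability on Omega forward onto its range.
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}
A random variable is a map $X \colon \Omega \to \mathbb{R}$.
Its range $\Omega_X = X(\Omega) \subseteq \mathbb{R}$ can be treated as a
new ``sample space'' carrying the induced probability measure
\[
  P_X(B) = P\bigl(\{\omega \in \Omega : X(\omega) \in B\}\bigr),
  \qquad B \subseteq \Omega_X .
\]
\end{document}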


As per our discussion of goodness-of-fit tests, the degrees of freedom should be m - 1 less the number of parameters estimated, where m is the number of categories.  In the test of independence, the number of categories is n_r n_c.  Under independence, there are n_r - 1 parameters to estimate for the marginal pmf on rows: one for each category, minus the constraint that the pmf sums to one (Σ_i p_i = 1).  Similarly, there are n_c - 1 additional parameters to estimate for the pmf on columns.  Therefore, the number of degrees of freedom is n_r n_c - (n_r - 1) - (n_c - 1) - 1 = (n_r - 1)(n_c - 1), in agreement with the rule for tests of independence.  In conclusion, the test of independence can be viewed as a special type of goodness-of-fit test.
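
To see the rule in action, here is a quick sketch using an invented 3x4 table; the degrees of freedom reported by the test match (n_r - 1)(n_c - 1):

# Chi-square test of independence on a 3x4 contingency table.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[10, 20, 30, 15],
                  [12, 18, 25, 20],
                  [ 8, 22, 28, 17]])

chi2, pval, dof, expected = chi2_contingency(table)
nr, nc = table.shape
print(dof, (nr - 1) * (nc - 1))          # both are 6
print(f"chi2 = {chi2:.2f}, p = {pval:.3f}")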


There is only a vague course schedule.  We will cover rudimentary experimental design, multiple linear regression, general linear models, logistic regression, Poisson regression, stochastic processes (Bernoulli, Poisson, Brownian motion, discrete-time Markov chains), and simulation, including random number generation, Monte Carlo integration, and MCMC.