December 11, the final exam day. (This was stated on the TermProject? page.)

- Can you please let us know when the term project's written submission is due (specific date)? Thanks.

saras

- After going through the notes again I am still a little fuzzy on the difference between a confidence interval and a prediction interval. Is the confidence interval the 1-$$\alpha$$ range of E[y], while the prediction interval is the 1-$$\alpha$$ range for y itself?

Yes. Confidence intervals are for localizing population parameters (nonrandom quantities that you are trying to estimate). Prediction intervals are for localizing random variables, about which you presumably know something (e.g., you have estimated their sampling distribution); otherwise the exercise would be pointless.
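To make the distinction concrete, here is a small sketch (not from the notes; the simulated data, the point x0 = 5, and the t critical value for 28 degrees of freedom are all illustrative assumptions) that computes both intervals at the same point in a simple linear regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.uniform(0, 10, n)
y = 2.0 + 0.5 * x + rng.normal(scale=1.0, size=n)

# Fit simple linear regression by least squares.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
mse = resid @ resid / (n - 2)            # estimate of the error variance

x0 = 5.0
xbar, sxx = x.mean(), ((x - x.mean()) ** 2).sum()
leverage = 1 / n + (x0 - xbar) ** 2 / sxx
yhat0 = beta[0] + beta[1] * x0
t = 2.048                                # t critical value, 0.975 quantile, df = 28

ci_half = t * np.sqrt(mse * leverage)        # for E[y | x0], a fixed parameter
pi_half = t * np.sqrt(mse * (1 + leverage))  # for a new y at x0, a random variable

print(f"95% CI for E[y]: {yhat0 - ci_half:.2f} .. {yhat0 + ci_half:.2f}")
print(f"95% PI for y:    {yhat0 - pi_half:.2f} .. {yhat0 + pi_half:.2f}")
```

The prediction interval is always wider: it must absorb both the uncertainty in the fitted mean and the variability of the new observation itself (the extra "1" inside the square root).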

- So, a residual is the observed value minus the predicted value, but is this the only definition of residual? If so, it seems like SST should be the sum of squares of residuals. Is that right? By the notation we have used, I could also justify it being SSR. I'm just a little turned around at the moment. This would also clarify the meaning of MSR if it is then SSR/k: the mean squared residual?

SSR is the sum of squared residuals. SST is the numerator of a standard sample variance, i.e., the sum of squared differences between the observations and their sample mean. SSR is similar, but the sample mean is replaced by the predicted value, which is obtained by making assumptions about the relationships of the means across levels.
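A small numeric sketch (illustrative data only, following this page's convention that SSR denotes the residual sum of squares) may help fix the definitions:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.repeat(np.arange(1, 6), 4).astype(float)   # 5 levels, 4 replicates each
y = 3.0 + 1.5 * x + rng.normal(scale=0.8, size=x.size)

# Least-squares fit gives the predicted values y_hat.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
y_hat = X @ beta

sst = ((y - y.mean()) ** 2).sum()   # deviations from the sample mean
ssr = ((y - y_hat) ** 2).sum()      # squared residuals (this page's SSR)

# SSR <= SST always: a fitted model can do no worse than the flat sample mean.
print(f"SST = {sst:.2f}, SSR = {ssr:.2f}, R^2 = {1 - ssr / sst:.3f}")
```

Dividing SSR by its degrees of freedom then gives the mean squared residual, the usual estimate of the error variance.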

- In the MLR assumptions we state that cov($$\epsilon_i$$,$$\epsilon_j$$) = 0 for all i not equal to j, and that this holds for random samples but not necessarily for time series data or repeated measures on an individual. However, it seems like the majority of studies are one of the latter two. Is our work in experimental design teaching us how to satisfy this assumption, by blocking and such, so that our data is in a proper form for MLR analysis? Do we simply violate this assumption sometimes? Or am I thinking of studies in the wrong way; possibly the same mistake of language as thinking of a random variable as a random number generator?

Perhaps a lot of the data in the field you know best is of this form; not all fields are so restricted. Indeed, people often ignore this dependence and continue with the analysis as if it were not an issue. Be cautioned against this approach. Experimental design does not solve the problem.

For example, suppose you wanted to test lack of fit for some kind of regression model. To run the test, you need multiple measurements at the same level. One way is to measure the same individual multiple times, but correlated data could result. It would be much better to collect data from multiple individuals with the same covariates. Of course, with continuous covariates, there may be no matching individuals. It may be time to take a good course on time series analysis to help you model the resulting correlation.
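A quick simulation shows what ignoring the dependence costs. This sketch is entirely illustrative (AR(1) errors with rho = 0.6, n = 30, and a nominal 95% t-interval for the mean of zero); it checks how often the naive interval, which assumes cov($$\epsilon_i$$,$$\epsilon_j$$) = 0, actually covers the truth:

```python
import numpy as np

rng = np.random.default_rng(2)

def ar1_errors(n, rho, rng):
    """Errors with cov(e_i, e_j) != 0: each term carries over rho of the last."""
    e = np.empty(n)
    e[0] = rng.normal()
    for t in range(1, n):
        e[t] = rho * e[t - 1] + rng.normal() * np.sqrt(1 - rho**2)
    return e

def covers_zero(e, tcrit=2.045):  # t critical value, 0.975 quantile, df = 29
    """Does the naive 95% t-interval for the mean contain the true mean, 0?"""
    half = tcrit * e.std(ddof=1) / np.sqrt(e.size)
    return abs(e.mean()) <= half

reps = 2000
iid = np.mean([covers_zero(rng.normal(size=30)) for _ in range(reps)])
ar1 = np.mean([covers_zero(ar1_errors(30, 0.6, rng)) for _ in range(reps)])
print(f"coverage, independent errors: {iid:.3f}")   # near the nominal 0.95
print(f"coverage, AR(1) errors:       {ar1:.3f}")   # noticeably below 0.95
```

With positively correlated errors the naive standard error is too small, so the interval is too short and the stated 95% confidence is not delivered.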


There is only a vague course schedule. We will cover rudimentary experimental design, multiple linear regression, general linear models, logistic regression, Poisson regression, stochastic processes (Bernoulli, Poisson, Brownian, discrete-time Markov chains), and simulation, including random number generation, Monte Carlo integration, and MCMC.
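Of the simulation topics listed, Monte Carlo integration is easy to preview in a few lines. This sketch (the integrand and sample size are illustrative) estimates $$\int_0^1 e^{-x^2}\,dx$$, whose true value is about 0.7468:

```python
import numpy as np

rng = np.random.default_rng(3)

# The integral equals E[f(U)] for U ~ Uniform(0, 1), so the sample mean
# of f at uniform draws converges to it by the law of large numbers.
u = rng.uniform(size=100_000)
f = np.exp(-u**2)
estimate = f.mean()
std_err = f.std(ddof=1) / np.sqrt(u.size)

print(f"estimate = {estimate:.4f} +/- {std_err:.4f}")  # true value ~ 0.7468
```

The standard error shrinks like 1/sqrt(sample size), which is the basic accuracy trade-off of all Monte Carlo methods.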

In HW3 (for some reason the wiki won't let me comment at the moment), is $$\sigma_\epsilon^2$$ the same as MSE, or is it the variance of $$\epsilon_i$$? Also, there is no observed value of 65 for 1) b. What do you mean by "observed 65"?

- First of all, is this the area where you intended us to post questions, Dr.?

How about here?

- Is there any course schedule so that we can know what we will learn during this semester? That would be helpful for deciding which group project is suitable. Thanks.