Simple Linear Regression Models

Performance Evaluation: Simple Linear Regression Models

Hongwei Zhang
http://www.cs.wayne.edu/~hzhang

Statistics is the art of lying by means of figures. --- Dr. Wilhelm Stekel

Acknowledgement: this lecture is partially based on the slides of Dr. Raj Jain.

Simple linear regression models

▫ Response Variable: the variable to be estimated or predicted

▫ Predictor Variables: the variables used to predict the response; also called predictors or factors

▫ Regression Model: predicts a response for a given set of predictor variables

▫ Linear Regression Models: the response is a linear function of the predictors

▫ Simple Linear Regression Models: only one predictor

Outline

▫ Definition of a Good Model
▫ Estimation of Model Parameters
▫ Allocation of Variation
▫ Standard Deviation of Errors
▫ Confidence Intervals for Regression Parameters
▫ Confidence Intervals for Predictions
▫ Visual Tests for Verifying Regression Assumptions


Definition of a good model?

Good models (contd.)

▫ Regression models attempt to minimize the distance, measured vertically, between each observation point and the model line (or curve)

▫ The length of this line segment is called the residual, modeling error, or simply error

▫ The negative and positive errors should cancel out, giving zero overall error; however, many lines satisfy this criterion

▫ Among these, choose the line that minimizes the sum of squares of the errors

Good models (contd.)

Formally, the model is

ŷ = b0 + b1·x

where ŷ is the predicted response when the predictor variable is x. The parameters b0 and b1 are fixed regression parameters to be determined from the data.

Given n observation pairs {(x1, y1), …, (xn, yn)}, the estimated response for the i-th observation is:

ŷi = b0 + b1·xi

The error is:

ei = yi − ŷi

Good models (contd.)

The best linear model minimizes the sum of squared errors (SSE):

SSE = Σ ei² = Σ (yi − b0 − b1·xi)²

subject to the constraint that the overall mean error is zero:

(1/n) Σ ei = 0

Since Var(e) = (1/n) Σ ei² − ē², this constrained minimization is equivalent to the unconstrained minimization of the variance of errors (Exercise 14.1).
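To make the criterion concrete, here is a minimal Python sketch (the data values are illustrative, not from the lecture) showing that many lines achieve zero mean error while only one of them minimizes the SSE:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([1.2, 1.9, 3.2, 3.8, 5.1])

def errors(b0, b1):
    """Residuals e_i = y_i - (b0 + b1*x_i)."""
    return y - (b0 + b1 * x)

# Any line passing through (x_bar, y_bar) has zero mean error...
xbar, ybar = x.mean(), y.mean()
for b1 in (0.5, 1.0, 1.5):
    b0 = ybar - b1 * xbar   # forces the mean error to zero
    e = errors(b0, b1)
    print(f"b1={b1:.1f}: mean error={e.mean():+.2e}, SSE={np.sum(e**2):.3f}")
# ...but the lines differ in SSE; least squares picks the smallest.
```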


Estimation of model parameters

The regression parameters that give minimum error variance are:

b1 = (Σ xi·yi − n·x̄·ȳ) / (Σ xi² − n·x̄²)   and   b0 = ȳ − b1·x̄

where x̄ = (1/n) Σ xi and ȳ = (1/n) Σ yi are the sample means.

Example 14.1

The numbers of disk I/Os and the CPU times (in milliseconds) of seven programs were measured: {(14, 2), (16, 5), (27, 7), (42, 9), (39, 10), (50, 13), (83, 20)}.

Example (contd.)

For this data: n = 7, Σ xy = 3375, Σ x = 271, Σ x² = 13855, Σ y = 66, x̄ = 38.71, ȳ = 9.43. Therefore:

b1 = (3375 − 7 × 38.71 × 9.43) / (13855 − 7 × 38.71²) = 0.2438
b0 = 9.43 − 0.2438 × 38.71 = −0.0083

Example (contd.)

The desired linear model is:

CPU time = −0.0083 + 0.2438 × (number of disk I/Os)
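A minimal Python sketch of these closed-form estimates, applied to the Example 14.1 data (the helper name fit_simple_lr is mine, not from the lecture):

```python
import numpy as np

def fit_simple_lr(x, y):
    """Closed-form least-squares estimates for y = b0 + b1*x."""
    n = len(x)
    xbar, ybar = x.mean(), y.mean()
    b1 = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x**2) - n * xbar**2)
    b0 = ybar - b1 * xbar
    return b0, b1

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)  # disk I/Os
y = np.array([2, 5, 7, 9, 10, 13, 20], dtype=float)      # CPU time (ms)

b0, b1 = fit_simple_lr(x, y)
print(f"b0 = {b0:.4f}, b1 = {b1:.4f}")  # approx. -0.0083 and 0.2438
```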

Derivation of regression parameters?

Setting the partial derivatives of SSE = Σ (yi − b0 − b1·xi)² with respect to b0 and b1 to zero gives the normal equations:

∂SSE/∂b0 = −2 Σ (yi − b0 − b1·xi) = 0  =>  Σ yi = n·b0 + b1 Σ xi

Derivation (contd.)

∂SSE/∂b1 = −2 Σ xi·(yi − b0 − b1·xi) = 0  =>  Σ xi·yi = b0 Σ xi + b1 Σ xi²

Derivation (contd.)

Solving the two normal equations simultaneously yields:

b0 = ȳ − b1·x̄   and   b1 = (Σ xi·yi − n·x̄·ȳ) / (Σ xi² − n·x̄²)

Least Squares Regression vs. Least Absolute Deviations Regression?

Least Squares Regression        | Least Absolute Deviations Regression
--------------------------------|-------------------------------------------------------------
Not very robust to outliers     | Robust to outliers
Simple analytical solution      | No analytical solution (requires an iterative, computation-intensive method)
Stable solution                 | Unstable solution
Always one unique solution      | Possibly multiple solutions

The instability of the method of least absolute deviations means that, for a small horizontal adjustment of a data point, the regression line may jump a large amount. In contrast, the least squares solution is stable: for any small horizontal adjustment of a data point, the regression line moves only slightly (i.e., continuously).
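A sketch contrasting the two criteria on the Example 14.1 data (assuming SciPy is available; since LAD has no closed form, it is minimized numerically here):

```python
import numpy as np
from scipy.optimize import minimize

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)
y = np.array([2, 5, 7, 9, 10, 13, 20], dtype=float)

# Least squares: closed-form solution, unique and stable.
n, xbar, ybar = len(x), x.mean(), y.mean()
b1 = (np.sum(x * y) - n * xbar * ybar) / (np.sum(x**2) - n * xbar**2)
b0 = ybar - b1 * xbar

# Least absolute deviations: no closed form; iterate numerically.
# Nelder-Mead copes with the non-differentiable objective.
sad = lambda p: np.sum(np.abs(y - (p[0] + p[1] * x)))
res = minimize(sad, x0=[b0, b1], method="Nelder-Mead")

print(f"OLS: b0={b0:.4f}, b1={b1:.4f}")
print(f"LAD: b0={res.x[0]:.4f}, b1={res.x[1]:.4f}")
```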


Allocation of variation

Allocation of variation (contd.)

Without regression, the best guess for y would be its mean ȳ, and the sum of squared errors would be:

SST = Σ (yi − ȳ)²

This is called the total sum of squares (SST). It is a measure of y's variability and is called the variation of y. SST can be computed as follows:

SST = Σ yi² − n·ȳ² = SSY − SS0

where SSY is the sum of squares of y (Σ yi²), and SS0 = n·ȳ² is the sum of squares of ȳ.

Allocation of variation (contd.)

The total variation splits into a part explained by the regression and a part not explained:

SST = SSR + SSE

where SSE = Σ (yi − ŷi)² is the variation not explained by the regression, and SSR = SST − SSE is the sum of squares explained by the regression.

Allocation of variation (contd.)

The fraction of the variation explained by the regression is the coefficient of determination:

R² = SSR / SST = (SST − SSE) / SST

The higher the R², the better the regression explains the variation of y.

Example

For the disk I/O-CPU time data of Example 14.1:

SSY = 828, SS0 = n·ȳ² = 622.29
SST = SSY − SS0 = 828 − 622.29 = 205.71
SSE = 5.87, SSR = SST − SSE = 199.84
R² = SSR / SST = 199.84 / 205.71 = 0.9715

The regression explains 97% of CPU time's variation.
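A sketch checking this allocation of variation numerically (variable names are mine):

```python
import numpy as np

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)
y = np.array([2, 5, 7, 9, 10, 13, 20], dtype=float)
b0, b1 = -0.0083, 0.2438        # estimates from Example 14.1

y_hat = b0 + b1 * x
SSY = np.sum(y**2)              # sum of squares of y
SS0 = len(y) * y.mean()**2      # n * ybar^2
SST = SSY - SS0                 # total variation of y
SSE = np.sum((y - y_hat)**2)    # variation not explained
SSR = SST - SSE                 # variation explained by the regression
print(f"SST={SST:.2f}, SSE={SSE:.2f}, R^2={SSR/SST:.4f}")  # R^2 ~ 0.97
```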


Standard deviation of errors

▫ Since the errors are obtained after estimating two regression parameters from the data, the errors have n−2 degrees of freedom

▫ SSE/(n−2) is called the mean squared error (MSE)

▫ The standard deviation of errors is the square root of the MSE: se = [SSE/(n−2)]^(1/2)

▫ Note:
  ▫ SSY has n degrees of freedom, since it is obtained from n independent observations without estimating any parameters
  ▫ SS0 has just one degree of freedom, since it can be computed simply from ȳ
  ▫ SST has n−1 degrees of freedom, since one parameter (ȳ) must be calculated from the data before SST can be computed

Standard deviation of errors (contd.)

▫ SSR, which is the difference between SST and SSE, has the remaining one degree of freedom

▫ Overall:

SST = SSR + SSE
n−1 = 1 + (n−2)

▫ Notice that the degrees of freedom add just the way the sums of squares do

Example

For the disk I/O-CPU time data: se = [SSE/(n−2)]^(1/2) = (5.87/5)^(1/2) = 1.08.
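A corresponding sketch for the error statistics:

```python
import numpy as np

n, SSE = 7, 5.87                # values from the example above
dof = n - 2                     # two parameters were estimated
MSE = SSE / dof                 # mean squared error
s_e = np.sqrt(MSE)              # standard deviation of errors
print(f"MSE={MSE:.3f}, s_e={s_e:.2f}")  # s_e ~ 1.08
```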


CIs for regression parameters

▫ The regression coefficients b0 and b1 are estimates from a single sample of size n => they are 1) random; 2) using another sample, the estimates may be different

▫ If β0 and β1 are the true parameters of the population (i.e., y = β0 + β1·x), then the computed coefficients b0 and b1 are estimates of β0 and β1, respectively

▫ The sample standard deviations of b0 and b1 are:

s_b0 = se · [1/n + x̄² / (Σ xi² − n·x̄²)]^(1/2)
s_b1 = se / (Σ xi² − n·x̄²)^(1/2)

CIs for regression parameters (contd.)

The 100(1−α)% confidence intervals for b0 and b1 can be computed using t[1−α/2; n−2], the 1−α/2 quantile of a t-variate with n−2 degrees of freedom. The confidence intervals are:

b0 ∓ t[1−α/2; n−2] · s_b0   and   b1 ∓ t[1−α/2; n−2] · s_b1

If a confidence interval includes zero, then the regression parameter cannot be considered different from zero at the 100(1−α)% confidence level.

Example

For the disk I/O-CPU time data: se = 1.08, x̄ = 38.71, Σ xi² = 13855, n = 7, so:

s_b0 = 1.08 × [1/7 + 38.71² / (13855 − 7 × 38.71²)]^(1/2) = 0.8311
s_b1 = 1.08 / (13855 − 7 × 38.71²)^(1/2) = 0.0187

Example (contd.)

The 0.95-quantile of a t-variate with 5 degrees of freedom is 2.015 => the 90% confidence interval for b0 is:

−0.0083 ∓ 2.015 × 0.8311 = (−1.6830, 1.6663)

Since the confidence interval includes zero, the hypothesis that this parameter is zero cannot be rejected at the 0.10 significance level => b0 is essentially zero.

The 90% confidence interval for b1 is:

0.2438 ∓ 2.015 × 0.0187 = (0.2061, 0.2814)

Since the confidence interval does not include zero, the slope b1 is significantly different from zero at this confidence level.
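A sketch of these interval computations, using SciPy's t-quantile function scipy.stats.t.ppf:

```python
import numpy as np
from scipy import stats

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)
b0, b1, s_e = -0.0083, 0.2438, 1.08    # from the examples above

n, xbar = len(x), x.mean()
Sxx = np.sum(x**2) - n * xbar**2
s_b0 = s_e * np.sqrt(1.0 / n + xbar**2 / Sxx)
s_b1 = s_e / np.sqrt(Sxx)

alpha = 0.10                               # 90% confidence
t = stats.t.ppf(1 - alpha / 2, df=n - 2)   # t[0.95; 5] = 2.015
print(f"b0: ({b0 - t * s_b0:.4f}, {b0 + t * s_b0:.4f})")  # includes zero
print(f"b1: ({b1 - t * s_b1:.4f}, {b1 + t * s_b1:.4f})")  # excludes zero
```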

Case study 14.1: remote procedure call

Case study (contd.)

Case study (contd.)

Case study (contd.)

Best linear models are:

The regressions explain 81% and 75% of the variation, respectively. Does ARGUS take a larger time per byte, as well as a larger setup time per call, than UNIX?

Case study (contd.)

Intervals for the intercepts overlap while those of the slopes do not => the setup times are not significantly different in the two systems, while the per-byte times (slopes) are different.


CI for predictions

The regression model predicts only the mean response for a given value of the predictor; a confidence interval is needed to quantify the uncertainty of the prediction.

CI for predictions (contd.)

The standard deviation of the predicted mean of m future observations at x = xp is:

s_ŷp = se · [1/m + 1/n + (xp − x̄)² / (Σ xi² − n·x̄²)]^(1/2)

The 100(1−α)% confidence interval for the prediction is ŷp ∓ t[1−α/2; n−2] · s_ŷp.

CI for predictions (contd.)

The standard deviation of the prediction is minimal at the center of the measured range (i.e., when xp = x̄); the goodness of the prediction decreases as we move away from the center.

Example

For a program with 100 disk I/Os, the predicted CPU time for a single future observation (m = 1) is:

ŷp = −0.0083 + 0.2438 × 100 = 24.37

Example (contd.)

s_ŷp = 1.08 × [1 + 1/7 + (100 − 38.71)² / (13855 − 7 × 38.71²)]^(1/2) = 1.62

Example (contd.)

The 90% confidence interval for the prediction is:

24.37 ∓ 2.015 × 1.62 = (21.11, 27.63)
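A sketch of this prediction-interval computation at xp = 100 with m = 1 (continuing the running example):

```python
import numpy as np
from scipy import stats

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)
b0, b1, s_e, n = -0.0083, 0.2438, 1.08, 7    # from the examples above

xp, m = 100.0, 1                             # predict one future observation
Sxx = np.sum(x**2) - n * x.mean()**2
y_p = b0 + b1 * xp
s_yp = s_e * np.sqrt(1.0 / m + 1.0 / n + (xp - x.mean())**2 / Sxx)

t = stats.t.ppf(0.95, df=n - 2)              # 90% confidence interval
print(f"prediction: {y_p:.2f} +/- {t * s_yp:.2f}")
```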


Visual tests for regression assumptions

Regression assumptions:

▫ The true relationship between the response variable y and the predictor variable x is linear

▫ The predictor variable x is non-stochastic and is measured without error

▫ The model errors are statistically independent

▫ The errors are normally distributed with zero mean and a constant standard deviation

Visual test for linear relationship

Plot the observed y values against x; if the scatter suggests curvature rather than a straight line, the linear model is not appropriate.

Visual test for independent errors

Plot the residuals versus the predicted responses. Any trend would imply dependence of the errors on the predictor variable => consider a curvilinear model or a transformation.

In practice, dependence can be demonstrated, but independence cannot be proven.

Visual test for independent errors (contd.)

Plot the residuals as a function of the experiment number (the order in which the measurements were made). Any trend would imply that other factors (such as environmental conditions or side effects) should be considered in the modeling.

Visual test for "normal distribution of errors"?

Prepare a normal quantile-quantile plot of the errors; if the plot is approximately linear, the normality assumption is satisfied.

Visual test for constant standard deviation of errors

Plot the residuals versus the predicted responses; if the spread of the residuals widens or narrows systematically (e.g., a funnel shape), the assumption of constant standard deviation (homoscedasticity) is violated.

Example

Another example: RPC performance
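A sketch of these visual checks using matplotlib and scipy.stats.probplot, applied to the running disk I/O example (the panel layout is my choice):

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

x = np.array([14, 16, 27, 42, 39, 50, 83], dtype=float)
y = np.array([2, 5, 7, 9, 10, 13, 20], dtype=float)
y_hat = -0.0083 + 0.2438 * x
e = y - y_hat                                  # residuals

fig, ax = plt.subplots(2, 2, figsize=(8, 6))
ax[0, 0].scatter(x, y); ax[0, 0].plot(x, y_hat)
ax[0, 0].set_title("y vs x (linearity)")
ax[0, 1].scatter(y_hat, e)
ax[0, 1].set_title("residuals vs predicted (trend, spread)")
ax[1, 0].scatter(range(1, len(e) + 1), e)
ax[1, 0].set_title("residuals vs experiment number")
stats.probplot(e, dist="norm", plot=ax[1, 1])  # normal Q-Q plot
ax[1, 1].set_title("normal quantile-quantile plot")
plt.tight_layout(); plt.show()
```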

Summary

▫ Definition of a Good Model
▫ Estimation of Model Parameters
▫ Allocation of Variation
▫ Standard Deviation of Errors
▫ Confidence Intervals for Regression Parameters & Predictions
▫ Visual Tests for Verifying Regression Assumptions

Homework #5

1. (100 points)