Introduction
In some data sets, there are values (observed data points) called outliers. Outliers are observed data points that are far from the least-squares line. They have large residuals, or errors; that is, the vertical distance from the observed point to the best-fit line is large.
Outliers need to be examined closely. Sometimes they should not be included in the analysis of the data, for example, when it is possible that an outlier is the result of incorrectly recorded data. Other times, an outlier may hold valuable information about the population under study and should remain included in the data. The key is to examine carefully what causes a data point to be an outlier.
Besides outliers, a sample may contain one or a few points that are called influential points. Influential points are observed data points that are far from the other observed data points in the horizontal direction. These points may have a big effect on the slope of the regression line. To begin to identify an influential point, you can remove it from the data set and determine whether the slope of the regression line is changed significantly.
You also want to examine how the correlation coefficient, r, has changed. Sometimes, it is difficult to discern a significant change in slope, so you need to look at how the strength of the linear relationship has changed. Computers and many calculators can be used to identify outliers and influential points. Regression analysis can determine if an outlier is, indeed, an influential point. The new regression will show how omitting the outlier will affect the correlation among the variables, as well as the fit of the line. A graph showing both regression lines helps determine how removing an outlier affects the fit of the model.
Identifying Outliers
We could guess at outliers by looking at a graph of the scatter plot and best-fit line. However, we would like some guideline regarding how far away a point needs to be to be considered an outlier. As a rough rule of thumb, we can flag as an outlier any point that is located farther than two standard deviations above or below the best-fit line. The standard deviation used is the standard deviation of the residuals or errors.
We can do this visually in the scatter plot by drawing an extra pair of lines that are two standard deviations above and below the best-fit line. Any data points outside this extra pair of lines are flagged as potential outliers. Or, we can do this numerically by calculating each residual and comparing it with twice the standard deviation. With regard to the TI-83, 83+, or 84+ calculators, the graphical approach is easier. The graphical procedure is shown first, followed by the numerical calculations. You would generally need to use only one of these methods.
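If you are working in software rather than on a calculator, the same rule of thumb can be applied directly. The sketch below is a minimal Python/NumPy version (an assumption, since this text uses TI calculators; the function name flag_outliers is purely illustrative): it fits the least-squares line, estimates the standard deviation of the residuals with n – 2 in the denominator, and flags any point whose residual is more than 2s in absolute value.

```python
# Minimal sketch: flag points more than two residual standard deviations
# from the least-squares line. Assumes NumPy; not the calculator procedure.
import numpy as np

def flag_outliers(x, y):
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)              # least-squares best-fit line
    y_hat = intercept + slope * x                       # predicted values
    residuals = y - y_hat                               # errors y - y_hat
    s = np.sqrt(np.sum(residuals ** 2) / (len(x) - 2))  # std. dev. of the residuals
    return slope, intercept, s, np.abs(residuals) > 2 * s
```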
Example 12.12
In the third exam/final exam example, you can determine whether there is an outlier. If there is an outlier, as an exercise, delete it and fit the remaining data to a new line. For this example, the new line ought to fit the remaining data better. This means the SSE (sum of the squared errors) should be smaller and the correlation coefficient ought to be closer to 1 or –1.
Graphical Identification of Outliers
With the TI-83, 83+, or 84+ graphing calculators, it is easy to identify the outliers graphically and visually. If we were to measure the vertical distance from any data point to the corresponding point on the line of best fit and that distance were equal to 2s or more, then we would consider the data point to be too far from the line of best fit. We need to find and graph the lines that are two standard deviations below and above the regression line. Any points that are outside these two lines are outliers. Let’s call these lines Y2 and Y3.
As we did with the equation of the regression line and the correlation coefficient, we will use technology to calculate this standard deviation for us. Using the LinRegTTest with these data, scroll down through the output screens to find s = 16.412.
Line Y2 = –173.5 + 4.83x – 2(16.4) and line Y3 = –173.5 + 4.83x + 2(16.4).
Graph the scatter plot with the best-fit line in equation Y1, then enter the two extra lines as Y2 and Y3 in the Y= equation editor. Press ZOOM-9 to get a good view. You will see that the only point that is not between Y2 and Y3 is the point (65, 175). On the calculator screen, it is barely outside these lines, but it is considered an outlier because it is more than two standard deviations away from the best-fit line. The outlier is the student who had a grade of 65 on the third exam and 175 on the final exam.
Sometimes a point is so close to the lines used to flag outliers on the graph that it is difficult to tell whether the point is between or outside the lines. On a computer, enlarging the graph may help; on a small calculator screen, zooming in may make the graph clearer. Note that when the graph does not give a clear enough picture, you can use the numerical comparisons to identify outliers.
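If you are using software instead of the calculator, the following matplotlib sketch (an assumption outside the TI workflow described above) draws the scatter plot of the Table 12.8 data together with Y1, Y2, and Y3, so that any point outside the outer pair of lines stands out.

```python
# Sketch of the graphical check: scatter plot with the best-fit line (Y1)
# and the lines two standard deviations below and above it (Y2, Y3).
# Uses the third exam/final exam data and s = 16.4 from the text.
import numpy as np
import matplotlib.pyplot as plt

x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69])
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159])

xs = np.linspace(x.min() - 1, x.max() + 1, 100)
y1 = -173.5 + 4.83 * xs          # Y1: line of best fit
y2 = y1 - 2 * 16.4               # Y2: two standard deviations below Y1
y3 = y1 + 2 * 16.4               # Y3: two standard deviations above Y1

plt.scatter(x, y)
plt.plot(xs, y1, label="Y1: best-fit line")
plt.plot(xs, y2, "--", label="Y2 = Y1 - 2s")
plt.plot(xs, y3, "--", label="Y3 = Y1 + 2s")
plt.xlabel("Third exam score")
plt.ylabel("Final exam score")
plt.legend()
plt.show()                       # (65, 175) should appear just above Y3
```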
Identify the potential outlier in the scatter plot. The standard deviation of the residuals, or errors, is approximately 8.6.
Numerical Identification of Outliers
In Table 12.8, the first two columns include the third exam and final exam data. The third column shows the predicted ŷ values calculated from the line of best fit: ŷ = –173.5 + 4.83x. The residuals, or errors, that were mentioned in Section 3 of this chapter have been calculated in the fourth column of the table: observed y value – predicted y value = y – ŷ.
s is the standard deviation of all the y – ŷ = ε values, where n is the total number of data points. If each residual is calculated and squared, and the results are added, we get the SSE. The standard deviation of the residuals is calculated from the SSE as s = √(SSE / (n – 2)).
Note
We divide by (n – 2) because the regression model involves two estimated parameters, the slope and the intercept.
Rather than calculate the value of s ourselves, we can find s using a computer or calculator. For this example, the calculator function LinRegTTest found s = 16.4 as the standard deviation of the residuals 35, –17, 16, –6, –19, 9, 3, –1, –10, –9, –1.
x | y | ŷ | y – ŷ |
---|---|---|---|
65 | 175 | 140 | 175 – 140 = 35 |
67 | 133 | 150 | 133 – 150 = –17 |
71 | 185 | 169 | 185 – 169 = 16 |
71 | 163 | 169 | 163 – 169 = –6 |
66 | 126 | 145 | 126 – 145 = –19 |
75 | 198 | 189 | 198 – 189 = 9 |
67 | 153 | 150 | 153 – 150 = 3 |
70 | 163 | 164 | 163 – 164 = –1 |
71 | 159 | 169 | 159 – 169 = –10 |
69 | 151 | 160 | 151 – 160 = –9 |
69 | 159 | 160 | 159 – 160 = –1 |
We are looking for all data points for which the residual is greater than 2s = 2(16.4) = 32.8 or less than –32.8. Compare these values with the residuals in column four of the table. The only such data point is the student who had a grade of 65 on the third exam and 175 on the final exam; the residual for this student is 35.
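The numerical comparison can also be reproduced in software. The sketch below (Python/NumPy, an assumption outside the calculator procedure) recomputes ŷ, the residuals, SSE, and s from the Table 12.8 data; because it uses the rounded coefficients –173.5 and 4.83, its value of s only approximately matches the 16.4 reported by LinRegTTest.

```python
# Sketch of the numerical check in Table 12.8: residuals, SSE,
# s = sqrt(SSE / (n - 2)), and the comparison of |y - y_hat| with 2s.
import numpy as np

x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69])
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159])

y_hat = -173.5 + 4.83 * x               # predictions from the line of best fit
residuals = y - y_hat                   # y - y_hat, as in column four of the table
sse = np.sum(residuals ** 2)            # sum of squared errors
s = np.sqrt(sse / (len(x) - 2))         # standard deviation of the residuals
print(round(s, 2))                      # roughly 16.4
flagged = np.abs(residuals) > 2 * s     # rule of thumb: beyond 2s
print(x[flagged], y[flagged])           # should flag only (65, 175)
```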
How Does the Outlier Affect the Best-Fit Line?
Numerically and graphically, we have identified point (65, 175) as an outlier. Recall that recalculation of the least-squares regression line and summary statistics, following deletion of an outlier, may be used to determine whether an outlier is also an influential point. This process also allows you to compare the strength of the correlation of the variables and possible changes in the slope both before and after the omission of any outliers.
Compute a new best-fit line and correlation coefficient using the 10 remaining points.
On the TI-83, TI-83+, or TI-84+ calculators, delete the outlier from L1 and L2. Using the LinRegTTest, found under Stat and Tests, the new line of best fit and correlation coefficient are the following:
ŷ = –355.19 + 7.39x and r = 0.9121.
The slope is now 7.39, compared with the previous slope of 4.83. This seems like a substantial change, but we need to look at the change in r-values as well. The new line has r = 0.9121, which indicates a stronger correlation than the original line, since 0.9121 is closer to 1. This means the new line is a better fit to the remaining data values; the line can better predict the final exam score given the third exam score. It also means the outlier (65, 175) was an influential point, since there is a sizable difference between the r-values. We must now decide whether to delete the outlier. If the outlier was recorded erroneously, it should certainly be deleted. Because the point has such a pronounced effect on the correlation, the new line of best fit allows for better prediction and an overall stronger model.
You may use Excel to graph the two least-squares regression lines and compare the slopes and fit of the lines to the data, as shown in Figure 12.18.
You can see that the second graph shows less deviation from the line of best fit. It is clear that omission of the influential point produced a line of best fit that more closely models the data.
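If you are working in Python rather than on the calculator or in Excel, the comparison can be sketched with scipy.stats.linregress (an assumption, not part of this text's workflow): drop (65, 175), refit, and compare the slope and r before and after.

```python
# Sketch: refit the line after removing the outlier (65, 175) and compare
# the slope and correlation coefficient with the original fit.
import numpy as np
from scipy import stats

x = np.array([65, 67, 71, 71, 66, 75, 67, 70, 71, 69, 69])
y = np.array([175, 133, 185, 163, 126, 198, 153, 163, 159, 151, 159])

keep = ~((x == 65) & (y == 175))          # remove the outlier
before = stats.linregress(x, y)
after = stats.linregress(x[keep], y[keep])

print(before.slope, before.rvalue)        # close to the original slope of 4.83
print(after.slope, after.rvalue)          # close to 7.39 and 0.9121
```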
Numerical Identification of Outliers: Calculating s and Finding Outliers Manually
If you do not have the function LinRegTTest on your calculator, you can identify the outlier in the first example by carrying out the following calculations:
First, square each |y – ŷ|. The squares are 35², 17², 16², 6², 19², 9², 3², 1², 10², 9², 1².
Then, add (sum) all the |y – ŷ| squared terms using the formula Σ(yᵢ – ŷᵢ)² = Σεᵢ² = SSE.
(Recall that yᵢ – ŷᵢ = εᵢ.)
SSE = 35² + 17² + 16² + 6² + 19² + 9² + 3² + 1² + 10² + 9² + 1² = 2,440.
The result, SSE, is the sum of squared errors.
Next, calculate s, the standard deviation of all the y – ŷ = ε-values where n = the total number of data points.
The calculation is s = √(SSE / (n – 2)).
For the third exam/final exam example, s = √(2,440 / 9) ≈ 16.47.
Next, multiply s by 2: 2s = 2(16.47) = 32.94.
If we were to measure the vertical distance from any data point to the corresponding point on the line of best fit and that distance is at least 2s, then we would consider the data point to be too far from the line of best fit. We call that point a potential outlier.
For the example, if any of the |y – ŷ| values are at least 32.94, the corresponding (x, y) data point is a potential outlier.
For the third exam/final exam example, all the |y – ŷ| values are less than 32.94 except for the first one, which is 35:
35 > 32.94. That is, |y – ŷ| ≥ (2)(s).
The point that corresponds to |y – ŷ| = 35 is (65, 175). Therefore, the data point (65, 175) is a potential outlier. For this example, we will delete it. (Remember, we do not always delete an outlier.)
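A quick check of this manual calculation (a Python sketch, assuming only the rounded residuals from Table 12.8) confirms SSE, s, and 2s:

```python
# Verify SSE = 2,440, s = sqrt(SSE / (n - 2)), and 2s using the rounded residuals.
import numpy as np

residuals = np.array([35, -17, 16, -6, -19, 9, 3, -1, -10, -9, -1])
sse = np.sum(residuals ** 2)                 # 2440
s = np.sqrt(sse / (len(residuals) - 2))      # about 16.47
print(sse, round(s, 2), round(2 * s, 2))     # 2440 16.47 32.93 (≈ 32.94 when 16.47 is doubled)
```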
When outliers are deleted, the researcher should either record that data were deleted, and why, or the researcher should provide results both with and without the deleted data. If data are erroneous and the correct values are known (e.g., student 1 actually scored a 70 instead of a 65), then this correction can be made to the data.
Recomputing the regression with the remaining 10 points gives ŷ = –355.19 + 7.39x and r = 0.9121.
Example 12.13
Using this new line of best fit (based on the remaining 10 data points in the third exam/final exam example), what would a student who receives a 73 on the third exam expect to receive on the final exam? Is this the same as the prediction made using the original line?
Using the new line of best fit, ŷ = –355.19 + 7.39(73) = 184.28. A student who scored 73 points on the third exam would expect to earn about 184 points on the final exam. Using the original line, ŷ = –173.5 + 4.83(73) = 179.09, or about 179 points, so the two predictions are not the same.
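As a quick arithmetic check (a sketch, not part of the original solution), the snippet below evaluates both lines of best fit at x = 73:

```python
# Compare the predictions of the new and original lines at x = 73.
new_line = lambda x: -355.19 + 7.39 * x       # line fit without the outlier
original_line = lambda x: -173.5 + 4.83 * x   # original line of best fit

print(round(new_line(73), 2))       # 184.28
print(round(original_line(73), 2))  # 179.09 -- the two predictions differ
```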
The data points for the graph from the third exam/final exam example are as follows: (1, 5), (2, 7), (2, 6), (3, 9), (4, 12), (4, 13), (5, 18), (6, 19), (7, 12), and (7, 21). Remove the outlier and recalculate the line of best fit. Find the value of ŷ when x = 10.
Example 12.14
The consumer price index (CPI) measures the average change over time in prices paid by urban consumers for consumer goods and services. The CPI affects nearly all Americans because of the many ways it is used. One of its biggest uses is as a measure of inflation. By providing information about price changes in the nation’s economy to government, businesses, and labor forces, the CPI helps them make economic decisions. The president, U.S. Congress, and the Federal Reserve Board use CPI trends to formulate monetary and fiscal policies. In the following table, x is the year and y is the CPI.
x | y | x | y |
---|---|---|---|
1915 | 10.1 | 1969 | 36.7 |
1926 | 17.7 | 1975 | 49.3 |
1935 | 13.7 | 1979 | 72.6 |
1940 | 14.7 | 1980 | 82.4 |
1947 | 24.1 | 1986 | 109.6 |
1952 | 26.5 | 1991 | 130.7 |
1964 | 31.0 | 1999 | 166.6 |
- Draw a scatter plot of the data.
- Calculate the least-squares line. Write the equation in the form ŷ = a + bx.
- Draw the line on a scatter plot.
- Find the correlation coefficient. Is it significant?
- What is the average CPI for the year 1990?
- See Figure 12.19.
- Using our calculator, ŷ = –3204 + 1.662x is the equation of the line of best fit.
- See Figure 12.19.
- r = 0.8694. The number of data points is n = 14. Use the 95 Percent Critical Values of the Sample Correlation Coefficient table at the end of Chapter 12: In this case, df = 12. The corresponding critical values from the table are ±0.532. Since 0.8694 > 0.532, r is significant. We can use the predicted regression line we found above to make the prediction for x = 1990.
- ŷ = –3204 + 1.662(1990) = 103.4 CPI. (A software sketch of these calculations follows this list.)
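For readers working in software, the sketch below (scipy.stats.linregress, an assumption outside the calculator workflow used here) fits the CPI data, compares |r| with the critical value 0.532 for df = 12, and reproduces the 1990 prediction; the printed values should approximately match the answers above.

```python
# Sketch of the CPI example: least-squares fit, significance check for r,
# and the prediction for the year 1990.
import numpy as np
from scipy import stats

year = np.array([1915, 1926, 1935, 1940, 1947, 1952, 1964,
                 1969, 1975, 1979, 1980, 1986, 1991, 1999])
cpi = np.array([10.1, 17.7, 13.7, 14.7, 24.1, 26.5, 31.0,
                36.7, 49.3, 72.6, 82.4, 109.6, 130.7, 166.6])

fit = stats.linregress(year, cpi)
print(fit.intercept, fit.slope)            # close to -3204 and 1.662
print(fit.rvalue)                          # close to 0.8694
print(abs(fit.rvalue) > 0.532)             # True: r is significant (df = 12)
print(fit.intercept + fit.slope * 1990)    # prediction for 1990, about 103.4
```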
Note
In the example, notice the pattern of the points compared with the line. Although the correlation coefficient is significant, the pattern in the scatter plot indicates that a curve would be a more appropriate model to use than a line. In this example, a statistician would prefer to use other methods to fit a curve to these data, rather than model the data with the line we found. In addition to doing the calculations, it is always important to look at the scatter plot when deciding whether a linear model is appropriate.
If you are interested in seeing more years of data, visit the Bureau of Labor Statistics CPI website (ftp://ftp.bls.gov/pub/special.requests/cpi/cpiai.txt). Our data are taken from the column Annual Avg. (third column from the right). For example, you could add more current years of data. Try adding the more recent years: 2004, CPI = 188.9; 2008, CPI = 215.3; and 2011, CPI = 224.9. See how this affects the model. (Check: ŷ = –4436 + 2.295x; r = 0.9018. Is r significant? Is the fit better with the addition of the new points?)
The following table shows economic development measured in per capita income (PCINC).
Year | PCINC | Year | PCINC |
---|---|---|---|
1870 | 340 | 1920 | 1,050 |
1880 | 499 | 1930 | 1,170 |
1890 | 592 | 1940 | 1,364 |
1900 | 757 | 1950 | 1,836 |
1910 | 927 | 1960 | 2,132 |
- What are the independent and dependent variables?
- Draw a scatter plot.
- Use regression to find the line of best fit and the correlation coefficient.
- Interpret the significance of the correlation coefficient.
- Is there a linear relationship between the variables?
- Find the coefficient of determination and interpret it.
- What is the slope of the regression equation? What does it mean?
- Use the line of best fit to estimate PCINC for 1900 and for 2000.
- Determine whether there are any outliers.
95 Percent Critical Values of the Sample Correlation Coefficient Table
Degrees of Freedom: n – 2 | Critical Values: + and – |
---|---|
1 | 0.997 |
2 | 0.950 |
3 | 0.878 |
4 | 0.811 |
5 | 0.754 |
6 | 0.707 |
7 | 0.666 |
8 | 0.632 |
9 | 0.602 |
10 | 0.576 |
11 | 0.555 |
12 | 0.532 |
13 | 0.514 |
14 | 0.497 |
15 | 0.482 |
16 | 0.468 |
17 | 0.456 |
18 | 0.444 |
19 | 0.433 |
20 | 0.423 |
21 | 0.413 |
22 | 0.404 |
23 | 0.396 |
24 | 0.388 |
25 | 0.381 |
26 | 0.374 |
27 | 0.367 |
28 | 0.361 |
29 | 0.355 |
30 | 0.349 |
40 | 0.304 |
50 | 0.273 |
60 | 0.250 |
70 | 0.232 |
80 | 0.217 |
90 | 0.205 |
100 | 0.195 |