
WIKIPEDIA ARTICLE

From Wikipedia, the free encyclopedia
Figure 1. Box plot of data from the Michelson–Morley experiment displaying four outliers in the middle column, as well as one outlier in the first column.

In statistics, an outlier is an observation point that is distant from other observations.[1][2] An outlier may be due to variability in the measurement or it may indicate experimental error; the latter are sometimes excluded from the data set.[3] An outlier can cause serious problems in statistical analyses.

Outliers can occur by chance in any distribution, but they often indicate either measurement error or that the population has a heavy-tailed distribution. In the former case one wishes to discard them or use statistics that are robust to outliers, while in the latter case they indicate that the distribution has high skewness and that one should be very cautious in using tools or intuitions that assume a normal distribution. A frequent cause of outliers is a mixture of two distributions, which may be two distinct sub-populations, or may indicate 'correct trial' versus 'measurement error'; this is modeled by a mixture model.

In most larger samplings of data, some data points will be further away from the sample mean than what is deemed reasonable. This can be due to incidental systematic error or flaws in the theory that generated an assumed family of probability distributions, or it may be that some observations are far from the center of the data. Outlier points can therefore indicate faulty data, erroneous procedures, or areas where a certain theory might not be valid. However, in large samples, a small number of outliers is to be expected (and not due to any anomalous condition).

Outliers, being the most extreme observations, may include the sample maximum or sample minimum, or both, depending on whether they are extremely high or low. However, the sample maximum and minimum are not always outliers because they may not be unusually far from other observations.

Naive interpretation of statistics derived from data sets that include outliers may be misleading. For example, if one is calculating the average temperature of 10 objects in a room, and nine of them are between 20 and 25 degrees Celsius, but an oven is at 175 °C, the median of the data will be between 20 and 25 °C but the mean temperature will be between 35.5 and 40 °C. In this case, the median better reflects the temperature of a randomly sampled object (but not the temperature in the room) than the mean; naively interpreting the mean as "a typical sample", equivalent to the median, is incorrect. As illustrated in this case, outliers may indicate data points that belong to a different population than the rest of the sample set.
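A quick computation illustrates the oven example, using hypothetical readings chosen to be consistent with the ranges above:

```python
from statistics import mean, median

# Nine room-temperature objects plus one oven, in degrees Celsius; the
# individual readings are hypothetical values within the ranges in the text.
temps = [20, 21, 21, 22, 23, 23, 24, 24, 25, 175]

print(mean(temps))    # 37.8 -- dragged toward the outlier
print(median(temps))  # 23.0 -- robust to the single extreme value
```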

Estimators capable of coping with outliers are said to be robust: the median is a robust statistic of central tendency, while the mean is not.[4] However, the mean is generally a more precise estimator.[5]

Occurrence and causes

In the case of normally distributed data, the three sigma rule means that roughly 1 in 22 observations will differ by twice the standard deviation or more from the mean, and 1 in 370 will deviate by three times the standard deviation.[6] In a sample of 1000 observations, the presence of up to five observations deviating from the mean by more than three times the standard deviation is within the range of what can be expected, being less than twice the expected number and hence within 1 standard deviation of the expected number – see Poisson distribution – and does not indicate an anomaly. If the sample size is only 100, however, just three such outliers are already reason for concern, being more than 11 times the expected number.

In general, if the nature of the population distribution is known a priori, it is possible to test whether the number of outliers deviates significantly from what can be expected: for a given cutoff (so samples fall beyond the cutoff with probability p) of a given distribution, the number of outliers will follow a binomial distribution with parameter p, which can generally be well-approximated by the Poisson distribution with λ = pn. Thus if one takes a normal distribution with cutoff 3 standard deviations from the mean, p is approximately 0.3%, and thus for 1000 trials one can approximate the number of samples whose deviation exceeds 3 sigmas by a Poisson distribution with λ = 3.
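The Poisson approximation above can be checked numerically with only the standard library; the 0.3% tail probability and λ ≈ 2.7 follow from the normal distribution itself, not from any particular dataset:

```python
import math

def normal_two_sided_tail(z: float) -> float:
    """P(|X| > z) for a standard normal variable, via the complementary
    error function."""
    return math.erfc(z / math.sqrt(2))

def poisson_pmf(lam: float, k: int) -> float:
    return math.exp(-lam) * lam**k / math.factorial(k)

n = 1000
p = normal_two_sided_tail(3)   # approx. 0.0027, the "0.3%" in the text
lam = p * n                    # expected count of 3-sigma points in 1000 draws

# Probability of seeing at most five such observations in 1000 samples
prob_up_to_5 = sum(poisson_pmf(lam, k) for k in range(6))
print(round(p, 4), round(lam, 2), round(prob_up_to_5, 3))  # 0.0027 2.7 0.943
```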

Causes

Outliers can have many anomalous causes. A physical apparatus for taking measurements may have suffered a transient malfunction. There may have been an error in data transmission or transcription. Outliers arise due to changes in system behaviour, fraudulent behaviour, human error, instrument error or simply through natural deviations in populations. A sample may have been contaminated with elements from outside the population being examined. Alternatively, an outlier could be the result of a flaw in the assumed theory, calling for further investigation by the researcher. Additionally, the pathological appearance of outliers of a certain form appears in a variety of datasets, indicating that the causative mechanism for the data might differ at the extreme end (King effect).

Detection

There is no rigid mathematical definition of what constitutes an outlier; determining whether or not an observation is an outlier is ultimately a subjective exercise. There are various methods of outlier detection.[7][8][9][10] Some are graphical such as normal probability plots. Others are model-based. Box plots are a hybrid.

Model-based methods which are commonly used for identification assume that the data are from a normal distribution, and identify observations which are deemed "unlikely" based on mean and standard deviation:

Peirce's criterion

It is proposed to determine in a series of observations the limit of error, beyond which all observations involving so great an error may be rejected, provided there are as many as such observations. The principle upon which it is proposed to solve this problem is, that the proposed observations should be rejected when the probability of the system of errors obtained by retaining them is less than that of the system of errors obtained by their rejection multiplied by the probability of making so many, and no more, abnormal observations. (Quoted in the editorial note on page 516 to Peirce (1982 edition) from A Manual of Astronomy 2:558 by Chauvenet.) [11][12][13][14]

Tukey's fences

Other methods flag observations based on measures such as the interquartile range. For example, if Q1 and Q3 are the lower and upper quartiles respectively, then one could define an outlier to be any observation outside the range:

    [Q1 − k(Q3 − Q1), Q3 + k(Q3 − Q1)]

for some nonnegative constant k. John Tukey proposed this test, where k = 1.5 indicates an "outlier", and k = 3 indicates data that is "far out".[15]
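A minimal sketch of Tukey's fences, assuming the quartile convention of Python's `statistics.quantiles` (different quartile definitions shift the fences slightly):

```python
from statistics import quantiles

def tukey_fences(data, k=1.5):
    """Return points outside [Q1 - k*IQR, Q3 + k*IQR].
    k = 1.5 flags Tukey's "outliers"; k = 3 flags "far out" points."""
    q1, _, q3 = quantiles(data, n=4)  # quartiles (exclusive method by default)
    iqr = q3 - q1
    return [x for x in data if x < q1 - k * iqr or x > q3 + k * iqr]

data = [52, 56, 53, 57, 51, 59, 54, 58, 55, 90]
print(tukey_fences(data))  # [90] -- flagged with the default k = 1.5
```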

In anomaly detection

In the data mining task of anomaly detection, other approaches are distance-based[16][17] and density-based such as Local Outlier Factor,[18] and most of them use the distance to the k-nearest neighbors to label observations as outliers or non-outliers.[19]
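A toy illustration of the distance-based idea, scoring each point by the distance to its k-th nearest neighbour; this is a brute-force sketch, whereas production implementations such as those cited use indexing structures:

```python
import math

def knn_outlier_scores(points, k=2):
    """Score each point by its distance to its k-th nearest neighbour;
    larger scores suggest outliers."""
    scores = []
    for p in points:
        # Distances from p to every other point, smallest first
        dists = sorted(math.dist(p, q) for q in points if q is not p)
        scores.append(dists[k - 1])
    return scores

pts = [(0, 0), (0, 1), (1, 0), (1, 1), (10, 10)]
scores = knn_outlier_scores(pts, k=2)
print(scores.index(max(scores)))  # 4 -- the isolated point scores highest
```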

Modified Thompson Tau test

The modified Thompson Tau test[citation needed] is a method used to determine if an outlier exists in a data set. The strength of this method lies in the fact that it takes into account a data set's standard deviation and average, and provides a statistically determined rejection zone, thus offering an objective method to determine whether a data point is an outlier. Note: although intuitively appealing, this method appears to be unpublished (it is not described in Thompson (1985)[20]) and should be used with caution.

How it works: First, the data set's average is determined. Next, the absolute deviation between each data point and the average is determined. Thirdly, a rejection region is determined using the formula:

    Rejection Region = t_{α/2} · (n − 1) / (√n · √(n − 2 + t_{α/2}²))

where t_{α/2} is the critical value from the Student t distribution with n − 2 degrees of freedom, n is the sample size, and s is the sample standard deviation. To determine whether a value is an outlier, calculate δ = |X − mean(X)| / s. If δ > Rejection Region, the data point is an outlier; if δ ≤ Rejection Region, it is not.

The modified Thompson Tau test is used to find one outlier at a time (largest value of δ is removed if it is an outlier). Meaning, if a data point is found to be an outlier, it is removed from the data set and the test is applied again with a new average and rejection region. This process is continued until no outliers remain in a data set.
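One pass of the iterative procedure can be sketched as follows; since the method appears to be unpublished, the rejection-region formula here follows the description in this section, and the Student-t critical value is supplied by hand (here a table value for n − 2 = 8 degrees of freedom at α/2 = 0.025):

```python
import math
from statistics import mean, stdev

def thompson_tau(values, t_crit):
    """One pass of the modified Thompson Tau test. t_crit is the Student-t
    critical value for n - 2 degrees of freedom at the chosen alpha/2,
    read from a t-table or computed externally."""
    n = len(values)
    tau = (t_crit * (n - 1)) / (math.sqrt(n) * math.sqrt(n - 2 + t_crit**2))
    m, s = mean(values), stdev(values)
    candidate = max(values, key=lambda x: abs(x - m))  # most extreme point
    delta = abs(candidate - m) / s
    return candidate, delta > tau

data = [48.9, 49.2, 49.2, 49.3, 49.3, 49.8, 50.1, 49.1, 49.9, 45.5]
print(thompson_tau(data, t_crit=2.306))  # (45.5, True): 45.5 is flagged
```

To apply the full procedure, remove the flagged point and repeat with a freshly computed average, standard deviation and rejection region until no point is flagged.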

Some work has also examined outliers for nominal (or categorical) data. In the context of a set of examples (or instances) in a data set, instance hardness measures the probability that an instance will be misclassified: IH(⟨x, y⟩) = 1 − p(y | x), where y is the assigned class label and x represents the input attribute values for an instance in the training set t.[21] Ideally, instance hardness would be calculated by summing over the set of all possible hypotheses H:

    IH(⟨x, y⟩) = Σ_{h ∈ H} (1 − p(y | x, h)) p(h | t)

Practically, this formulation is infeasible, as H is potentially infinite and p(h | t) is unknown for many algorithms. Thus, instance hardness can be approximated using a diverse subset L ⊂ H:

    IH_L(⟨x, y⟩) = 1 − (1/|L|) Σ_{j=1}^{|L|} p(y | x, g_j(t, α))

where g_j(t, α) is the hypothesis induced by learning algorithm g_j trained on training set t with hyperparameters α. Instance hardness provides a continuous value for determining whether an instance is an outlier.
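The diverse-subset approximation can be sketched with toy hypotheses; here each hypothesis is a hand-written function returning label probabilities, whereas in practice each would be induced by a learning algorithm:

```python
def instance_hardness(x, y, hypotheses):
    """Approximate instance hardness as 1 minus the average probability the
    hypotheses assign to the true label y. Each hypothesis maps an input x
    to a dict of label probabilities."""
    probs = [h(x).get(y, 0.0) for h in hypotheses]
    return 1.0 - sum(probs) / len(probs)

# Two illustrative hypotheses over a one-dimensional input (purely toy rules)
h1 = lambda x: {"a": 0.9, "b": 0.1} if x < 5 else {"a": 0.2, "b": 0.8}
h2 = lambda x: {"a": 0.8, "b": 0.2} if x < 4 else {"a": 0.3, "b": 0.7}

print(instance_hardness(2, "a", [h1, h2]))  # low hardness: label fits the data
print(instance_hardness(2, "b", [h1, h2]))  # high hardness: outlier-like label
```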

Working with outliers

The choice of how to deal with an outlier should depend on the cause. Some estimators are highly sensitive to outliers, notably estimation of covariance matrices.

Retention

Even when a normal distribution model is appropriate to the data being analyzed, outliers are expected for large sample sizes and should not automatically be discarded. Instead, the application should use an algorithm that is robust to outliers to model data with naturally occurring outlier points.

Exclusion

Deletion of outlier data is a controversial practice frowned upon by many scientists and science instructors; while mathematical criteria provide an objective and quantitative method for data rejection, they do not make the practice more scientifically or methodologically sound, especially in small sets or where a normal distribution cannot be assumed. Rejection of outliers is more acceptable in areas of practice where the underlying model of the process being measured and the usual distribution of measurement error are confidently known. An outlier resulting from an instrument reading error may be excluded but it is desirable that the reading is at least verified.

The two common approaches to exclude outliers are truncation (or trimming) and Winsorising. Trimming discards the outliers whereas Winsorising replaces the outliers with the nearest "nonsuspect" data.[22] Exclusion can also be a consequence of the measurement process, such as when an experiment is not entirely capable of measuring such extreme values, resulting in censored data.[23]
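The two approaches differ only in what happens to the extreme values; a minimal sketch on sorted data:

```python
def trim(sorted_data, k):
    """Discard the k smallest and k largest values (truncation/trimming)."""
    return sorted_data[k:len(sorted_data) - k]

def winsorize(sorted_data, k):
    """Replace the k smallest/largest values with the nearest retained ones."""
    lo, hi = sorted_data[k], sorted_data[-k - 1]
    return [min(max(x, lo), hi) for x in sorted_data]

data = sorted([2, 4, 5, 5, 6, 7, 7, 8, 9, 40])
print(trim(data, 1))       # [4, 5, 5, 6, 7, 7, 8, 9]
print(winsorize(data, 1))  # [4, 4, 5, 5, 6, 7, 7, 8, 9, 9]
```

Note that trimming shortens the sample, while Winsorising preserves its size, which matters for downstream statistics that depend on n.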

In regression problems, an alternative approach may be to only exclude points which exhibit a large degree of influence on the estimated coefficients, using a measure such as Cook's distance.[24]

If a data point (or points) is excluded from the data analysis, this should be clearly stated on any subsequent report.

Non-normal distributions

The possibility should be considered that the underlying distribution of the data is not approximately normal, having "fat tails". For instance, when sampling from a Cauchy distribution,[25] the sample variance increases with the sample size, the sample mean fails to converge as the sample size increases, and outliers are expected at far larger rates than for a normal distribution. Even a slight difference in the fatness of the tails can make a large difference in the expected number of extreme values.

Set-membership uncertainties

A set membership approach considers that the uncertainty corresponding to the ith measurement of an unknown random vector x is represented by a set Xi (instead of a probability density function). If no outliers occur, x should belong to the intersection of all Xi's. When outliers occur, this intersection could be empty, and we should relax a small number of the sets Xi (as small as possible) in order to avoid any inconsistency.[26] This can be done using the notion of q-relaxed intersection. As illustrated by the figure, the q-relaxed intersection corresponds to the set of all x which belong to all sets except q of them. Sets Xi that do not intersect the q-relaxed intersection could be suspected to be outliers.

Figure 5. q-relaxed intersection of 6 sets for q=2 (red), q=3 (green), q= 4 (blue), q= 5 (yellow).
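A one-dimensional sketch of the q-relaxed intersection, treating each uncertainty set X_i as a closed interval and sweeping over endpoints (the interval data here is purely illustrative):

```python
def q_relaxed_intersection(intervals, q):
    """Points covered by all but at most q of the given 1-D intervals
    (lo, hi). Returns the covered region as a list of disjoint intervals."""
    events = []
    for lo, hi in intervals:
        events.append((lo, +1))   # interval opens
        events.append((hi, -1))   # interval closes
    events.sort()
    need = len(intervals) - q     # minimum number of sets covering a point
    covered, depth, start = [], 0, None
    for x, delta in events:
        depth += delta
        if depth >= need and start is None:
            start = x             # entering a sufficiently covered region
        elif depth < need and start is not None:
            covered.append((start, x))
            start = None          # leaving it
    return covered

boxes = [(0, 4), (1, 5), (2, 6), (8, 9)]
print(q_relaxed_intersection(boxes, 0))  # [] -- no point lies in all four sets
print(q_relaxed_intersection(boxes, 1))  # [(2, 4)] -- in all sets except one
```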

Alternative models

In cases where the cause of the outliers is known, it may be possible to incorporate this effect into the model structure, for example by using a hierarchical Bayes model, or a mixture model.[27][28]

References

  1. ^ Grubbs, F. E. (February 1969). "Procedures for detecting outlying observations in samples". Technometrics. 11 (1): 1–21. doi:10.1080/00401706.1969.10490657. An outlying observation, or "outlier," is one that appears to deviate markedly from other members of the sample in which it occurs. 
  2. ^ Maddala, G. S. (1992). "Outliers". Introduction to Econometrics (2nd ed.). New York: MacMillan. pp. 88–96 [p. 89]. ISBN 0-02-374545-2. An outlier is an observation that is far removed from the rest of the observations. 
  3. ^ Grubbs 1969, p. 1 stating "An outlying observation may be merely an extreme manifestation of the random variability inherent in the data. ... On the other hand, an outlying observation may be the result of gross deviation from prescribed experimental procedure or an error in calculating or recording the numerical value."
  4. ^ Ripley, Brian D. 2004. Robust statistics
  5. ^ Chandan Mukherjee, Howard White, Marc Wuyts, 1998, "Econometrics and Data Analysis for Developing Countries Vol. 1" [1]
  6. ^ Ruan, Da; Chen, Guoqing; Kerre, Etienne (2005). Wets, G., ed. Intelligent Data Mining: Techniques and Applications. Studies in Computational Intelligence Vol. 5. Springer. p. 318. ISBN 978-3-540-26256-5. 
  7. ^ Rousseeuw, P; Leroy, A. (1996), Robust Regression and Outlier Detection (3rd ed.), John Wiley & Sons 
  8. ^ Hodge, Victoria J.; Austin, Jim, A Survey of Outlier Detection Methodologies, doi:10.1023/B:AIRE.0000045502.10941.a9 
  9. ^ Barnett, Vic; Lewis, Toby (1994) [1978], Outliers in Statistical Data (3 ed.), Wiley, ISBN 0-471-93094-6 
  10. ^ Zimek, A.; Schubert, E.; Kriegel, H.-P. (2012). "A survey on unsupervised outlier detection in high-dimensional numerical data". Statistical Analysis and Data Mining. 5 (5): 363–387. doi:10.1002/sam.11161. 
  11. ^ Benjamin Peirce, "Criterion for the Rejection of Doubtful Observations", Astronomical Journal II 45 (1852) and Errata to the original paper.
  12. ^ Peirce, Benjamin (May 1877 – May 1878). "On Peirce's criterion". Proceedings of the American Academy of Arts and Sciences. 13: 348–351. doi:10.2307/25138498. JSTOR 25138498. 
  13. ^ Peirce, Charles Sanders (1873) [1870]. "Appendix No. 21. On the Theory of Errors of Observation". Report of the Superintendent of the United States Coast Survey Showing the Progress of the Survey During the Year 1870: 200–224. . NOAA PDF Eprint (goes to Report p. 200, PDF's p. 215).
  14. ^ Peirce, Charles Sanders (1986) [1982]. "On the Theory of Errors of Observation". In Kloesel, Christian J. W.; et al. Writings of Charles S. Peirce: A Chronological Edition. Volume 3, 1872-1878. Bloomington, Indiana: Indiana University Press. pp. 140–160. ISBN 0-253-37201-1.  – Appendix 21, according to the editorial note on page 515
  15. ^ *Tukey, John W (1977). Exploratory Data Analysis. Addison-Wesley. ISBN 0-201-07616-0. OCLC 3058187. 
  16. ^ Knorr, E. M.; Ng, R. T.; Tucakov, V. (2000). "Distance-based outliers: Algorithms and applications". The VLDB Journal the International Journal on Very Large Data Bases. 8 (3–4): 237. doi:10.1007/s007780050006. 
  17. ^ Ramaswamy, S.; Rastogi, R.; Shim, K. (2000). Efficient algorithms for mining outliers from large data sets. Proceedings of the 2000 ACM SIGMOD international conference on Management of data - SIGMOD '00. p. 427. doi:10.1145/342009.335437. ISBN 1581132174. 
  18. ^ Breunig, M. M.; Kriegel, H.-P.; Ng, R. T.; Sander, J. (2000). LOF: Identifying Density-based Local Outliers (PDF). Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. SIGMOD. pp. 93–104. doi:10.1145/335191.335388. ISBN 1-58113-217-4. 
  19. ^ Schubert, E.; Zimek, A.; Kriegel, H. -P. (2012). "Local outlier detection reconsidered: A generalized view on locality with applications to spatial, video, and network outlier detection". Data Mining and Knowledge Discovery. doi:10.1007/s10618-012-0300-z. 
  20. ^ Thompson, R. (1985). "A Note on Restricted Maximum Likelihood Estimation with an Alternative Outlier Model". Journal of the Royal Statistical Society, Series B (Methodological). 47 (1): 53–55.
  21. ^ Smith, M.R.; Martinez, T.; Giraud-Carrier, C. (2014). "An Instance Level Analysis of Data Complexity". Machine Learning, 95(2): 225-256.
  22. ^ Wike, Edward L. (2006). Data Analysis: A Statistical Primer for Psychology Students. pp. 24–25. ISBN 9780202365350. 
  23. ^ Dixon, W. J. (June 1960). "Simplified estimation from censored normal samples". The Annals of Mathematical Statistics. 31 (2): 385–391. doi:10.1214/aoms/1177705900. 
  24. ^ Cook, R. Dennis (Feb 1977). "Detection of Influential Observations in Linear Regression". Technometrics (American Statistical Association) 19 (1): 15–18.
  25. ^ Weisstein, Eric W. Cauchy Distribution. From MathWorld--A Wolfram Web Resource
  26. ^ Jaulin, L. (2010). "Probabilistic set-membership approach for robust regression" (PDF). Journal of Statistical Theory and Practice. 
  27. ^ Roberts, S. and Tarassenko, L.: 1995, A probabilistic resource allocating network for novelty detection. Neural Computation 6, 270–284.
  28. ^ Bishop, C. M. (August 1994). "Novelty detection and Neural Network validation". Proceedings of the IEE Conference on Vision, Image and Signal Processing. 141 (4): 217–222. doi:10.1049/ip-vis:19941330 
  • ISO 16269-4, Statistical interpretation of data — Part 4: Detection and treatment of outliers
  • Strutz, Tilo (2010). Data Fitting and Uncertainty - A practical introduction to weighted least squares and beyond. Vieweg+Teubner. ISBN 978-3-8348-1022-9. 
