Machine Learning videos (via YouTube):

  1. Lecture 1 | Machine Learning (Stanford), StanfordUniversity (2008/07/23)
  2. A Gentle Introduction To Machine Learning; SciPy 2013 Presentation (2013/07/01)
  3. Machine Learning: The Basics, with Ron Bekkerman (2012/03/19)
  4. Seth Lloyd: Quantum Machine Learning (2014/04/09)
  5. Artificial Intelligence: Machine Learning Introduction (2013/07/27)
  6. 7.31.13 Machine Learning: Hottest Tech Trend in the Next 3-5 Years? (2013/08/01)
  7. Strata Conference 2013 -- Real-World Machine Learning on Big Data: Which Methods Should You Use? (2013/05/14)
  8. (ML 1.1) Machine learning - overview and applications (2011/06/09)
  9. From the Lab to the Factory: Building a Production Machine Learning Infrastructure (2014/02/01)
  10. undergraduate machine learning 3: Basic probability (2012/11/02)
  11. Machine Learning and Pattern Recognition for Algorithmic Forex and Stock Trading: Intro (2013/10/12)
  12. Machine Learning Meets Economics: Using Theory, Data, and Experiments to Design Markets (2013/11/13)
  13. Erkki Oja: "40 Years of Machine Learning" - Professor Erkki Oja Symposium @Aalto University (2013/12/16)
  14. Machine learning - Importance sampling and MCMC I (2013/03/22)
  15. Andrew Ng - Machine Learning via Large-scale Brain Simulations - Technion lecture (2013/06/17)
  16. Machine Learning (Introduction + Data Mining VS ML) (2011/05/04)
  17. B.F. Skinner. Teaching machine and programmed learning (2011/12/20)
  18. Apache Spark: Distributed Machine Learning using MLbase (2013/08/16)
  19. Machine learning - Logistic regression (2013/03/17)
  20. Introduction to Machine Learning in C# (2013/12/16)
  21. CS 7641: Machine Learning - Part 1 of 3 (2014/03/10)
  22. Scaling Up Machine Learning, with Ron Bekkerman (2012/03/19)
  23. Control Robotics and Machine Learning Lab - Technion - Electrical Engineering (2014/03/25)
  24. Lecture 7 | Machine Learning (Stanford) (2008/07/23)
  25. Wicked Good Ruby 2013 - Machine Learning with Ruby (2013/10/31)
  26. undergraduate machine learning 2: Introduction to machine learning 2 (2012/11/02)
  27. GPU Technology Conference 2014: Machine Learning Demo (part 4) GTC (2014/03/29)
  28. Collective Minds and Machine Learning Exploration Challenge - NASA, UCSD, Harvard and TopCoder (2013/09/03)
  29. Machine learning - Decision trees (2013/02/22)
  30. Lecture 2 | Machine Learning (Stanford) (2008/07/23)
  31. What is Machine Learning? - Bernhard Schölkopf - MLSS 2013 Tübingen (2013/12/20)
  32. undergraduate machine learning 1: Introduction to machine learning (2012/11/02)
  33. 16. Learning: Support Vector Machines (2014/01/10)
  34. An Introduction to scikit-learn: Machine Learning in Python (2013/03/30)
  35. Pycon US 2013 - Advanced Machine Learning with scikit-learn (2013/07/13)
  36. Lecture 10 | Machine Learning (Stanford) (2008/07/23)
  37. 12.6 Machine Learning Using An SVM (2012/09/30)
  38. Lecture 19 | Machine Learning (Stanford) (2008/07/23)
  39. The Future of Robotics and Artificial Intelligence (Andrew Ng, Stanford University, STAN 2011) (2011/05/23)
  40. Conférence Machine Learning : Un tour d'horizon (2014/04/18)
  41. 2013-08-01 Prof. Andrew Ng: "Deep Learning: Machine learning via Large-scale Brain Simulations" (2014/03/24)
  42. Tutorial: scikit-learn - Machine Learning in Python with Contributor Jake VanderPlas (2012/04/17)
  43. Lecture 3 | Machine Learning (Stanford) (2008/07/23)
  44. Machine Learning Discussion Group - Deep Learning w/ Stanford AI Lab (1 of 3) (2013/10/15)
  45. 12.4 Machine Learning Kernels I (2012/09/30)
  46. undergraduate machine learning 8: Inference in Bayesian networks and dynamic programming (2012/11/13)
  47. Machine learning - Random forests (2013/02/22)
  48. Machine Learning 10-701 Lecture 1 (2013/01/14)
  49. Geordie Rose on Singularity 1 on 1: Machine Learning is Progressing Faster Than You Think (2013/08/20)
  50. 14.3 Machine Learning Principal Component Analysis Problem Formulation (2012/09/30)
From Wikipedia, the free encyclopedia

Machine learning, a branch of artificial intelligence, concerns the construction and study of systems that can learn from data. For example, a machine learning system could be trained on email messages to learn to distinguish between spam and non-spam messages. After learning, it can then be used to classify new email messages into spam and non-spam folders.

The core of machine learning deals with representation and generalization. Representation of data instances and functions evaluated on these instances are part of all machine learning systems. Generalization is the property that the system will perform well on unseen data instances; the conditions under which this can be guaranteed are a key object of study in the subfield of computational learning theory.

There are a wide variety of machine learning tasks and successful applications. Optical character recognition, in which printed characters are recognized automatically based on previous examples, is a classic example of machine learning.[1]

Definition

In 1959, Arthur Samuel defined machine learning as a "Field of study that gives computers the ability to learn without being explicitly programmed".[2]

Tom M. Mitchell provided a widely quoted, more formal definition: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at tasks in T, as measured by P, improves with experience E".[3] In the spam-filtering example above, T is classifying messages as spam or non-spam, P is the proportion of messages classified correctly, and E is the collection of labelled messages seen so far. This definition is notable for defining machine learning in fundamentally operational rather than cognitive terms, following Alan Turing's proposal in his paper "Computing Machinery and Intelligence" that the question "Can machines think?" be replaced with the question "Can machines do what we (as thinking entities) can do?"[4]

Generalization

A core objective of a learner is to generalize from its experience.[5][6] Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases.

Machine learning and data mining

These two terms are commonly confused, as they often employ the same methods and overlap significantly. They can be roughly defined as follows:

  • Machine learning focuses on prediction, based on known properties learned from the training data.
  • Data mining focuses on the discovery of (previously) unknown properties in the data. This is the analysis step of Knowledge Discovery in Databases.

The two areas overlap in many ways: data mining uses many machine learning methods, but often with a slightly different goal in mind. Machine learning, in turn, employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between the two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from their basic assumptions: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in Knowledge Discovery and Data Mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method is easily outperformed by supervised methods, while in a typical KDD task supervised methods cannot be used because training data are unavailable.

Human interaction

Some machine learning systems attempt to eliminate the need for human intuition in data analysis, while others adopt a collaborative approach between human and machine. Human intuition cannot, however, be entirely eliminated, since the system's designer must specify how the data is to be represented and what mechanisms will be used to search for a characterization of the data.[citation needed]

Algorithm types

Machine learning algorithms can be organized into a taxonomy based on the desired outcome of the algorithm or on the type of input available during training.[citation needed]

  • Supervised learning algorithms are trained on labelled examples, i.e., input where the desired output is known. The algorithm attempts to generalise a function or mapping from inputs to outputs, which can then be used to generate an output for previously unseen inputs (see the sketch after this list).
  • Unsupervised learning algorithms operate on unlabelled examples, i.e., input where the desired output is unknown. Here the objective is to discover structure in the data (e.g. through a cluster analysis), not to generalise a mapping from inputs to outputs.
  • Semi-supervised learning combines both labelled and unlabelled examples to generate an appropriate function or classifier.
  • Transduction, or transductive inference, tries to predict new outputs on specific and fixed (test) cases from observed, specific (training) cases.
  • Reinforcement learning is concerned with how intelligent agents ought to act in an environment to maximise some notion of reward. The agent executes actions which cause the observable state of the environment to change. Through a sequence of actions, the agent attempts to gather knowledge about how the environment responds to its actions, and attempts to synthesise a sequence of actions that maximises a cumulative reward.
  • Learning to learn learns its own inductive bias based on previous experience.
  • Developmental learning, elaborated for robot learning, generates its own sequences (also called a curriculum) of learning situations to cumulatively acquire repertoires of novel skills through autonomous self-exploration and social interaction with human teachers, using guidance mechanisms such as active learning, maturation, motor synergies, and imitation.
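
As an illustration of the first two categories, here is a minimal sketch in Python using scikit-learn (assumed installed; the dataset and parameter choices are illustrative, not canonical):

    # Supervised vs. unsupervised learning on the same data (illustrative).
    from sklearn.cluster import KMeans
    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)

    # Supervised: learn a mapping from inputs to known labels,
    # then evaluate it on previously unseen inputs.
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("accuracy on unseen examples:", clf.score(X_test, y_test))

    # Unsupervised: discover structure (clusters) without using the labels.
    clusters = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("cluster assignments of the first 10 examples:", clusters[:10])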

Theory

The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common.

In addition to performance bounds, computational learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results. Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time.
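
As a concrete example of such a probabilistic bound, consider a standard result for a finite hypothesis class H, stated here in conventional PAC-learning notation (not drawn from a specific source above): with probability at least 1 - \delta, every hypothesis in H that is consistent with m training examples has true error at most \epsilon, provided

    m \ge \frac{1}{\epsilon} \left( \ln |H| + \ln \frac{1}{\delta} \right).

The bound is probabilistic in exactly the sense described: it does not guarantee success, but it bounds the probability of failure by \delta.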

There are many similarities between machine learning theory and statistical inference, although they use different terms.

Approaches

Decision tree learning

Decision tree learning uses a decision tree as a predictive model which maps observations about an item to conclusions about the item's target value.
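
A minimal sketch of decision tree learning with scikit-learn (an assumed library choice; the dataset and depth limit are illustrative):

    from sklearn.datasets import load_iris
    from sklearn.tree import DecisionTreeClassifier, export_text

    X, y = load_iris(return_X_y=True)
    tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
    # Print the learned if/then rules mapping observations to a target value.
    print(export_text(tree))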

Association rule learning

Association rule learning is a method for discovering interesting relations between variables in large databases.
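
The basic quantities behind association rules, support and confidence, can be computed directly; a toy sketch in plain Python (the transaction data and the rule are made up for illustration):

    # Evaluate the rule {bread} -> {butter} over a tiny transaction database.
    transactions = [
        {"bread", "butter", "milk"},
        {"bread", "butter"},
        {"bread", "jam"},
        {"milk"},
    ]
    antecedent, consequent = {"bread"}, {"butter"}
    n = len(transactions)
    support = sum(antecedent | consequent <= t for t in transactions) / n
    confidence = support / (sum(antecedent <= t for t in transactions) / n)
    print(f"support={support:.2f} confidence={confidence:.2f}")  # 0.50, 0.67

Rule-mining algorithms such as Apriori search for all rules whose support and confidence exceed user-chosen thresholds.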

Artificial neural networks

An artificial neural network (ANN), usually called a "neural network" (NN), is a learning algorithm inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure of an unknown joint probability distribution over observed variables.
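
A minimal sketch of such a network: two layers of sigmoid units trained by gradient descent (backpropagation) on the XOR function, a classic non-linear relationship no linear model can represent. All sizes and learning rates here are illustrative:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)  # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)    # input -> hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)    # hidden -> output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(5000):
        h = sigmoid(X @ W1 + b1)                     # forward pass
        p = sigmoid(h @ W2 + b2)
        dp = (p - y) * p * (1 - p)                   # backpropagate the squared error
        dh = (dp @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ dp; b2 -= 0.5 * dp.sum(0)
        W1 -= 0.5 * X.T @ dh; b1 -= 0.5 * dh.sum(0)

    print(p.round(2).ravel())  # typically close to [0, 1, 1, 0]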

Inductive logic programming

Inductive logic programming (ILP) is an approach to rule learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program which entails all the positive and none of the negative examples. Inductive programming is a related field that considers any kind of programming languages for representing hypotheses (and not only logic programming), such as functional programs.

Support vector machines

Support vector machines (SVMs) are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other.
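
A minimal two-category SVM sketch with scikit-learn (an assumed library choice; the synthetic data and kernel are illustrative):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    X, y = make_classification(n_samples=200, random_state=0)  # two categories
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    svm = SVC(kernel="rbf").fit(X_train, y_train)   # build the model
    print("predicted category of a new example:", svm.predict(X_test[:1]))
    print("held-out accuracy:", svm.score(X_test, y_test))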

Clustering

Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to some predesignated criterion or criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated for example by internal compactness (similarity between members of the same cluster) and separation between different clusters. Other methods are based on estimated density and graph connectivity. Clustering is a method of unsupervised learning, and a common technique for statistical data analysis.
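
A minimal clustering sketch, including the kind of compactness-and-separation evaluation mentioned above via the silhouette score (the library and parameters are assumptions for illustration):

    from sklearn.cluster import KMeans
    from sklearn.datasets import make_blobs
    from sklearn.metrics import silhouette_score

    X, _ = make_blobs(n_samples=300, centers=3, random_state=0)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    # Silhouette combines within-cluster compactness with between-cluster separation.
    print("silhouette score:", round(silhouette_score(X, labels), 3))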

Bayesian networks

A Bayesian network, belief network or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independencies via a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning.
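
The disease/symptom example can be worked through by hand for a two-node network Disease -> Symptom; a sketch in plain Python (the probabilities are made up for illustration):

    p_disease = 0.01                   # prior P(disease)
    p_sym_given_dis = 0.90             # P(symptom | disease)
    p_sym_given_healthy = 0.05         # P(symptom | no disease)

    # Marginalise over the parent node to get P(symptom).
    p_symptom = p_sym_given_dis * p_disease + p_sym_given_healthy * (1 - p_disease)
    # Inference by Bayes' rule: P(disease | symptom).
    posterior = p_sym_given_dis * p_disease / p_symptom
    print(f"P(disease | symptom) = {posterior:.3f}")  # about 0.154

General-purpose inference algorithms (e.g. variable elimination or belief propagation) automate this computation for larger graphs.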

Reinforcement learning

Reinforcement learning is concerned with how an agent ought to take actions in an environment so as to maximize some notion of long-term reward. Reinforcement learning algorithms attempt to find a policy that maps states of the world to the actions the agent ought to take in those states. Reinforcement learning differs from the supervised learning problem in that correct input/output pairs are never presented, nor sub-optimal actions explicitly corrected.
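
A minimal tabular Q-learning sketch on a toy five-state corridor (the environment, rewards, and hyperparameters are all illustrative); note that the agent is never told the correct action, only the reward its actions produce:

    import random

    n_states, actions = 5, (-1, +1)                  # move left or right
    Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
    alpha, gamma, epsilon = 0.5, 0.9, 0.2            # learning rate, discount, exploration

    for _ in range(500):
        s = 0
        while s != n_states - 1:                     # reaching the last state ends the episode
            if random.random() < epsilon:
                a = random.choice(actions)           # explore
            else:
                a = max(actions, key=lambda act: Q[(s, act)])  # exploit
            s2 = min(max(s + a, 0), n_states - 1)
            r = 1.0 if s2 == n_states - 1 else 0.0   # reward only at the goal
            # Update the value estimate from the observed transition.
            Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in actions) - Q[(s, a)])
            s = s2

    policy = [max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)]
    print(policy)  # the learned policy: move right (+1) in every state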

Representation learning

Several learning algorithms, mostly unsupervised ones, aim at discovering better representations of the inputs provided during training. Classical examples include principal components analysis and cluster analysis. Representation learning algorithms often attempt to preserve the information in their input while transforming it in a way that makes it useful, often as a pre-processing step before classification or prediction. Such representations allow inputs coming from the unknown data-generating distribution to be reconstructed, while not necessarily being faithful for configurations that are implausible under that distribution.

Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse (has many zeros). Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into (high-dimensional) vectors.[7] Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data.[8]
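
As a concrete instance of the classical case, a minimal PCA sketch that learns a low-dimensional representation and reconstructs the inputs from it (the library and sizes are assumptions for illustration):

    from sklearn.datasets import load_digits
    from sklearn.decomposition import PCA

    X, _ = load_digits(return_X_y=True)     # 64-dimensional inputs
    pca = PCA(n_components=10).fit(X)       # learn a 10-dimensional representation
    Z = pca.transform(X)                    # transformed features for later prediction
    X_approx = pca.inverse_transform(Z)     # approximate reconstruction of the inputs
    print(Z.shape, "variance preserved:", round(pca.explained_variance_ratio_.sum(), 3))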

Similarity and metric learning

In this problem, the learning machine is given pairs of examples that are considered similar and pairs of less similar objects. It then needs to learn a similarity function (or a distance metric) that can predict whether new objects are similar. This is sometimes used in recommendation systems.
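
A toy sketch of the idea (an illustrative gradient method, not a standard named algorithm): learn non-negative per-feature weights w so that the distance d_w(a, b) = \sum_i w_i (a_i - b_i)^2 is small for similar pairs and exceeds a margin for dissimilar ones.

    import numpy as np

    rng = np.random.default_rng(0)

    def make_pair(similar):
        # Features 0-1 carry the signal; features 2-3 are irrelevant noise.
        a = rng.normal(size=4)
        b = a.copy() if similar else rng.normal(size=4)
        b[2:] = rng.normal(size=2)           # noise features always differ
        return a, b

    pairs = [(make_pair(True), True) for _ in range(100)] + \
            [(make_pair(False), False) for _ in range(100)]

    w, margin, lr = np.ones(4), 4.0, 0.001
    for _ in range(200):
        grad = np.zeros(4)
        for (a, b), similar in pairs:
            sq = (a - b) ** 2
            if similar:
                grad += sq                   # pull similar pairs together
            elif w @ sq < margin:
                grad -= sq                   # push close dissimilar pairs apart
        w = np.clip(w - lr * grad, 0.0, None)

    print("learned weights:", w.round(2))    # large on features 0-1, near zero on 2-3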

Sparse dictionary learning

In this method, a datum is represented as a linear combination of basis functions, with the coefficients assumed to be sparse. Let x be a d-dimensional datum and D a d \times n matrix, where each column of D represents a basis function and r is the coefficient vector representing x in terms of D. Mathematically, sparse dictionary learning seeks

    x \approx D r,

where r is sparse. Generally, n is assumed to be larger than d to allow the freedom for a sparse representation.
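
A minimal sketch with scikit-learn's DictionaryLearning (an assumed library choice; sizes and penalties are illustrative; note that scikit-learn stores the dictionary atoms as the rows of components_, so its orientation is x \approx r D):

    from sklearn.datasets import load_digits
    from sklearn.decomposition import DictionaryLearning

    X, _ = load_digits(return_X_y=True)        # each row is a 64-dimensional datum x
    dl = DictionaryLearning(n_components=100,  # n > d: an overcomplete dictionary
                            transform_algorithm="lasso_lars",
                            transform_alpha=1.0, max_iter=10, random_state=0)
    R = dl.fit_transform(X[:200])              # sparse coefficients r for each datum
    print("average nonzero coefficients per datum:", (R != 0).sum(axis=1).mean())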

Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine which class a previously unseen datum belongs to. Suppose a dictionary for each class has already been built; a new datum is then associated with the class whose dictionary gives the best sparse representation of it. Sparse dictionary learning has also been applied in image de-noising, the key idea being that a clean image patch can be sparsely represented by an image dictionary while the noise cannot.[9]

Applications

Applications for machine learning include natural language processing,[10] search engines, medical diagnosis, bioinformatics, computer vision, and game playing.[11] Two notable examples:

In 2006, the online movie company Netflix held the first "Netflix Prize" competition to find a program that better predicts user preferences, improving on the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team of researchers from AT&T Labs-Research and the teams Big Chaos and Pragmatic Theory built an ensemble model that won the Grand Prize in 2009, worth $1 million.[12]

In 2010, The Wall Street Journal wrote about the money management firm Rebellion Research and its use of machine learning to predict economic movements; the article discusses Rebellion Research's prediction of the financial crisis and the subsequent economic recovery.[13]

Software

Software suites containing a variety of machine learning algorithms include scikit-learn, Weka, Apache Mahout, and RapidMiner, among others.

References

  1. ^ Wernick, Yang, Brankov, Yourganov and Strother (July 2010). "Machine Learning in Medical Imaging". IEEE Signal Processing Magazine 27 (4): 25–38.
  2. ^ Simon, Phil (March 18, 2013). Too Big to Ignore: The Business Case for Big Data. Wiley. p. 89. ISBN 978-1118638170.
  3. ^ Mitchell, T. (1997). Machine Learning. McGraw Hill. p. 2. ISBN 0-07-042807-7.
  4. ^ Harnad, Stevan (2008). "The Annotation Game: On Turing (1950) on Computing, Machinery, and Intelligence". In Epstein, Robert; Peters, Grace. The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer.
  5. ^ Bishop, Christopher M. (2006). Pattern Recognition and Machine Learning. Springer. ISBN 0-387-31073-8.
  6. ^ Mohri, Mehryar; Rostamizadeh, Afshin; Talwalkar, Ameet (2012). Foundations of Machine Learning. The MIT Press. ISBN 9780262018258.
  7. ^ Lu, Haiping; Plataniotis, K.N.; Venetsanopoulos, A.N. (2011). "A Survey of Multilinear Subspace Learning for Tensor Data". Pattern Recognition 44 (7): 1540–1551. doi:10.1016/j.patcog.2011.01.004.
  8. ^ Bengio, Yoshua (2009). Learning Deep Architectures for AI. Now Publishers. pp. 1–3. ISBN 978-1-60198-294-0.
  9. ^ Aharon, M.; Elad, M.; Bruckstein, A. (2006). "K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation". IEEE Transactions on Signal Processing 54 (11): 4311–4322.
  10. ^ Jurafsky, Daniel; Martin, James H. (2009). Speech and Language Processing. Pearson Education. pp. 207 ff.
  11. ^ Tesauro, Gerald (March 1995). "Temporal Difference Learning and TD-Gammon". Communications of the ACM 38 (3).
  12. ^ "BelKor Home Page". research.att.com.
  13. ^ [1]

Wikipedia content is licensed under the GFDL.
