Machine Learning at U of C

This page contains a list of topics, definitions, and results from the Machine Learning course at the University of Chicago.

== Week 1 ==

=== Learning problem ===

Given a distribution <math>P</math> on <math>Z = X \times Y</math>, we want to learn the objective function <math>f_P(x) = \mathbb{E}_P[y|x]</math> (with respect to the distribution <math>P</math>).
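
For instance (a standard illustration, not from the original notes): if the data are generated as <math>y = f(x) + \varepsilon</math> with noise satisfying <math>\mathbb{E}[\varepsilon \mid x] = 0</math>, then

<math>f_P(x) = \mathbb{E}_P[y \mid x] = f(x),</math>

so the objective function is exactly the noise-free regression function.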


=== Learning Algorithms ===

Let <math>Z</math> be the set of possible samples. A learning algorithm is a function

<math>A: \cup_{n=1}^{\infty} Z^n \rightarrow F</math>

that maps a finite sequence of samples to a measurable function, where <math>F</math> denotes the class of all measurable functions. Sometimes we consider a class of computable functions instead.
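
As a concrete illustration of this signature (a minimal sketch, not part of the original notes; the 1-nearest-neighbor rule is chosen only because it is a simple map from samples to a function):

<source lang="python">
# A learning algorithm A takes a finite sample from Z^n (a list of (x, y)
# pairs) and returns a function h: X -> Y, i.e., an element of F.
def one_nearest_neighbor(samples):
    """A: Z^n -> F, illustrated by the 1-nearest-neighbor rule."""
    def h(x):
        # Predict the y-value of the training point whose x is closest.
        _, nearest_y = min(samples, key=lambda s: abs(s[0] - x))
        return nearest_y
    return h

data = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.2)]  # three samples from Z = X x Y
h = one_nearest_neighbor(data)               # h is a (measurable) function
print(h(1.4))                                # prints 0.9
</source>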

=== Loss function ===

Suppose the learning algorithm outputs <math>h</math>. The learning error can be measured by

<math>\int (y-h(x))^2 \, dP.</math>
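
Since <math>P</math> is unknown in practice, this integral is usually estimated on a fresh sample; a standard empirical approximation (added here for illustration, not part of the original notes) is

<math>\frac{1}{m} \sum_{i=1}^{m} (y_i - h(x_i))^2,</math>

where <math>(x_1,y_1),\ldots,(x_m,y_m)</math> are drawn i.i.d. from <math>P</math>; by the law of large numbers this converges to <math>\int (y-h(x))^2 \, dP</math>.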

One can prove that minimizing this quantity can be reduced to the problem of minimizing the following quantity:

<math>\|f_P - h\|^2_{L_2(P_X)} = \int (f_P(x)-h(x))^2 \, P_X(x) \, dx,</math>

where <math>P_X</math> is the marginal distribution of <math>x</math>.
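
The notes state only the result; the reduction follows from a standard decomposition of the squared loss. Writing <math>y - h(x) = (y - f_P(x)) + (f_P(x) - h(x))</math>, the cross term integrates to zero because <math>\mathbb{E}_P[y - f_P(x) \mid x] = 0</math>, so

<math>\int (y-h(x))^2 \, dP = \int (y-f_P(x))^2 \, dP + \|f_P - h\|^2_{L_2(P_X)}.</math>

The first term does not depend on <math>h</math>, hence minimizing the expected loss over <math>h</math> is equivalent to minimizing <math>\|f_P - h\|^2_{L_2(P_X)}</math>.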

And that is the reason why we try to learn <math>f_P = \mathbb{E}_P[y|x]</math>.

=== Ordinary Least Square ===

=== Tikhonov Regularization ===

== Week 2 ==