The Eckart-Young theorem states that the best rank-$n$ approximation to a matrix $A$, in the sense of minimizing the Frobenius norm of the error matrix, is obtained by keeping the $n$ terms of the Singular Value Decomposition with the largest singular values. This can be used for very basic factor analysis.
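A minimal numerical sketch of the theorem, using numpy (the matrix and target rank are arbitrary illustrative choices): the truncated SVD gives the rank-$n$ approximation, and the Frobenius norm of the error equals the root of the sum of squares of the discarded singular values.

```python
import numpy as np

# Illustrative random matrix and target rank (arbitrary choices).
rng = np.random.default_rng(0)
A = rng.standard_normal((6, 5))
n = 2

# Truncated SVD: keep the n terms with the largest singular values.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_approx = U[:, :n] @ np.diag(s[:n]) @ Vt[:n, :]

# Eckart-Young: the Frobenius-norm error of the best rank-n approximation
# is sqrt(sum of the squared discarded singular values).
err = np.linalg.norm(A - A_approx, "fro")
expected = np.sqrt(np.sum(s[n:] ** 2))
assert np.isclose(err, expected)
```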
Todo: explanation.
There is an interesting illustration of low-rank approximation of a matrix (in this case an image) on Staffan Liljegren's Azimuth page.
In the extreme case, given a matrix $A$ where $A_{i j}=f(x_i,y_j)$ for some “tabulated function defined by data”, the Eckart-Young theorem for $n=1$ gives the lowest-squared-error approximate decomposition $A^{approx}=\sigma \mathbf{u} \mathbf{v}^T$ (where $\|\mathbf{u}\|_2=\|\mathbf{v}\|_2=1$). This corresponds to an approximate decomposition $A_{i j}^{approx}=\sigma f_x(x_i) f_y(y_j)$ for some “tabulated functions” $f_x$ and $f_y$. This can be analysed to judge whether a separable decomposition of $f$ is plausible (given that most functions $f$ derived from data are contaminated by noise), and used when attempting to fit actual functions to the tables of $f_x$ and $f_y$ values. However, this approach only handles separable cases in two original variables.
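A sketch of this separability test, assuming an exactly separable example function $f(x,y)=\sin(x)\,e^{y}$ tabulated on an arbitrary grid: the leading SVD term recovers the rank-1 decomposition, and the near-vanishing of the remaining singular values signals that a separable decomposition is plausible.

```python
import numpy as np

# Tabulate a separable f(x, y) = sin(x) * exp(y) on a grid
# (hypothetical example data, chosen so the true rank is 1).
x = np.linspace(0.1, 1.0, 8)
y = np.linspace(0.0, 2.0, 6)
A = np.sin(x)[:, None] * np.exp(y)[None, :]

# Leading SVD term: sigma * u v^T, the best rank-1 approximation.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
sigma, u, v = s[0], U[:, 0], Vt[0, :]
A1 = sigma * np.outer(u, v)

# sigma * u and v are the tabulated f_x and f_y (up to scaling and sign).
# For an exactly separable f the remaining singular values vanish (up to
# rounding), so the ratio s[1] / s[0] is a rough separability diagnostic;
# noisy data would instead leave a small but nonzero tail.
assert np.allclose(A1, A)
assert s[1] / s[0] < 1e-12
```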
“Low-rank approximation” section of the page Singular value decomposition, Wikipedia.
Frobenius norm, Wikipedia.