Bilinear regression

In statistics and machine learning the individual samples often come in the form of 2-D arrays, e.g. a set of population counts of different species (one axis) at different points in time (second axis). Standard regression collapses these arrays into vectors and thus loses the array structure in the regression process. **Bilinear regression** attempts to exploit that structure by treating the samples as matrices.

The bilinear predictor function takes the form

(1)$f(X) = \mathrm{tr}(U^T X V) + b = \sum_{i=1:m} u^T_i X v_i + b$

where $X$ is a $p \times q$ sample matrix, $u_i$ and $v_i$ are the columns of the coefficient matrices $U$ ($p \times m$) and $V$ ($q \times m$), and $b$ is a constant offset.

Note that, as is particularly apparent in the $(u_i,v_i)$ form, there is a freedom to move a multiplicative factor between each column $u_i$ and the matching column $v_i$.
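As a concrete check of the two forms and of the scaling freedom, here is a minimal NumPy sketch; the dimensions $p, q, m$ and all numerical values are arbitrary choices for the illustration:

```python
import numpy as np

def bilinear_predict(X, U, V, b):
    """Bilinear predictor f(X) = tr(U^T X V) + b."""
    return np.trace(U.T @ X @ V) + b

rng = np.random.default_rng(0)
p, q, m = 4, 5, 2                          # arbitrary example dimensions
X = rng.normal(size=(p, q))
U = rng.normal(size=(p, m))                # columns u_i
V = rng.normal(size=(q, m))                # columns v_i
b = 0.5

# The trace form agrees with the column-wise sum form.
sum_form = sum(U[:, i] @ X @ V[:, i] for i in range(m)) + b
assert np.isclose(bilinear_predict(X, U, V, b), sum_form)

# Scaling freedom: multiplying each u_i by c and each v_i by 1/c leaves f unchanged.
c = 3.0
assert np.isclose(bilinear_predict(X, U * c, V / c, b),
                  bilinear_predict(X, U, V, b))
```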

When performing regularised fitting the score function is

(2)$E = \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right)^2 + \lambda \sum_{i=1:m} \left( R(u_i) + R(v_i) \right)$

where $\lambda$ is the regularization strength, $R(\cdot)$ is the regularization function, and $(X_j, y_j)$ for $j=1:n$ are the training samples.
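A direct NumPy transcription of (2) might look as follows; the squared norm $R(w) = \|w\|^2$ is used purely as an example regularizer:

```python
import numpy as np

def score(Xs, ys, U, V, b, lam):
    """Regularized score E of equation (2).

    Xs: (n, p, q) array of sample matrices X_j.
    ys: (n,) array of targets y_j.
    Uses R(w) = ||w||^2 as an example regularizer.
    """
    preds = np.einsum('pi,npq,qi->n', U, Xs, V) + b   # f(X_j) for every j
    reg = np.sum(U ** 2) + np.sum(V ** 2)             # sum_i R(u_i) + R(v_i)
    return np.sum((preds - ys) ** 2) + lam * reg
```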

The derivatives are

(3)$\frac{\partial E}{\partial v_I}=2 \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right) (u^T_I X_j)^T + \lambda \frac{\partial R(v_I)}{\partial v_I}$

(4)$=2 \left( u^T_I \left( \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right) X_j \right)\right)^T + \lambda \frac{\partial R(v_I)}{\partial v_I}$

(5)$=2 (u^T_I A)^T + \lambda \frac{\partial R(v_I)}{\partial v_I}$

where $A = \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right) X_j$ is the residual-weighted sum of the sample matrices.

and

(6)$\frac{\partial E}{\partial u_I}=2 \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right) (X_j v_I) + \lambda \frac{\partial R(u_I)}{\partial u_I}$

(7)$=2 \left( \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right) X_j\right) v_I + \lambda \frac{\partial R(u_I)}{\partial u_I}$

(8)$=2 A v_I + \lambda \frac{\partial R(u_I)}{\partial u_I}$
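For the same example regularizer as above (so $\partial R(w)/\partial w = 2 w$), equations (5) and (8) translate to a few lines of NumPy; a finite-difference spot check confirms the algebra (all test values are arbitrary):

```python
import numpy as np

def gradients(Xs, ys, U, V, b, lam):
    """Gradients of E w.r.t. U and V per equations (8) and (5)."""
    residuals = np.einsum('pi,npq,qi->n', U, Xs, V) + b - ys
    A = np.einsum('n,npq->pq', residuals, Xs)   # A = sum_j (f(X_j) - y_j) X_j
    grad_U = 2 * A @ V + lam * 2 * U            # column I: 2 A v_I + lam * 2 u_I
    grad_V = 2 * A.T @ U + lam * 2 * V          # column I: 2 (u_I^T A)^T + lam * 2 v_I
    return grad_U, grad_V

# Finite-difference spot check on one entry of U.
rng = np.random.default_rng(1)
n, p, q, m = 6, 4, 5, 2
Xs, ys = rng.normal(size=(n, p, q)), rng.normal(size=n)
U, V = rng.normal(size=(p, m)), rng.normal(size=(q, m))
b, lam, eps = 0.1, 0.3, 1e-6

def E(U):
    r = np.einsum('pi,npq,qi->n', U, Xs, V) + b - ys
    return np.sum(r ** 2) + lam * (np.sum(U ** 2) + np.sum(V ** 2))

dU = np.zeros_like(U)
dU[0, 0] = eps
numeric = (E(U + dU) - E(U - dU)) / (2 * eps)
assert np.isclose(gradients(Xs, ys, U, V, b, lam)[0][0, 0], numeric)
```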

Finally, for the constant term $b$, the derivative is

(9)$\frac{\partial E}{\partial b}=2 \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i + b - y_j\right)$

so that, given the other variables, the optimal $b$ is

(10)$b = -\frac{1}{n} \sum_{j=1:n} \left(\sum_{i=1:m} u^T_i X_j v_i - y_j\right)$
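In code, the update (10) is a one-liner (reusing the `np.einsum` pattern from the sketches above):

```python
import numpy as np

def optimal_b(Xs, ys, U, V):
    """Closed-form intercept of equation (10), given U and V."""
    preds_no_b = np.einsum('pi,npq,qi->n', U, Xs, V)   # sum_i u_i^T X_j v_i
    return -np.mean(preds_no_b - ys)
```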

Another possibility for fitting the regression coefficients is coordinate descent: for fixed $V$ (and $b$) the predictor is linear in $U$, and vice versa, so one can alternate between the two subproblems.
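A minimal sketch of such a scheme, reusing the illustrative `gradients` and `optimal_b` helpers from above: gradient steps on $U$ and $V$ in alternation, with $b$ set exactly by (10). The step size and iteration count are arbitrary choices:

```python
import numpy as np

def fit(Xs, ys, m, lam=0.1, lr=1e-3, iters=500, seed=0):
    """Alternating descent: step on U with V fixed, then on V with U fixed."""
    rng = np.random.default_rng(seed)
    n, p, q = Xs.shape
    U = 0.1 * rng.normal(size=(p, m))
    V = 0.1 * rng.normal(size=(q, m))
    b = 0.0
    for _ in range(iters):
        b = optimal_b(Xs, ys, U, V)                  # exact update, eq. (10)
        grad_U, _ = gradients(Xs, ys, U, V, b, lam)
        U = U - lr * grad_U                          # step on U, V fixed
        _, grad_V = gradients(Xs, ys, U, V, b, lam)
        V = V - lr * grad_V                          # step on V, with the updated U
    return U, V, b
```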


category: mathematical methods