Saturday, March 24, 2012

Linear regression with Numpy

A few posts ago, we saw how to use the function numpy.linalg.lstsq(...) to solve an over-determined system. This time, we'll use it to estimate the parameters of a regression line.
A linear regression line is of the form w1*x + w2 = y and it is the line that minimizes the sum of the squared vertical distances (the residuals) between the data points and the line. So, given n pairs of data (xi, yi), the parameters that we are looking for are w1 and w2, which minimize the error
E(w1, w2) = sum_{i=1..n} (w1*xi + w2 - yi)^2
and we can compute the parameter vector w = (w1, w2)^T as the least-squares solution of the following over-determined system
A*w = y, where A is the n-by-2 matrix whose i-th row is (xi, 1) and y = (y1, ..., yn)^T.
Let's use numpy to compute the regression line:
from numpy import arange,array,ones,linalg
from pylab import plot,show

xi = arange(0,9)
A = array([ xi, ones(9)])
# linearly generated sequence
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]
w = linalg.lstsq(A.T,y)[0] # obtaining the parameters

# plotting the line
line = w[0]*xi+w[1] # regression line
plot(xi,line,'r-',xi,y,'o')
show()
We can see the result in the plot below.
[Figure: the data points and the fitted regression line]
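To connect the fitted parameters back to the error defined above, here is a minimal, self-contained sketch (not part of the original post) that recomputes the fit and prints the residual sum of squares that the least-squares solution minimizes:
from numpy import arange, array, ones, linalg

# same data as in the script above
xi = arange(0,9)
y = array([19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24])

A = array([xi, ones(9)])
w = linalg.lstsq(A.T, y)[0]        # w[0] is the slope, w[1] the intercept

residuals = w[0]*xi + w[1] - y     # vertical distances from the line
print 'residual sum of squares', (residuals**2).sum()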
You can find more about data fitting with numpy in other posts on this blog.

Update: the same result can be achieved using the function scipy.stats.linregress (thanks ianalis!):
from numpy import arange,array,ones
from pylab import plot,show
from scipy import stats

xi = arange(0,9)
A = array([ xi, ones(9)])
# linearly generated sequence
y = [19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24]

slope, intercept, r_value, p_value, std_err = stats.linregress(xi,y)

print 'r value', r_value
print 'p value', p_value
print 'standard error', std_err

line = slope*xi+intercept
plot(xi,line,'r-',xi,y,'o')
show()
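The slope and intercept returned by linregress can be used directly to predict new values; a minimal self-contained sketch (the point x=4.5 is just an arbitrary example, not part of the data):
from numpy import arange, array
from scipy import stats

xi = arange(0,9)
y = array([19, 20, 20.5, 21.5, 22, 23, 23, 25.5, 24])
slope, intercept = stats.linregress(xi, y)[:2]

x_new = 4.5                                  # arbitrary example point
print 'predicted value at x=4.5:', slope*x_new + intercept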

Friday, March 16, 2012

SVD decomposition with numpy

The SVD decomposition is a factorization of a matrix, with many useful applications in signal processing and statistics. In this post we will see
  • how to compute the SVD decomposition of a matrix A using numpy,
  • how to compute the inverse of A using the matrices computed by the decomposition,
  • and how to solve a linear equation system Ax=b using the SVD.
The SVD decomposition of a matrix A is of the form
A = U*Sigma*V^T, where Sigma is a diagonal matrix containing the singular values of A.
Since U and V are orthogonal (this means that U^T*U=I and V^T*V=I) we can write the inverse of A as (see Solving overdetermined systems with the QR decomposition for the tricks)
A^-1 = V*Sigma^-1*U^T
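In short (a quick derivation, not spelled out in the original post): the inverse of a product reverses the order of the factors, and for orthogonal matrices the inverse is the transpose, so

A^-1 = (U*Sigma*V^T)^-1 = (V^T)^-1 * Sigma^-1 * U^-1 = V*Sigma^-1*U^T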
So, let's start computing the factorization and the inverse
from numpy import *

A = floor(random.rand(4,4)*20-10) # generating a random
b = floor(random.rand(4,1)*20-10) # system Ax=b

U,s,V = linalg.svd(A) # SVD decomposition of A

# computing the inverse using pinv
pinv = linalg.pinv(A)
# computing the inverse using the SVD decomposition
pinv_svd = dot(dot(V.T,linalg.inv(diag(s))),U.T)

print "Inverse computed by lingal.pinv()\n",pinv
print "Inverse computed using SVD\n",pinv_svd
As we can see, the output shows that pinv and pinv_svd are equal:
Inverse computed by linalg.pinv()
[[ 0.06578301 -0.04663721  0.0436917   0.089838  ]
 [ 0.15243004  0.044919   -0.03681885  0.00294551]
 [ 0.18213058 -0.01718213  0.06872852 -0.07216495]
 [ 0.03976436  0.09867452  0.03387334 -0.04270987]]
Inverse computed using SVD
[[ 0.06578301 -0.04663721  0.0436917   0.089838  ]
 [ 0.15243004  0.044919   -0.03681885  0.00294551]
 [ 0.18213058 -0.01718213  0.06872852 -0.07216495]
 [ 0.03976436  0.09867452  0.03387334 -0.04270987]]
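As a quick sanity check (not part of the original script), one could verify that the U and V returned by linalg.svd are orthogonal and that the computed inverse really inverts A; a minimal self-contained sketch, assuming the random matrix is non-singular (which is almost always the case here):
from numpy import dot, diag, eye, allclose, floor, random, linalg

A = floor(random.rand(4,4)*20-10)           # same kind of random matrix as above
U, s, V = linalg.svd(A)

print allclose(dot(U.T, U), eye(4))         # U^T*U = I
print allclose(dot(V.T, V), eye(4))         # V^T*V = I

pinv_svd = dot(dot(V.T, linalg.inv(diag(s))), U.T)
print allclose(dot(pinv_svd, A), eye(4))    # the inverse times A gives the identity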
Now, we can solve Ax=b using the inverse:
x = A^-1*b = V*Sigma^-1*U^T*b
or solving the system
U*Sigma*V^T*x = b
Multiplying by U^T we obtain
Sigma*V^T*x = U^T*b
Then, letting c = U^T*b and w = V^T*x, we see
Sigma*w = c
Since Sigma is diagonal, we can easily obtain w by solving the system above. And finally, we can obtain x by solving
V^T*x = w, that is x = V*w
Let's compare the results of those methods:

x = linalg.solve(A,b) # solve Ax=b using linalg.solve

xPinv = dot(pinv_svd,b) # solving Ax=b computing x = A^-1*b

# solving Ax=b using the equation above
c = dot(U.T,b) # c = U^T*b
w = linalg.solve(diag(s),c) # solving Sigma*w = c
xSVD = dot(V.T,w) # x = V*w

print "Ax=b solutions compared"
print x.T
print xSVD.T
print xPinv.T
As expected, we have the same solutions:
Ax=b solutions compared
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]
[[ 0.13549337 -0.37260677  1.62886598 -0.09720177]]
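A final check (not in the original script) is that the solution obtained through the SVD really satisfies the system; a minimal self-contained sketch, again assuming the random matrix is non-singular:
from numpy import dot, diag, floor, random, linalg, allclose

# a random square system like the one above
A = floor(random.rand(4,4)*20-10)
b = floor(random.rand(4,1)*20-10)

U, s, V = linalg.svd(A)
c = dot(U.T, b)                      # c = U^T*b
w = linalg.solve(diag(s), c)         # solving Sigma*w = c
xSVD = dot(V.T, w)                   # x = V*w

print allclose(dot(A, xSVD), b)      # True: the solution satisfies Ax=b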

Saturday, March 10, 2012

Solving overdetermined systems with the QR decomposition

A system of linear equations is considered overdetermined if there are more equations than unknowns. In practice, we have a system Ax=b where A is an m by n matrix and b is an m dimensional vector, with m greater than n. In this case, the vector b cannot, in general, be expressed as a linear combination of the columns of A. Hence, we can't find an x that satisfies Ax=b (except in specific cases), but it is possible to determine x so that Ax is as close to b as possible. So we wish to find the x which minimizes the following error
E(x) = ||A*x - b||^2
Considering the QR decomposition of A we have that Ax=b becomes
Q*R*x = b
multiplying by Q^T we obtain
Q^T*Q*R*x = Q^T*b
and since Q has orthonormal columns (this means that Q^T*Q=I) we have
R*x = Q^T*b
Now, this is a well-defined system: R is an upper triangular matrix and Q^T*b is a vector. More precisely, A*x = Q*Q^T*b is the orthogonal projection of b onto the range of A. And,
x = R^-1*Q^T*b
The function linalg.lstsq() provided by numpy returns the least-squares solution of a linear system of equations and is able to solve overdetermined systems. Let's compare the solutions of linalg.lstsq() with the ones computed using the QR decomposition:
from numpy import *

# generating a random overdetermined system
A = random.rand(5,3)
b = random.rand(5,1) 

x_lstsq = linalg.lstsq(A,b)[0] # computing the numpy solution

Q,R = linalg.qr(A) # qr decomposition of A
Qb = dot(Q.T,b) # computing Q^T*b (project b onto the range of A)
x_qr = linalg.solve(R,Qb) # solving R*x = Q^T*b

# comparing the solutions
print 'qr solution'
print x_qr
print 'lstsq solution'
print x_lstsq
This is the output of the script above:
qr solution
[[ 0.08704059]
 [-0.10106932]
 [ 0.56961487]]
lstsq solution
[[ 0.08704059]
 [-0.10106932]
 [ 0.56961487]]
As we can see, the solutions are the same.
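Since R is upper triangular, the last step can also be done with plain back substitution rather than a general solver; a minimal self-contained sketch using scipy.linalg.solve_triangular (scipy is an extra assumption here, it is not used in the original script):
from numpy import dot, random, linalg
from scipy.linalg import solve_triangular

# a random overdetermined system like the one above
A = random.rand(5,3)
b = random.rand(5,1)

Q, R = linalg.qr(A)                        # reduced QR: Q is 5x3, R is 3x3
x_qr = solve_triangular(R, dot(Q.T, b))    # back substitution on R*x = Q^T*b

print 'qr solution (back substitution)'
print x_qr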