pwlf update: fitting continuous piecewise linear models to data
April 8, 2018
pwlf improvements in 0.2.0 release
- much faster at finding optimum break point locations due to a new derivation of the regression problem
- pwlf now uses pure numpy instead of Python loops to solve the continuous piecewise linear fit
- new mathematical derivation (defined below)
- pep8 style naming for the class, variables, and functions (see the short usage sketch after this list)
- old naming convention still usable, but will be deprecated at some point
- available soon on GitHub and PyPI
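As a quick illustration of the new naming, here is a minimal usage sketch. I am assuming the pep8-style names PiecewiseLinFit, fit, and predict; check the pwlf README for the exact 0.2.0 API.

```python
import numpy as np
import pwlf

# synthetic data with an obvious kink at x = 0.5
x = np.linspace(0.0, 1.0, 101)
y = np.where(x < 0.5, x, 1.0 - x) + np.random.normal(0.0, 0.01, x.size)

# pep8-style class name (assumed; the old naming convention remains usable for now)
my_pwlf = pwlf.PiecewiseLinFit(x, y)

# fit two line segments; returns the optimal break point locations
breaks = my_pwlf.fit(2)

# predict on a fine grid
x_hat = np.linspace(0.0, 1.0, 1000)
y_hat = my_pwlf.predict(x_hat)
```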
New derivation
There was nothing wrong with Golovchenko (2004), which was the basis for the first release of pwlf. I’ve always known that the for loops used to assemble the regression matrix were slow, and this was killing the performance for large problems. After reading this, I felt a bit inspired to do something about it. The pwlf library has been modified to solve the linear regression matrix directly, where previously a square matrix was assembled (as opposed to the regression matrix). The result is that more of the work is done in numpy (which is fast) instead of Python.
Let’s assume we have a one-dimensional data set. In this case we will assume \(\mathbf{x} \) is the independent variable and \(\mathbf{y} \) is dependent on \(\mathbf{x} \), such that \(\mathbf{y}(\mathbf{x}) \). Our data is paired as \( (x_1, y_1), (x_2, y_2), \cdots, (x_n, y_n) \),
where \((x_1, y_1) \) represents the first data point, and the data points have been ordered according to \( x_1 < x_2 < x_3 < \cdots < x_n \) for \(n \) number of data points. A piecewise linear function can then be constructed to fit the data by placing a separate line segment on each interval between neighboring break points,
where \(b_1 \) is the \(x \) location of the first break point, \(b_2 \) is the \(x \) location of the second break point, and so forth until the last break point \(b_{n_b} \) for \(n_b \) number of break points. The break points are also ordered as \(b_1 < b_2 < \cdots < b_{n_b} \). Additionally the first break point is always \(b_1 = x_1 \), and the last break point is always \(b_{n_b} = x_n \). This initialization of the data seems tedious at first, but some magic will happen later on if you arrange the data this way.
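As a concrete illustration, this bookkeeping amounts to a few lines of numpy. The example values and the single interior break point below are hypothetical and only serve to show the ordering.

```python
import numpy as np

# toy, unsorted data (hypothetical values)
x = np.array([0.7, 0.1, 0.4, 1.0, 0.0])
y = np.array([1.4, 0.2, 0.8, 1.1, 0.0])

# order the data so that x_1 < x_2 < ... < x_n
order = np.argsort(x)
x, y = x[order], y[order]

# pick interior break points however you like; the first and last
# break points are always pinned to the ends of the data
b_interior = np.array([0.45])
b = np.concatenate(([x[0]], b_interior, [x[-1]]))
```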
Now if we enforce that the piecewise linear functions be continuous on the domain, we end up with a single continuous piecewise linear function whose unknown parameters enter linearly. Evaluating this function at every data point can then be written in matrix form, where the columns of the regression matrix are switched on and off by indicator terms. Here \(1_{x_n > b_1} \) represents the piecewise function of form

$$ 1_{x_n > b_1} = \begin{cases} 0 & x_n \le b_1 \\ 1 & x_n > b_1 \end{cases} $$

and \(1_{x_n > b_3} \) represents

$$ 1_{x_n > b_3} = \begin{cases} 0 & x_n \le b_3 \\ 1 & x_n > b_3 \end{cases} $$
and so forth. This is where the magic happens! If you’ve ordered your data, the result is a regression matrix that’s already in a lower triangular style. Since \(\mathbf{x} \) was initially sorted from min to max, the matrix can be assembled quickly by replacing only the non-zero values. This won’t be a big deal if you already know the break point locations \(\mathbf{b} \). However, if you are running an optimization to find the ideal break point locations (as in pwlf), you may need to assemble the regression matrix thousands of times.
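Here is one way that fast assembly could look in numpy. This is a sketch under assumptions rather than pwlf’s actual implementation: the helper name assemble_regression_matrix is made up, and I assume a hinge-function column layout (a column of ones, a column \(x - b_1 \), and one indicator-gated column per interior break point), which is one common way to realize the derivation above.

```python
import numpy as np

def assemble_regression_matrix(x, b):
    """Assemble a regression matrix A for sorted x and break points b.

    Assumed columns: ones, (x - b_1), then (x - b_k) * 1_{x > b_k} for
    each interior break point b_k.  Because x is sorted, each indicator
    switches on at a single index, so only a lower-triangular-style block
    of each column needs to be filled in.
    """
    n, nb = x.size, b.size
    A = np.zeros((n, nb))
    A[:, 0] = 1.0
    A[:, 1] = x - b[0]  # first segment term is always active
    for k in range(1, nb - 1):
        start = np.searchsorted(x, b[k], side='right')  # first index with x > b_k
        A[start:, k + 1] = x[start:] - b[k]
    return A
```

The only Python loop runs over the break points (typically a handful), not over the \(n \) data points, which is what keeps repeated assembly cheap inside an optimization loop.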
We can express the matrix expression as the linear equation

$$ \mathbf{A} \mathbf{\beta} = \mathbf{y} $$
where \(\mathbf{A} \) is our regression matrix, \(\mathbf{\beta} \) is our unknown set of parameters, and \(\mathbf{y} \) is our vector of \(y \) values. We can use a least squares solver to solve for \(\mathbf{\beta} \). In Python, I like to use the numpy lstsq solver, which uses LAPACK to solve the least squares problem.
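For completeness, here is a hedged sketch of that solve. It reuses the hypothetical assemble_regression_matrix helper and the x, y, and b arrays from the sketches above, so it is illustrative rather than pwlf’s actual code.

```python
import numpy as np

# regression matrix from the assembly sketch above (hypothetical helper)
A = assemble_regression_matrix(x, b)

# least squares solve of A @ beta ~= y; numpy's lstsq calls LAPACK
beta, ssr, rank, sv = np.linalg.lstsq(A, y, rcond=None)
```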
Once you have found your set of optimal parameters \(\mathbf{\beta} \), you can predict at new \(x \) values by assembling the regression matrix \(\mathbf{A} \) for the new \(x \) values. The new \(y \) values are then obtained by multiplying the new regression matrix with the determined set of parameters.
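Continuing the same sketch, prediction just reuses the assembly routine on the new \(x \) locations (a linspace is already sorted, which the hypothetical helper assumes) and multiplies by the fitted parameters.

```python
import numpy as np

# new, sorted x locations to evaluate the model at
x_new = np.linspace(x.min(), x.max(), 200)

# regression matrix at the new locations (hypothetical helper from above)
A_new = assemble_regression_matrix(x_new, b)

# predicted y values: matrix product with the fitted parameters
y_new = A_new @ beta
```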