Haskell isn’t normally used for things like this, but it’s quite possible to solve systems of linear equations with Haskell.
There are already several libraries for doing this, and for other, more advanced matrix manipulation, but here I’m going to start simple.
In mathematics, systems of linear equations are usually represented by an augmented matrix. A system of n linear equations in n unknowns would be represented by an augmented matrix with n rows and n+1 columns.
For example, we have this system of equations:

x + 2y - z = -4
2x + 3y - z = -11
-2x - 3z = 22

This would be represented as an augmented matrix:

[  1  2 -1 |  -4 ]
[  2  3 -1 | -11 ]
[ -2  0 -3 |  22 ]
In Haskell we represent this as a list of lists, like this:
[ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
Here I’ll store each entry not as an integer, but as a floating point number. You could also use Rational from Data.Ratio; both are fine for now. The advantage of using Rational over Float is that sometimes you will end up with fractions that don’t work very well with floating point numbers. However, I’ve found that printing a list of lists of Rational values is difficult to read, unless you implement a custom show function for it.
So this is how we define our matrix types in Haskell:
type Row = [Float]
type Matrix = [Row]
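As an aside, if you wanted to experiment with Rational entries instead, a minimal sketch might look like this (the RRow/RMatrix aliases and the pretty-printer are illustrative only; the rest of this post sticks with Float):

import Data.Ratio (numerator, denominator)

--Hypothetical Rational-based aliases.
type RRow = [Rational]
type RMatrix = [RRow]

--A simple custom show function, since the default output for nested
--Rationals (e.g. "(-11) % 3") is hard to read.
showRRow :: RRow -> String
showRRow = unwords . map pretty
  where pretty x
          | denominator x == 1 = show (numerator x)
          | otherwise          = show (numerator x) ++ "/" ++ show (denominator x)

showRMatrix :: RMatrix -> String
showRMatrix = unlines . map showRRow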
The approach to solving this problem is rather simple. First we reduce whatever matrix we have to REF (Row Echelon Form), and then get the actual roots with some back substitution.
The algorithm used to transform a matrix to its Row Echelon Form is known as Gaussian Elimination. Here’s what a matrix should look like after Gaussian Elimination (a represents any value):

[ 1  a  a | a ]
[ 0  1  a | a ]
[ 0  0  1 | a ]

Our matrix should look like this after Gaussian Elimination:

[ 1  2 -1 | -4 ]
[ 0  1 -1 |  3 ]
[ 0  0  1 | -2 ]
The REF form is not unique, so that is only one of the possible valid outputs of Gaussian Elimination.
Why do we want to have the matrix in REF form? A matrix in this form can easily be solved using back substitution. Consider this matrix as a system of linear equations, as we did before:

x + 2y - z = -4
    y - z =  3
        z = -2
Now it would be very clear how to solve for the three variables.
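For instance, reading from the bottom up: the last equation gives z = -2; substituting that into the second gives y = 3 + z = 1; and substituting both into the first gives x = -4 - 2y + z = -8.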
The Gaussian Elimination Algorithm
Here is how Gaussian Elimination proceeds. On each iteration, the diagonal element at position (r, r) is considered the pivot element, while the elements directly below it in the same column are the ones we intend to remove (zero) in that iteration.
Removing those elements below the pivot is actually quite simple. Consider how you would eliminate x from the first two equations of our system:

x + 2y - z = -4
2x + 3y - z = -11

Probably you would multiply the first equation by 2, giving 2x + 4y - 2z = -8, then subtract the second equation from it, giving y - z = 3, eliminating x. We can also write that as 2R1 - R2. Basically, to eliminate a variable, just multiply a row so it matches up, and subtract. This is middle school algebra.
To make things easier for us, we divide the row we are on so that the pivot is always 1. We do this now because we need them to be 1 anyways, and this avoids an unnecessary division in the next step.
We could, of course, not have the pivot always be 1, but we would have to do the divisions later when substituting to get the solutions. More on this later.
So to eliminate the element under the pivot, we multiply the pivot row (which now has a leading 1) by that element, then subtract the row below from it; the result replaces that lower row. We simply repeat this for every row below the pivot.
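To make the row operation concrete, here is a minimal standalone sketch of a single step, using the Row type from earlier. The helper names normalize and eliminate are illustrative only; the full code later expresses the same logic inline.

--Scale a row so that its entry in pivot column i becomes 1.
normalize :: Int -> Row -> Row
normalize i row = map (/ row !! i) row

--Combine a normalized pivot row with a lower row so that the lower
--row's entry in column i becomes zero: scale the pivot row to match,
--then subtract the lower row from it.
eliminate :: Int -> Row -> Row -> Row
eliminate i pivotRow lower = zipWith (\a b -> k * a - b) pivotRow lower
  where k = lower !! i

For example, eliminate 0 (normalize 0 [1,2,-1,-4]) [2,3,-1,-11] gives [0.0,1.0,-1.0,3.0], which is the second row of the REF matrix above.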
An edge case
This is where it gets a little bit tricky. What if the pivot is 0?
We have no way of making it 1 by any kind of multiplication. Further, we cannot eliminate any elements below the pivot. What do we do now?
Simple. We swap the current row with any other row so that the pivot is not zero. Any row will do, so we’ll just pick the first one that fits.
If there is not a single non-zero element below the pivot either, the matrix is either under-determined or singular; in either case there is no unique solution, and this code cannot handle it.
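As an illustration of that search (with a hypothetical helper name; the actual code below just takes the head of a filter, which throws an exception in the unsolvable case):

import Data.List (findIndex)

--Find the first row at or below row r whose entry in column r is
--non-zero. Nothing means there is no usable pivot, i.e. no unique solution.
findPivotRow :: Int -> Matrix -> Maybe Int
findPivotRow r m = fmap (+ r) (findIndex (\row -> row !! r /= 0) (drop r m))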
Here is my Haskell code on what I just covered:
gaussianReduce :: Matrix -> Matrix
gaussianReduce matrix = fixlastrow $ foldl reduceRow matrix [0..length matrix-1] where

  --swaps element at position a with element at position b.
  swap xs a b
    | a > b  = swap xs b a
    | a == b = xs
    | a < b  =
        let (p1,p2) = splitAt a xs
            (p3,p4) = splitAt (b-a-1) (tail p2)
        in p1 ++ [xs!!b] ++ p3 ++ [xs!!a] ++ (tail p4)

  reduceRow matrix1 r =
    let --first non-zero element on or below (r,r).
        firstnonzero = head $ filter (\x -> matrix1 !! x !! r /= 0) [r..length matrix1-1]
        --matrix with row swapped (if needed)
        matrix2 = swap matrix1 r firstnonzero
        --row we're working with
        row = matrix2 !! r
        --make it have 1 as the leading coefficient
        row1 = map (\x -> x / (row !! r)) row
        --subtract nr from row1 while multiplying
        subrow nr = let k = nr!!r in zipWith (\a b -> k*a - b) row1 nr
        --apply subrow to all rows below
        nextrows = map subrow $ drop (r+1) matrix2
    --concat the lists and repeat
    in take r matrix2 ++ [row1] ++ nextrows

  fixlastrow matrix' =
    let a = init matrix'; row = last matrix'; z = last row; nz = last (init row)
    in a ++ [init (init row) ++ [1, z / nz]]
Edit: There was a bug in the above code, found by Alan Zimmerman. I think it’s been fixed.
This may be a bit difficult to read because there is no syntax highlighting and the code is cut off. I’ll provide a link to the full source code at the end.
Admittedly, Haskell may not have been the best language to implement this algorithm in this particular way, because there is so much state changing.
Notice that at the end, the last row does not get divided. The fixlastrow function corrects this problem.
Let’s test this code:
*Main> gaussianReduce [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[[1.0,2.0,-1.0,-4.0],[0.0,1.0,-1.0,3.0],[-0.0,0.0,1.0,-2.0]]
Excellent.
Finishing up
The next step of the algorithm is to solve for the variables by back substitution. This is pretty easy, I think. My code keeps a list of already-found solutions. Folding from the right, each step substitutes the solutions found so far into the current row, and multiplies & subtracts to get the next solution, adding that to the solution list.
--Solve a matrix (must already be in REF form) by back substitution.
substitute :: Matrix -> Row
substitute matrix = foldr next [last (last matrix)] (init matrix) where

  next row found =
    let subpart  = init $ drop (length matrix - length found) row
        solution = last row - sum (zipWith (*) found subpart)
    in solution : found
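For example, running substitute directly on the reduced matrix from the previous test gives the solutions:

*Main> substitute [[1.0,2.0,-1.0,-4.0],[0.0,1.0,-1.0,3.0],[-0.0,0.0,1.0,-2.0]]
[-8.0,1.0,-2.0]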
To get a list of solutions from a matrix, we chain the substitute and gaussianReduce functions:
solve :: Matrix -> Row
solve = substitute . gaussianReduce
*Main> solve [ [1,2,-1,-4], [2,3,-1,-11], [-2,0,-3,22] ]
[-8.0,1.0,-2.0]
This means the solutions are x = -8, y = 1, and z = -2. That seems correct, so we’re done!
The code is far from practical, though. Although it works, I haven’t really tested its performance (probably not very good), and it doesn’t handle all edge cases.
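If you wanted to guard against some of those edge cases, here is one hedged sketch of a wrapper around solve (solveSafe is just an illustrative name, not part of the code above):

--A hypothetical safety wrapper around the functions above. It rejects
--matrices that are not n rows by n+1 columns, and rejects results
--containing NaN or infinity, which can appear on degenerate input.
--A column with no usable pivot can still make gaussianReduce throw,
--since it takes the head of a possibly empty list.
solveSafe :: Matrix -> Maybe Row
solveSafe m
  | null m                                  = Nothing
  | any (\r -> length r /= length m + 1) m  = Nothing
  | any (\x -> isNaN x || isInfinite x) sol = Nothing
  | otherwise                               = Just sol
  where sol = substitute (gaussianReduce m)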