Performing this convolution requires $O(n^4)$ operations for an $n$ by $n$ image. The two dimensional Gaussian blur kernel can be written as the tensor product of two one dimensional kernels; with a bit of algebra this gives:

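To make this concrete, here is a sketch of the factorization. The name @gauss1D@ for the one dimensional kernel is an assumption, it is not defined in this excerpt; @><@ denotes the tensor product:

]> gauss2D == gauss1D >< gauss1D
]> -- pointwise:  gauss2D ! (x,y) == gauss1D ! x * gauss1D ! y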
So now, to blur an image, we can perform two convolutions: first with the horizontal kernel, and then with the vertical one:

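As a sketch, with hypothetical helpers @convolveH@ and @convolveV@ that convolve every row, respectively every column, with a one dimensional kernel (again writing @gauss1D@ for that kernel):

]> blur image = convolveV gauss1D (convolveH gauss1D image)

Each pass touches the $n^2$ pixels and sums over the $n$ entries of a one dimensional kernel.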
This procedure needs only $O(n^3)$ operations.

== Back to business ==

Blurring images is not what we are trying to do. Instead of convolution with the Gaussian blur kernel, we are interested in convolution with @moveMatrix@. We could try the same trick, finding an @a@ such that @moveMatrix == a >< a@. Unfortunately, this is impossible. But we can still get close: there ''is'' a way to write @moveMatrix == square a + square b@, well, almost. Actually, what we have is:

]> 2 * moveMatrix
]       0 2 0 2 0        1  1 0  1  1        1 -1 0 -1  1
]       2 0 0 0 2        1  1 0  1  1       -1  1 0  1 -1
]  ==   0 0 0 0 0   ==   0  0 0  0  0   -    0  0 0  0  0   == square a - square b
]       2 0 0 0 2        1  1 0  1  1       -1  1 0  1 -1
]       0 2 0 2 0        1  1 0  1  1        1 -1 0 -1  1

where

> a,b :: Array Int Integer
> a = listArray (-2,2) [1, 1,0, 1,1]
> b = listArray (-2,2) [1,-1,0,-1,1]

Now we can start with @paths__conv@ from last time:

> paths__conv n ij = (moveMatrix ^ n) `safeAt` ij

Where @safeAt@ is a safe array indexing operator that returns @0@ for indices that are out of bounds:

> safeAt ar i
>   | inRange (bounds ar) i = ar ! i
>   | otherwise             = 0

Now let's do some algebraic manipulation:

]> paths__conv n ij
]>  = {- by definition of paths__conv -}
]> (moveMatrix ^ n) `safeAt` ij
]>  = {- by definition of a and b -}
]> ((square a - square b) `div` 2)^n `safeAt` ij  -- division by 2 is pseudocode
]>  = {- division does not depend on the index -}
]> (square a - square b)^n `safeAt` ij `div` 2^n

We still cannot apply the interchange law, because the exponentiation @(^n)@ is applied to the difference of two tensor products and not to a single one.
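As a quick sanity check of this identity (assuming a @square@ function with @square v == v >< v@, and @moveMatrix@ having bounds @((-2,-2),(2,2))@):

]> and [ 2 * moveMatrix ! ij == square a ! ij - square b ! ij
]>     | ij <- range ((-2,-2),(2,2)) ]
]> -- should evaluate to True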
We can, however, expand this exponentiation by the formula:

]> (a + b)^n = sum [ multinomial [n__a,n__b] * a^n__a * b^n__b
]>                 | (n__a,n__b) <- split n ]

This is just the usual binomial expansion, as in
FORMULA: knight/binomial-expansion-2
	(x+y)^2 &= x^2 + 2xy + y^2\\
	(x+y)^3 &= x^3 + 3x^2y + 3xy^2 + y^3\\
	&\text{etc.}

Applying binomial expansion to our work-in-progress gives:

]> (square a - square b)^n `safeAt` ij `div` 2^n
]>  = {- binomial expansion -}
]> sum [ multinomial [n__a,n__b] * square a^n__a * (-square b)^n__b
]>     | (n__a,n__b) <- split n ]
]>   `safeAt` ij `div` 2^n
]>  = {- (-square b)^n__b == (-1)^n__b * square b^n__b -}
]> sum [ multinomial [n__a,n__b] * (-1)^n__b
]>       * square a^n__a * square b^n__b
]>     | (n__a,n__b) <- split n ]
]>   `safeAt` ij `div` 2^n
]>  = {- interchange law -}
]> sum [ multinomial [n__a,n__b] * (-1)^n__b
]>       * square (a^n__a * b^n__b)
]>     | (n__a,n__b) <- split n ]
]>   `safeAt` ij `div` 2^n
]>  = {- move `safeAt` inwards, since addition is pointwise -}
]> sum [ multinomial [n__a,n__b] * (-1)^n__b
]>       * square (a^n__a * b^n__b) `safeAt` ij
]>     | (n__a,n__b) <- split n ]
]>   `div` 2^n

== Fast indexing ==

Since @square !!!

*: The number of elements is often called the dimension of a vector. Here we use the term dimension to refer to the number of indices used, also known as the (tensor) order. So a $100 \times 100$ pixel image has dimension $10000$ according to the first interpretation (the number of elements), but dimension two according to the second (the number of indices).