Once again, rephrasing is a friend.

You might have heard of a mathematician named Gauss. He’s known for many contributions to mathematics, but my favorite one comes with a story. As the story goes, there was a maths teacher who wanted to read the newspaper while he was supposed to be teaching. So instead of teaching, he gave his pupils the assignment of adding the numbers 1 through 100.

\[ s = 1 + 2 + 3 + \ldots + 98 + 99 + 100 = ? \] Instead of blindly calculating away, which is what most of the other students did, the young Gauss looked at the problem and wondered whether there was a more clever way of looking at it. After scribbling down some ideas, he noticed a pattern.

\[ \begin{array}{c c *{6}{c@{\hspace{6pt} + \hspace{6pt}}} c} s & = & 1 & 2 & 3 & \ldots & 98 & 99 & 100 \\ s & = & 100 & 99 & 98 & \ldots & 3 & 2 & 1 \end{array} \]

Maybe we can calculate \(s\) in another way.

\[ \begin{array}{c *{6}{c@{\hspace{6pt} + \hspace{6pt}}} c} & 1 & 2 & 3 & \ldots & 98 & 99 & 100 \\ + & 100 & 99 & 98 & \ldots & 3 & 2 & 1 \\ \hline = & 101 & 101 & 101 & \ldots & 101 & 101 & 101 \end{array} \]

It seems that if we sort the terms cleverly, the sum of these two identical series can also be expressed as a multiplication.

\[ \underbrace{ \begin{array}{c *{6}{c@{\hspace{6pt} + \hspace{6pt}}} c} & 101 & 101 & 101 & \ldots & 101 & 101 & 101 \end{array} }_{100 \times 101 = 10100} \]

In other words,

\[ s = \frac{10100}{2} = 5050 \]
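As a quick sanity check (Python here is just for illustration, not part of the original story), both routes reach the same number:

```python
# Add 1 through 100 the long way, like Gauss's classmates
direct = sum(range(1, 101))

# Gauss's shortcut: 100 pairs that each sum to 101, halved
paired = 100 * 101 // 2

print(direct, paired)  # → 5050 5050
```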

What’s very neat about this “rethinking” is that it also generalizes. If we’re interested in the sum from \(1\) to \(n\), we can repeat the same trick.

\[ \begin{array}{c *{6}{c@{\hspace{6pt} + \hspace{6pt}}} c} & 1 & 2 & 3 & \ldots & n-2 & n-1 & n \\ + & n & n-1 & n-2 & \ldots & 3 & 2 & 1 \\ \hline = & n+1 & n+1 & n+1 & \ldots & n+1 & n+1 & n+1 \end{array} \]

This gives us the general formula.

\[ 1 + \ldots + n = \sum_{x=1}^n x= \frac{n\times(n+1)}{2} \]
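A minimal sketch (in Python, with hypothetical helper names) that checks the closed form against the naive sum for a handful of values of \(n\):

```python
def naive_sum(n):
    # Add 1 + 2 + ... + n one term at a time
    return sum(range(1, n + 1))

def closed_form(n):
    # Gauss's formula; // keeps the result an exact integer
    return n * (n + 1) // 2

# The two agree for every n we try
for n in (1, 2, 10, 100, 1_000_000):
    assert naive_sum(n) == closed_form(n)
```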

The brilliance of this is in recognizing that there are multiple ways of calculating the thing we’re interested in. Instead of doing a whole lot of additions, we can apply a single multiplication and still get the same result.

This is a beautiful point, because it gives another angle on mathematics. There’s a “computer-science voice” inside of me that’s looking at this and wondering: maybe maths can be applied much more like a compiler than a calculator?

Think about it! This example shows that you might be able to rephrase the original problem into one that is much easier to calculate. We’re literally generating an alternative set of instructions. Just like the kid in the classroom, our program might also spend less time performing a single multiplication than a long sequence of additions.

It’s a compiler that is potentially “better” too.

You can actually run this as an experiment.

```python
import numba as nb


def calc_sum(n):
    # Naive approach: add the numbers one by one
    return sum(i for i in range(1, n + 1))


def calc_smart(n):
    # Gauss's formula: a single multiplication and division
    return n * (n + 1) / 2


@nb.njit
def calc_sum_compiled(n):
    # The same naive loop, JIT-compiled by numba
    res = 0
    for i in range(1, n + 1):
        res += i
    return res


@nb.njit
def calc_smart_compiled(n):
    # Gauss's formula, JIT-compiled
    return n * (n + 1) / 2
```

The results of various `%timeit` calls are listed below.

| | calc_sum | calc_smart | calc_sum_comp | calc_smart_comp |
|---|---|---|---|---|
| n=1_000 | 46.5 µs ± 1.16 µs | 143 ns ± 5.18 ns | 149 ns ± 10.7 ns | 127 ns ± 6.25 ns |
| n=10_000 | 435 µs ± 3.86 µs | 135 ns ± 0.461 ns | 140 ns ± 6.97 ns | 116 ns ± 0.257 ns |
| n=100_000 | 4.46 ms ± 142 µs | 147 ns ± 6.61 ns | 132 ns ± 3.9 ns | 117 ns ± 1.17 ns |

Even *if* we use the JIT compiler from numba, we still see a huge speedup from using maths as a compiler. The fastest results come from combining the compiler with the maths, but the example is sufficient to show that you can improve on a compiler by using maths.
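One small caveat about the formula-based functions above (my observation, not a claim from the benchmark): in Python 3, `/` is true division, so `n * (n + 1) / 2` returns a `float`. For very large `n` this silently loses precision, whereas `//` stays in exact integer arithmetic:

```python
n = 10**17

# True division routes through a 64-bit float, which cannot hold
# all the digits of n * (n + 1) for n this large
approx = n * (n + 1) / 2

# Floor division stays in Python's arbitrary-precision integers
exact = n * (n + 1) // 2

print(approx == exact)  # → False: the float dropped the low-order digits
```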