# Functions of Functions in EMT and Python

Because Euler Math Toolbox is designed as an interactive tool for mathematics, its algorithms for numerical problems are easy to use: solving an equation, integrating, minimizing, or even plotting a function. As a demonstration, and to show the adaptation to Python, we find the minimum of a one-dimensional function.

The algorithm used is the Golden Ratio subdivision of an interval. It spares one function evaluation per step, and is guaranteed to yield an inclusion of the minimum for a convex function (assuming exact computations).
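Why one evaluation is saved: the subdivision ratio used below satisfies a fixed-point property,

$$\lambda = \frac{3-\sqrt 5}{2} \approx 0.382, \qquad (1-\lambda)^2 = \lambda,$$

so one of the two interior points of the shrunken interval always coincides with an interior point of the previous step, and only one new function value is needed per step.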

Here is an example:

>function f(x) := x^3-x
>xmin = fmin("f",0)
0.577350265248


It is even easier to use an expression. Expressions are strings which are evaluated for the parameters x,y,z (by default), and thus work almost like functions.

 >fmin("x^3-x",0)
0.577350265248


The same function can be coded in Python. Here is a line-by-line translation of the EMT code.

import numpy as np

def fmin (f,a,b=None,d=0.01,dmax=1e+20,eps=1e-14):
    """ Compute the minimum of a function
    using the Golden Cut.

    a: Start
    b: End of interval or None (end is searched first)
    d: Initial step size
    dmax: Maximal step size to search for b
    eps: Desired accuracy

    Returns: Point of minimum.
    """
    if b is None:
        x0=a; y0=f(x0)
        x1=x0+d; y1=f(x1)
        if y1>y0:
            d=-d
            x1=x0+d; y1=f(x1)
        while True:
            if not y1<y0:
                break
            d=2*d
            x0=x1; x1=x0+d
            y0=y1; y1=f(x1)
            if abs(d)>dmax:
                break
        a=x0-d
        b=x1
    x0=a; x3=b
    y0=f(x0); y3=f(x3)
    l=(3-np.sqrt(5))/2
    x1=x0+l*(x3-x0); x2=x3-l*(x3-x0)
    y1=f(x1); y2=f(x2)
    while True:
        if y1>y2:
            x0=x1; x1=x2; x2=x3-l*(x3-x0)
            y0=y1; y1=y2; y2=f(x2)
        else:
            x3=x2; x2=x1; x1=x0+l*(x3-x0)
            y3=y2; y2=y1; y1=f(x1)
        if np.abs(x0-x3)<eps:
            return x0

To use this for the example above is easy.

def f(x):
    return x**3-x

fmin(f,0) # -> 0.5773502652484417

Via lambda functions, an expression works just as well.

fmin(lambda x: x**3-x,0)

The main motivation for this posting is more complicated. We want to provide additional parameters to the function.

Euler Math Toolbox has two ways of doing this. The first way is via args(). It works as follows.

>function f(x,a) := x^3-a*x
>xmin = fmin("f",0;1)
0.577350265248
>xmin = fmin("f",0;1.1)
0.60553006474


The parameters after the semicolon are passed from „fmin“ to the function „f“ by calling it as „f(x,args())“.

The second way is with call collections. Those are collections of functions and parameters. They are available for expressions too, but need a named member for the additional parameters.

>fmin({{"f",1.1}},0)
0.60553006474
>fmin({{"x^3-a*x",a=1.1}},0)
0.60553006474


To achieve the same in Python, the easiest way is to use lambda functions.

def f(x,a):
    return x**3-a*x

a = 1.1
fmin(lambda x: f(x,a),0) # -> 0.6055300647407058

If that is done inside another function, the lambda will refer to the variable „a“ that is current in that scope. You need not worry about a variable „a“ inside the function „fmin“.
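A minimal sketch of this scoping rule, with a hypothetical receiver function „apply_at“ standing in for „fmin“:

```python
def apply_at(g, x):
    a = 999  # a local "a" in the receiving function changes nothing
    return g(x)

def f(x, a):
    return x**3 - a*x

a = 1.1
print(apply_at(lambda x: f(x, a), 2))  # uses a = 1.1, prints 5.8
```

The lambda closes over the „a“ of the scope in which it was created, not over any local variable of the function that eventually calls it.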

Because Python does not separate the additional parameters with a semicolon, the „args“ method feels a bit strange. But it is possible. Have a look at this code.

def h(x,a,b):
    return a*x**2+b

def evalf (f,x,*args):
    return f(x,*args)

evalf(h,3,4,1) # -> 37

The function „evalf“ takes another function as an argument, just as „fmin“ does. We want to pass more than one argument to this function „f“. If we call „evalf(h,3,4,1)“, those extra arguments are „(4,1)“. They are passed as a tuple to the variable „args“. Writing „*args“ expands this tuple and passes „4,1“ on to „f“, i.e., to „h“.

I made a private Python module with some functions like „fmin“ where I did not use the „args“ method. The reason is that I want to be able to override named parameters with positional arguments, e.g., calling „fmin(f,a,b)“ with interval bounds, where the default for „b“ is „None“. This is not possible if „*args“ is present in the signature.
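A small sketch of the clash („evalf_args“ is a hypothetical name, not part of the module):

```python
def evalf_args(f, x, b=None, *args):
    # Python binds the third positional argument to b,
    # so it can no longer be an extra parameter for f.
    return b, args

print(evalf_args(abs, 3, 1, 2))  # -> (1, (2,))
```

The third positional argument always lands in „b“, so a caller cannot pass extra parameters for „f“ positionally without also touching „b“.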

Finally, an application of „fmin“.

We minimize a multidimensional function with gradient search. The search goes down the gradient until a minimum is found, and repeats from that point. The handling of the accuracy during the algorithm needs to be improved for maximal performance. Currently, it computes the minimum to full accuracy in each step.

The method is best used for convex functions, or at least locally convex functions. It is then guaranteed to find a minimum, provided the set of points where the function is below the current value is compact.

epsilon = 1e-14

def vmin (f,x,v,tmax=1,tmin=epsilon):
    """ Minimize f(x-t*v), assuming v is the gradient (a direction of ascent)
    """
    t=fmin(lambda t:f(x-t*v),0)
    return (t,x-t*v)

def fminmult (f,Df,x):
    res = 1e20
    while True:
        t,xnew = vmin(f,x,Df(x))
        resnew = f(xnew)
        if resnew>=res:
            break
        res = resnew; x = xnew
    return x

def f(x,a):
    return (x[0]-a)**2+2*a*x[1]**2

def Df(x,a):
    return np.array([2*(x[0]-a),4*a*x[1]])

a = 4
fminmult(lambda x:f(x,a),lambda x:Df(x,a),np.array([1,1]))

# -> array([ 4.00000000e+00, -1.65436123e-24])

# The Speed of Python

You should be aware that Python is slow compared to languages that run as native code, like Java (which is compiled at runtime). The factor is typically more than 10 and can be more than 100. So be warned!

Often, it is possible to improve the runtime by switching the algorithm. Let us study the Collatz problem as an example. There is no need to explain the problem here; you can easily find a description on Wikipedia. For a start, we want to find the longest sequence for any starting point below one million, and later below a hundred million.

The following is a recursive function to get the length of the Collatz sequence starting at n.

def get_collatz_length (n):
    if n==1:
        return 0
    elif n%2:
        return get_collatz_length(3*n+1)+1
    return get_collatz_length(n//2)+1

Computing the lengths up to N=1’000’000 takes around 20 seconds. We add some code to print the worst starting points.

import time
start = time.process_time()
N=1_000_000
lengths = [(n,get_collatz_length(n)) for n in range(1,N)]
print(f"{time.process_time()-start:5.4} seconds")

records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(0,11):
    print(f"{records[k][0]}: {records[k][1]}")
18.72 seconds
837799: 524
626331: 508
939497: 506
704623: 503
910107: 475
927003: 475
511935: 469
767903: 467
796095: 467
970599: 457
546681: 451

A comparable Java code is the following.

public class Main
{
    public static int get_collatz_length (long n)
    {
        int res = 0;
        if (n>1)
        {
            if (n%2 == 1)
                res = get_collatz_length(3*n+1)+1;
            else
                res = get_collatz_length(n/2)+1;
        }
        return res;
    }

    public static void main(String[] args)
    {
        long time = System.currentTimeMillis();
        int N=1000000;
        int L[] = new int[N];
        for (int k=0; k<N; k++)
            L[k] = get_collatz_length(k);
        int max=0,imax=0;
        for (int k=1; k<N; k++)
            if (L[k]>max)
            {
                max=L[k]; imax=k;
            }
        System.out.println("Maximal value " + max + " at "+imax);
        time = System.currentTimeMillis()-time;
        System.out.println((time/1000.0) + " seconds.");
    }
}
Maximal value 524 at 837799
0.498 seconds.

This takes about half a second. The factor is 40.

What we also see is that the Python code looks more elegant and powerful. E.g., sorting the array of lengths and printing the 10 highest is done with three lines of code. In Java, the same can only be achieved by writing a Comparator class.

Consequently, I see Python as an interactive script language to do math. If speed is needed, native code must be used, as in Numpy. Unfortunately, the user cannot easily write a function in C or Java.

Can we speed up the process? We could try to avoid the doubled computations by remembering and using the already known lengths in a dictionary. The following code does this.

collatz_lengths = {}

def get_collatz_length (n, nmax):
    res = collatz_lengths.get(n)
    if res:
        return res
    elif n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,nmax)+1
    else:
        res = get_collatz_length(n//2,nmax)+1
    if n<nmax:
        collatz_lengths[n] = res
    return res

The lengths are only stored if n is not too high. For the call, we have set this maximal value equal to N=1’000’000. This code reduces the time to about 1 second. We gained a factor of 20 just by this simple trick. Obviously, an idea might be worth more than compiled code.

Maybe we can do even better if we do not use a dictionary. Instead, we can use a list and store the known lengths in it. Let us try.

def get_collatz_length (n, L, N):
    if n<N and L[n]>=0:
        return L[n]
    if n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,L,N)+1
    else:
        res = get_collatz_length(n//2,L,N)+1
    if n<N:
        L[n]=res
    return res

import time
start = time.process_time()
N=1_000_000
L = [-1]*N
lengths = [(n,get_collatz_length(n,L,N)) for n in range(1,N)]
records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(11):
    print(f"{records[k][0]}: {records[k][1]}")
print(f"{time.process_time()-start:5.4} seconds")

This reduces the time only slightly. It is still one second.

However, implementing the same trick in Java, we get the lengths up to one million in 0.033 seconds, i.e., 30 times faster. The reason is partly Python's arbitrary-precision integer type, and partly the use of a dictionary instead of an array. Here is the Java code.

public class Main
{
    public static int get_collatz_length (long n, int L[], int N)
    {
        if (n<N && L[(int)n]>=0)
            return L[(int)n];
        int res = 0;
        if (n>1)
        {
            if (n%2 == 1)
                res = get_collatz_length(3*n+1,L,N)+1;
            else
                res = get_collatz_length(n/2,L,N)+1;
        }
        if (n<N) L[(int)n] = res;
        return res;
    }

    public static void main(String[] args)
    {
        long time = System.currentTimeMillis();
        int N=100000000;
        int L[] = new int[N];
        for (int k=0; k<N; k++)
            L[k] = -1;
        for (int k=0; k<N; k++)
            L[k] = get_collatz_length(k,L,N);
        int max=0,imax=0;
        for (int k=1; k<N; k++)
            if (L[k]>max)
            {
                max=L[k]; imax=k;
            }
        System.out.println("Maximal value " + max + " at "+imax);
        time = System.currentTimeMillis()-time;
        System.out.println((time/1000.0) + " seconds.");
    }
}

Increasing N to 10’000’000, the Java computation time rises to 0.15 seconds. The more clever Python code took 9 seconds for this, i.e., it was 60 times slower.

Going further to N=100’000’000 made Java run for 1.44 seconds. The more clever code in Python, however, broke down and took more than 2 minutes, again a factor of over 50.

Finally, I wanted a plot of the frequencies of the lengths. I had to write a procedure for the count, not trusting the performance of the Counter class.

lengths1 = [x[1] for x in lengths]
M = max(lengths1)

def get_count (L,M):
    count = [0 for k in range(M)]
    for x in L:
        if x<M:
            count[x] += 1
    return count

count = get_count(lengths1,M+1)
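For reference only (the posting deliberately avoids it, and I have not measured its speed here): collections.Counter computes the same frequency table. The data below is a small stand-in list, not the actual Collatz lengths.

```python
from collections import Counter

lengths1 = [1, 5, 2, 2, 5, 5]   # stand-in data for the Collatz lengths
count = Counter(lengths1)
print(count[5])  # -> 3
```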

This allows us to plot the frequencies. The vertical axis is the number of times a length occurs among the first 100 million numbers. The same plot can be found on Wikipedia.

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

fig,ax = plt.subplots(figsize=(14,6))
ax.grid(True)
ax.plot(count);

In summary, Python did well. But it seems to be so flexible that it is not predictable in terms of performance and memory consumption. That is so with many modern software systems, which seem to develop a mind of their own from the perspective of the user.

Continuing my efforts to learn Python for math, I wrote a function for adaptive integration. It uses 10-point Gauss quadrature. To check the accuracy, the algorithm uses a bipartition of the interval. So each step takes 30 evaluations of the function: 10 for the complete step and 20 for the check. If the accuracy is not high enough, the step size will be decreased. Otherwise, the function will try to double the step size in the next step.

Here is the complete code.

def legendre (n):
    """ Get the Legendre polynomials

    n : maximal degree

    Return: list of Legendre polynomials
    """
    p0 = np.poly1d([1])
    if n<=0:
        return [p0]
    p1 = np.poly1d([1,0])
    px = p1
    if n==1:
        return [p0,p1]
    v=[p0,p1]
    for i in range(1,n):
        p = ((2*i+1)*px*p1 - i*p0) / (i+1)
        p0 = p1
        p1 = p
        v.append(p1)
    return v

def make_gauss(n):
    """ Compute the coefficients for Gauss integration

    n : number of points

    Return: x,a
    x : Gauss points
    a : Gauss coefficients
    """
    v = legendre(n)
    x = np.roots(v[n])
    A = np.array([v[i](x) for i in range(n)])
    w = np.zeros(n)
    for i in range(n):
        p = np.polyint(v[i])
        w[i] = p(1)-p(-1)
    return x,np.linalg.solve(A,w)

gauss_x5,gauss_a5 = make_gauss(5)

def gauss5 (f,a,b):
    """ Gauss integration with 5 points

    f : function of one variable
    a,b : interval bounds

    Returns: integral
    """
    return np.sum(f((a+b)/2+(b-a)/2*gauss_x5)*gauss_a5)*(b-a)/2

gauss_x10,gauss_a10 = make_gauss(10)

def gauss10 (f,a,b):
    """ Gauss integration with 10 points

    f : function of one variable
    a,b : interval bounds

    Returns: integral
    """
    return np.sum(f((a+b)/2+(b-a)/2*gauss_x10)*gauss_a10)*(b-a)/2

def integrate (f,a,b,n=1,eps=1e-14):
    """ Adaptive integral using 10-point Gauss

    f : function of one variable
    a,b : interval bounds
    n=1 : initial step size is (b-a)/n
    eps : desired absolute accuracy

    Returns: integral
    """
    h = (b-a)/n
    x = a
    I = gauss10(f,x,x+h)
    I1 = gauss10(f,x,x+h/2)
    I2 = gauss10(f,x+h/2,x+h)
    res = 0.
    while x<b-eps:
        ## print(h)
        if np.abs(I-(I1+I2))>h/(b-a)*eps:
            h = h/2
            I = I1
            I1 = gauss10(f,x,x+h/2)
            I2 = gauss10(f,x+h/2,x+h)
        else:
            x = x+h
            h = 2*h
            if x+h>=b-eps:
                h = b-x
            res = res+I1+I2
            I = gauss10(f,x,x+h)
            I1 = gauss10(f,x,x+h/2)
            I2 = gauss10(f,x+h/2,x+h)
    return res

This algorithm is usually fast enough and very accurate. The implementation is by no means optimal. E.g., if a factor of 2 is used in the step size, a lot of computations are redone. Moreover, it might not be optimal to try to double the step size in each step. However, it works for interactive use, and that is what it is intended for.

Let me try to explain the code a bit. The function legendre() computes the Legendre polynomials using the recursion formula. These polynomials are orthogonal with respect to the weight w=1. The function returns a list of the polynomials. It makes use of the polynomial handling in Numpy (the older poly1d interface).
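The recursion in question is the three-term (Bonnet) recursion,

$$(n+1)\,P_{n+1}(x) = (2n+1)\,x\,P_n(x) - n\,P_{n-1}(x),$$

which corresponds directly to the line `p = ((2*i+1)*px*p1 - i*p0)/(i+1)` in the loop, where `px` holds the polynomial x.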

The function make_gauss() computes the zeros of the Legendre polynomial with np.roots(), and coefficients a[k] such that the following is exact for all polynomials p up to degree n-1:

$$\sum_{k=1}^n a_k p(x_k) = \int\limits_{-1}^1 p(t) \,dt$$

It sets up n linear equations from this requirement, using the Legendre polynomials we just computed as test cases, and solves them. The exact integrals of the test polynomials are computed with np.polyint().

Now, we can define the functions for Gauss integration with 5 and 10 points, and use it for our adaptive integration.
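As a sanity check, one can compare against numpy's own Gauss–Legendre nodes; this sketch assumes numpy's leggauss rather than the make_gauss() above. A 10-point rule must integrate polynomials up to degree 19 exactly:

```python
import numpy as np

# Nodes and weights from numpy, for cross-checking make_gauss(10)
x10, a10 = np.polynomial.legendre.leggauss(10)

def gauss10_check(f, a, b):
    # same transformation to [a,b] as in gauss10() above
    return np.sum(f((a+b)/2 + (b-a)/2*x10)*a10)*(b-a)/2

# The exact value of the integral of x^18 over [-1,1] is 2/19
print(gauss10_check(lambda x: x**18, -1, 1), 2/19)
```

The two printed values agree to machine precision.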

For the demonstration, I created a very easy to use function fplot() which plots a function adaptively using the code of the last posting.

def fplot (f,a,b,n=100,nmax=100000):
    """ Simple plot of a function

    a,b : interval bounds
    n=100 : initial step size (b-a)/n
    nmax=100000 : minimal step size (b-a)/nmax
    """
    x,y = adaptive_ev(f,a,b,n)  # adaptive evaluation from the last posting
    fig,ax = plt.subplots(figsize=(8,8))
    ax.grid(True)
    ax.plot(x,y)

I have now put all this and the code of the last posting into a Python module. When using the Python notebook interface in a browser, the file should be placed in the same directory as the „.ipynb“ file. The Python file can even be opened and edited in the Python notebook IDE.

Then it can be used as follows.

import renestools as rt
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def f(x):
    return np.abs(x)

rt.fplot(f,-np.sqrt(2),1,n=1000)
print(rt.integrate(f,-np.sqrt(2),1))
1.5

This is not easy to integrate numerically, since the function is not smooth. The result is exact to the last digit. Even the plot is not trivial. With the arbitrary step size, the point x=0 might not be met exactly, and the corner may look round.

Here is another example using the Gaussian bell function. All digits agree. Note that this function is almost zero for large x.

def f(x):
    return np.exp(-x**2)

print(rt.integrate(f,-20,20))
print(np.sqrt(np.pi))

1.7724538509055159
1.7724538509055159

You can even turn the integral into a function and plot it adaptively.

def F(x):
    return rt.integrate(f,0,x)/np.sqrt(np.pi)

rt.fplot(F,0,5)

I could not find a solution for adaptive plots in Matplotlib. So, I wrote a small script that can evaluate a function in adaptive step sizes. „Adaptive“ means that the step size adapts to the output of the plot. This is to prevent coarse line segments with corners in the plot.

Here is the simple code for a function of one variable.

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

def adaptive_ev (f,a,b,n=100):
    # wrapped as a function of f, the interval, and the target number of steps
    xc = a
    yc = f(xc)
    x = [xc]
    y = [yc]
    h = (b-a)/n
    uplim = (2*h)**2
    lowlim = uplim/6
    eps = (b-a)/100000.
    while xc<b:
        xcnew=xc+h
        if xcnew>=b:
            xcnew=b
        ycnew=f(xcnew)
        d = h**2+(ycnew-yc)**2
        if d>uplim and h>eps:
            h = h/2
        else:
            if d<lowlim/3:
                h = 2*h
            xc = xcnew
            yc = ycnew
            x.append(xc)
            y.append(yc)
    return np.array(x),np.array(y)

def func(x):
    return np.sin(1/x)

x,y = adaptive_ev(func,0.001,1,1000)

print(x.size,' points generated')
fig,ax = plt.subplots(figsize=(14,10))
ax.grid(True)
ax.plot(x,y);

To apply the function adaptive_ev(), you need a function of one variable, an interval, and the target number of plot points. The function will double or halve the step size as needed to keep the output under control.

This is a screenshot of the graph generated by the code above. The function is difficult to plot near 0. Thus, adaptive evaluation is the only sensible way to do it. The routine generated 4270 data points to create this plot.

Euler Math Toolbox has the adaptive evaluation built into the plot2d() routine. So, a similar code would look like this.

>aspect(1.2); plot2d("sin(1/x)",0.001,1,thickness=1.5,color=blue):


Admittedly, the anti-aliasing of Matplotlib is outstanding. I like this library a lot and recommend using it.

I also did a routine for curves in the plane, which are parameterized by two functions for the x- and y-coordinate.

def adaptive_ev2 (fx,fy,a,b,size=1,n=1000):
    tc = a
    xc = fx(tc)
    yc = fy(tc)
    x = [xc]
    y = [yc]
    h = size/n
    uplim = (2*h)**2
    lowlim = uplim/6
    eps = size/100000.
    while tc<b:
        tcnew=tc+h
        if tcnew>=b:
            tcnew=b
        xcnew=fx(tcnew)
        ycnew=fy(tcnew)
        d = (xcnew-xc)**2+(ycnew-yc)**2
        if d>uplim and h>eps:
            h = h/2
        else:
            if d<lowlim/3:
                h = 2*h
            xc = xcnew
            yc = ycnew
            tc = tcnew
            x.append(xc)
            y.append(yc)
            ## print((tc,xc,yc,h,d))
    return np.array(x),np.array(y)


def funcx(t):
    return np.exp(-t/10)*np.cos(t)
def funcy(t):
    return np.exp(-t/10)*np.sin(t)

x,y = adaptive_ev2(funcx,funcy,0,40*np.pi)  # 20 turns around the origin

print(x.size,' points generated')
fig,ax = plt.subplots(figsize=(10,10))
ax.grid(True)
ax.set_xlim(-1,1)
ax.set_ylim(-1,1)
ax.plot(x,y);

The code follows the same ideas. However, it uses a plot size to set the dimensions of the curve in the plane. This size is only a guideline; the step size will be approximately 1/1000 of it.

Close to 0, the plot should avoid generating too many points, but it should still look smooth. This plot generates 14509 points, about 14 times the 1000 steps suggested by the parameters. But one should be aware that the curve winds 20 times around the origin!

Euler Math Toolbox does the same, if two functions are fed into plot2d(). It uses a function adaptive() to compute the data points, which works for vector valued functions. This function could be ported to Python easily.

function adaptive (f$: call, a: number, b: number, eps, ..
    dmin, dmax, dinit, factor)
## Default for eps : 0.01
## Default for dmin : 1e-05
## Default for dmax : 100000
## Default for dinit : 0.1
## Default for factor : 2
  sn=sign(b-a); t=a; sy=f$(t;args());
  if sn==0 then return {a,sy}; endif;
  if cols(sy)>1 then
    error(f$|" does not produce a real or column vector");
  endif
  x=zeros(1,1000); y=zeros(rows(sy),1000);
  n=1; h=dinit;
  x[:,1]=t; y[:,1]=sy;
  repeat
    repeat
      tn=t+sn*h;
      if sn*(tn-b)>=0 then tn=b; endif;
      syn=f$(tn;args());
      d=sqrt(colsum(abs(syn-sy)^2));
      if isNAN(d) then break; endif;
      if d>eps and h>dmin then h=max(h/factor,dmin);
      elseif d<eps/factor then h=min(h*factor,dmax); break;
      else break;
      endif;
    end;
    n=n+1;
    if n>cols(x) then
      x=x|x;
      y=y|y;
    endif;
    x[:,n]=tn; y[:,n]=syn;
    sy=syn;
    t=tn;
    if sn*(t-b)>=0 then break; endif;
  end;
  return {x[:,1:n],y[:,1:n]};
endfunction
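A rough Python port of this routine might look as follows. It is a sketch, not a full translation: it assumes a < b and omits the args() pass-through and the NaN check of the EMT original.

```python
import numpy as np

def adaptive(f, a, b, eps=0.01, dmin=1e-5, dmax=1e5, dinit=0.1, factor=2):
    """Adaptive evaluation of a vector-valued function f on [a,b], a<b."""
    t = a
    y = f(t)
    ts = [t]; ys = [y]
    h = dinit
    while t < b:
        while True:
            tn = min(t + h, b)
            yn = f(tn)
            d = np.sqrt(np.sum(np.abs(yn - y)**2))
            if d > eps and h > dmin:
                h = max(h/factor, dmin)      # step too large: shrink and retry
            else:
                if d < eps/factor:
                    h = min(h*factor, dmax)  # step very small: grow next time
                break
        ts.append(tn); ys.append(yn)
        t = tn; y = yn
    return np.array(ts), np.array(ys)

t, y = adaptive(lambda s: np.array([np.cos(s), np.sin(s)]), 0, 2*np.pi)
print(t.size, 'points generated')
```

The growing-array trick of the EMT version is not needed, since Python lists grow on their own.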

I find Python very inspiring for mathematics. The next step will be to write an adaptive integration using Gauss formulas. Stand by.

# EMT, Python, Numpy, Sympy on Macs with M1

I am currently using an 11′ Mac Air Notebook with the M1 processor as my laptop. We have to admit that Apple has made a big step forward again with this architecture. I never used Macs before because they were expensive and did not offer enough benefit to justify leaving Windows. Admittedly, they always had a much better user experience than Windows, let alone Linux. What works, does work well! But I have been judging the systems by more technical features, like processor power, ease of administrator work, and support for programmers. And in that respect, Macs were a step back. So, I could never justify using a system that serves me worse at a higher cost.

Now, things have changed with the M1 architecture. The laptop I am using is a bargain for what it offers. The processor is very fast, the display excellent, and the battery life extraordinary. The build quality is far better than any notebook I had before, with a rugged, indestructible case that still feels light and slim. For my retirement, that is precisely what I need.

The purchase of this laptop also gives me the chance to test my math programs on that operating system. There is a Windows emulation for Macs, but it does not work well with the M1 processor. So, a program like Euler Math Toolbox will have to be ported, which is a lot of work, most likely not worth the effort in the end. I am going to produce a simplified version nevertheless.

Java is working well, and that gave me the chance to fix my program C.a.R. (Compass and Ruler) for Macs. Indeed, there were some annoying quirks that needed to be fixed. It is now running very well. (By the way, the same applies to my editor, JE, which I still use.)

Until I have a working version, Euler Math Toolbox needs to be replaced by other, free tools on Macs. For the numerical side, there is Scilab. It has a working version for Macs, and I found no obvious flaw. For the symbolic side, there is no free solution. Maxima can only be installed in a very complicated way using a sort of emulator, and I still have to explore this method. More on this on another day.

But there is Python. We have the numerical matrix library „numpy“ and the symbolic library „sympy“. There is also a very nice plot library, „matplotlib“. All those are not as easy to use as Euler Math Toolbox; I’d say they are clumsy in comparison. Nevertheless, I am going to explore these libraries and compare them to EMT in a series of postings on this blog.

You can follow me without a Mac. So let us start with an example which requires symbolic computations. We want to compute the polynomial of degree n+1 with the property

$$p_{n+1}(x)-p_{n+1}(x-1)=x^n, \quad p_{n+1}(0)=0$$

for all real x, and natural n (investigated by Bernoulli and Euler). By this equation and induction, the polynomial is defined for all natural numbers x. Thus, the polynomial is uniquely determined if it exists. And it does indeed exist. You just need to interpolate in the first n+2 natural numbers, and since the equation holds there, it holds everywhere.

We now have

$$p_{n+1}(N) = \sum\limits_{k=1}^N k^n.$$

We could use interpolation to determine the polynomials. It is an exercise to do this in any symbolic program. For a start, we use Maxima in Euler Math Toolbox to determine the case n=3.

>p &= sum(a[k]*x^k,k,0,4)

 a[4]*x^4 + a[3]*x^3 + a[2]*x^2 + a[1]*x + a[0]

>equ &= makelist((p with [x=k]) = sum(j^3,j,1,k),k,1,5)

 [a[4] + a[3] + a[2] + a[1] + a[0] = 1,
  16*a[4] + 8*a[3] + 4*a[2] + 2*a[1] + a[0] = 9,
  81*a[4] + 27*a[3] + 9*a[2] + 3*a[1] + a[0] = 36,
  256*a[4] + 64*a[3] + 16*a[2] + 4*a[1] + a[0] = 100,
  625*a[4] + 125*a[3] + 25*a[2] + 5*a[1] + a[0] = 225]

>&factor(p with solve(equ)[1])

 x^2*(x + 1)^2/4

We have created a polynomial with indexed parameters using the sum() function of Maxima. The command makelist() then creates a list of equations which our polynomial has to satisfy (due to the sum requirement above). This can be solved with solve(). We use the first solution to substitute the values for a[k] back into the polynomial and factor the result. We get the well-known formula for the sum of the cubes of the first x integers.

Let us switch to Python. In Windows, you need to install „Anaconda“ for the following. Then start the „Anaconda Navigator“ and „jupyter notebook“. There you enter the following.

from sympy import *
x, k = symbols('x k')
a = IndexedBase('a')
p = sum([a[k]*x**k for k in range(5)])
print(p)
equ = [Eq(p.subs(x,k),sum([l**3 for l in range(1,k+1)])) for k in range(1,6)]
print(equ)
sol = solve(equ)
print(sol)
factor(p.subs([(a[k],sol[a[k]]) for k in range(5)]))

This is more difficult to understand than the EMT code above, but it follows the same ideas.

After importing everything from sympy, we need to define the symbols x and k, and the indexed symbol a[]. Then we use the sum function of Python on a list. The result is

x**4*a[4] + x**3*a[3] + x**2*a[2] + x*a[1] + a[0]

After that, we create a list of equations, just as in the Maxima code above. The problem is that we cannot use „==“. This would test the two sides for equality and evaluate to true or false. We need to use „Eq(a,b)“ instead.

[Eq(a[0] + a[1] + a[2] + a[3] + a[4], 1), Eq(a[0] + 2*a[1] + 4*a[2] + 8*a[3] + 16*a[4], 9), Eq(a[0] + 3*a[1] + 9*a[2] + 27*a[3] + 81*a[4], 36), Eq(a[0] + 4*a[1] + 16*a[2] + 64*a[3] + 256*a[4], 100), Eq(a[0] + 5*a[1] + 25*a[2] + 125*a[3] + 625*a[4], 225)]

Then we solve this list of equations, which yields a dictionary of solutions. Dictionaries are Python structures that map keys to values. They print as follows.

{a[0]: 0, a[1]: 0, a[2]: 1/4, a[3]: 1/2, a[4]: 1/4}

We now use the subs() function to substitute the values of the parameters into the polynomial p and factor the result. In a jupyter notebook, it appears in pretty format automatically. Here is a screenshot.

So much for a start of the sequence of postings. Clearly, Euler Math Toolbox and Maxima are easier to use. But Python is available for Macs. On Linux, you can use EMT quite nicely with the modern Wine emulation. But you could also use Python.

Finally, as a mathematical side remark, you could differentiate the defining equation for our polynomials and get

$$p_{n+1}'(x)-p_{n+1}'(x-1) = n x^{n-1}$$

Since our polynomials are uniquely defined by the equation, we get

$$p_n(x) = \dfrac{1}{n} (p_{n+1}'(x) -p_{n+1}'(0))$$

On the right hand side, we have a polynomial which satisfies the equation and has the property p(0)=0. We can use this to compute the next polynomial.

$$p_{n+1}(x) = n \int\limits_0^x p_n(t) \,dt + c x$$

To determine the constant c, we use p(1)=1 for our polynomials. Thus

$$p_{n+1}(x) = n \int\limits_0^x p_n(t) \,dt + (1- n\int\limits_0^1 p_n(t) \,dt) x$$
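For n=1, with $p_1(x)=x$, this formula reproduces the familiar sum of the first x integers:

$$p_2(x) = \int\limits_0^x t \,dt + \Bigl(1 - \int\limits_0^1 t \,dt\Bigr) x = \frac{x^2}{2} + \frac{x}{2} = \frac{x(x+1)}{2}.$$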

Let us make a function in Euler Math Toolbox from this formula. The function will call Maxima several times, and thus not be fast.

>function p(n) ...
$  pn &= "x";
$  if n>1 then
$    for k=1 to n-1;
$      q &= "subst(t,x,pn)";
$      pn &= "@k*integrate(q,t,0,x)+(1-@k*integrate(q,t,0,1))*x";
$    end;
$  endif;
$  return &pn;
$endfunction
>p(4); &factor(%)

 x^2*(x + 1)^2/4

In Euler functions, you need to use „…“ for symbolic expressions. Moreover, you can use numerical values via @… in symbolic expressions.

Of course, loops like this can also be programmed in Maxima directly. You will have to learn the syntax of Maxima for this. Here is a solution in Python.

def p(n):
    t,x = symbols('t x')
    p = x
    if n<=1:
        return p
    for i in range(1,n):
        q = p.subs(x,t)
        p = i*integrate(q,(t,0,x)) + (1-i*integrate(q,(t,0,1)))*x
    return p

factor(p(4))

This is a good start. But it turns out that the Anaconda Navigator does not run on Macs. It looks like it needs Rosetta2. Usually, a program asks if Rosetta should be installed. But Anaconda didn’t on my system.

After installing Anaconda nevertheless, you have a Python 3.9 environment. So you can run „python“ to start the Python interpreter, and „ipython“ to start the advanced Python terminal. For this test, I saved the following to a file „test.py“ using an editor (I am using my JE, of course).

from sympy import *

def p(n):
    t,x = symbols('t x')
    p = x
    if n<=1:
        return p
    for i in range(1,n):
        q = p.subs(x,t)
        p = i*integrate(q,(t,0,x)) + (1-i*integrate(q,(t,0,1)))*x
    return p

pprint(factor(p(10)))

Then running „python test.py“ will run this script and output the result in pretty format.

 2        2 ⎛ 2        ⎞ ⎛   4      3    2          ⎞
x ⋅(x + 1) ⋅⎝x  + x - 1⎠⋅⎝2⋅x  + 4⋅x  - x  - 3⋅x + 3⎠
─────────────────────────────────────────────────────
                          20

You can also try to run „jupyter notebook“ from the terminal. If things go well, your browser will open and you can start an ipython terminal, clicking on „New“. Use Option+Return to run any command.

# Python or Java – What is better as first language?

Whenever I have the pleasure to teach the beginner course in programming, I start with the same question: Python or Java? Since colleagues have good success with Python, and since Python is a useful language, I am tempted ever again. However, I decide to go with Java every time.
Let me explain the reasons.

This year, I even started a discussion on a Python forum to push me towards Python and tell me why I should prefer it. The discussion was an interesting read, but nothing that convinced me. So I did my own research on Python sites designed for beginners to see how they handle the teaching. As expected, they dive into Python and use its data collections almost from the start, at least shortly after using the Python command line for interactive computations. Some even go quickly into real-world usage with Python packages for graphics, artificial intelligence or numerical mathematics.

If Python is that useful, and if there are so many libraries available and also quite nice IDEs (my favorite is Spyder), why do I shy back from using it as a first language? Here are some reasons.

• I am teaching mathematicians. Mathematicians are meant to program, to test and to improve basic algorithms. They may also use high-level libraries for research, but they should at least be able to start at the roots of the code, and often that is the only way to go. Python is not designed for this use. First, it is a lot slower than Java (which is on the level of C code, by the way). The difference can be a factor of 10 to 50. Then, it hides the basic data types (such as byte, short, int, long, float, double) and their problems from the user and encourages the use of its infinite arithmetic. I feel that mathematicians should understand the bits and bytes, and get full, speedy control of their code.

• Python extends its language to compactify the handling of data sets, at the cost of extending the language with almost cryptic syntax. E.g., Python can automatically, and in one expression, create a list of integers satisfying a certain condition, or select from another list under a condition. I am convinced that this kind of behind-the-scenes operation is not suited to teach beginners how things work and how to write good and clean code.
The situation is a bit like Matlab, where everything is vectorized and loops are costly. You learn how to use the high-level constructs of your language, but not the basics. If you only have a hammer, everything looks like a nail.

• Python is as multi-platform and open-source as Java. Both need an interpreter and cannot be translated to native code to ease the deployment of bigger programs on user systems. Java is even a bit better in this respect. Both languages cannot easily use the native GUI library or other more hardware-bound libraries. Python is a bit better here. But this is nothing to worry about for a first language anyway.

• I also need to answer the question: Is it easier to go from Java to Python or the other way? And, for me, the answer is clearly that Java provides the more basic knowledge on programming. Python is on a higher level. It is more a scripting language than a basic programming language. One may argue that this is good for beginners since they get a mighty system right at the start. After all, you learn to drive a car without knowing how the motor works. But I have seen enough programs in Python to know that it spoils the programmer and drives the thoughts away from a fresh, more efficient solution.

This said, I would very much welcome a second course building on the basics that uses Python and its advanced libraries. Many students are taught R and Matlab in the syllabus. I think Python would be a better choice than Matlab for numerical programming and data visualization. And, in contrast to Matlab, it is open and free.

# The speed of C, Java, Python versus EMT

I recently tested Python to see if it is good as a first programming language. The answer is a limited „yes“. Python has certainly a lot to offer. It is a mainstream language, and thus there is a lot of support in the net, and there are tons of libraries for all sorts of applications. A modern distribution like Anaconda even installs all these tools effortlessly.
However, I keep asking myself if it isn't more important to learn the basics before flying high. Or to put it another way: Should we really start to teach the usage of something like a machine-learning toolbox before the student has understood data types and simple loops? This question is important because most high-level libraries take away the tedious work. If you know the right commands, you can start problem-solving without knowing much of the underlying programming language.

I cannot decide for all teachers. It depends on the type of students and the type of class. But I can give you an idea of how much a high-level language like Python is holding you back if you want to implement some algorithm yourself. You have to rely on a well-implemented solution being available, or you will be lost.

So here is some test code for the following simple problem: We want to count how many pairs (i,j) of numbers exist such that i^2+j^2 is prime. For the count, we restrict i and j to the range from 1 to 1000.

#include <stdio.h>
#include <time.h>

int isprime (int n)
{
    if (n == 1 || n == 2) return 1;
    if (n % 2 == 0) return 0;
    int i = 3;
    while (i * i <= n)
    {
        if (n % i == 0) return 0;
        i = i + 2;
    }
    return 1;
}

int count (int n)
{
    int c = 0;
    for (int i = 1; i <= n; i++)
        for (int j = 1; j <= n; j++)
            if (isprime(i * i + j * j)) c = c + 1;
    return c;
}

int main ()
{
    clock_t t = clock();
    printf("%d\n", count(1000));
    printf("%g", (clock() - t) / 1000.0);
}

/* Result was:
98023
0.132 */

So this takes 0.132 seconds. That is okay for one million prime-number checks.

Below is a direct translation into Java. Surprisingly, it is even faster than C, taking 0.127 seconds. The reason is not clear, but I may not have switched on every optimization for C.
public class Test
{
    static boolean isprime (int n)
    {
        if (n == 1 || n == 2) return true;
        if (n % 2 == 0) return false;
        int i = 3;
        while (i * i <= n)
        {
            if (n % i == 0) return false;
            i = i + 2;
        }
        return true;
    }

    static int count (int n)
    {
        int c = 0;
        for (int i = 1; i <= n; i++)
            for (int j = 1; j <= n; j++)
                if (isprime(i * i + j * j)) c = c + 1;
        return c;
    }

    static double clock ()
    {
        return System.currentTimeMillis();
    }

    public static void main (String args[])
    {
        double t = clock();
        System.out.println(count(1000));
        System.out.println((clock() - t) / 1000.0);
    }

    /* Result was:
    98023
    0.127 */
}

Below is the same code in Python. It is more than 30 times slower. This is unexpected even to me. I have to admit that I have no idea if some compilation trick can speed that up. But in any case, this is the behavior that an innocent student will see. Python should be seen as an interactive interpreter language, not a basic language to implement fast algorithms.

def isprime (n):
    if n==2 or n==1:
        return True
    if n%2==0:
        return False
    i=3
    while i*i<=n:
        if n%i==0:
            return False
        i=i+2
    return True

def count (n):
    c=0
    for k in range(1,n+1):
        for l in range(1,n+1):
            if isprime(k*k+l*l):
                ## print(k*k+l*l)
                c=c+1
    return c

import time
sec=time.time()
print(count(1000))
print(time.time()-sec)

## Result was
## 98023
## 4.791218519210815

I have a version in Euler Math Toolbox, too. It uses matrix tricks and is thus very short, relying on loops built into the C core. But it still cannot compete with the versions above. It is about 3 times slower than Python and 100 times slower than C. The reason is that the isprime() function of EMT is not implemented in C but in the EMT language.

>n=1000; k=1:n; l=k'; tic; totalsum(isprime(k^2+l^2)), toc;
 98023
 Used 13.352 seconds

Below is a version where I use TinyC to check for primes. The function isprimec() can only handle real numbers, not vectors. Thus I vectorize it with another function isprimecv(). This costs a bit of performance.

>function tinyc isprimec (x) ...
$  ARG_DOUBLE(x);
$  int n = (int)x;
$  int res = 1;
$  if (n > 2)
$  {
$     if (n%2 == 0) res=0;
$     int i=3;
$     while (i*i<=n)
$     {
$         if (n%i==0) { res=0; break; }
$         i = i+2;
$     }
$  }
$  new_real(res);
$endfunction
>function map isprimecv (x) := isprimec(x);
>n=1000; k=1:n; l=k'; tic; totalsum(isprimecv(k^2+l^2)), toc;
98023
Used 0.849 seconds
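For comparison, the matrix trick of EMT can also be reproduced in Python with NumPy, which moves the loops into compiled code. The following is my own sketch, not part of the original timing experiment, and it uses a sieve of Eratosthenes instead of trial division, so the comparison is not exact:

```python
import numpy as np

def count_pairs(n):
    """Count pairs (i,j) with 1 <= i,j <= n and i^2 + j^2 prime."""
    limit = 2 * n * n                      # largest possible i^2 + j^2
    prime = np.ones(limit + 1, dtype=bool)
    prime[:2] = False                      # 0 and 1 are not prime
    for p in range(2, int(limit**0.5) + 1):
        if prime[p]:
            prime[p*p::p] = False          # strike out multiples of p
    k = np.arange(1, n + 1)
    s = k[:, None]**2 + k[None, :]**2      # matrix of all sums i^2 + j^2
    return int(prime[s].sum())

print(count_pairs(1000))  # -> 98023
```

Vectorized like this, Python plays in the same league as the compiled versions above, which only confirms the Matlab-like lesson: you are fast as long as you can push the loops into a library.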


# Python in EMT – Cross Sum of Cubes

When is the cross sum (sum of the digits) of the cube of a number equal to the number?

It is obviously true for n=0 and n=1. You might also find n=8, with the cube 512. But how do you prove that there are only finitely many such numbers? And how long does it take to find all of them?

For a dirty estimate, assume $n$ equals the cross sum of $n^3$, and let $a_0,\ldots,a_{m-1}$ be the $m$ digits of $n^3$. Then

$$(9m)^3 \ge (a_0+\ldots+a_{m-1})^3 = a_0+a_1 10 + \ldots + a_{m-1} 10^{m-1} \ge 10^{m-1}$$

It is an easy exercise to show that m<7 is necessary: for m=7 the inequality would require $(9 \cdot 7)^3 = 250047 \ge 10^6$, which is false. So we only have to check up to n=100, since the cube of 100 already has 7 digits. This can be done by hand with some effort, but here is a Python program written in Euler Math Toolbox.

>function python qs (n) ...
$s=0
$while n>0:
$   s+=n%10
$   n//=10
$return s
$endfunction
>function python test() ...
$v=[]
$for k in range(100):
$   if k == qs(k**3):
$       v.append(k)
$return v
$endfunction
>test()
[0,  1,  8,  17,  18,  26,  27]
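The same search also runs in plain Python outside of EMT; here is a sketch mirroring the two functions above:

```python
def qs(n):
    """Cross sum: the sum of the decimal digits of n."""
    s = 0
    while n > 0:
        s += n % 10
        n //= 10
    return s

# Collect all n < 100 whose cube has cross sum n.
solutions = [k for k in range(100) if k == qs(k**3)]
print(solutions)  # -> [0, 1, 8, 17, 18, 26, 27]
```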