Archiv der Kategorie: Uncategorized

Giving up on Macs

For a short time, I had a MacBook Air with the M1 chip. While I have to admit that the Apple system and the M1 chip impressed me a lot, I won’t keep that system. The reason can be summarized as the huge compatibility problem between Apple’s walled garden and the outside world. But there are also issues with the system itself and the cloud services it offers, or rather enforces.

This is by no means going to be a rant against Apple, or at least I try not to let it become one. Their products are great. They are on the expensive side, but there are Windows laptops and Android smartphones with worse specs in the same price range. The common theme for Apple seems to be: what works, works well. But there are too many things that I do not like, because they simply don’t work. Let me explain.

The compatibility issue has many facets. The last one that I encountered was the nail in the coffin and the end of my friendship with Apple: the Mac was unable to read my USB drive formatted on a Windows computer. Okay, you can format the drive on the Mac, and Windows can read that. I used the disk like that for a while until I found more and more corrupt files. The exact reason is impossible to determine. One possible culprit could be the indexing that Macs perform for minutes by default, in conjunction with my impatience. But it could also just have been the handling of the device on the Mac or on Windows. I never had such problems in the Windows ecosystem.

There were numerous issues with the M1 chip. It seems that this new architecture is not yet widely supported. There is a translation tool (Rosetta 2), but it failed to work on many of the programs that I wished to use. Often there is a work-around; an example is the Anaconda Navigator, where you can use the Anaconda tools nevertheless. But frequently there is no M1 version available. This may sort itself out with time.

One other problem is something I am not very proud to tell you about: I could never get used to the four modifier keys of the laptop. And Macs seem to rely very much on the ability of the user to remember those key combinations. I always forgot which modifier with which key would scroll a page up, go up one directory, or delete a file on the desktop. Moreover, it is close to impossible to fluidly program or write LaTeX, even if you remember that the curly bracket is hidden under the number 8. If I continued to program for Macs, I’d have to connect an external keyboard with a reasonable layout.

Apple’s default cloud service also gave me more trouble than help. If you are willing to pay for sufficient storage, it might work well; with the free tier, you can’t really use it. It took me a while to stop it from synchronizing my desktop. That is a real nuisance, because you get asked every time you move a file from the desktop to your file system. I seem to use the desktop in unintended ways, more as short-term storage than as a dashboard. In the end, I switched the service off completely, which Apple does not seem to like, and the system complained here and there.

Speaking of the file system, there are also some system programs which seem unwilling to use all files on the disk, including the connected Google Drive. I was very dissatisfied with the behavior of the Photos app. The previewer is also very restricted and just shows the file it opened, unable to skip to the next one. I admit that all these problems might have to do with me coming from Windows and not having enough patience to learn the new system or install a proper app. By the way, there seem to be fewer free apps for the Apple ecosystem. I especially dislike in-app payment, which makes it impossible to judge the real costs beforehand.

I did not talk about the iPad, which I also had and still have. It is another marvel of technology. But I prefer a more open file system. Android is already not easy in this respect, but the iPad was a nightmare for me. After a while, I learned how to get the files where I want them, mostly using my Google cloud service. Still, that could be easier. And iTunes did not help either; it transferred photos at a rate of one per 5 seconds.

In summary, I can easily imagine that many users are satisfied with the Mac or the iPad. As I said, they are great systems. They are just not made for my work.

Suggestions on Math Education

Here is an interesting article in Scientific American on math education.

It seems to be based on a study which claims that an application-centered approach can double the students’ interest in the topic and improve the results.

This is nothing new. In fact, I read articles from around 1900 claiming just the same: Public math education should be for applications in physics and other fields, not for mathematicians. If you go through this blog or any other source devoted to math education, you find the same discussion over and over. I don’t know how often I have spent the lunch break with just that topic.

Around 1900, mathematics strove to become an axiomatic and logically complete system of reasoning. Even though that idea was later shattered by Gödel, math became more and more abstract, algebraic, and guided by abstract functional analysis. Not even the strange consequences of the axiom of choice prevented mathematicians from using it for basic theorems. „No one shall expel us from the paradise that Cantor has created“, Hilbert said. So it is no surprise that users of mathematics in applied fields and their students felt uncomfortable in math classes.

However, times have changed long ago. At least since computers became available, the more abstract aspects of mathematics no longer make sense for the applied sciences. I need to explain this to avoid being misunderstood.

First, let me start to outline a more modern approach to math for applied sciences.

Let us take the topic of differential equations as an example. In the old school, students learned some tricks to solve simple cases, involving great skill in calculus and finding the right „Ansatz“. The existence and uniqueness of the solution was the next target, followed by theorems on stability. I can say with confidence that this approach fails for most students of applied sciences. They simply don’t have the skills, and they have no time to acquire them in their packed first-semester schedule.

But I know that many professors now take a more relaxed approach for this audience. We can use computers in the classroom to solve equations with the Euler method and easily visualize the results and the effects of the step size. These numerical methods have the further advantage that they are discrete and use differences instead of differentials. Looking at a phenomenon in time deltas is always the first step in physics. Differentials are a higher step of abstraction which should come after that first step. Then the benefits and the limits of calculus will become apparent to the student.
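As a minimal sketch of this classroom approach (my own toy example, not from a particular course): solve y' = y with Euler steps and watch the error shrink with the step size; the exact value at t = 1 is e.

```python
# Euler's method for y' = f(t, y): repeat y <- y + h*f(t, y) with step h
def euler(f, y0, a, b, n):
    h = (b - a) / n
    t, y = a, y0
    for _ in range(n):
        y = y + h * f(t, y)
        t = t + h
    return y

# y' = y, y(0) = 1; the exact solution at t = 1 is e = 2.71828...
for n in (10, 100, 1000):
    print(n, euler(lambda t, y: y, 1.0, 0.0, 1.0, n))
```

Students see immediately that the error shrinks roughly proportionally to the step size, long before any formal error analysis.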

Only after studying and seeing simple examples can more abstract questions like uniqueness or stability be discussed. And we now have the tools to demonstrate these issues on the computer in real time.

The same can be done in probability theory. The computer can easily be used for Monte-Carlo simulations. This not only verifies theoretical results, but also yields new ones which cannot be derived with algebraic methods. Moreover, it implants a solid meaning of the term „probability“, which is so often abused and misunderstood, leading to self-contradictory claims.
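A sketch of what I mean (my own example, not from a specific study): estimate the birthday-collision probability by simulation and compare it with the exact product formula.

```python
import random

random.seed(1)  # reproducible runs for the classroom

def collision(n=23):
    """True if at least two of n random birthdays coincide."""
    days = [random.randrange(365) for _ in range(n)]
    return len(set(days)) < len(days)

trials = 100_000
estimate = sum(collision() for _ in range(trials)) / trials

# exact probability: 1 - (365/365)*(364/365)*...*(343/365) = 0.5073...
exact = 1.0
for k in range(23):
    exact *= (365 - k) / 365
exact = 1 - exact

print(estimate, exact)  # both close to 0.507
```

The simulation gives the term „probability“ an operational meaning: it is the frequency the students actually observe.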

Another change that computers force on us is that skills in computation are no longer essential for solving mathematical problems. This starts in primary school, where calculators replace the drill of written computations. Later, we should ask ourselves if extensive skill and practice in the tricks of calculus are really needed any longer. Low-level computations in the head and an understanding of the sizes of numbers (which most people lack) look more important to me. And I’d say we should rather concentrate on understanding the logic behind problem-solving and forget the drills. This brings me to my last topic.

I should end this blog entry with a warning: logical reasoning won’t become easier with computers. From my teaching, I know that students struggle most with the logic of a scientific idea. They can neither verbalize the idea in precise form nor apply it. Asking for a precise definition of continuity or of an eigenvalue is one of the most challenging things you can do in an oral examination. Asking students to explain what their computations mean and do, logically, is a good way to see their skills.

Logical reasoning needs to be learned by training and repetition. I cannot stress enough how important it is for the teacher not to expect the students to immediately understand the meaning of a mathematical statement. Looking back, a teacher should remember that he did not understand most things immediately himself. Everybody struggled with logic at some point. Never forget that!

So, while it is an „old hat“ to talk about a more applied mathematical education for the applied sciences, we still need to exploit what computers can do for math education. And we still face the problem of logical reasoning, which doesn’t teach itself and needs a lot of attention.

DOF Calculator in Python

I have updated my example in Euler Math Toolbox for the computation of DOF (depth of field). Below is a version in Python, which you can save in a file and import.

import math

mm = 0.001        # all distances are in meters
useasinf = 1e6    # return value used for an infinite DOF

"""
These functions compute DOF, hyperfocal distance and Bokeh
of an ideal lens for photographers.

D : focussed distance
F : true focal length
A : true aperture (F-stop)

ff : form factor (aka crop factor) (def. 1)
c : accepted circle of confusion (def. 0.03mm)
equiv : if false, D and F are true values times ff (def. False)
"""

def endDOF (D, F, A, c=0.03*mm, ff=1, inf=useasinf):
    res = ff*D*F**2/(ff*F**2+c*A*F-c*ff*A*D)
    if res<0:
        res = inf
    return res

def startDOF (D, F, A, c=0.03*mm, ff=1):
    return ff*D*F**2/(ff*F**2-c*A*F+c*ff*A*D)

def hyperfocal (F, A, c=0.03*mm, ff=1):
    return F*(ff*F+c*A)/(c*ff*A)

def bokeh (D, F, A, ff=1):
    return 2*F**2/(A*(D-F))/math.sqrt(0.024**2+0.036**2)

def lens (D, F, A, c=0.03*mm, ff=1, equiv=False):
    """Prints DOF, hyperfocal distance and Bokeh

    D : focussed distance
    F : true focal length
    A : true aperture (F-stop)

    ff : form factor (aka crop factor) (def. 1)
    c : accepted circle of confusion (def. 0.03mm)
    equiv : if false, D and F are true values times ff (def. False)
    """
    if ff!=1:
        print(f"{'Crop factor':30} : {ff:10.1f}")
    if not equiv:
        F = F*ff
        A = A*ff
    if ff!=1:
        print(f"{'True focal length':30} : {F/ff*1000:10.1f} mm")
    print(f"{'Equivalent focal length':30} : {F*1000:10.1f} mm")
    if ff!=1:
        print(f"{'True F-stop':30} : F {A/ff:10.1f}")
    print(f"{'Equivalent F-stop':30} : F {A:10.1f}")
    print(f"{'Focussed distance':30} : {D:10.2f} m")
    sd = startDOF(D,F,A,c,ff)
    ed = endDOF(D,F,A,c,ff)
    if ed>=useasinf:
        print(f"{'Start of DOF':30} : {sd:10.2f} m")
        print(f"{'End of DOF':30} : infinity")
    elif D<1:
        print(f"{'Start of DOF':30} : {sd*100:10.1f} cm")
        print(f"{'End of DOF':30} : {ed*100:10.1f} cm")
        print(f"{'Total DOF':30} : {(ed-sd)*100:10.1f} cm")
    else:
        print(f"{'Start of DOF':30} : {sd:10.2f} m")
        print(f"{'End of DOF':30} : {ed:10.2f} m")
        print(f"{'Total DOF':30} : {ed-sd:10.2f} m")
    print(f"{'Bokeh':30} : {bokeh(D,F,A,ff)*100:10.2f} %")

Here are some examples. The first was produced with „lens(2,0.085,4)“, i.e., an 85 mm lens at F4, focussed at 2 m (all distances in meters).


Equivalent focal length        :       85.0 mm
Equivalent F-stop              : F        4.0
Focussed distance              :       2.00 m

Start of DOF                   :       1.94 m
End of DOF                     :       2.07 m
Total DOF                      :       0.13 m

Bokeh                          :       4.36 %


Crop factor                    :        2.0
True focal length              :       42.5 mm
Equivalent focal length        :       85.0 mm
True F-stop                    : F        2.0
Equivalent F-stop              : F        4.0
Focussed distance              :       2.00 m

Start of DOF                   :       1.94 m
End of DOF                     :       2.07 m
Total DOF                      :       0.13 m

Bokeh                          :       4.36 %


Crop factor                    :        2.0
True focal length              :       85.0 mm
Equivalent focal length        :      170.0 mm
True F-stop                    : F        4.0
Equivalent F-stop              : F        8.0
Focussed distance              :       4.00 m

Start of DOF                   :       3.87 m
End of DOF                     :       4.13 m
Total DOF                      :       0.26 m

Bokeh                          :       4.36 %

The Bokeh percentage is just the proportion between the diameter of the circle of blurriness and the diagonal of a full-frame sensor. 4% looks somewhat creamy.
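As a standalone sanity check of the DOF formulas above (distances in meters, c = 0.03 mm), the first example can be reproduced like this:

```python
mm = 0.001  # meters

def startDOF(D, F, A, c=0.03*mm, ff=1):
    return ff*D*F**2/(ff*F**2 - c*A*F + c*ff*A*D)

def endDOF(D, F, A, c=0.03*mm, ff=1):
    return ff*D*F**2/(ff*F**2 + c*A*F - c*ff*A*D)

# 85 mm lens at F4, focussed at 2 m, full frame (ff = 1)
print(round(startDOF(2, 0.085, 4), 2), round(endDOF(2, 0.085, 4), 2))
# -> 1.94 2.07, as in the first example above
```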

More Python Fun

Continuing to get acquainted with Python, I looked at the book „Introduction to Data Science“ by Joel Grus. I have it in German as an electronic book, which is unfortunately no fun to read, although the content is solid, with restrictions. The most important problem is the formatting of the Python code, which does not work even in landscape mode. The printed book uses a smaller font, so that each code line fits on one line. You can and should download the examples. There are also some misprints; I found one in the introduction to the Bayes formula, where the fraction is reversed. But most importantly, I have my doubts if the book is really a help for the starting data scientist. The presentation is very short. You will need a good background in math and programming, or someone to ask if you do not understand the theory or the code. The code gets pretty advanced pretty fast.

However, this is not a review of the book. Rather, I take an example from the book to show how Python can be used from within Euler Math Toolbox. The following code reads the bible and prints the 10 most common words.

from collections import Counter

infile = open("bible.txt","r")

num_words = 10

counter = Counter(word.lower()
    for line in infile
    for word in line.strip().split()
    if word)

for word, count in counter.most_common(num_words):
    print(f"{count}\t{word}")


I downloaded the file „bible.txt“ from the net and put it into the folder where the IPython notebook resides. The output is the following.

64183	the
51379	and
34746	of
13643	to
12799	that
12559	in
10263	he
9840	shall
8987	unto
8835	for

Now, let us do this programming exercise in Euler Math Toolbox via Python.

>>> from collections import Counter
>function python testbible (filename) ...
$""" 10 most common words in the bible
$infile = open(filename,"r")
$num_words = 10
$counter = Counter(word.lower()
$    for line in infile
$    for word in line.strip().split()
$    if word)
$res = ""
$for word, count in counter.most_common(num_words):
$    res = res+str(count)+"\t"+word+"\n"
$return res
>testbible("bible.txt")
 64183   the
 51379   and
 34746   of
 13643   to
 12799   that
 12559   in
 10263   he
 9840    shall
 8987    unto
 8835    for
>>> help(testbible)
 Help on function testbible in module __main__:
     10 most common words in the bible

The import was done with a direct Python command. The function returns a string with the answer. But it could just as well use „eumat.dump(„….“)“ for a direct output. Note that you do not need to indent the Python definition; that is done automatically by EMT.

The „$“-signs in the function definition allow copying and pasting the code into an EMT notebook. They are not visible in the EMT window.

Here is another example, done in IPython.

def sieve(N):
    v = [True for _ in range(N+1)]
    for k in range(2,N+1):
        if k*k>N:
            break
        if v[k]:
            m = 2*k
            while m<=N:
                v[m] = False
                m = m+k
    return [k for k in range(N+1) if v[k] and k>1]


s = sieve(100_000_000)

from math import log
x = [k for k in range(1000,len(s),1000)]
y = [(s[k]/(k*log(k))-1-(log(log(k))-1)/log(k))*log(k)**2 for k in x]

import matplotlib as mpl
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(x,y)

This code computes the primes up to 100 Million and plots the actual size of the k-th prime, divided by a known asymptotics for the size of the k-th prime.
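The asymptotics used here is p_k ≈ k(log k + log log k − 1). A quick standalone check, with a much smaller sieve than in the text:

```python
from math import log

def sieve(N):
    v = [True]*(N + 1)
    for k in range(2, N + 1):
        if k*k > N:
            break
        if v[k]:
            for m in range(2*k, N + 1, k):
                v[m] = False
    return [k for k in range(N + 1) if v[k] and k > 1]

s = sieve(1_000_000)
k = 50000
ratio = s[k] / (k*(log(k) + log(log(k)) - 1))
print(ratio)  # close to 1
```

The plotted quantity magnifies the remaining deviation from this asymptotics by a factor of log(k)².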

We only use Python to do the sieve. For the plot, we use Euler Math Toolbox.

>function python sieve (N) ...
$v = [True for _ in range(N+1)]
$for k in range(2,N+1):
$    if k*k>N:
$         break
$    if v[k]:
$         m=2*k
$         while m<=N:
$             v[m]=False
$             m=m+k
$return [k for k in range(N+1) if v[k] and k>1]
>tic; s = sieve(100 000 000); toc;
 Used 29.407 seconds
>n = length(s)
>k = 1000:1000:n;
>y = (s[k]/(k*log(k))-1-(log(log(k))-1)/log(k))*log(k)^2;

As you see, the sieve takes about 30 seconds on my computer (Core I7, 4.0GHz). Of course, it is much faster to do that in C.

>function tinyc csieve (N) ...
$int n=(int)N;
$CHECK(n>10,"Need a positive integer.");
$char *v = (char *)getram((n+1)*sizeof(char));
$CHECK(v,"Out of heap space.");
$for (int i=0; i<n+1; i++) v[i]=1; 
$for (int k=2; k<n+1; k++)
${
$   if (k*k>n) break;
$   if (v[k])
$   {
$       int m=2*k;
$       while (m<=n)
$       {
$           v[m]=0; m+=k;
$       }
$   }
$}
$int count=0;
$for (int k=2; k<n+1; k++) count+=v[k];
$header *out=new_matrix(1,count);
$CHECK(out,"Out of heap space");
$double *res=matrixof(out);
$int j=0;
$for (int k=2; k<n+1; k++)
$   if (v[k])
$       res[j++]=k;
>csieve(100)
 [2,  3,  5,  7,  11,  13,  17,  19,  23,  29,  31,  37,  41,  43,  47,
 53,  59,  61,  67,  71,  73,  79,  83,  89,  97]
>tic; s=csieve(100 000 000); toc;
 Used 1.459 seconds

The procedures to communicate from EMT to Tiny C are a bit involved. But the result is about 20 times faster. The basic code is a direct translation of the Python code.

Depending on the heap space of Euler Math Toolbox, you can do more primes. If you compress the sieve using bits, you could do a lot more. But for this code and the default size, 100 million is the limit. Increasing the stack size to 2048 MB, I could do all primes up to 1 billion in 15 seconds.
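The bit compression mentioned above can be sketched in Python like this (my own sketch with one bit per number, not the EMT code; a real implementation would also skip even numbers):

```python
def sieve_bits(N):
    v = bytearray([0xFF]) * (N//8 + 1)   # one bit per number 0..N, all set
    def clear(i): v[i >> 3] &= ~(1 << (i & 7))
    def test(i): return (v[i >> 3] >> (i & 7)) & 1
    clear(0); clear(1)
    k = 2
    while k*k <= N:
        if test(k):
            for m in range(k*k, N + 1, k):
                clear(m)
        k += 1
    return [i for i in range(N + 1) if test(i)]

print(sieve_bits(30))  # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```

This uses one bit instead of one byte (or one double) per number, so the same heap space holds an 8 times larger sieve.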

Povray via Python

In my attempt to learn Python and its mathematical packages, I have translated a library for generating Povray plots in Euler Math Toolbox into Python. The module is not polished yet, but if you are interested, ask in the comments for a Beta version.

Have a look at this tutorial for Euler Math Toolbox to see what can be done with this interface to Povray. The code for the knot in EMT is here.

For a start, let us generate a simple plot with an axis. I am going to explain below what’s happening here.

from povray import *

import numpy as np

povstart()

def f(x,y):
    return x**y

t = np.linspace(0,1,25)
fg = graph(f,t,t,look=look(green))
for i in range(1,4):
    axis(i)
The Povray engine I am using is a version of Povray for Windows (see here). The name of the executable is „povengine.exe“, and it is in the search path of Windows. For the Mac M1, I found a „homebrew“ version of a command line tool named „povray“, which is not fully supported. (These problems are one reason I abandoned Macs for math.) In any case, you can set the correct program to start Povray with „setengine“.

The function „povstart“ opens an output file, by default „povout.pov“ and writes the necessary headers and includes to this file. It does also set the default view to the scene. You can change the light here too.

We then create x,y,z-matrices containing the coordinates of the surface inside the function „graph“. It uses „meshgrid“ internally and assumes that f can be vectorized.

The „graph“ function writes the scene to the file and embeds it in an object of Povray, returning the object. The scene consists of 4 triangles for each rectangle in the grid. The center of gravity is used as one corner in each of the four triangles. Povray needs a list of corners and a list of triangles, and „graph“ writes both lists to the file.
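The triangulation described here can be sketched as follows (a hypothetical helper of my own, not the actual module code):

```python
import numpy as np

def mesh_triangles(x, y, z):
    """Split every grid rectangle into 4 triangles through its center of gravity."""
    tri = []
    n, m = x.shape
    for i in range(n - 1):
        for j in range(m - 1):
            corners = [(x[a, b], y[a, b], z[a, b])
                       for a, b in [(i, j), (i, j+1), (i+1, j+1), (i+1, j)]]
            center = tuple(sum(c[k] for c in corners)/4 for k in range(3))
            for a in range(4):
                tri.append((corners[a], corners[(a+1) % 4], center))
    return tri

x, y = np.meshgrid(np.linspace(0, 1, 3), np.linspace(0, 1, 3))
z = x*y
print(len(mesh_triangles(x, y, z)))  # 16 triangles for a 2x2 grid of rectangles
```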

The intersection() function shows the way this module works: it does not write to the file, but returns a Povray string representing the object, in this case the intersection of two other objects, both represented as strings. The cylinder() function produces one of them; the other is our graph. Here is how the Povray strings typically look.

cylinder { <0,0,0>, <0,0,1>, 1
    texture { pigment { color rgbf <0.470588,0.470588,0.470588> }  }
    finish { ambient 0.2 }
}

That’s one string with lines separated by „\n“.
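A minimal sketch of this string approach (assumed helper names of my own; the actual module may differ):

```python
def vec(p):
    """Format a point as a Povray vector <x,y,z>."""
    return "<" + ",".join(str(c) for c in p) + ">"

def cylinder(p1, p2, r, look=""):
    """Return a Povray string for a cylinder; 'look' holds texture lines."""
    return ("cylinder { " + vec(p1) + ", " + vec(p2) + ", " + str(r) + "\n"
            + look + "}\n")

s = cylinder((0, 0, 0), (0, 0, 1), 1)
print(s)
```

Because every object is just a string, objects compose naturally: an intersection function only needs to wrap two such strings in `intersection { … }`.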

The alternative to this approach was to generate Povray objects and use these objects. But I found the string approach more appealing. However, for the surface, the string would be huge. Thus, the object is written to the file directly.

The axis() function creates an axis, as follows. The texture can be changed using a „look“ parameter for each object.

union {
  cylinder { <-0.1,0,0>,<1.1,0,0>,0.02 }
  cone { ... }
  texture { pigment { color rgbf <0.470588,0.470588,0.470588> } }
}

Now, let us try the knot.

from sympy import *
import numpy as np

t = symbols('t')
fx = 4*(1+sin(3*t)/3)*sin(2*t)
fy = 4*(1+sin(3*t)/3)*cos(2*t)
fz = 2*cos(3*t)

vt = np.linspace(-np.pi,np.pi,1000)
vs = np.linspace(-np.pi,np.pi,160)

f = [fx,fy,fz]
ffunc = lambdify(t,f)
df = [diff(f[k],t) for k in range(3)]
dffunc = lambdify(t,df)

w1 = [df[1],-df[0],0]
w1func = lambdify(t,w1)

def crossproduct (v,w):
    return np.array([v[1]*w[2]-v[2]*w[1],-v[0]*w[2]+v[2]*w[0],v[0]*w[1]-v[1]*w[0]])
def norm(v):
    return np.sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])

def w1n(t):
    h = w1func(t)
    return h/norm(h)
def w2n(t):
    h = crossproduct(dffunc(t),w1func(t))
    return h/norm(h)

def S(t,s):
    return ffunc(t)+np.cos(s)*w1n(t)+np.sin(s)*w2n(t)
x = np.zeros((len(vt),len(vs)))
y = np.zeros((len(vt),len(vs)))
z = np.zeros((len(vt),len(vs)))
for i in range(len(vt)):
    for j in range(len(vs)):
        u = S(vt[i],vs[j])
        x[i,j] = u[0]
        y[i,j] = u[1]
        z[i,j] = u[2]

import povray as pv

surf = pv.xyz_surface(x,y,z,look=pv.look(pv.gray,phong=1))

This needs a lot of explaining. The center curve of the knot is a curve in space parameterized by „fx“, „fy“ and „fz“. I did some tests first to make sure that this is indeed working, using a 3D plot in Matplotlib. The result was the following plot, in a different style from the 3D plots you get in Euler Math Toolbox.

Then I built a tube around that curve. To achieve that, the expression „df“ is a vector representing the derivative of „f“, i.e., the vector pointing in the direction of the curve at each point. „w1“ is then a vector perpendicular to „df“.

To get another perpendicular vector „w2“, I use the cross product of „df“ and „w1“. Both „w1“ and „w2“ are then normalized to length 1; this, however, is already done numerically. To parameterize the tube, I use

\(S(t,s) = f(t) + \cos(s)w_1(t) + \sin(s)w_2(t)\)

This function is mapped over all s and t using a double loop.
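A standalone numeric check that „w1“ is indeed perpendicular to the tangent „df“ (by construction, the dot product vanishes):

```python
import numpy as np
from sympy import symbols, sin, cos, diff, lambdify

t = symbols('t')
f = [4*(1 + sin(3*t)/3)*sin(2*t), 4*(1 + sin(3*t)/3)*cos(2*t), 2*cos(3*t)]
df = [diff(fk, t) for fk in f]
dffunc = lambdify(t, df)
w1 = [df[1], -df[0], 0]          # swap components and negate: perpendicular in x,y
w1func = lambdify(t, w1)

d = np.array(dffunc(0.7))
w = np.array(w1func(0.7))
print(abs(np.dot(d, w)))  # d[0]*d[1] - d[1]*d[0] + 0, zero up to rounding
```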

Sympy and EMT – Comparison

In Euler Math Toolbox, symbolic expressions are seamlessly integrated via the &-syntax. The expressions and functions are handled externally by the CAS Maxima, but mirrors exist on the numerical side of EMT as simple strings or numerical functions.

Let me give an example.

>function fx(t) &= t*cos(2*pi*t)
                             t cos(2 pi t)
>function fy(t) &= t*sin(2*pi*t)
                             t sin(2 pi t)

The functions „fx“ and „fy“ are symbolic functions. I.e., Maxima defines the functions in its system, and they can be used in Maxima.

>&fx(z^2+1)
                        2                 2
                      (z  + 1) cos(2 pi (z  + 1))

But the functions are also defined in EMT as numerical functions. Thus, they can be used like any other numerical function in EMT.

>fx(0:0.1:1)
 [0,  0.0809017,  0.0618034,  -0.0927051,  -0.323607,  -0.5,  -0.48541,
 -0.216312,  0.247214,  0.728115,  1]

Because the functions contain only elementary operations, they can work on vectors too. This is called a „vectorized“ or „mapped“ function. Consequently, they can be called in other functions of EMT. E.g., the „plot2d“ function can plot curves, if two functions (or two expressions) are given.

Continuing our example, we compute the length of the curve using

\(l(f) = \int\limits_0^1 \|f'(t)\| \,dt.\)

>function f(t) &= [fx(t),fy(t)]
                     [t cos(2 pi t), t sin(2 pi t)]
>function df(t) &= diff(f(t),t)
        [cos(2 pi t) - 2 pi t sin(2 pi t), 
                                      sin(2 pi t) + 2 pi t cos(2 pi t)]
>&integrate(norm(df(t)),t,0,1), %()
                                               2
                   asinh(2 pi) + 2 pi sqrt(4 pi  + 1)
                   ----------------------------------
                                  4 pi

 3.38304428550

To compute the length of the curve, we define a vector valued symbolic function „f“. That function does work in Maxima symbolically, or in EMT numerically.

A side remark: if „f“ is applied to a vector instead of a single point, it will compute „fx“ and „fy“ and concatenate these vectors, which is not useful most of the time. In future versions, EMT should be able to generate a matrix of values if a vector-valued function is applied to a vector of values. In our application, we only need „f“ as a symbolic function.

In the same way, we can then define a function „df(t)“ for the derivative vector simply by symbolic differentiation. „df“ will also work symbolically and numerically. The function „norm“ is a pre-defined symbolic function. You can plug in a symbolic vector and get an expression which represents a function of „t“.

                         sqrt(v . transpose(v))
                                               2
        sqrt((cos(2 pi t) - 2 pi t sin(2 pi t))
                                                                     2
                                 + (sin(2 pi t) + 2 pi t cos(2 pi t)) )

We then use the adaptive integral of EMT to integrate the expression „norm(df(t))“ between 0 and 1.

Finally, Maxima can compute the „exact“ integral in symbolic form, which we evaluate numerically to verify our previous result.

How does this all translate to Python?

The first problem is that there are no symbolic functions in Sympy. There are only expressions, which can be handled by the symbolic features of Sympy. And, of course, there is no automatic parallel definition of a numerical function for Numpy as in EMT. To evaluate a symbolic expression at a point, „subs“ can be used, or it can be made a numerical function by „lambdify“.

Here is an example.

import sympy as sp
import numpy as np

t = sp.symbols('t')
fe = t*sp.cos(2*np.pi*t)
f = sp.lambdify(t,fe)

fe.subs(t,0.2)
  -> 0.0618033988749895

As you see, „subs“ just substitutes a value for „t“. This is not going to work if „sp.pi“ is in the expression instead of „np.pi“.

t = sp.symbols('t')
fe = t*sp.cos(2*sp.pi*t)
f = sp.lambdify(t,fe)

fe.subs(t,1)
  -> 1
fe.subs(t,0.2)
  -> 0.2*cos(0.4*pi)

For Sympy, „pi“ is a mathematical variable with some properties and a value. Because „cos(2*pi)“ is known exactly, substituting 1 works as expected. But substituting 0.2 leaves the expression unevaluated, since „cos(0.4*pi)“ has no exact value.

„lambdify“ translates a symbolic expression so that it can be used by Numpy, effectively translating „sp.pi“ to „np.pi“. Thus, „f“ works in both cases.

Back to our example. We use what we learned to trick Python into doing what we want.

import sympy as sp
import numpy as np
import matplotlib.pyplot as plt

t = sp.symbols('t')
fxe = t*sp.cos(2*sp.pi*t)
fye = t*sp.sin(2*sp.pi*t)
fx = sp.lambdify(t,fxe)
fy = sp.lambdify(t,fye)

vt = np.linspace(0,1,400)
x = fx(vt)
y = fy(vt)
fig,ax = plt.subplots(figsize=(8,8))
ax.plot(x,y)

It might be a good idea to define a function „plot_curve“ to produce the plot, like the following.

def plot_curve (fx,fy,xmin,xmax,n=100,figsize=(8,8),r=1):
    vt = np.linspace(xmin,xmax,n)
    x = fx(vt); y = fy(vt)
    fig,ax = plt.subplots(figsize=figsize)
    ax.set_xlim(-r,r); ax.set_ylim(-r,r)
    return ax.plot(x,y)


Such functions can go into a Python module. I already started a module „renestools.py“ with functions I need to make my life in Python easier.

Now, let us compute the length. The integration is done using an adaptive Gauss quadrature in my package. I explained in a previous posting how this works. Note that we need to pass a vectorized function to the integration. „fnorm“ is not vectorized (in contrast to the expression „norm(df(t))“ which we used in EMT). „np.vectorize“ transforms any function of single real values into a vectorized function which will work on Numpy matrices or vectors.
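To illustrate „np.vectorize“ in isolation: for our spiral, the norm of the derivative simplifies by hand to sqrt(1 + 4 pi² t²). Written with „math.sqrt“, the function only accepts single numbers, and „np.vectorize“ lifts it to arrays (a standalone sketch):

```python
import math
import numpy as np

def fnorm_scalar(t):
    # |f'(t)| = sqrt(1 + 4 pi^2 t^2), simplified by hand for the spiral above;
    # math.sqrt only accepts single numbers, so this is not vectorized
    return math.sqrt(1 + 4*math.pi**2*t*t)

fnorm = np.vectorize(fnorm_scalar)
print(fnorm(np.linspace(0, 1, 5)))  # now works on a whole array at once
```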

fe = sp.Matrix([fxe,fye])
fde = sp.diff(fe,t)

def norm(v):
    return np.sqrt(np.sum(v*v))

fd = sp.lambdify(t,fde)
def fnorm(t):
    return norm(fd(t))

import renestools as rt

  --> 3.3830442855028062

To compute the integral in symbolic form, we need to help Sympy with „trigsimp“. We evaluate the result to 20 digits. (We could have done that in Maxima too, by the way.)

res = sp.integrate(sp.trigsimp(sp.sqrt(fde[0]**2+fde[1]**2)),(t,0,1))
res
  -> asinh(2*pi)/(4*pi) + sqrt(1 + 4*pi**2)/2

As you see, Maxima and EMT are much more intuitive than Sympy and Numpy. I say this in the hope that many of you find your way to Euler Math Toolbox and help with its development. Currently, Python has a lot more potentially useful toolboxes than EMT.

Using Python – Introduction for EMT Users

Python is a full, object oriented, interpreted programming language. It has been around for a long time now and enjoys the attention of many good programmers who expanded it with packages. These packages often make use of the C API which lets native programs connect to the Python kernel. Thus, they are much faster and often state of the art. Python itself is more than ten times slower than native code.

We are mainly going to use the following packages.

  • Numpy. To get around the slow loops and array handling in Python, this package provides operations on a matrix level, just like Euler Math Toolbox or Matlab.
  • Sympy. This is a package for symbolic computations. It provides symbolic variables and expressions.
  • Matplotlib. To get beautiful plots in Python, you will have to use this package.
  • Scipy. This is the high-level package for mathematical computations, such as optimization or signal processing.

To install all this, the easiest and best way is Anaconda. It installs the recent Python version (3.10) and the packages mentioned above. Moreover, you get a user interface called IPython or Jupyter Notebooks.

Unfortunately, the Anaconda Navigator seems not to work on M1 processors. But you can open a command line and type

jupyter notebook

The default browser (Safari in my case) will open automatically and display the Jupyter notebook showing the files in your home directory. You can then switch to a subdirectory and create a new one. A notebook has input lines and output lines. You can just start typing in Python commands and execute with Ctrl-Return or Shift-Return, or start the Help to learn about this interface.

For users of Euler Math Toolbox (or Matlab), it is important to know that Python does not apply operators to lists element by element. In fact, using its list type for vectors, or lists of lists for matrices, yields surprising results.

2 * [4,5,6] 
  -> [4, 5, 6, 4, 5, 6]
[4,5,6] + [4,5,6]
  -> [4, 5, 6, 4, 5, 6]
[4,5,6] * [4,5,6]
TypeError: can't multiply sequence by non-int of type 'list'

I have used an indented „->“ to indicate the output of the command. This convention will be used in the future on this blog.

Obviously, the Python folks have used math operators to manipulate lists in „intuitive“ ways.

This can be fixed with Numpy. You get a new data type, the „ndarray“. It is easily created, albeit requiring an additional function call. It provides the matrix functionality that we are used to from matrix languages like Matlab, R or Euler Math Toolbox.

import numpy
v = numpy.array([4,5,6])
2*v
  -> array([ 8, 10, 12])
v + v
  -> array([ 8, 10, 12])
v * v
  -> array([16, 25, 36])

Euler Math Toolbox often demonstrates the value of a matrix language by creating tables of function values y=f(x). Here is a short demonstration and a plot.

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

x = np.arange(0,1,0.01)
y = x**3-x

print(f"Riemann sum: {np.sum(y)*0.01:0.4}")

fig, ax = plt.subplots(figsize=(10,6))
ax.plot(x,y)

  -> Riemann sum: -0.25

As I said, there is some learning curve. Here are some differences to Euler Math Toolbox to start with.

  • You need to import the packages. The „load“ command of EMT is similar. But EMT loads most packages at the start. You can import and use a package under a different, shorter name, as shown in the example.
  • The „arange(a,b,delta)“ function works similarly to the ranges in EMT. But it has the step size as a third parameter. There is also a „linspace(a,b,n)“ function like „equispace“ in EMT. But it generates exactly n points, not n intervals.
  • In the example, an important difference is not visible: Arrays start with index 0 in Python and Numpy. This is a source of much confusion.
  • For the plot, we cannot use a function, but must provide vectors of data.
  • The plot will be inserted automatically into the Jupyter notebook. Of course, the graphics can also be saved to a file. That file will be in the directory of your notebook.
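A minimal sketch of the arange/linspace difference and the zero-based indexing mentioned above:

```python
import numpy as np

x = np.arange(0, 1, 0.25)    # step size as third argument, endpoint excluded
y = np.linspace(0, 1, 5)     # exactly 5 points, endpoint included
print(x)     # -> [0.   0.25 0.5  0.75]
print(y)     # -> [0.   0.25 0.5  0.75 1.  ]
print(y[0])  # indexing starts at 0, so this is the first point 0.0
```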

You will find numerous tutorials on Numpy and Matplotlib, explaining the steps above in much detail. So, I won’t give an introduction in this blog. Rather, my job will be to show you how EMT compares to the things you can do in Python, and to point out the main differences and problems that could occur.

Euler Math Toolbox on Mac M1?

It is not going to happen any time soon. There are several reasons.

  • The user interface would have to be rewritten completely, which isn’t a trivial thing to do. EMT uses the Windows API heavily, and thus a deep knowledge of the target OS is required. Spawning processes with pipes, handling images and files, DLL loading at runtime etc. are not easily translated to Macs, even if you know how to do that in Unix. So, it is not only about the user interface itself.
  • I could adapt a web interface to EMT, like the one that Sage and IPython are using. In fact, someone did a Sage interface for the numerical part of EMT before. This would lead to some restrictions, but might be the best route in the long run. I will show you in the next postings how this works in the case of IPython. A port would require a deeper knowledge of this kind of web service. But that should be manageable.
  • Maxima is currently not available on M1 in precompiled form. In fact, I failed to get it to run following complicated instructions I found on the net. Without the symbolic part, EMT makes much less sense.
  • There is no TinyC compiler. One of the outstanding features of EMT is the option for the user to easily compile critical parts of the code. EMT, Matlab or Python are relatively slow in comparison to native code. I am going to discuss the issue of speed in later postings in more detail.

To replace EMT, your best free option on MacOS is currently Python, together with Numpy, Sympy and Matplotlib. The learning curve is steep, but Python is interactive and can be approached by trial and error. Moreover, it is heavily supported on the net. You will find an answer to almost any question in some forum, web site, or on sites like StackOverflow. I aim to give you some hints in subsequent postings.

Besides Python, there are some more options for MacOS.

  • Geogebra (free). This is meant to be an advanced do-everything calculator for schools. It is not designed to be a scientific tool. The program originated as a program for dynamic geometry, much like my program C.a.R., but less advanced (allow me to add). Geogebra has since gained options for symbolic math and even statistics. I still do not think it is the right tool to start with, although it has built up a big community around the world. It is too restricted in the end.
  • R (free). This is a nice choice and is used on a scientific level in universities. There is a version for Macs with M1. The statistical part is state of the art and can be used on a professional level. The syntax is much like Matlab or Euler Math Toolbox, although specialized for doing statistics and statistical plots. The tool is command-line oriented, but there are GUIs like R-Studio (and web frameworks like Shiny) to allow more comfortable editing, mostly free for academic use.
  • Matlab (payware). This software is most famous for the Simulink toolbox, which allows fast data analysis of analogue input. If you are not using an academic license or participate in the license of your institution, Matlab is rather expensive at around 800€ yearly. Consequently, it is mainly used in industry, where it is paid by the company. I have never really understood why companies trust Matlab so much, and probably they don’t. They might have contracts with third-party developers who use Matlab, but are responsible for the delivered software. Matlab seems to be available in student labs too. But I do not think that is a good idea. It is better to use open and fast software and a good programming language instead.
  • Octave (free). That’s a Matlab clone. It is Gnu software and thus open and free. It is a good choice if you have to learn Matlab. But it does not offer all the toolboxes that Matlab offers.
  • Mathematica (payware). This would be my choice of software if I had to buy one. The alternative is Maple, but it is not yet ready for M1 processors. Mathematica costs 420€ for home use, and 200€ for students, as a one-time payment. It probably does the best all-in-one job for mathematics of those packages. There is also an online version, and many problems can be handled by the free Wolfram Alpha service.

So much for other software. I omitted a lot of smaller packages which are not much more than a calculator or a plotter.

The next postings will be about Python, Numpy, Sympy and Matplotlib, and how they can be used to replace software like Euler Math Toolbox on Mac OS with M1.

Functions of Functions in EMT and Python

Because Euler Math Toolbox is designed to be an interactive way to do mathematics, its numerical algorithms are easy to use, be it the solution of an equation, integration, minimization, or even plotting a function. As a demonstration, and for the adaptation to Python, we find the minimum of a one-dimensional function.

The algorithm used is the Golden Ratio subdivision of an interval. It spares one function evaluation per step, and is guaranteed to yield an inclusion of the minimum for a convex function (assuming exact computations).

Here is an example:

>function f(x) := x^3-x
>xmin = fmin("f",0)
>plot2d("f",0,1); plot2d(xmin,f(xmin),>points,>add):

It is even easier to use an expression. Expressions are strings which are evaluated for the parameters x,y,z (by default), and thus work almost like functions.
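There is no direct equivalent of EMT expressions in Python, but a rough analogue can be sketched with „eval“, evaluating a string for a variable x. This is only a toy illustration of the idea, not a mechanism any of the Python libraries use:

```python
# toy analogue of an EMT expression: a string evaluated for x
expr = "x**3 - x"
f = lambda x: eval(expr)
print(f(2))  # -> 6
```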


The same function can be coded in Python. I did a line-by-line translation of the code of EMT.

import numpy as np

l = (3-np.sqrt(5))/2 # golden cut ratio

def fmin (f,a,b=None,d=0.01,dmax=1e+20,eps=1e-14):
    """ Compute the minimum of a function
    using the Golden Cut.
    a: Start
    b: End of interval or None (end is searched first)
    d: Initial step size
    dmax: Maximal step size to search for b
    eps: Desired Accuracy
    Returns: Point of minimum.
    """
    if b == None:
        x0=a; y0=f(x0)
        x1=x0+d; y1=f(x1)
        if y1>y0: # search in the other direction
            d=-d
            x1=x0+d; y1=f(x1)
        while True:
            if not y1<y0: # function increases: b is found
                b=x1
                break
            x0=x1; x1=x0+d
            y0=y1; y1=f(x1)
            d=2*d # double the step size, up to dmax
            if abs(d)>dmax:
                raise Exception("No interval containing a minimum found")
    x0=a; x3=b
    y0=f(x0); y3=f(x3)
    x1=x0+l*(x3-x0); x2=x3-l*(x3-x0)
    y1=f(x1); y2=f(x2)
    while True:
        if y1>y2: # minimum is in [x1,x3]
            x0=x1; x1=x2; x2=x3-l*(x3-x0)
            y0=y1; y1=y2; y2=f(x2)
        else: # minimum is in [x0,x2]
            x3=x2; x2=x1; x1=x0+l*(x3-x0)
            y3=y2; y2=y1; y1=f(x1)
        if np.abs(x0-x3)<eps:
            return x0

To use this for the example above is easy.

def f(x):
    return x**3-x

fmin(f,0) # -> 0.5773502652484417

Via lambda functions, an expression works just as well.

fmin(lambda x: x**3-x,0)

The main motivation for this posting is a bit more involved: we want to pass additional parameters to the function.

Euler Math Toolbox has two ways of doing this. The first way is via args(). It works as follows.

>function f(x,a) := x^3-a*x
>xmin = fmin("f",0;1)
>xmin = fmin("f",0;1.1)

The parameters after the semicolon are passed from „fmin“ to the function „f“ by calling it as „f(x,args())“.

The second way is with call collections. Those are collections of functions and parameters. They are available for expressions too, but need a named member for the additional parameters.
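Python has a standard-library counterpart that is similar in spirit: „functools.partial“ binds the extra parameters to the function. This is a hedged sketch of that analogy, not the EMT mechanism itself:

```python
from functools import partial

def f(x, a):
    return x**3 - a*x

# bind the extra parameter a, much like a call collection does in EMT
g = partial(f, a=2)
print(g(3))  # -> f(3, 2) = 27 - 6 = 21
```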


To achieve the same in Python, the easiest way is to use lambda functions.

def f(x,a):
    return x**3-a*x

a = 1.1
fmin(lambda x: f(x,a),0) # -> 0.6055300647407058

If that is used within another function, the lambda function will refer to the variable „a“ that is current at that point. You need not worry about a variable „a“ inside the function „fmin“.

Because Python does not separate the additional parameters with a semicolon, the „args“ method feels a bit strange. But it is possible. Have a look at this code.

def h(x,a,b):
    return a*x**2+b

def evalf (f,x,*args):
    return f(x,*args)

evalf(h,3,4,1) # -> 9

The function „evalf“ needs another function as an argument, just as „fmin“. We want to pass more than one argument to this function „f“. If we call „evalf(h,3,4,1)“, those arguments are „(4,1)“. They are passed as a tuple to the variable „args“. Saying „*args“ will expand this tuple and pass „4,1“ to „f“, i.e., to „h“.

I made a private Python module with some functions like „fmin“ where I did not use the „args“ method. The reason is that I want to be able to override named parameters with positional arguments, e.g., saying „fmin(f,a,b)“ with interval bounds, where the default for „b“ is „None“. This is not possible if „*args“ is present.
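The conflict can be seen in a small toy function (hypothetical, just for illustration): a positional argument that is meant as an extra parameter for „f“ silently fills the named parameter „b“ instead.

```python
def demo(f, a, b=None, *args):
    # b can be overridden positionally, but then extra arguments
    # meant for f are shifted into the named parameters
    return (a, b, args)

print(demo(None, 0, 2, 3))  # -> (0, 2, (3,))
```

Here the caller may have intended „2, 3“ as extra arguments for „f“, but „2“ lands in „b“.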

Finally, an application of „fmin“.

We minimize a multidimensional function with gradient search. The search goes down the gradient until a minimum is found, and repeats from that point. The handling of the accuracy during the algorithm needs to be improved for maximal performance. Currently, it computes the accurate minimum in each step.

The method is best used for convex functions, or at least locally convex functions. It is then guaranteed to find a minimum, provided that the set of points where the function is less than the current value is compact.

epsilon = 1e-14

def vmin (f,x,v,tmax=1,tmin=epsilon):
    """ Minimize f(x-t*v), assuming v is a direction of ascent
    (e.g., the gradient).
    Returns: (t,x-t*v) at the minimum.
    """
    t=fmin(lambda t:f(x-t*v),0)
    return (t,x-t*v)

def fminmult (f,Df,x):
    res = 1e20
    while True:
        t,xnew = vmin(f,x,Df(x))
        resnew = f(xnew)
        if resnew>=res: # no more improvement
            break
        res = resnew; x = xnew
    return x

def f(x,a):
    return (x[0]-a)**2+2*a*x[1]**2

def Df(x,a):
    return np.array([2*(x[0]-a),4*a*x[1]])

a = 4
fminmult(lambda x:f(x,a),lambda x:Df(x,a),np.array([1,1]))

# -> array([ 4.00000000e+00, -1.65436123e-24])

The Speed of Python

You should be aware that Python is slow compared to languages like Java (which is compiled to native code at runtime). The factor is always more than 10 and can be more than 100. So be warned!

Often, it is possible to improve the runtime by switching the algorithm. Let us study the Collatz problem as an example. There is no need to explain this problem here. You can easily find a description on Wikipedia. For a start, we want to find the longest sequence for any starting point less than a million, and later up to a hundred million.

The following is a recursive function to get the length of the Collatz sequence starting at n.

def get_collatz_length (n):
    if n==1:
        return 0
    elif n%2:
        return get_collatz_length(3*n+1)+1
    return get_collatz_length(n//2)+1
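A quick sanity check of the convention used here (each step counts one, and the length of 1 is 0); the function is repeated so that the snippet is self-contained:

```python
def get_collatz_length (n):
    # definition from above, repeated for a self-contained snippet
    if n==1:
        return 0
    elif n%2:
        return get_collatz_length(3*n+1)+1
    return get_collatz_length(n//2)+1

print(get_collatz_length(1))  # -> 0
print(get_collatz_length(6))  # 6,3,10,5,16,8,4,2,1 -> 8 steps
```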

To compute the lengths up to N=1’000’000 takes around 20 seconds. I added code to print the starting points with the longest sequences.

import time
N = 1000000
start = time.process_time()
lengths = [(n,get_collatz_length(n)) for n in range(1,N)]
print(f"{time.process_time()-start:5.4} seconds")

records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(0,11):
    print(f"{records[k][0]}: {records[k][1]}")
18.72 seconds
837799: 524
626331: 508
939497: 506
704623: 503
910107: 475
927003: 475
511935: 469
767903: 467
796095: 467
970599: 457
546681: 451

A comparable Java code is the following.

public class Main {
	public static int get_collatz_length (long n) {
		int res = 0;
		if (n>1) {
			if (n%2 == 1)
				res = get_collatz_length(3*n+1)+1;
			else
				res = get_collatz_length(n/2)+1;
		}
		return res;
	}

	public static void main(String[] args) {
		long time = System.currentTimeMillis();
		int N=1000000;
		int L[] = new int[N];
		for (int k=0; k<N; k++)
			L[k] = get_collatz_length(k);
		int max=0,imax=0;
		for (int k=1; k<N; k++) {
			if (L[k]>max) {
				max=L[k]; imax=k;
			}
		}
		System.out.println("Maximal value " + max + " at "+imax);
		time = System.currentTimeMillis()-time;
		System.out.println((time/1000.0) + " seconds.");
	}
}

Maximal value 524 at 837799
0.498 seconds.

This takes about half a second. The factor is about 40.

What we also see is that the Python code looks more elegant and expressive. E.g., sorting the array of lengths and printing the top entries is done with three lines of code. In Java, the same requires writing a Comparator class.

Consequently, I see Python as an interactive script language to do math. If speed is needed, native code must be used, as in Numpy. Unfortunately, the user cannot easily write a function in C or Java.

Can we speed up the process? We can try to avoid repeated computations by remembering the already known lengths in a dictionary. The following code does this.

collatz_lengths = {}

def get_collatz_length (n, nmax):
    res = collatz_lengths.get(n)
    if res is not None: # already known
        return res
    if n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,nmax)+1
    else:
        res = get_collatz_length(n//2,nmax)+1
    if n<nmax:
        collatz_lengths[n] = res
    return res

The storage of the lengths is only done if n is not too high. For the call, we have set this maximal value equal to N=1’000’000. This code reduces the time to about 1 second. We gained a factor of 20 just by this simple trick. Obviously, a good idea can be worth more than compiled code.

Maybe we can do even better if we do not use a dictionary. Instead, we can use a list and store the known lengths in it. Let us try.

def get_collatz_length (n, L, N):
    if n<N and L[n]>=0: # already known
        return L[n]
    if n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,L,N)+1
    else:
        res = get_collatz_length(n//2,L,N)+1
    if n<N:
        L[n] = res
    return res

import time
N = 1000000
start = time.process_time()
L = [-1]*N
lengths = [(n,get_collatz_length(n,L,N)) for n in range(1,N)]
records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(11):
    print(f"{records[k][0]}: {records[k][1]}")
print(f"{time.process_time()-start:5.4} seconds")

This reduces the time only slightly. It is still one second.

However, implementing the same trick in Java, we get the lengths up to one million in 0.033 seconds, i.e., 30 times faster. The reason for this is partly the arbitrary-precision integer type of Python, and partly the use of interpreted code instead of a native array loop. Here is the Java code.

public class Main {
	public static int get_collatz_length (long n, int L[], int N) {
		if (n<N && L[(int)n]>=0)
			return L[(int)n];
		int res = 0;
		if (n>1) {
			if (n%2 == 1)
				res = get_collatz_length(3*n+1,L,N)+1;
			else
				res = get_collatz_length(n/2,L,N)+1;
		}
		if (n<N) L[(int)n] = res;
		return res;
	}

	public static void main(String[] args) {
		long time = System.currentTimeMillis();
		int N=100000000;
		int L[] = new int[N];
		for (int k=0; k<N; k++)
			L[k] = -1;
		for (int k=0; k<N; k++)
			L[k] = get_collatz_length(k,L,N);
		int max=0,imax=0;
		for (int k=1; k<N; k++) {
			if (L[k]>max) {
				max=L[k]; imax=k;
			}
		}
		System.out.println("Maximal value " + max + " at "+imax);
		time = System.currentTimeMillis()-time;
		System.out.println((time/1000.0) + " seconds.");
	}
}

Increasing N to 10’000’000, the Java computation time rises to 0.15 seconds. The cleverer Python code now took 9 seconds, i.e., it was 60 times slower.

Going further to N=100’000’000 made Java run for 1.44 seconds. The more clever code in Python, however, broke down and took more than 2 minutes, again a factor of over 50.

Finally, I wanted a plot of the frequencies of the lengths. I had to write a procedure for the count, not trusting the performance of the Counter class.

lengths1 = [x[1] for x in lengths]
M = max(lengths1)
def get_count (L,M):
    count = [0 for k in range(M)]
    for x in L:
        if x<M:
            count[x] += 1
    return count
count = get_count(lengths1,M+1)
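For comparison, the standard-library „Counter“ computes the same frequencies; whether it is fast enough for 100 million entries I have not measured. A small sketch on toy data:

```python
from collections import Counter

# count how often each value occurs, like get_count above
data = [3, 1, 3, 2, 3, 1]
freq = Counter(data)
print(freq[3])              # -> 3
print(freq.most_common(1))  # -> [(3, 3)]
```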

This allows us to plot the frequencies. The vertical axis is the number of times a length occurs among the first 100 million numbers. The same plot can be found on Wikipedia.

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

fig,ax = plt.subplots(figsize=(14,6))
ax.plot(count)

In summary, Python did well. But it is so flexible that it is not predictable in terms of performance and memory consumption. That is true of many modern software systems, which seem to develop a mind of their own from the user's perspective.