Euler Math Toolbox on Mac M1?

It is not going to happen any time soon. There are several reasons.

  • The user interface would have to be rewritten completely, which isn’t a trivial thing to do. EMT uses the Windows API heavily, and thus a deep knowledge of the target OS is required. Spawning processes with pipes, handling images and files, DLL loading at runtime etc. are not easily translated to Macs, even if you know how to do that in Unix. So, it is not only about the user interface itself.
  • I could adapt a web interface to EMT, like the one that Sage and IPython are using. In fact, someone did a Sage interface for the numerical part of EMT before. This would lead to some restrictions, but might be the best route in the long run. I will show you in the next postings how this works in the case of IPython. A port would require a deeper knowledge of this kind of web service. But that should be manageable.
  • Maxima is currently not available on M1 in precompiled form. In fact, I failed to get it to run following complicated instructions I found on the net. Without the symbolic part, EMT makes much less sense.
  • There is no TinyC compiler. One of the outstanding features of EMT is the option for the user to easily compile critical parts of the code. EMT, Matlab or Python are relatively slow in comparison to native code. I am going to discuss the issue of speed in later postings in more detail.

To replace EMT, your best free option on MacOS is currently Python, together with Numpy, Sympy and Matplotlib. The learning curve is steep, but Python is interactive and can be approached by trial and error. Moreover, it is heavily supported on the net. You will find an answer to almost any question in some forum, web site, or on sites like StackOverflow. I aim to give you some hints in subsequent postings.

Besides Python, there are some more options for MacOS.

  • Geogebra (free). This is meant to be an advanced do-everything calculator for schools. It is not designed to be a scientific tool. The program originated as a program for dynamic geometry, much like my program C.a.R., but less advanced (allow me to add). Geogebra has since gained options for symbolic math and even statistics. I still do not think it is the right tool to start with, although it has built up a big community around the world. It is too restricted in the end.
  • R (free). This is a nice choice and is used at a scientific level in universities. There is a version for Macs with M1. The statistical part is state of the art and can be used on a professional level. The syntax is much like Matlab or Euler Math Toolbox, although specialized for doing statistics and statistical plots. The tool is command-line oriented, but there are front ends like Shiny and R-Studio that allow more comfortable editing, mostly free for academic use.
  • Matlab (payware). This software is most famous for the Simulink toolbox, which allows fast data analysis of analogue input. If you are not using an academic license or participating in the license of your institution, Matlab is rather expensive at around 800€ yearly. Consequently, it is mainly used in industry, where it is paid for by the company. I have never really understood why companies trust Matlab so much, and probably they don’t. They might have contracts with third-party developers who use Matlab, but are responsible for the delivered software. Matlab seems to be available in student labs too. But I do not think that is a good idea. It is better to use open and fast software and a good programming language instead.
  • Octave (free). That’s a Matlab clone. It is Gnu software and thus open and free. It is a good choice if you have to learn Matlab. But it does not offer all the toolboxes that Matlab offers.
  • Mathematica (payware). This would be my choice of software if I had to buy one. The alternative is Maple, but it is not yet fit for M1 processors. Mathematica costs 420€ for home use, and 200€ for students, as a one-time payment. It probably does the best all-in-one job for mathematics of those packages. There is also an online version, and many problems can be handled by the free Wolfram Alpha service.

So much for other software. I omitted a lot of smaller packages which are not much more than a calculator or a plotter.

The next postings will be about Python, Numpy, Sympy and Matplotlib, and how they can be used to replace software like Euler Math Toolbox on Mac OS with M1.

Functions of Functions in EMT and Python

Because Euler Math Toolbox is designed as an interactive way to do mathematics, its algorithms for numerical problems are easy to use: solving an equation, integrating, minimizing, or even plotting a function. For a demonstration and the adaptation to Python, we find the minimum of a one-dimensional function.

The algorithm used is the Golden Ratio subdivision of an interval. It spares one function evaluation per step, and is guaranteed to yield an inclusion of the minimum for a convex function (assuming exact computations).

Here is an example:

>function f(x) := x^3-x
>xmin = fmin("f",0)
 0.577350265248
>plot2d("f",0,1); plot2d(xmin,f(xmin),>points,>add):

It is even easier to use an expression. Expressions are strings which are evaluated for the parameters x,y,z (by default), and thus work almost like functions.

 >fmin("x^3-x",0)
 0.577350265248

The same function can be coded in Python. I did a line-by-line translation of the EMT code.

import numpy as np

def fmin (f,a,b=None,d=0.01,dmax=1e+20,eps=1e-14):
    """ Compute the minimum of a function
    Using the Golden Cut.
    
    a: Start
    b: End of interval or None (end is searched first)
    d: Initial step size
    dmax: Maximal step size to search for b
    eps: Desired Accuracy
    
    Returns: Point of minimum.
    """
    if b is None:
        x0=a; y0=f(x0)
        x1=x0+d; y1=f(x1)
        if y1>y0:
            d=-d
            x1=x0+d; y1=f(x1)
        while True:
            if not y1<y0:
                break
            d=2*d
            x0=x1; x1=x0+d
            y0=y1; y1=f(x1)
            if abs(d)>dmax:
                break
        a=x0-d 
        b=x1;
    x0=a; x3=b
    y0=f(x0); y3=f(x3)
    l=(3-np.sqrt(5))/2
    x1=x0+l*(x3-x0); x2=x3-l*(x3-x0)
    y1=f(x1); y2=f(x2)
    while True:
        if y1>y2:
            x0=x1; x1=x2; x2=x3-l*(x3-x0)
            y0=y1; y1=y2; y2=f(x2);
        else:
            x3=x2; x2=x1; x1=x0+l*(x3-x0)
            y3=y2; y2=y1; y1=f(x1)
        if np.abs(x0-x3)<eps:
            return x0

Using this for the example above is easy.

def f(x):
    return x**3-x

fmin(f,0) # -> 0.5773502652484417

Via lambda functions, an expression works just as well.

fmin(lambda x: x**3-x,0)

The main motivation for this posting is a more complicated case: we want to pass additional parameters to the function.

Euler Math Toolbox has two ways of doing this. The first way is via args(). It works as follows.

>function f(x,a) := x^3-a*x
>xmin = fmin("f",0;1)
 0.577350265248
>xmin = fmin("f",0;1.1)
 0.60553006474

The parameters after the semicolon are passed from „fmin“ to the function „f“ by calling it as „f(x,args())“.

The second way is with call collections. Those are collections of functions and parameters. They are available for expressions too, but need a named member for the additional parameters.

>fmin({{"f",1.1}},0)
 0.60553006474
>fmin({{"x^3-a*x",a=1.1}},0)
 0.60553006474

To achieve the same in Python, the easiest way is to use lambda functions.

def f(x,a):
    return x**3-a*x

a = 1.1
fmin(lambda x: f(x,a),0) # -> 0.6055300647407058

If that is done within another function, the lambda will refer to the current variable „a“. You need not worry about a variable „a“ inside the function „fmin“.
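A small sketch of this capture behavior (the names g and h below are only for this illustration): the lambda looks up „a“ when it is called, not when it is defined, so rebinding „a“ later changes the result; freezing the current value with a default argument avoids that.

a = 1.1
g = lambda x: f(x,a)       # refers to the variable a, not to its current value
a = 2.0
print(g(1))                # uses a = 2.0 now: 1 - 2.0 = -1.0

h = lambda x, a=a: f(x,a)  # freezes the current value of a (here 2.0)
a = 1.1
print(h(1))                # still uses a = 2.0: -1.0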

Because Python does not separate the additional parameters with a semicolon, the „args“ method feels a bit strange. But it is possible. Have a look at this code.

def h(x,a,b):
    return a*x**2+b

def evalf (f,x,*args):
    return f(x,*args)

evalf(h,3,4,1) # -> 37

The function „evalf“ needs another function as an argument, just like „fmin“. We want to pass more than one argument to this function „f“. If we call „evalf(h,3,4,1)“, those arguments are „(4,1)“. They are passed as a tuple to the variable „args“. Saying „*args“ expands this tuple and passes „4,1“ to „f“, i.e. to „h“.

I made a private Python module with some functions like „fmin“ where I did not use the „args“ method. The reason is that I want to override named parameters with positional arguments, e.g., calling „fmin(f,a,b)“ with interval bounds, where the default for „b“ is „None“. This conflicts with „*args“ in the signature, as the sketch below shows.
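To illustrate the conflict (the name fmin2 is hypothetical and only used for this sketch): once „*args“ follows a parameter with a default, an extra argument meant for „f“ is swallowed by that parameter.

def fmin2 (f,a,b=None,*args):
    print("b =",b,"args =",args)

fmin2(lambda x,c: (x-c)**2, 0, 1.1)       # -> b = 1.1 args = ()    (1.1 ends up in b)
fmin2(lambda x,c: (x-c)**2, 0, None, 1.1) # -> b = None args = (1.1,)  (works, but is clumsy)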

Finally, an application of „fmin“.

We minimize a multidimensional function with a gradient search. The search goes down the gradient until a minimum is found, and repeats from that point. The handling of the accuracy during the algorithm needs to be improved for maximal performance. Currently, it computes the line minimum to full accuracy in each step.

The method is best used for convex functions, or at least locally convex functions. It is then guaranteed to find a minimum, provided that the set of points where the function is less than the current value is compact.

epsilon = 1e-14

def vmin (f,x,v,tmax=1,tmin=epsilon):
    """ Minimize f(x-t*v) over t, assuming v is an ascent direction (e.g. the gradient)
    """
    t=fmin(lambda t:f(x-t*v),0)
    return (t,x-t*v)

def fminmult (f,Df,x):
    res = 1e20
    while True:
        t,xnew = vmin(f,x,Df(x))
        resnew = f(xnew)
        if resnew>=res:
            break
        res = resnew; x = xnew
    return x

def f(x,a):
    return (x[0]-a)**2+2*a*x[1]**2

def Df(x,a):
    return np.array([2*(x[0]-a),4*a*x[1]])

a = 4
fminmult(lambda x:f(x,a),lambda x:Df(x,a),np.array([1,1]))

# -> array([ 4.00000000e+00, -1.65436123e-24])
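As a side note (not part of the original posting), SciPy ships ready-made scalar minimizers that can serve as a cross-check for the hand-written fmin. A minimal sketch:

from scipy.optimize import minimize_scalar

# Cross-check with SciPy's scalar minimizer (Brent's method by default).
res = minimize_scalar(lambda x: x**3-1.1*x, bracket=(0,1))
print(res.x)   # should be close to 0.6055...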

The Speed of Python

You should be aware that Python is slow compared to code that runs natively, like Java (which is compiled to native code at runtime). The factor is usually more than 10 and can be more than 100. So be warned!

Often, it is possible to improve the runtime by switching the algorithm. Let us study the Collatz problem as an example. There is no need to explain this problem here; you can easily find a description on Wikipedia. For a start, we want to find the longest sequence for any starting point less than a million, and later for up to a hundred million.

The following is a recursive function to get the length of the Collatz sequence starting at n.

def get_collatz_length (n):
    if n==1:
        return 0
    elif n%2:
        return get_collatz_length(3*n+1)+1
    return get_collatz_length(n//2)+1

Computing the lengths up to N=1’000’000 takes around 20 seconds. We added code to print the starting points with the longest sequences.

import time
start = time.process_time()
N=1_000_000
lengths = [(n,get_collatz_length(n)) for n in range(1,N)]
print(f"{time.process_time()-start:5.4} seconds")

records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(0,11):
    print(f"{records[k][0]}: {records[k][1]}")
18.72 seconds
837799: 524
626331: 508
939497: 506
704623: 503
910107: 475
927003: 475
511935: 469
767903: 467
796095: 467
970599: 457
546681: 451

A comparable Java code is the following.

public class Main 
{
	
	public static int get_collatz_length (long n)
	{
		int res = 0;
		if (n>1)
		{
			if (n%2 == 1)
				res = get_collatz_length(3*n+1)+1;
			else
				res = get_collatz_length(n/2)+1;
		}
		return res;
	}

	public static void main(String[] args) 
	{
		long time = System.currentTimeMillis();
		int N=1000000;
		int L[] = new int[N];
		for (int k=0; k<N; k++)
			L[k] = get_collatz_length(k);
		int max=0,imax=0;
		for (int k=1; k<N; k++)
			if (L[k]>max)
			{
				max=L[k]; imax=k;
			}
		System.out.println("Maximal value " + max + " at "+imax);
		time = System.currentTimeMillis()-time;
		System.out.println((time/1000.0) + " seconds.");
		
	}

}
Maximal value 524 at 837799
0.498 seconds.

This takes about half a second. The factor is 40.

What we also see is that the Python code looks more elegant and powerful. E.g., sorting the array of lengths and printing the highest entries is done in three lines of code. In Java, the same needs noticeably more code, e.g., a Comparator.

Consequently, I see Python as an interactive scripting language for doing math. If speed is needed, native code must be used, as in Numpy. Unfortunately, the user cannot easily write such a function in C or Java.

Can we speed up the process? We could try to avoid repeated computations by remembering the already known lengths in a dictionary. The following code does this.

collatz_lengths = {}

def get_collatz_length (n, nmax):
    res = collatz_lengths.get(n)
    if res is not None:
        return res
    elif n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,nmax)+1
    else: 
        res = get_collatz_length(n//2,nmax)+1
    if n<nmax:
        collatz_lengths[n] = res
    return res

The lengths are stored only if n is not too high. For the call, we set this maximal value to N=1’000’000. This code reduces the time to about 1 second. We gained a factor of 20 just by this simple trick. Obviously, an idea might be worth more than compiled code.
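As a side remark (not used in the measurements above), the Python standard library offers the same memoization trick ready-made via functools.lru_cache. A minimal sketch with a renamed function; note that, unlike the version above, this cache is unbounded and also stores the large intermediate values.

from functools import lru_cache

@lru_cache(maxsize=None)   # remember every length that has been computed
def collatz_length (n):
    if n==1:
        return 0
    if n%2:
        return collatz_length(3*n+1)+1
    return collatz_length(n//2)+1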

Maybe we can do even better if we do not use a dictionary. Instead, we can use a list and store the known lengths in it. Let us try.

def get_collatz_length (n, L, N):
    if n<N and L[n]>=0:
        return L[n]
    if n==1:
        res = 0
    elif n%2:
        res = get_collatz_length(3*n+1,L,N)+1
    else: 
        res = get_collatz_length(n//2,L,N)+1
    if n<N:
        L[n]=res
    return res

import time
start = time.process_time()
N=1_000_000
L = [-1]*N   # all lengths unknown initially
lengths = [(n,get_collatz_length(n,L,N)) for n in range(1,N)]
records = sorted(lengths,key=lambda x:x[1],reverse=True)
for k in range(11):
    print(f"{records[k][0]}: {records[k][1]}")
print(f"{time.process_time()-start:5.4} seconds")

This reduces the time only slightly. It still takes about one second.

However, implementing the same trick in Java, we get the lengths up to one million in 0.033 seconds, i.e., 30 times faster. The reason for this is partly the arbitrary-precision integer type of Python, and also the use of a dictionary instead of an array. Here is the Java code.

public class Main 
{
	
	public static int get_collatz_length (long n, int L[], int N)
	{
		if (n<N && L[(int)n]>=0)
			return L[(int)n];
		int res = 0;
		if (n>1)
		{
			if (n%2 == 1)
				res = get_collatz_length(3*n+1,L,N)+1;
			else
				res = get_collatz_length(n/2,L,N)+1;
		}
		if (n<N) L[(int)n] = res;
		return res;
	}

	public static void main(String[] args) 
	{
		long time = System.currentTimeMillis();
		int N=100000000;
		int L[] = new int[N];
		for (int k=0; k<N; k++)
			L[k] = -1;
		for (int k=0; k<N; k++)
			L[k] = get_collatz_length(k,L,N);
		int max=0,imax=0;
		for (int k=1; k<N; k++)
			if (L[k]>max)
			{
				max=L[k]; imax=k;
			}
		System.out.println("Maximal value " + max + " at "+imax);
		time = System.currentTimeMillis()-time;
		System.out.println((time/1000.0) + " seconds.");
		
	}

}

Increasing N to 10’000’000, the Java computation time rises to 0.15 seconds. The cleverer Python code now took 9 seconds, i.e., it was 60 times slower than Java.

Going further to N=100’000’000 made Java run for 1.44 seconds. The cleverer Python code, however, broke down and took more than 2 minutes, again a factor of over 50.

Finally, I wanted a plot of the frequencies of the lengths. I had to write a procedure for the count, not trusting the performance of the Counter class.

lengths1 = [x[1] for x in lengths]
M = max(lengths1)
def get_count (L,M):
    count = [0 for k in range(M)]
    for x in L:
        if x<M:
            count[x] += 1
    return count
count = get_count(lengths1,M+1)

This allows us to plot the frequencies. The vertical axis is the number of times a length occurs among the first 100 million numbers. The same plot can be found on Wikipedia.

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

fig,ax = plt.subplots(figsize=(14,6))
ax.grid(True)
ax.plot(count);

In summary, Python did well. But it seems to be so flexible that it is not predictable in terms of performance and memory consumption. That is so with many modern software systems, which develop a mind of their own from the user’s perspective.

Adaptive Integral in Python

Continuing my efforts to learn Python for math, I wrote a function for adaptive integration. It uses 10-point Gauss quadrature. To check the accuracy, the algorithm uses a bipartition of the interval. So each step takes 30 evaluations of the function: 10 for the complete step and 20 for the two halves used as a check. If the accuracy is not high enough, the step size will be decreased. Otherwise, the function will try to double the step size in the next step.

Here is the complete code.

def legendre (n):
    """ Get the Legendre Polynomials
    
    n : maximal degree
    
    Return: list of Legendre polynomials
    """
    p0 = np.poly1d([1])
    if n<=0:
        return [p0]
    p1 = np.poly1d([1,0])
    px = p1
    if n==1:
        return [p0,p1]
    v=[p0,p1]
    for i in range(1,n):
        p = ((2*i+1)*px*p1 - i*p0) / (i+1)
        p0 = p1
        p1 = p
        v.append(p1)
    return v

def make_gauss(n):
    """ Compute the Coefficiens for Gauss Integration
    
    n : Number of points
    
    return: x,a
    x : Gauss points
    a : Gauss coefficients
    """
    v = legendre(n)
    x = np.roots(v[n])
    A = np.array([v[i](x) for i in range(n)])
    w = np.zeros(n)
    for i in range(n):
        p = np.polyint(v[i])
        w[i] = p(1)-p(-1)
    return x,np.linalg.solve(A,w)

gauss_x5,gauss_a5 = make_gauss(5)

def gauss5 (f,a,b):
    """ Gauss Integrate with 5 points
    
    f : function of one variable
    a,b : Interval bounds
    
    returns: Integral
    """
    return np.sum(f((a+b)/2+(b-a)/2*gauss_x5)*gauss_a5)*(b-a)/2

gauss_x10,gauss_a10 = make_gauss(10)

def gauss10 (f,a,b):
    """ Gauss Integrate with 10 points
    
    f : function of one variable
    a,b : Interval bounds
    
    returns: Integral
    """
    return np.sum(f((a+b)/2+(b-a)/2*gauss_x10)*gauss_a10)*(b-a)/2

def integrate (f,a,b,n=1,eps=1e-14):
    """ Adaptive Integral using 10 point Gauss
    
    f : function of one variable
    a,b : Interval bounds
    n=1 : Initial step size is (b-a)/n
    eps : Desired absolute accuracy
    
    returns: Integral
    """
    h = (b-a)/n
    x = a
    I = gauss10(f,x,x+h)
    I1 = gauss10(f,x,x+h/2)
    I2 = gauss10(f,x+h/2,x+h)
    res = 0.
    while x<b-eps:
        ## print(h)
        if np.abs(I-(I1+I2))>h/(b-a)*eps:
            h = h/2
            I = I1;
            I1 = gauss10(f,x,x+h/2)
            I2 = gauss10(f,x+h/2,x+h)
        else:
            x = x+h
            h=2*h
            if x+h>=b-eps:
                h=b-x
            res = res+I1+I2
            I = gauss10(f,x,x+h)
            I1 = gauss10(f,x,x+h/2)
            I2 = gauss10(f,x+h/2,x+h)
    return res

This algorithm is usually fast enough and very accurate. The implementation is by no means optimal. E.g., if a factor of 2 is used in the step size, a lot of computations are redone. Moreover, it might not be optimal to try to double the step size in each step. However, it works for interactive use, and that is what it is intended for.

Let me try to explain the code a bit. The function legendre() computes the Legendre polynomials using the recursion formula. These polynomials are orthogonal with respect to the weight w=1. The function returns a list of the polynomials. It makes use of the polynomial handling in Numpy (the older poly1d interface).

The function make_gauss() computes the zeros of the Legendre polynomials using a function from Python, and coefficients a[k], such that the following is exact for all polynomials p up to degree n-1:

\(\sum_{k=1}^n a_k p(x_k) = \int\limits_{-1}^1 p(t) \,dt\)

It sets up n linear equations from this requirement, using the Legendre polynomials we just computed as test cases, and solves the equations. To compute the exact integrals of the polynomials, Numpy provides polyint().
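As a cross-check (not part of the original posting), Numpy also provides the Gauss-Legendre nodes and weights directly via numpy.polynomial.legendre.leggauss. A short sketch comparing it with make_gauss():

import numpy as np

x10,a10 = make_gauss(10)
x10 = np.real(x10)                          # np.roots may return a complex dtype
xs,ws = np.polynomial.legendre.leggauss(10)
i = np.argsort(x10)                         # leggauss returns the nodes in ascending order
print(np.allclose(x10[i],xs), np.allclose(a10[i],ws))   # both should print True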

Now, we can define the functions for Gauss integration with 5 and 10 points, and use it for our adaptive integration.

For the demonstration, I created a very easy to use function fplot() which plots a function adaptively using the code of the last posting.

def fplot (f,a,b,n=100,nmax=100000):
    """ Simple plot of a function
    
    a,b : interval bounds
    n=100 : initial step size (b-a)/n
    nmax=100000 : minimal step size (b-a)/nmax
    
    """
    x,y = adaptive_ev(f,a,b,n,nmax)
    fig,ax = plt.subplots(figsize=(8,8))
    ax.grid(True)
    ax.plot(x,y)

I have now put all this and the code of the last posting into a Python module. When using the Jupyter notebook interface in a browser, the file should be placed in the same directory as the „.ipynb“ file. The Python file can even be opened and edited in the notebook IDE.

Then it can be used as follows.

import renestools as rt
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def f(x):
    return np.abs(x)

rt.fplot(f,-np.sqrt(2),1,n=1000)
print(rt.integrate(f,-np.sqrt(2),1))
1.5

This is not easy to integrate numerically, since the function is not smooth. The result is exact to the last digit. Even the plot is not trivial. With the arbitrary step size, the point x=0 might not be met exactly, and the corner may look round.

Here is another example using the Gaussian distribution. All digits agree. Note that the Gauss function is almost zero for large x.

def f(x):
    return np.exp(-x**2)

print(rt.integrate(f,-20,20))
print(np.sqrt(np.pi))

1.7724538509055159
1.7724538509055159

You can even turn the integral into a function and plot it adaptively.

def F(x):
    return rt.integrate(f,0,x)/np.sqrt(np.pi)

rt.fplot(F,0,5)

Adaptive Plots in Matplotlib

I could not find a solution for adaptive plots in Matplotlib. So, I wrote a small script that can evaluate a function in adaptive step sizes. „Adaptive“ means that the step size adapts to the output of the plot. This is to prevent coarse line segments with corners in the plot.

Here is the simple code for a function of one variable.

import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np

def adaptive_ev (f,a,b,n=100):
    xc = a
    yc = f(xc)
    x = [xc]
    y = [yc]
    h = (b-a)/n
    uplim = (2*h)**2
    lowlim = uplim/6
    eps = (b-a)/100000.
    while xc<b:
        xcnew=xc+h
        if xcnew>=b:
            xcnew=b
        ycnew=f(xcnew)
        d = h**2+(ycnew-yc)**2 
        if d>uplim and h>eps:
            h = h/2
        else:
            if d<lowlim/3:
                h = 2*h
            xc = xcnew
            yc = ycnew
            x.append(xc)
            y.append(yc)
    return np.array(x),np.array(y)

def func(x):
    return np.sin(1/x)

x,y=adaptive_ev(func,0.001,1)
print(x.size,' points generated')
fig,ax = plt.subplots(figsize=(14,10))
ax.grid(True)
ax.plot(x,y);

To apply the function adaptive_ev(), you need a function of one variable, an interval and the target number of plot points. The function will double or halve the step size as needed to keep the output under control.

This is a screenshot of the graph generated by the code above. The function is difficult to plot near 0. Thus, adaptive evaluation is the only chance to do that. The routine generated 4270 data points to create this plot.

Euler Math Toolbox has the adaptive evaluation built into the plot2d() routine. So, a similar code would look like this.

>aspect(1.2); plot2d("sin(1/x)",0.001,1,thickness=1.5,color=blue):

Admittedly, the anti-aliasing of Matplotlib is outstanding. I like this library a lot and recommend using it.

I also did a routine for curves in the plane, which are parameterized by two functions for the x- and y-coordinate.

def adaptive_ev2 (fx,fy,a,b,size=1,n=1000):
    tc = a
    xc = fx(tc)
    yc = fy(tc)
    x = [xc]
    y = [yc]
    h = size/n
    uplim = (2*h)**2
    lowlim = uplim/6
    eps = size/100000.
    while tc<b:
        tcnew=tc+h
        if tcnew>=b:
            tcnew=b
        xcnew=fx(tcnew)
        ycnew=fy(tcnew)
        d = (xcnew-xc)**2+(ycnew-yc)**2 
        if d>uplim and h>eps:
            h = h/2
        else:
            if d<lowlim/3:
                h = 2*h
            xc = xcnew
            yc = ycnew
            tc = tcnew
            x.append(xc)
            y.append(yc)
        ## print((tc,xc,yc,h,d))
    return np.array(x),np.array(y)

def funcx(t):
    return np.exp(-t/10)*np.cos(t)
def funcy(t):
    return np.exp(-t/10)*np.sin(t)

x,y=adaptive_ev2(funcx,funcy,0,40*np.pi)
print(x.size,' points generated')
fig,ax = plt.subplots(figsize=(10,10))
ax.grid(True)
ax.set_xlim(-1,1)
ax.set_ylim(-1,1)
ax.plot(x,y);

The code follows the same ideas. However, it uses a plot size to set the dimensions of the curve in the plane. You only need a rough guess for this size. The step size will be approximately 1/1000 of this size.

Close to 0, the plot should avoid generating too many points, but it should still look smooth. This plot generates 14509 points, about 14 times the roughly 1000 points one would expect for the given size of 1. But one should be aware that the curve winds 20 times around the origin!

Euler Math Toolbox does the same, if two functions are fed into plot2d(). It uses a function adaptive() to compute the data points, which works for vector valued functions. This function could be ported to Python easily.

function adaptive (f$: call, a: number, b: number, eps,  ..
dmin, dmax, dinit, factor)

## Default for eps : 0.01
## Default for dmin : 1e-05
## Default for dmax : 100000
## Default for dinit : 0.1
## Default for factor : 2

    sn=sign(b-a);
    t=a;
    sy=f$(t;args());
    if sn==0 then return {a,sy}; endif;
    if cols(sy)>1 then 
        error(f$|" does not produce a real or column vector"); 
    endif
    x=zeros(1,1000); y=zeros(rows(sy),1000); 
    n=1; h=dinit;
    x[:,1]=t;
    y[:,1]=sy;
    repeat
        repeat
            tn=t+sn*h;
            if sn*(tn-b)>=0 then tn=b; endif;
            syn=f$(tn;args());
            d=sqrt(colsum(abs(syn-sy)^2));
            if isNAN(d) then break; endif;
            if d>eps and h>dmin then h=max(h/factor,dmin);
            elseif d<eps/factor then h=min(h*factor,dmax); break;
            else break;
            endif;
        end;
        n=n+1;
        if n>cols(x) then
            x=x|x;
            y=y|y;
        endif;
        x[:,n]=tn; y[:,n]=syn;
        sy=syn;
        t=tn;
        if sn*(t-b)>=0 then break; endif;
    end;
    return {x[:,1:n],y[:,1:n]};
endfunction

I find Python very inspiring for mathematics. The next step will be to write an adaptive integration using Gauss formulas. Stand by.

EMT, Python, Numpy, Sympy on Macs with M1

I am currently using an 11′ Mac Air Notebook with the M1 processor as my laptop. We have to admit that Apple has made a big step forward again with this architecture. I never used Macs before because they were expensive and did not offer enough benefit to justify leaving Windows. Admittedly, they always had a much better user experience than Windows, let alone Linux. What works, does work well! But I have been judging the systems by more technical features, like processor power, ease of administrator work, and support for programmers. And in that respect, Macs were a step back. So, I could never justify using a system that serves me worse at a higher cost.

Now, things have changed with the M1 architecture. The laptop I am using is a bargain for what it offers. The processor is very fast, the display excellent, and the battery life extraordinary. The build quality is far better than any notebook I had before, with a rugged, indestructible case that still feels light and slim. For my retirement, that is precisely what I need.

The purchase of this laptop also gives me the chance to test my math programs on that operating system. There is a Windows emulation for Macs, but it does not work well with the M1 processor. So, a program like Euler Math Toolbox would have to be ported, which is a lot of work, most likely not worth the effort in the end. I am going to produce a simplified version nevertheless.

Java is working well, and that gave me the chance to fix my program C.a.R. (Compass and Ruler) for Macs. Indeed, there were some annoying quirks that needed to be fixed. It is now running very well. (By the way, the same applies to my editor, JE, which I still use.)

Until I have a working version, Euler Math Toolbox needs to be replaced by other, free tools on Macs. For the numerical side, there is Scilab. It has a working version for Macs, and I found no obvious flaw. For the symbolic side, there is no easy free solution. Maxima can only be installed in a very complicated way using a sort of emulator, and I still have to explore this method. More on this on another day.

But, there is Python. We have a numerical matrix library „numpy“ and a symbolic library „sympy“. There is also a very nice plot library „matplotlib“. All those are not as easy to use as Euler Math Toolbox. I’d say, they are clumsy in comparison. Nevertheless, I am going to explore these libraries and compare to EMT in a series of postings on this blog.

You can follow me without a Mac. So let us start with an example which requires symbolic computations. We want to compute the polynomial of degree n+1 with the property

\(p_{n+1}(x)-p_{n+1}(x-1)=x^n, \quad p_{n+1}(0)=0\)

for all real x and natural n (investigated by Bernoulli and Euler). By this equation and induction, the values of the polynomial are determined at all natural numbers x. Thus, the polynomial is uniquely determined if it exists. And it does indeed exist: you just need to interpolate the required values at the first n+2 natural numbers, and since the equation holds there, it holds everywhere.

We now have

\(p_{n+1}(N) = \sum\limits_{k=1}^N k^n.\)

We could use interpolation to determine the polynomials. It is an exercise to do this in any symbolic program. For a start, we use Maxima in Euler Math Toolbox to determine the case n=3.

>p &= sum(a[k]*x^k,k,0,4)
 
                       4       3       2
                   a  x  + a  x  + a  x  + a  x + a
                    4       3       2       1      0
 
>equ &= makelist((p with [x=k]) = sum(j^3,j,1,k),k,1,5)
 
        [a  + a  + a  + a  + a  = 1, 
          4    3    2    1    0
 16 a  + 8 a  + 4 a  + 2 a  + a  = 9, 
     4      3      2      1    0
 81 a  + 27 a  + 9 a  + 3 a  + a  = 36, 
     4       3      2      1    0
 256 a  + 64 a  + 16 a  + 4 a  + a  = 100, 
      4       3       2      1    0
 625 a  + 125 a  + 25 a  + 5 a  + a  = 225]
      4        3       2      1    0
 
>&factor(p with solve(equ)[1])

 
                               2        2
                              x  (x + 1)
                              -----------
                                   4

We have created a polynomial with indexed parameters using the sum() function of Maxima. The command makelist() then creates a list of equations which our polynomial has to satisfy (due to the sum requirement above). This can be solved with solve(). We use the first solution to substitute the values for a[k] back into the polynomial and factor the result. We get the well-known formula for the sum of the cubes of the first x integers.

Let us switch to Python. In Windows, you need to install „Anaconda“ for the following. Then start the „Anaconda Navigator“ and „jupyter notebook“. There you enter the following.

from sympy import *
x, k = symbols('x k')
a = IndexedBase('a')
p = sum([a[k]*x**k for k in range(5)])
print(p)
equ = [Eq(p.subs(x,k),sum([l**3 for l in range(1,k+1)])) for k in range(1,6)]
print(equ)
sol = solve(equ)
print(sol)
factor(p.subs([(a[k],sol[a[k]]) for k in range(5)]))

This is more difficult to understand than the EMT code above, but it follows the same ideas.

After importing everything from sympy, we need to define the symbols x and k, and the indexed symbol a[]. Then we use the sum function of Python on a list. The result is

x**4*a[4] + x**3*a[3] + x**2*a[2] + x*a[1] + a[0]

After that, we create a list of equations, just as in the Maxima code above. The problem is that we cannot use „==“. This would test the two sides for equality and evaluate to true or false. We need to use „Eq(a,b)“ instead.

[Eq(a[0] + a[1] + a[2] + a[3] + a[4], 1), Eq(a[0] + 2*a[1] + 4*a[2] + 8*a[3] + 16*a[4], 9), Eq(a[0] + 3*a[1] + 9*a[2] + 27*a[3] + 81*a[4], 36), Eq(a[0] + 4*a[1] + 16*a[2] + 64*a[3] + 256*a[4], 100), Eq(a[0] + 5*a[1] + 25*a[2] + 125*a[3] + 625*a[4], 225)]

Then we solve this system of equations, which yields a dictionary with the solutions. Dictionaries are Python structures that map keys to values. They print as follows.

{a[0]: 0, a[1]: 0, a[2]: 1/4, a[3]: 1/2, a[4]: 1/4}

We now use the subs() function to substitute the values of the parameters into the polynomial p and factor the result. In a jupyter notebook, it appears in pretty format automatically. Here is a screenshot.

So much for a start of the sequence of postings. Clearly, Euler Math Toolbox and Maxima are easier to use. But Python is available for Macs. On Linux, you can use EMT quite nicely with the modern Wine emulation. But you could also use Python.

Finally, as a mathematical side remark, you could differentiate the defining equation for our polynomials and get

\(p_{n+1}'(x)-p_{n+1}'(x-1) = n x^{n-1}\)

Since our polynomials are uniquely defined by the equation, we get

\(p_n(x) = \dfrac{1}{n} (p_{n+1}'(x) -p_{n+1}'(0))\)

On the right hand side, we have a polynomial which satisfies the equation and has the property p(0)=0. We can use this to compute the next polynomial.

\(p_{n+1}(x) = n \int\limits_0^x p_n(t) \,dt + c x\)

To determine the constant c, we use p_{n+1}(1)=1 for our polynomials. Thus

\( p_{n+1}(x) = n \int\limits_0^x p_n(t) \,dt + (1- n\int\limits_0^1 p_n(t) \,dt) x\)

Let us make a function in Euler Math Toolbox from this formula. The function will call Maxima several times, and thus not be fast.

>function p(n) ...
$pn &= "x";
$if n>1 then
$  for k=1 to n-1;
$    q &= "subst(t,x,pn)";
$    pn &= "@k*integrate(q,t,0,x)+(1-@k*integrate(q,t,0,1))*x";
$  end;
$endif;
$return &pn;
$endfunction
>p(4); &factor(%)
 
                               2        2
                              x  (x + 1)
                              -----------
                                   4

In Euler functions, symbolic expressions need to be written as strings in quotes. Moreover, you can insert numerical values into symbolic expressions via @…, as with @k above.

Of course, loops like this can also be programmed directly in Maxima. You will have to learn the syntax of Maxima for this.

Here is a solution in Python.

def p(n):
    t,x = symbols('t x')
    p = x
    if n<=1:
        return p
    for i in range(1,n):
        q = p.subs(x,t)
        p = i*integrate(q,(t,0,x)) + (1-i*integrate(q,(t,0,1)))*x
    return p
factor(p(4))

This is a good start.

But it turns out that the Anaconda Navigator does not run on Macs with M1. It looks like it needs Rosetta 2. Usually, a program asks if Rosetta should be installed, but Anaconda didn’t on my system.

After installing Anaconda nevertheless, you have a Python 3.9 environment. So you can run „python“ to start the Python interpreter, and „ipython“ to start the advanced Python terminal. For this test, I saved the following to a file „test.py“ using an editor (I am using my JE, of course).

from sympy import *

def p(n):
    t,x = symbols('t x')
    p = x
    if n<=1:
        return p
    for i in range(1,n):
        q = p.subs(x,t)
        p = i*integrate(q,(t,0,x)) + (1-i*integrate(q,(t,0,1)))*x
    return p
    
pprint(factor(p(10)))

Then running „python test.py“ will run this script and output the result in pretty format.

 2        2 ⎛ 2        ⎞ ⎛   4      3    2          ⎞
x ⋅(x + 1) ⋅⎝x  + x - 1⎠⋅⎝2⋅x  + 4⋅x  - x  - 3⋅x + 3⎠
─────────────────────────────────────────────────────
                          20    

You can also try to run „jupyter notebook“ from the terminal. If things go well, your browser will open and you can start a new notebook by clicking on „New“. Use Option+Return to run any command.

Cambridge Entry Examination

I recently came across a video posing the problem of finding the sum of the solutions of

\(3^x - \sqrt{3}^{(x+4)} + 20 = 0\)

Anyone who is familiar with this kind of computation probably sees the trick. You can set

\(y = \sqrt{3}^x,\)

and get the equivalent equation

\(y^2-9y+20=0.\)

Solving this, you get

\(y = \sqrt{3}^x = 3^{x/2} = \dfrac{9 \pm 1}{2}.\)

Thus

\(x_1+x_2 = 2 \log_3(20) = \dfrac{2 \ln(20)}{\ln(3)}.\)

That’s not very exciting.

But why are they asking for the sum of the solutions? If you hear that, you might think of Vieta’s formulas: the sum and the product of the solutions of a quadratic equation can be read off its coefficients. Using this, you get

\(y_1y_2 = \sqrt{3}^{(x_1+x_2)} = 20\)

immediately.

That’s where I decided to write about this problem here. E.g., you might change the original problem to

\(3^x - \sqrt{3}^{(x+4)} + 9 = 0\)

and get the much more pleasing solution

\(x_1 + x_2 = 2 \log_3(9) = 4.\)

Nice trick! But it is dangerous. Applying the trick to

\(3^x - \sqrt{3}^{(x+4)} + 81 = 0\)

yields the false sum 8.

In fact, this equation does not have a solution at all! Unless you allow complex numbers, and are careful with complex logarithms. In that case, you get multiple solutions.
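A quick check of this claim with Sympy (a sketch, not from the original post): after the substitution y = √3^x, the last equation becomes y²-9y+81 = 0, which has no real solutions at all.

from sympy import symbols, solveset, S

y = symbols('y')
print(solveset(y**2-9*y+81, y, domain=S.Reals))   # -> EmptySet, so no real x exists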

So be careful with tricks!

Integrals and Monte Carlo Methods

I am teaching surface integrals right now. Searching for some interesting applications away from physics, I came up with mean values along surfaces. After all, mean or „expected“ values are a central part of probability theory. It is tempting to try to verify the analytic results by Monte Carlo methods. This can be done nicely in Euler Math Toolbox.

E.g., an integral on a finite interval [a,b] can be simulated this way. We generate random numbers in [a,b] uniformly, and compute the mean of the values of the function. The expected mean value is the integral of the function divided by the length of the interval.

>&integrate(log(x),x,0,2), %()
 
                              2 log(2) - 2
 
 -0.61370563888
>N = 10 000 000;
>x = random(N)*2;
>mean(log(x))*2
 -0.61394326247

Syntax notes for EMT: The blanks in integer numbers can be generated by pressing Ctrl-Space. This generates a hard space which is skipped when scanning the number. The random() function generates uniform numbers in [0,1], which need to be scaled properly to fall into [0,2].

The accuracy of these Monte Carlo methods is of the order 1/√N. So, we cannot expect more than 4 digits, even with ten million samples.
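For comparison (not part of the original EMT session), the same experiment can be written in Numpy.

import numpy as np

N = 10_000_000
x = 2*np.random.random(N)       # uniform numbers in [0,2)
print(np.mean(np.log(x))*2)     # close to 2*log(2)-2 = -0.6137...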

As another example, the area of the circle can be estimated by throwing random points into the square [-1,1]² and counting the number of hits inside the unit circle. We expect a proportion of pi/4 there.

>N = 10 000 000;
>x = 2*random(N)-1; y=2*random(N)-1;
>sum(x^2+y^2<=1)/N
 0.7854814
>pi/4
 0.785398163397

Syntax notes for EMT: Applying the boolean „≤“ to a vector generates a vector of zeros and ones. Summing this vector up counts the „true“ cases.

This rejection method is also a simple way to create random numbers inside the unit circle. It is not as bad as it sounds, since about 78% of the generated pairs (a fraction of pi/4) land inside the circle.

For a further test, let us compute the integral of r^2 on the unit disk (which is proportional to the energy in a rotating disk). The integral is the mean value of r^2 on the disk times the area of the unit disk.

>N = 10 000 000;
>x = 2*random(N)-1; y=2*random(N)-1;
>r = sqrt(x^2+y^2);
>i = nonzeros(r<=1);
>mean(r[i]^2)*pi
 1.57088571913
>&2*pi*integrate(r^3,r,0,1), %()
 
                                   pi
                                   --
                                   2
 
 1.57079632679

Syntax notes for EMT: The nonzeros() function generates a vector of indices of nonzero elements. We can use that vector to select the hits from the vector r.

For the correct integral, we used the formula for rotationally invariant functions on disks.

As a side remark, there is a direct method to generate random numbers inside the unit circle. For this, we need to select a random angle between 0 and 2pi. The radius must be selected in proportion to the circumference of the circle with this radius, i.e. with density proportional to r for 0≤r≤1. The following does the trick. We explain it after the code.

>function circrandom (N) ...
$  t = random(N)*2*pi;
$  r = sqrt(1-random(N));
$  return {r*cos(t),r*sin(t)}
$endfunction
>{x,y} = circrandom(10000);
>plot2d(x,y,>points,style=".",r=1.2):

This will generate the following points, which look quite nicely distributed in the unit circle.

Why does this work? The reason is that any probability distribution can be simulated by inverting its cumulative distribution function, in this case

\(F(R) = p(r \ge R) = \int\limits_R^1 2r \, dr = 1-R^2\)

Here, we take 2r because this generates a probability distribution with density proportional to r. Then we create a random number between 0 and 1 and apply the inverted function.
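A direct translation of circrandom() to Numpy (a sketch, not from the original post) uses the same inversion of F:

import numpy as np

def circrandom (N):
    # uniform points in the unit circle via inverse transform sampling of the radius
    t = 2*np.pi*np.random.random(N)
    r = np.sqrt(1-np.random.random(N))   # inverts F(R) = 1-R^2
    return r*np.cos(t), r*np.sin(t)

x,y = circrandom(10000)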

Using this trick, we can do another Monte Carlo simulation of the integral.

>{x,y} = circrandom(N);
>r = sqrt(x^2+y^2);
>i = nonzeros(r<=1);
>mean(r[i]^2)*pi
 1.57153749805

Syntax notes on EMT: Note that circrandom() returns two values. It is possible to assign these two values to two variables.

Finally, we try an integral on a surface. We want to know the mean geographical latitude of a point on the Northern Hemisphere.

We try a simple rejection method. For this, we generate random points in the cube [-1,1]³ and reject those with z<0, r>1 or r<1/2. The remaining points, projected to the surface, are uniformly distributed on the Northern Hemisphere.

>N = 10 000 000;
>x = random(N)*2-1; y = random(N)*2-1; z=random(N)*2-1;
>r = sqrt(x^2+y^2+z^2);
>i = nonzeros(z>0 && r<1 && r>1/2);
>mean(asin(z[i]/r[i]))
 0.570688989539
>&2*pi*integrate(asin(z),z,0,1) / (2*pi) | factor, ...
>   %(), %->"°"
 
                                 pi - 2
                                 ------
                                   2
 
 0.570796326795
 32.7042204869°
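For comparison (not part of the original EMT session), the same rejection experiment in Numpy, with a smaller sample:

import numpy as np

N = 1_000_000
x,y,z = (2*np.random.random(N)-1 for _ in range(3))
r = np.sqrt(x**2+y**2+z**2)
i = (z>0) & (r<1) & (r>0.5)             # keep points in the upper half of the spherical shell
print(np.mean(np.arcsin(z[i]/r[i])))    # close to (pi-2)/2 = 0.5708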

The analytic surface integral is done by parameterizing the surface using

\(g(z,t) = [\sqrt{1-z^2}\cos(t),\sqrt{1-z^2} \sin(t),z]\)

We can compute the Gram determinant for this parameterization, which is actually 1 (or use a geometric reasoning with the same result). Thus, for any function f(x,y,z) = h(z) on the hemisphere H which depends on the height z only, we get

\(\int_H f(x,y,z) \,dS = 2 \pi \int\limits_0^1h(z) \,dz\)
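This claim about the surface element can be verified with Sympy (a quick sketch, not from the original post):

from sympy import symbols, sqrt, cos, sin, Matrix, simplify

z,t = symbols('z t', real=True)
g = Matrix([sqrt(1-z**2)*cos(t), sqrt(1-z**2)*sin(t), z])
n = g.diff(z).cross(g.diff(t))   # normal vector of the parameterization
print(simplify(n.dot(n)))        # squared surface element, prints 1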

The rejection trick works on symmetric surfaces only, where we can easily do an orthogonal projection and know the distance to the surface. Otherwise, we have to distribute the random points in parameter space in accordance with the surface measure. This can be very tricky.

Corona, Vaccination, and Ethics

This blog entry has to be in German. Just like the word „Querdenker“ (lateral thinker), which has now lost its innocent air of critical thinking and is more reminiscent of flat-earthers than of Kant and Schopenhauer. We all value thinking when it actually takes place, and we tolerate other thoughts. That means we endure them. Harder to endure are the consequences of those thoughts. And even harder to endure is when lateral thinking mutates into lateral believing, which merely asserts and can no longer be reached by arguments. We have to see the world as it is before we start thinking about it.

Let us take a look.

  • The coronavirus is a novel virus against which we have almost no defenses. This means that almost everyone who comes into contact with the virus falls ill. It is not possible to protect yourself against infection by strengthening your immune system, by a special diet, or by a healthy lifestyle.
  • The virus has found an optimal strategy for spreading. It is neither too deadly, like the Ebola virus, nor does it use a rarer transmission path, like HIV. The period in which it shows no symptoms but is nevertheless contagious is also long enough. Moreover, in a large proportion of the infected it takes an almost unnoticed course, but is still passed on.
  • The virus kills especially the weak and the old. The risk factors include things that cannot be influenced, such as cancer treatments, diabetes of all types, and age. But it also hits the strong and the young, often with surprising severity and long-term effects.
  • Treating the severe cases is enormously demanding. An intensive care bed is a bed with intensive staffing requirements. This applies even more to artificial ventilation. Anyone who wants to learn more about this does not have to search far on the net or on TV.
  • An infection frequently has long-term consequences. We do not know exactly how frequently, but it is far more drastic than with influenza. Even chickenpox cannot keep up.
  • The virus mutates. The slower it is fought, the more it will mutate. Variants will arise that can spread more successfully, i.e., that are even more infectious. The vaccinations, too, will be circumvented by the virus. The slower and laxer we act, the worse the situation becomes.
  • Many people fall for the prevention paradox. After measures have worked, there are always know-it-alls who claim that all the alarmism was unnecessary. It wasn’t that bad, was it? With the coronavirus, however, we know exactly how it ends if nothing is done. That has already been tried, and we do not have to try it again.

What do we conclude from this? It is best to think about how you yourself would have managed the crisis if you had had to bear this responsibility. There are several impossible scenarios, all of which have already been thought through.

  • We simply let everything run its course. That means we sacrifice the old and the weak for the benefit of the rest of the population. Although this is ethically, politically and practically completely unthinkable, a variant of this „solution“ is brought up again and again. The death rate would indeed be „only“ up to a third higher. But the large number of sick people who could not be cared for and who would have to suffer quietly and alone would be unbearable. Who visits a contagious, terminally ill person? The economy would collapse as well, and so would any cohesion in the population. The Amish in the USA seem to be „mastering“ the crisis in exactly this way. Precise details about the internals are not known. One dies in the bosom of the community. I do wonder, though, what the Amish do about appendicitis or accidents involving children. Also nothing? In Germany that would be a criminal offense.
  • We isolate the weak and the old completely and otherwise let everything run. This variant assumes that the virus disappears from the rest of the population after a certain time. Otherwise, the old would have to be isolated forever. That may or may not be the case. The common cold and influenza viruses do not disappear either; they mutate and come back every year. Moreover, this strategy is based on the unethical idea that risk groups, including young people, withdraw from life until everything is over.
  • We close the borders. We did in fact do that, although not completely and preventively, but depending on the situation. In today’s world it is no longer possible for a country to isolate itself. This isolation would have to be complete and clinical if it is to work, because the virus is transmitted too easily. Goods would have to be reloaded at the borders, and necessary business trips would have to be combined with long quarantine stays. All of that is inconceivable. Even if the political will of a fascistoid government could bring something like that about, it would probably already be too late anyway. Our world is by now far too densely populated, and its connections are too global, to withstand such a radical cure.
  • We isolate only the sick. We already do that. People who test positive must go into quarantine now as well. But this only works after the sick know about their illness; before that, they can pass on the virus. This solution therefore simply does not work. A quarantine organized in a civil manner is also not easy to monitor.

You can spin these thoughts further as much as you like. In the end you always arrive at exactly the measures that have now been taken. Simply because there is no other way, not out of dictatorial cruelty.

  • Contact restrictions and distancing rules, up to and including a shutdown, as well as testing. These measures of course only delay the spread of the virus; they merely buy time. This relieves the hospitals and their staff and makes humane care of the severely ill possible. In addition, by applying them more strictly in places where it is necessary, vulnerable groups can be protected from the virus. The measures naturally throw sand into the gears of the economy. But the ethical trade-off should be clear.
  • Vaccinations. Vaccinations protect the individual from severe courses of the disease and extreme suffering. They do not do so with certainty, but with a probability that is much, much higher than that of suffering from vaccination side effects. In this respect they are better than most other medications. Even more important, however, is that vaccinations protect others. After a vaccination, the viral load is very much lower in the vast majority of people, so that infections are consequently much rarer. Vaccinations are the means to break pandemic waves. They are ethically imperative because some people cannot be vaccinated, or the vaccinations do not work for them. By the way, will vaccination refusers also refuse the intensive care bed when it comes to the worst? Do you refuse your children the vaccination against polio, measles, or tetanus?

I see no real alternative to the measures that have been taken, and in the extreme case I argue for mandatory vaccination, should the virus continue to gain momentum.

Don’t talk about „Chances“!

I came across a YouTube video featuring another well-known paradoxical problem that is completely bogus. The video is done in a very stressful style, with a sort of speaking that I can’t stand for very long. I am still not sure, but I think the author just wanted to confuse.

The problem is rather simple. Assume you pick two strangers on the streets of New York and ask them to play a game: They should open their purses and count the money they are carrying. The one with the smaller sum wins the money of the one with the larger sum.

The confusion is now spread by claiming that each of the two strangers must be interested in the deal because the possible loss is always less than the possible win by the very nature of the game.

That’s nonsense. The argument falls apart if you think of a person with an empty purse, who would be overjoyed to play this game, while someone who just picked up a large amount of money from the bank should be very hesitant to make the deal.

The reason for the confusion is that it does not make sense to talk about „expected“ outcome without the assumption of an a priori distribution. I have written about this before in the two-envelopes-problem. This is just a variation.

The best way to avoid this error is to think of a simulation of the experiment on a computer. For this, you would have to fix a distribution which determines the amount of money that you put into the purses of the two strangers. With that knowledge, you can compute the expected outcome of the experiment. Without that knowledge, it does not make sense to talk about „expectation“.
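To make this concrete, here is a minimal sketch of such a simulation. The exponential distribution for the purse contents is a pure assumption made for the illustration; any other choice changes the „expected“ outcome.

import numpy as np

# Fill both purses using an ASSUMED a priori distribution (exponential, mean 50).
N = 1_000_000
a = np.random.exponential(50,N)    # money of the first stranger
b = np.random.exponential(50,N)    # money of the second stranger
gain = np.where(a<b, b, np.where(a>b, -a, 0.0))   # the first wins b if he carries less, loses a if more
print(np.mean(gain))               # the "expected" gain depends entirely on the assumed distribution

With this symmetric choice the simulated gain is close to zero; filling the two purses from different distributions gives a different answer, which is exactly the point.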