# Another Nice Math Problem

YouTube is full of math videos now. That is a good thing if you know how to select the good ones. Unfortunately, there are numerous videos where math remains a mystery solvable only by geniuses. Others are just recollections of the achievements of some math hero of times long ago.

I believe that math should be entertaining too. Yes, it is demanding for the brain. But why not? And no, you will not be able to solve every problem. Nobody can. Einstein is quoted as saying: "To anybody who has troubles with math: Rest assured that my troubles have been bigger."

I found the problem of this posting in this video by "DorFuchs" (along with the unavoidable message from the sponsor NordVPN).

The question is simple: A well contains two sticks of 2m and 3m length leaning against the walls as shown, crossing at the water surface at a height of 1m. Can you determine the width of the well from the geometric configuration and the given values?

The geometrical equations are not easy to derive if you are not used to this kind of problem. However, my first sketch is the following.

This is a very rough sketch, done with pen on my Asus. After trying for a while, I came up with the following equations.

By Pythagoras, we see

$$a^2+b^2=4, \quad c^2+b^2=9 \qquad \Longrightarrow \qquad c^2-a^2 = 5$$

Using similarity, we get

$$\dfrac{b}{a}=y, \quad \dfrac{b}{c}=x, \quad x+y=b \qquad\Longrightarrow\qquad \dfrac{1}{a}+\dfrac{1}{c} = 1$$

These are two equations for a and c. (The right-hand equation follows since b/a + b/c = x + y = b, and we can divide by b.) It looks as if the problem has, at least locally, a unique solution. However, eliminating one variable is involved. I did it automatically in Euler Math Toolbox.

>e1 &= c^2-a^2=5 with solve(1/a+1/c=1,a)

 c^2 - c^2/(c - 1)^2 = 5

>e2 &= ratsimp(e1)

 (c^4 - 2 c^3)/(c^2 - 2 c + 1) = 5

>csol = solve(&lhs(e2) with c=x,4,y=5)
2.73572325237
>sqrt(9-csol^2)
1.23118572378


Note that I did the solution numerically. If it is done algebraically, Maxima yields a complicated expression involving third roots. The reason is that we are solving a fourth-order polynomial. There are formulas for polynomials up to this order, going back to Cardano and Ferrari. But for practical applications it is much easier to find the solution numerically.
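
If you want to reproduce the numerical solution without EMT, a plain-Python bisection works just as well (the interval and the method are my choice, not what EMT uses internally):

```python
# Solve c^4 - 2c^3 = 5(c^2 - 2c + 1) by bisection on [2, 3],
# then recover the width b of the well from c^2 + b^2 = 9.
def f(c):
    return c**4 - 2*c**3 - 5*(c**2 - 2*c + 1)

lo, hi = 2.0, 3.0        # f(2) < 0 < f(3), so a root lies in between
for _ in range(100):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

c = (lo + hi) / 2
b = (9 - c**2) ** 0.5
print(c, b)              # c ≈ 2.7357, b ≈ 1.2312
```

This matches the EMT values above to all printed digits.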


I then tried to do a geometrical solution with my program C.a.R. Have a look at the following sketch.

We place the origin at the green dot. The blue dot is movable on the x-axis. From this, we construct the points in the indicated order. In the proper solution, the red dot must fall onto the y-axis.

C.a.R. can automatically generate tracks of points while other points move on given lines. That has been done here. The track is clearly not a line or a curve of second order such as a circle or a parabola. Thus, it is no surprise that the problem cannot easily be solved algebraically.

It is possible to intersect such tracks with objects. That way we can find the value of c. However, the solution is only correct up to 5 digits due to the coarse way the track is generated.

# A nice Math Problem

I recently got recommended a YouTube video by MindYourDecisions with the following problem:

Which number doubles when the last digit becomes the first digit?

What is meant by "becomes the first digit" is that the last digit is moved to the front and all others shift right one place, a rotation like "12345 -> 51234" (which does not double 12345 as desired). It turns out that there are more videos with the same or a similar problem. I did not watch any of them. That's the kind of problem I like to solve on my own. Let me share my thought process.

Everybody will probably try to find the number by using an algebraic equation. It turns out that rotation is surprisingly difficult to represent. A simple form should look like this:

$$2 \, (10x+d) = 10^nd + x$$

There are some side conditions, however. The number d must be a single digit from 1 to 9, and x must be a natural number with exactly n digits. The equation above is equivalent to

$$x = \dfrac{10^n-2}{19} \,d$$

Thus, 19 must divide 10^n-2. Now, if you know a bit about Fermat's little theorem and algebra, you notice that 19 is prime and 10^n modulo 19 will run through all numbers from 1 to 18. In fact, we get the following sequence for the remainders of 10^n divided by 19:

$$1, 10, 5, 12, 6, 3, 11, 15, 17, 18, 9, 14, 7, 13, 16, 8, 4, 2, 1$$
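
This sequence is quickly verified with Python's built-in modular power (my own check):

```python
# Remainders of 10^n modulo 19 for n = 0, ..., 18
rem = [pow(10, n, 19) for n in range(19)]
print(rem)
# [1, 10, 5, 12, 6, 3, 11, 15, 17, 18, 9, 14, 7, 13, 16, 8, 4, 2, 1]
```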

We find the remainder 2 for 10^17. In fact,

$$\dfrac{10^{17}-2}{19} = 5263157894736842$$

Now we have some choices for d. Note that the number above has only 16 digits, but our n was 17. But already with d=2 we get a 17-digit number. We found the following solution (remember N=10x+d):

$N = 2 \cdot 5263157894736842 \cdot 10 + 2 = 105263157894736842$

Indeed

$2N = 210526315789473684$
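
A short Python check of this solution (my verification, not from the video):

```python
def rotate_last_to_front(n):
    # move the last decimal digit of n to the front
    s = str(n)
    return int(s[-1] + s[:-1])

N = 105263157894736842
print(rotate_last_to_front(N) == 2 * N)   # True
```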

A mathematician should never be satisfied with a solution that has been found in this fiddling way. The first obvious question is: Are there more solutions?

Indeed, we can set d=2,3,4,5,6,7,8,9 and get 8 different solutions:

$$105263157894736842, \ldots, 473684210526315789$$

For n=17, there are no more than these. But Fermat's theorem and our computations above tell us that 10^18 ≡ 1 modulo 19. Thus, we get more solutions by adding multiples of 18 to n, e.g.:

$$(10^{35}-2)/19 \cdot 2 \cdot 10+2 = 105263157894736842105263157894736842$$

At that point, you should get suspicious and try to recall previous knowledge, in this case periodic decimal representations. I noticed this only after a while. But then it dawned on me:

$$\dfrac{2}{19} = 0.\overline{105263157894736842}$$

and

$$\dfrac{4}{19} = 0.\overline{210526315789473684}$$

The reason is that 40 divided by 19 yields 2 with a remainder of 2. Then 20 divided by 19 yields 1 with a remainder of 1. Then comes a 0 with a remainder of 10 etc. The process goes through all 18 remainders, before it repeats itself.
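
This long division is easy to reproduce in Python (a sketch of my own):

```python
# Long division of 4/19: print one digit per step and track the remainder
r = 4
digits = []
for _ in range(18):            # the period has length 18
    r *= 10
    digits.append(r // 19)
    r %= 19
print(''.join(map(str, digits)))   # 210526315789473684
```

After 18 steps the remainder is back to 4, so the expansion repeats.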

Starting at half of 20, i.e. with 10, we get the 1 from the second place of our digits. Thus, we get the sequence rotated to the left. This easily explains our result.

Why doesn't the process work with 1/7? There, we never find that 2d/7 is d/7 with the digits rotated by one place. However,

$$5 \cdot 142857 = 714285$$

This can be explained in the same way using the decimal representation of 1/7.

Try 4/39, 16/39, 7/69, 49/69 for more.

Nice, ain’t it?

# Retired

Just for your information: I am now retired after more than forty years of teaching. Retirement still feels good.

I am going to keep Euler Math Toolbox and my geometry program C.a.R. updated as well as I can. The scripts will also be updated soon, especially the one about Numerical Math. The new version will contain more examples in Python.

Otherwise, I keep myself busy with music, photography, flight simulation, bridge, and especially the grandkids, as well as travelling as much as possible.

Have fun!

# Giving up on Macs

For a short time, I had a MacBook Air M1 laptop. While I have to admit that the Apple system and the M1 chip impressed me a lot, I won't keep that system. The reason can be summarized as the huge compatibility problem between Apple's walled garden and the outer world. But there are also issues with the system itself and the cloud services it offers, or rather enforces.

This is by no means going to be a rant against Apple, at least I try not to let it become one. Their products are great. They are on the expensive side, but there are Windows laptops and Android smartphones with worse specs in the same price range. The common theme for Apple seems to be: What works, works well. But there are too many things that I do not like because they simply don’t work. Let me explain.

The compatibility issue has many facets. The last one that I encountered was the nail in the coffin and the end of my friendship with Apple: The Mac was unable to read my USB drive formatted on a Windows computer. Okay, you can format on the Mac, and Windows can read this. I used the drive like that for a while until I found more and more corrupt files. The exact reason is impossible to detect. One possible culprit could be the indexing that Macs perform for minutes by default, in conjunction with my impatience. But it could also just have been the handling of the device on the Macs or on Windows. I never had such problems in the Windows ecosystem.

There were numerous issues with the M1 chip. It seems that this new technology is not supported widely. There is a translation tool, but it failed to work on many of the programs that I wished to use. Often, there is a work-around. An example is the Anaconda Navigator. You can use the Anaconda tools nevertheless. But frequently, there is no M1 version available. This may sort itself out with time.

One other problem is something I am not very proud to tell you. I could never get used to the four modifier keys of the laptop. And Macs seem to rely very much on the ability of the user to remember those key combinations. I always forgot which modifier with which key would scroll a page up, go up one directory, or delete a file on the desktop. Moreover, it is close to impossible to fluidly program or write LaTeX, even if you remember that the curly bracket is hidden under the number 8. If I continued to program for Macs, I'd have to connect an external keyboard with a reasonable layout.

The default use of Apple's cloud service was also giving me more trouble than help. If you are willing to pay for sufficient storage, it might work well. But for free, you can't use it. It took me a while to stop it from synchronizing my desktop. That is really a nuisance because you get asked every time you move a file from the desktop to your file system. I seem to use the desktop in unintended ways, more as short-term storage than a dashboard. In the end, I switched off the service completely, which Apple does not seem to like and complains about here and there.

Speaking of the file system, there are also some system programs which seem unwilling to use all files on the disk, including the connected Google Drive. I was very dissatisfied with the behavior of the Photos app. The previewer is also very restricted and only shows the file it just opened, unable to skip to the next one. I admit that all these problems might have to do with me coming from Windows and not having enough patience to learn the new system or install a proper app. By the way, there seem to be fewer free apps for the Apple. I especially dislike in-app payment, which makes it impossible to judge the real costs beforehand.

I did not talk about the iPad, which I also had and still have. This is again another marvel of technology. But I prefer a more open file system. Android is already not easy in this respect, but the iPad was a nightmare to me. After a while, I learned how to get the files where I want them, mostly using my Google cloud service. Still, that could be easier. And iTunes did not help either, by the way. It transferred photos at a rate of one every five seconds.

In summary, I can easily imagine that many users are satisfied with the Mac or the iPad. As I said, they are great systems. They are just not made for my work.

# Suggestions on Math Education

Here is an interesting article in the Scientific American on math education.

It seems to be based on a study which claims that an application-centered approach can double the interest of the students in the topic and improve the results.

This is nothing new. In fact, I have read articles from around 1900 claiming just the same: Public math education should serve applications in physics and other fields, not mathematicians. If you go through this blog or any other source devoted to math education, you find the same discussion over and over. I don't know how often I have spent the lunch break on just that topic.

Around 1900, math strove to become an axiomatic and logically complete system of reasoning. Even though that idea was later shattered by Gödel, math got more and more abstract, algebraic, and guided by abstract functional analysis. Not even the strange consequences of the axiom of choice prevented mathematicians from using it for basic theorems. "It is a heaven that nobody can take away from us," Hilbert said. So it is no surprise that users of mathematics in applications and students of those fields felt uncomfortable in math classes.

However, times changed long ago. At least since computers became available, the more abstract aspects of mathematics no longer make sense for the applied sciences. I need to explain this to avoid being misunderstood.

First, let me outline a more modern approach to math for the applied sciences.

Let us take differential equations as an example. In the old school, students learned some tricks to solve simple cases, involving great skill in calculus and finding the right "ansatz". The existence and uniqueness of the solution was the next target, followed by theorems on stability. I can say with confidence that this approach fails for most students of the applied sciences. They simply don't have the skills, and they have no time to acquire them in their packed first-semester schedule.

But I know that many profs now take a more relaxed approach for this audience. We can use computers in the classroom to solve with the Euler method and to easily visualize the results and the effects of the step size. These numerical methods have the further advantage that they are discrete and can use differences instead of differentials. Looking at a phenomenon in time deltas is always the first step in physics. Differentials are a higher step of abstraction which should come after that first step. Then the benefits and the limits of calculus will be apparent to the student.
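
To illustrate (the example and the code are mine, not from any particular course): the Euler method for y' = y with y(0) = 1 makes the effect of the step size on the error at t = 1 immediately visible, since the exact solution is e^t.

```python
import math

def euler(f, t0, y0, t1, n):
    """Approximate y(t1) for y' = f(t, y), y(t0) = y0, with n Euler steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y += h * f(t, y)
        t += h
    return y

# watch the error at t = 1 shrink as the step size decreases
for n in (10, 100, 1000):
    approx = euler(lambda t, y: y, 0.0, 1.0, 1.0, n)
    print(n, approx, abs(approx - math.e))
```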

Only after studying and seeing simple examples can more abstract questions like uniqueness or stability be discussed. And we now have the tools to demonstrate these issues on the computer in real time.

The same can be done in probability theory. The computer can easily be used for Monte-Carlo simulations. This not only verifies theoretical results, but also allows getting new ones which cannot be derived with algebraic methods. Moreover, it implants a solid meaning of the term "probability", which is so often abused and misunderstood, leading to self-contradicting claims.
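
For instance (my example, not from the article): estimating pi by counting random points of the unit square that fall into the quarter circle.

```python
import random

random.seed(0)                     # reproducible run
n = 1_000_000
inside = sum(1 for _ in range(n)
             if random.random()**2 + random.random()**2 <= 1)
estimate = 4 * inside / n
print(estimate)                    # an estimate of pi
```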

Another change that computers force on us is that skills in computation are no longer essential for solving mathematical problems. This starts in primary school, where calculators replace the drill of written computations. Later, we should ask ourselves if extensive skill and practice in the tricks of calculus is really needed any longer. Mental arithmetic and an understanding of the sizes of numbers (which most people lack) look more important to me. And I'd say we should rather concentrate on understanding the logic behind problem-solving and forget the drills. This brings me to my last topic.

I should end this blog entry with a warning: Logical reasoning won't become easier with computers. From my teaching, I know that students struggle most with the logic of a scientific idea. They can neither verbalize the idea in precise form nor apply it. Asking for a precise definition of continuity or an eigenvalue is one of the most challenging things you can do in an oral examination. Asking students to explain what their computations mean and do, logically, is a good way to see their skills.

Logical reasoning needs to be learned by training and repetition. I cannot stress enough how important it is for the teacher not to expect the students to immediately understand the meaning of a mathematical statement. Looking back, a teacher should remember that he did not understand most things immediately himself. Everybody struggles with logic at some point. Never forget that!

So, while it is an "old hat" to talk about a more applied mathematical education for the applied sciences, we still need to exploit what computers can do for math education. And we still face the problem of logical reasoning, which doesn't teach itself and needs a lot of attention.

# DOF Calculator in Python

I have updated my example in Euler Math Toolbox for the computation of DOF (depth of field). Below is a version in Python, which you can save in a file like "dof.py".

import math

m = 1
mm = 1/1000
cm = 1/100
useasinf = 1e5

"""
These functions compute DOF, hyperfocal distance and Bokeh
of an ideal lens for photographers.

Try:
lens(D,F,A)

D : focussed distance
F : true focal length
A : true aperture (F-stop)

ff : form factor (aka crop factor) (def. 1)
c : accepted circle of confusion (def. 0.03mm)
equiv : if True, F and A are already equivalent values (def. False)
"""

def endDOF(D, F, A, c=0.03*mm, ff=1, inf=useasinf):
    res = ff*D*F**2/(ff*F**2 + c*A*F - c*ff*A*D)
    if res < 0:
        res = inf
    return res

def startDOF(D, F, A, c=0.03*mm, ff=1):
    return ff*D*F**2/(ff*F**2 - c*A*F + c*ff*A*D)

def hyperfocal(F, A, c=0.03*mm, ff=1):
    return F*(ff*F + c*A)/(c*ff*A)

def bokeh(D, F, A, ff=1):
    # relative to the full-frame diagonal; (D-F) corrected from (D-L),
    # where L was undefined
    return 2*F**2/(A*(D-F))/math.sqrt(0.024**2 + 0.036**2)

def lens(D, F, A, c=0.03*mm, ff=1, equiv=False):
    """
    Prints DOF, hyperfocal distance and Bokeh

    D : focussed distance
    F : true focal length
    A : true aperture (F-stop)

    ff : form factor (aka crop factor) (def. 1)
    c : accepted circle of confusion (def. 0.03mm)
    equiv : if True, F and A are already equivalent values (def. False)
    """
    if ff != 1:
        print(f"{'Crop factor':30} : {ff:10.1f}")
    if not equiv:
        F = F*ff
        A = A*ff
    if ff != 1:
        print(f"{'True focal length':30} : {F/ff*1000:10.1f} mm")
    print(f"{'Equivalent focal length':30} : {F*1000:10.1f} mm")
    if ff != 1:
        print(f"{'True F-stop':30} : F {A/ff:10.1f}")
    print(f"{'Equivalent F-stop':30} : F {A:10.1f}")
    print(f"{'Focussed distance':30} : {D:10.2f} m")
    print("")
    sd = startDOF(D, F, A, c, ff)
    ed = endDOF(D, F, A, c, ff)
    if ed >= useasinf:
        print(f"{'Start of DOF':30} : {sd:10.2f} m")
        print(f"{'End of DOF':30} : infinity")
    elif D < 1:
        print(f"{'Start of DOF':30} : {sd*100:10.1f} cm")
        print(f"{'End of DOF':30} : {ed*100:10.1f} cm")
        print(f"{'Total DOF':30} : {(ed-sd)*100:10.1f} cm")
    else:
        print(f"{'Start of DOF':30} : {sd:10.2f} m")
        print(f"{'End of DOF':30} : {ed:10.2f} m")
        print(f"{'Total DOF':30} : {ed-sd:10.2f} m")
    print("")
    print(f"{'Bokeh':30} : {bokeh(D,F,A,ff)*100:10.2f} %")

Here are some examples.


>lens(2,85*mm,4)

Equivalent focal length        :       85.0 mm
Equivalent F-stop              : F        4.0
Focussed distance              :       2.00 m

Start of DOF                   :       1.94 m
End of DOF                     :       2.07 m
Total DOF                      :       0.13 m

Bokeh                          :       4.36 %

>lens(2,42.5*mm,2,ff=2)

Crop factor                    :        2.0
True focal length              :       42.5 mm
Equivalent focal length        :       85.0 mm
True F-stop                    : F        2.0
Equivalent F-stop              : F        4.0
Focussed distance              :       2.00 m

Start of DOF                   :       1.94 m
End of DOF                     :       2.07 m
Total DOF                      :       0.13 m

Bokeh                          :       4.36 %

>lens(4,85*mm,4,ff=2)

Crop factor                    :        2.0
True focal length              :       85.0 mm
Equivalent focal length        :      170.0 mm
True F-stop                    : F        4.0
Equivalent F-stop              : F        8.0
Focussed distance              :       4.00 m

Start of DOF                   :       3.87 m
End of DOF                     :       4.13 m
Total DOF                      :       0.26 m

Bokeh                          :       4.36 %

The Bokeh percentage is just the ratio between the diameter of the circle of blurriness and the diagonal of the sensor. 4% looks somewhat creamy.
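
As a usage sketch with numbers of my own choosing: the hyperfocal distance of a 35mm lens at F8 on full frame. The formula is repeated here so the snippet runs standalone.

```python
mm = 1/1000

def hyperfocal(F, A, c=0.03*mm, ff=1):
    # distance beyond which everything up to infinity is acceptably sharp
    return F*(ff*F + c*A)/(c*ff*A)

print(round(hyperfocal(35*mm, 8), 2))   # 5.14 (meters)
```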

# More Python Fun

Continuing to get acquainted with Python, I looked at the book "Introduction to Data Science" by Joel Grus. I have it in German as an electronic book, which is unfortunately no fun to read, although the content is solid, with restrictions. The most important problem is the formatting of the Python code, which does not even work in landscape mode. The printed book uses a smaller font, so that each code line fits on one line. You can and should download the examples. There are also some misprints. I found one in the introduction to the Bayes formula, where the fraction is reversed. But most importantly, I have my doubts whether the book really helps the starting data scientist. The presentation is very short. You will need a good background in math and programming, or someone to ask if you do not understand the theory or the code. The code gets pretty advanced pretty fast.

However, this is not a review of the book. Instead, I take an example from the book to show how Python can be used from within Euler Math Toolbox. The following code reads the bible and prints the 10 most common words.

from collections import Counter

infile = open("bible.txt","r")

num_words = 10

counter = Counter(word.lower()
                  for line in infile
                  for word in line.strip().split()
                  if word)

for word, count in counter.most_common(num_words):
    print(str(count)+"\t"+word)

infile.close()

I downloaded the file "bible.txt" from the net and put it into the folder where the IPython notebook resides. The output is the following.

64183	the
51379	and
34746	of
13643	to
12799	that
12559	in
10263	he
9840	shall
8987	unto
8835	for

Now, let us do this programming exercise in Euler Math Toolbox via Python.

>>> from collections import Counter
>function python testbible (filename) ...
$""" 10 most common words in the bible
$"""
$infile = open(filename,"r")
$num_words = 10
$counter = Counter(word.lower()
$    for line in infile
$    for word in line.strip().split()
$    if word)
$res = ""
$for word, count in counter.most_common(num_words):
$    res = res+str(count)+"\t"+word+"\n"
$infile.close()
$return res
$endfunction
>testbible("bible.txt")
64183   the
51379   and
34746   of
13643   to
12799   that
12559   in
10263   he
9840    shall
8987    unto
8835    for

>>> help(testbible)
Help on function testbible in module __main__:

testbible(filename)
10 most common words in the bible


The import was done with a direct Python command. The function returns a string with the answer. But it could just as well use "eumat.dump(„….“)" for a direct output. Note that you do not need to indent the Python definition. That is done automatically by EMT.

The "$"-signs in the function definition allow copying and pasting the code into an EMT notebook. They are not visible in the EMT window. Here is another example, done in IPython.

def sieve(N):
    v = [True for _ in range(N+1)]
    for k in range(2,N+1):
        if k*k>N:
            break
        if v[k]:
            m=2*k
            while m<=N:
                v[m]=False
                m=m+k
    return [k for k in range(N+1) if v[k] and k>1]

s=sieve(100_000_000)

from math import log
x=[k for k in range(1000,len(s),1000)]
y=[(s[k]/(k*log(k))-1-(log(log(k))-1)/log(k))*log(k)**2 for k in range(1000,len(s),1000)]

import matplotlib as mpl
import matplotlib.pyplot as plt
fig, ax = plt.subplots(figsize=(8,8))
ax.plot(x,y)

This code computes the primes up to 100 million and plots the actual size of the k-th prime, divided by a known asymptotic for the size of the k-th prime. We only use Python to do the sieve. For the plot, we use Euler Math Toolbox.

>function python sieve (N) ...
$v = [True for _ in range(N+1)]
$for k in range(2,N+1):
$    if k*k>N:
$        break
$    if v[k]:
$        m=2*k
$        while m<=N:
$            v[m]=False
$            m=m+k
$return [k for k in range(N+1) if v[k] and k>1]
$endfunction
>tic; s = sieve(100 000 000); toc;
Used 29.407 seconds
>n = length(s)
5761455
>k = 1000:1000:n;
>y = (s[k]/(k*log(k))-1-(log(log(k))-1)/log(k))*log(k)^2;
>plot2d(k,y):


As you see, the sieve takes about 30 seconds on my computer (Core i7, 4.0 GHz). Of course, it is much faster to do that in C.

>function tinyc csieve (N) ...
$ARG_DOUBLE(N);
$int n=(int)N;
$CHECK(n>10,"Need a positive integer.");
$char *v = (char *)getram((n+1)*sizeof(char));
$CHECK(v,"Out of heap space.");
$
$for (int i=0; i<n+1; i++) v[i]=1;
$for (int k=2; k<n+1; k++)
${
$   if (k*k>n) break;
$   if (v[k])
$   {
$       int m=2*k;
$       while (m<=n)
$       {
$           v[m]=0; m+=k;
$       }
$   }
$}
$int count=0;
$for (int k=2; k<n+1; k++) count+=v[k];
$
$header *out=new_matrix(1,count);
$CHECK(out,"Out of heap space");
$double *res=matrixof(out);
$
$int j=0;
$for (int k=2; k<n+1; k++)
$   if (v[k])
$       res[j++]=k;
$moveresults(out);
$endfunction
>s=csieve(100)
[2,  3,  5,  7,  11,  13,  17,  19,  23,  29,  31,  37,  41,  43,  47,
53,  59,  61,  67,  71,  73,  79,  83,  89,  97]
>tic; s=csieve(100 000 000); toc;
Used 1.459 seconds


The procedures to communicate from EMT to Tiny C are a bit involved. But the result is about 20 times faster (29.4 vs. 1.46 seconds). The basic code is a direct translation of the Python code.

Depending on the heap space of Euler Math Toolbox, you can do more primes. If you compress the sieve using bits, you could do a lot more. But for this code and the default size, 100 million is the limit. Increasing the stack size to 2048 MB, I could do all primes up to 1 billion in 15 seconds.
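
The bit-compression idea can be sketched in Python as follows (my code, not part of EMT): one bit per number, so 10^8 numbers need only about 12.5 MB. In Python this trades speed for memory, but in C the same trick would raise the limit considerably.

```python
def bitsieve(N):
    # bit k of bits[k >> 3] marks k as (still possibly) prime
    bits = bytearray(b'\xff' * ((N >> 3) + 1))
    for k in range(2, N + 1):
        if k * k > N:
            break
        if bits[k >> 3] & (1 << (k & 7)):
            # strike out the multiples of k, starting at k*k
            for m in range(k * k, N + 1, k):
                bits[m >> 3] &= ~(1 << (m & 7))
    return [k for k in range(2, N + 1) if bits[k >> 3] & (1 << (k & 7))]

print(bitsieve(30))   # [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```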

# Povray via Python

In my attempt to learn Python and its mathematical packages, I have translated a library for generating Povray plots in Euler Math Toolbox into Python. The module is not polished yet, but if you are interested, ask in the comments for a Beta version.

Have a look at this tutorial for Euler Math Toolbox to see what can be done with this interface to Povray. The code for the knot in EMT is here.

For a start, let us generate a simple plot with an axis. I am going to explain below what’s happening here.

from povray import *
setengine("pvengine")
povstart()

import numpy as np

def f(x,y):
    return x**y

t=np.arange(0,1,0.01)
fg = graph(f,t,t,look=look(green))
writeln(intersection([fg,cylinder([0,0,0],[0,0,1],1)]))
for i in range(1,4):
    writeln(axis(0,1,axis=i))
povend()

The Povray engine I am using is a version of Povray for Windows (see here). The name of the executable is "pvengine.exe". It is in the search path of Windows. For the Mac M1, I found a "homebrew" version of a command line tool named "povray". It is not fully supported. (These problems are one reason I abandoned Macs for math.) In any case, you can set the correct program to start Povray with "setengine".

The function "povstart" opens an output file, by default "povout.pov", and writes the necessary headers and includes to this file. It also sets the default view of the scene. You can change the light here too.

We then create x,y,z-matrices containing the coordinates of the surface inside the function „graph“. It uses „meshgrid“ internally and assumes that f can be vectorized.

The „graph“ function writes the scene to the file and embeds it in an object of Povray, returning the object. The scene consists of 4 triangles for each rectangle in the grid. The center of gravity is used as one corner in each of the four triangles. Povray needs a list of corners and a list of triangles, and „graph“ writes both lists to the file.
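
The triangulation step can be sketched like this (a hypothetical helper of my own, not the actual module code):

```python
import numpy as np

def triangles(x, y, z):
    """Split each grid cell into 4 triangles around its center of gravity."""
    tri = []
    for i in range(x.shape[0] - 1):
        for j in range(x.shape[1] - 1):
            corners = [(x[a, b], y[a, b], z[a, b])
                       for a, b in ((i, j), (i, j + 1), (i + 1, j + 1), (i + 1, j))]
            center = tuple(sum(c[k] for c in corners) / 4 for k in range(3))
            for k in range(4):
                tri.append((corners[k], corners[(k + 1) % 4], center))
    return tri

# a 2x2 grid of points has one cell and therefore yields 4 triangles
xs, ys = np.meshgrid([0.0, 1.0], [0.0, 1.0])
print(len(triangles(xs, ys, xs * ys)))   # 4
```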

The intersection() function shows the way this module works: It does not write to the file, but returns a Povray string representing the object, in this case the intersection of two other objects, both represented as strings. The cylinder() function is one of them, the other our graph. Here is how a typical Povray string looks.

cylinder { <0,0,0>, <0,0,1>, 1
texture { pigment { color rgbf <0.470588,0.470588,0.470588> }  }
finish { ambient 0.2 }
}

That’s one string with lines separated by „\n“.

The alternative to this approach was to generate Povray objects and use these objects. But I found the string approach more appealing. However, for the surface, the string would be huge. Thus, the object is written to the file directly.

The axis() function creates an axis, as follows. The texture can be changed using a „look“ parameter for each object.

union {
cylinder { <-0.1,0,0>,<1.1,0,0>,0.02 }
cone {
<1.22,0,0>,0
<1.1,0,0>,0.08
}
texture { pigment { color rgbf <0.470588,0.470588,0.470588> } }
}

Now, let us try the knot.

from sympy import *
import numpy as np

t = symbols('t')
fx = 4*(1+sin(3*t)/3)*sin(2*t)
fy = 4*(1+sin(3*t)/3)*cos(2*t)
fz = 2*cos(3*t)

vt = np.linspace(-np.pi,np.pi,1000)
vs = np.linspace(-np.pi,np.pi,160)

f = [fx,fy,fz]
ffunc = lambdify(t,f)
df = [diff(f[k],t) for k in range(3)]
dffunc = lambdify(t,df)

w1 = [df[1],-df[0],0]
w1func = lambdify(t,w1)

def crossproduct(v,w):
    return np.array([v[1]*w[2]-v[2]*w[1],-v[0]*w[2]+v[2]*w[0],v[0]*w[1]-v[1]*w[0]])

def norm(v):
    return np.sqrt(v[0]*v[0]+v[1]*v[1]+v[2]*v[2])

def w1n(t):
    h = w1func(t)
    return h/norm(h)

def w2n(t):
    h = crossproduct(dffunc(t),w1func(t))
    return h/norm(h)

def S(t,s):
    return ffunc(t)+np.cos(s)*w1n(t)+np.sin(s)*w2n(t)

x = np.zeros((len(vt),len(vs)))
y = np.zeros((len(vt),len(vs)))
z = np.zeros((len(vt),len(vs)))
for i in range(len(vt)):
    for j in range(len(vs)):
        u = S(vt[i],vs[j])
        x[i,j] = u[0]
        y[i,j] = u[1]
        z[i,j] = u[2]

import povray as pv

pv.povstart(distance=15)
surf = pv.xyz_surface(x,y,z,look=pv.look(pv.gray,phong=1))
pv.writeln(surf)
pv.povend(engine="pvengine")

This needs a lot of explaining. The center curve of the knot is a curve in space parameterized by "fx", "fy" and "fz". I did some tests first to make sure that this is indeed working, using a 3D plot in Matplotlib. The result was the following. It is a different style from the 3D plots you get in Euler Math Toolbox.

Then I built a tube around that curve. To achieve that, the expression "df" is a vector representing the derivative of "f", i.e., the vector pointing in the direction of the curve at each point. Then "w1" is a vector perpendicular to "df".

To get another perpendicular vector "w2", I use the cross product of "df" and "w1". Both "w1" and "w2" are then normalized to length 1. This, however, is already done numerically. To parameterize the tube, I use

$$S(t,s) = f(t) + \cos(s)w_1(t) + \sin(s)w_2(t)$$

This function is evaluated for all s and t, using a double loop.

# Sympy and EMT – Comparison

In Euler Math Toolbox, symbolic expressions are seamlessly integrated via the &-syntax. The expressions and functions are handled externally by the CAS Maxima, but mirrors exist on the numerical side of EMT as simple strings or numerical functions.

Let me give an example.

>function fx(t) &= t*cos(2*pi*t)

t cos(2 pi t)

>function fy(t) &= t*sin(2*pi*t)

t sin(2 pi t)

>plot2d("fx","fy",xmin=0,xmax=1,r=1):


The functions „fx“ and „fy“ are symbolic functions. I.e., Maxima defines the functions in its system, and they can be used in Maxima.

>&fx(z^2+1)

 (z^2 + 1) cos(2 pi (z^2 + 1))


But the functions are also defined in EMT as numerical functions. Thus, they can be used like any other numerical function in EMT.

>fx(linspace(0,1,10))
[0,  0.0809017,  0.0618034,  -0.0927051,  -0.323607,  -0.5,  -0.48541,
-0.216312,  0.247214,  0.728115,  1]


Because the functions contain only elementary operations, they also work on vectors. This is called a "vectorized" or "mapped" function. Consequently, they can be called in other functions of EMT. E.g., the "plot2d" function can plot curves if two functions (or two expressions) are given.

Continuing our example, we compute the length of the curve using

$$l(f) = \int\limits_0^1 \|f'(t)\| \,dt.$$

>function f(t) &= [fx(t),fy(t)]

[t cos(2 pi t), t sin(2 pi t)]

>function df(t) &= diff(f(t),t)

[cos(2 pi t) - 2 pi t sin(2 pi t),
sin(2 pi t) + 2 pi t cos(2 pi t)]

>integrate(&norm(df(x)),0,1)
3.3830442855
>&integrate(norm(df(t)),t,0,1), %()

 (asinh(2 pi) + 2 pi sqrt(4 pi^2 + 1))/(4 pi)

3.3830442855


To compute the length of the curve, we define a vector valued symbolic function „f“. That function does work in Maxima symbolically, or in EMT numerically.

A side remark: If „f“ is applied to a vector instead of one point, it will compute „fx“ and „fy“ and concatenate these vectors, which is not useful most of the time. In future versions, EMT should be able to generate a matrix of values if a vector valued function is applied to a vector of values. In our application, we only need „f“ as a symbolic function.

In the same way, we can then define a function „df(t)“ for the derivative vector simply by symbolic differentiation. „df“ will also work symbolically and numerically. The function „norm“ is a pre-defined symbolic function. You can plug in a symbolic vector and get an expression which represents a function of „t“.

>&norm(v)

sqrt(v . transpose(v))

>&norm(df(t))

2
sqrt((cos(2 pi t) - 2 pi t sin(2 pi t))
2
+ (sin(2 pi t) + 2 pi t cos(2 pi t)) )


We then use the adaptive integral of EMT to integrate the expression „norm(df(x))“ between 0 and 1.

Finally, Maxima can compute the „exact“ integral in symbolic form, which we evaluate numerically to verify our previous result.

How does this all translate to Python?

The first problem is that there are no symbolic functions in Sympy. There are only expressions, which can be handled by the symbolic features of Sympy. And, of course, there is no automatic parallel definition of a numerical function for Numpy as in EMT. To evaluate a symbolic expression at a point, „subs“ can be used, or it can be made a numerical function by „lambdify“.

Here is an example.

import sympy as sp
import numpy as np

t = sp.symbols('t')
fe = t*sp.cos(2*np.pi*t)
print(fe.subs(t,0.2))
f = sp.lambdify(t,fe)
print(f(0.2))

-->
0.0618033988749895
0.061803398874989326

As you see, „subs“ just substitutes a value for „t“. This is not going to work if „sp.pi“ is in the expression instead of „np.pi“.

t = sp.symbols('t')
fe = t*sp.cos(2*sp.pi*t)
print(fe.subs(t,0.2))
f = sp.lambdify(t,fe)
print(f(0.2))

-->
0.2*cos(0.4*pi)
0.06180339887498949

For Sympy, „pi“ is a mathematical variable with known properties and a value. Because „cos(2*pi)“ is known to be 1, „fe.subs(t,1)“ evaluates to a number as expected. But „fe.subs(t,0.2)“ leaves the symbolic expression „0.2*cos(0.4*pi)“ unevaluated; only „evalf“ or the lambdified function produce a float in that case.

„lambdify“ translates a symbolic expression so that it can be used by Numpy, effectively translating „sp.pi“ to „np.pi“. Thus, „f“ works in both cases.
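Since „lambdify“ produces a Numpy-compatible function, it also accepts whole arrays, not just single numbers. A minimal sketch:

```python
import numpy as np
import sympy as sp

t = sp.symbols('t')
fe = t*sp.cos(2*sp.pi*t)   # symbolic pi is fine here
f = sp.lambdify(t, fe)     # translated so that Numpy can evaluate it

# the lambdified function evaluates element-wise on arrays
print(f(np.array([0.0, 0.5, 1.0])))   # values 0, -0.5, 1
```

This is the behavior we get for free in EMT, where symbolic functions are automatically mirrored as vectorized numerical functions.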

Back to our example. We use what we learned to trick Python into doing what we want.

import sympy as sp
import numpy as np
import matplotlib.pyplot as plt

t = sp.symbols('t')
fxe = t*sp.cos(2*sp.pi*t)
fye = t*sp.sin(2*sp.pi*t)
fx = sp.lambdify(t,fxe)
fy = sp.lambdify(t,fye)

vt = np.linspace(0,1,400)
x = fx(vt)
y = fy(vt)
fig,ax = plt.subplots(figsize=(8,8))
ax.set_xlim(-1,1)
ax.set_ylim(-1,1)
ax.grid(True)
ax.plot(x,y);


It might be a good idea to define a function „plot_curve“ to produce the plot, like the following.

def plot_curve (fx,fy,xmin,xmax,n=100,figsize=(8,8),r=1):
    vt = np.linspace(xmin,xmax,n)
    x = fx(vt); y = fy(vt)
    fig,ax = plt.subplots(figsize=figsize)
    ax.set_xlim(-r,r); ax.set_ylim(-r,r)
    ax.grid(True)
    return ax.plot(x,y)

plot_curve(fx,fy,0,1,400);

Such functions can go into a Python module. I already started a module „renestools.py“ with functions I need to make my life in Python easier.

Now, let us compute the length. The integration is done using an adaptive Gauss quadrature in my package. I explained how this works in a previous posting. Note that we need to pass a vectorized function to the integration. „fnorm“ is not vectorized (in contrast to the expression „norm(df(x))“ which we used in EMT). „np.vectorize“ transforms any function of single real values into a vectorized function which works on Numpy vectors or matrices.

fe = sp.Matrix([fxe,fye])
fde = sp.diff(fe,t)

def norm(v):
    return np.sqrt(np.sum(v*v))

fd = sp.lambdify(t,fde)

def fnorm(t):
    return norm(fd(t))

import renestools as rt
rt.integrate(np.vectorize(fnorm),0,1)

--> 3.3830442855028062
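As a side note: if you do not have my „renestools“ module at hand, Scipy (which Anaconda installs anyway) provides the adaptive quadrature „quad“. The following is just a sketch of that alternative, not the method used above. „quad“ takes a scalar function, so no „np.vectorize“ is needed.

```python
import numpy as np
import sympy as sp
from scipy.integrate import quad

t = sp.symbols('t')
fe = sp.Matrix([t*sp.cos(2*sp.pi*t), t*sp.sin(2*sp.pi*t)])
fd = sp.lambdify(t, sp.diff(fe, t))

def fnorm(s):
    # Euclidean norm of the derivative vector at one point
    v = np.asarray(fd(s)).ravel()
    return float(np.sqrt(np.sum(v*v)))

length, error = quad(fnorm, 0, 1)
print(length)   # about 3.38304, as before
```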

To compute the integral in symbolic form, we need to help Sympy with „trigsimp“. We evaluate the result to 20 digits. (We could have done that in Maxima too, by the way.)

res = sp.integrate(sp.trigsimp(sp.sqrt(fde[0]**2+fde[1]**2)),(t,0,1))
print(res)
print(res.evalf(20))

-->
asinh(2*pi)/(4*pi) + sqrt(1 + 4*pi**2)/2
3.3830442855028069597

As you see, Maxima and EMT are much more intuitive than Sympy and Numpy. I say this in the hope that many of you find your way to Euler Math Toolbox and help with the development. Currently, Python has a lot more potentially useful toolboxes than EMT.

# Using Python – Introduction for EMT Users

Python is a full, object-oriented, interpreted programming language. It has been around for a long time now and enjoys the attention of many good programmers who have expanded it with packages. These packages often make use of the C API, which lets native programs connect to the Python kernel. Thus, they are much faster and often state of the art. Python itself is more than ten times slower than native code.

We are mainly going to use the following packages.

• Numpy. To get around the slow loops and array handling in Python, this package provides operations on a matrix level, just like Euler Math Toolbox or Matlab.
• Sympy. This is a package for symbolic computations. It provides symbolic variables and expressions.
• Matplotlib. To get beautiful plots in Python, you will have to use this package.
• Scipy. This is the high-level package for mathematical computations, such as optimization or signal processing.

To install all this, the easiest and best way is Anaconda. It installs the recent Python version (3.10) and the packages mentioned above. Moreover, you get a user interface called IPython or Jupyter Notebooks.

Unfortunately, the Anaconda Navigator seems not to work on M1 processors. But you can open a command line and type

jupyter notebook

The default browser (Safari in my case) will open automatically and display the Jupyter notebook showing the files in your home directory. You can then switch to a subdirectory and create a new notebook. A notebook has input lines and output lines. You can just start typing Python commands and execute them with Ctrl-Return or Shift-Return, or start the Help to learn about this interface.

For users of Euler Math Toolbox (or Matlab), it is important to know that Python does not handle operators on matrices element by element. In fact, using its list type as vectors, or lists of lists as matrices, yields surprising results.

2 * [4,5,6]
-> [4, 5, 6, 4, 5, 6]
[4,5,6] + [4,5,6]
-> [4, 5, 6, 4, 5, 6]
[4,5,6] * [4,5,6]
------------------------------------------------------
TypeError
...
TypeError: can't multiply sequence by non-int of type 'list'

I have used an indented „->“ to indicate the output of the command. This convention will be used in the future on this blog.

Obviously, the Python folks have used math operators to manipulate lists in „intuitive“ ways.
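Without Numpy, element-wise arithmetic on plain lists has to be spelled out with loops or list comprehensions:

```python
v = [4, 5, 6]

# element-wise operations on plain lists need comprehensions
print([2*x for x in v])                  # [8, 10, 12]
print([x + y for x, y in zip(v, v)])     # [8, 10, 12]
print([x * y for x, y in zip(v, v)])     # [16, 25, 36]
```

This works, but it is verbose and slow compared to a matrix language.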

This can be fixed with Numpy. You get a new data type, the „ndarray“. It is easily created, albeit requiring an additional function call. It provides the matrix functionality that we are used to from matrix languages like Matlab, R or Euler Math Toolbox.

import numpy
v = numpy.array([4,5,6])
2*v
-> array([ 8, 10, 12])

v+v
-> array([ 8, 10, 12])

v*v
-> array([16, 25, 36])

Euler Math Toolbox often demonstrates the value of a matrix language by creating tables of function values y=f(x). Here is a short demonstration and a plot.

import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

x = np.arange(0,1,0.01)
y = x**3-x

print(f"Riemann-Summe: {np.sum(y)*0.01:0.4}")

fig, ax = plt.subplots(figsize=(10,6))
ax.grid(True)
ax.plot(x,y);

-> Riemann-Summe: -0.25

As I said, there is some learning curve. Here are some differences to Euler Math Toolbox to start with.

• You need to import the packages. The „load“ command of EMT is similar. But EMT loads most packages at the start. You can import and use a package under a different, shorter name, as shown in the example.
• The „arange(a,b,delta)“ function works similar to the ranges in EMT. But it has the step size as a third parameter. There is also a „linspace(a,b,n)“ function like „equispace“ in EMT. But it generates exactly n points, not n intervals.
• In the example, an important difference is not visible: Arrays start with index 0 in Python and Numpy. This is a source of much confusion.
• For the plot, we cannot use a function, but must provide vectors of data.
• The plot will be inserted automatically into the Jupyter notebook. Of course, the graphics can also be saved to a file. That file will be in the directory of your notebook.
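The difference between „arange“ and „linspace“ and the 0-based indexing can be checked in a few lines:

```python
import numpy as np

# arange takes a step size and excludes the endpoint
print(np.arange(0, 1, 0.25))     # values 0, 0.25, 0.5, 0.75

# linspace takes the number of points and includes the endpoint
v = np.linspace(0, 1, 5)
print(v)                         # values 0, 0.25, 0.5, 0.75, 1

# indices start at 0; the last valid index is n-1
print(v[0], v[4])
```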

You will find numerous tutorials on Numpy and Matplotlib, explaining the steps above in much detail. So, I won’t give an introduction in this blog. Rather, my job will be to show you how EMT compares to the things you can do in Python, and to point out the main differences and problems that could occur.