Because Euler Math Toolbox is designed as an interactive way to do mathematics, its algorithms for numerical problems are easy to use: solving an equation, integrating, minimizing, or even plotting a function. As a demonstration, and to adapt the code to Python, we find the minimum of a one-dimensional function.
The algorithm used is the Golden Ratio subdivision of an interval. It saves one function evaluation per step and is guaranteed to yield an inclusion of the minimum for a convex function (assuming exact computations).
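The key property behind the saved evaluation can be checked in a few lines: with l = (3-sqrt(5))/2, one interior point of the shrunken interval coincides with an interior point of the old one, so its function value can be reused. A minimal sketch:

```python
from math import sqrt

# Interior points of [x0, x3] in the golden section method
l = (3 - sqrt(5)) / 2
x0, x3 = 0.0, 1.0
x1 = x0 + l*(x3 - x0)
x2 = x3 - l*(x3 - x0)

# Shrink the interval to [x1, x3]: the first interior point of the
# new interval coincides with the old x2, so f(x2) can be reused.
new_x1 = x1 + l*(x3 - x1)
abs(new_x1 - x2)  # -> 0 up to rounding
```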
Here is an example:
>function f(x) := x^3-x
>xmin = fmin("f",0)
0.577350265248
>plot2d("f",0,1); plot2d(xmin,f(xmin),>points,>add):

It is even easier to use an expression. Expressions are strings which are evaluated for the parameters x,y,z (by default), and thus work almost like functions.
>fmin("x^3-x",0)
0.577350265248
The same function can be coded in Python. The following is a line-by-line translation of the EMT code.
import numpy as np

def fmin (f,a,b=None,d=0.01,dmax=1e+20,eps=1e-14):
    """ Compute the minimum of a function
    using the Golden Cut.
    a: Start
    b: End of interval or None (end is searched first)
    d: Initial step size
    dmax: Maximal step size to search for b
    eps: Desired accuracy
    Returns: Point of minimum.
    """
    if b is None:
        # search for b: step in the descending direction,
        # doubling the step size, until the values increase
        x0=a; y0=f(x0)
        x1=x0+d; y1=f(x1)
        if y1>y0:
            d=-d
            x1=x0+d; y1=f(x1)
        while True:
            if not y1<y0:
                break
            d=2*d
            x0=x1; x1=x0+d
            y0=y1; y1=f(x1)
            if abs(d)>dmax:
                break
        a=x0-d
        b=x1
    # golden section search on [a,b]
    x0=a; x3=b
    y0=f(x0); y3=f(x3)
    l=(3-np.sqrt(5))/2
    x1=x0+l*(x3-x0); x2=x3-l*(x3-x0)
    y1=f(x1); y2=f(x2)
    while True:
        if y1>y2:
            x0=x1; x1=x2; x2=x3-l*(x3-x0)
            y0=y1; y1=y2; y2=f(x2)
        else:
            x3=x2; x2=x1; x1=x0+l*(x3-x0)
            y3=y2; y2=y1; y1=f(x1)
        if np.abs(x0-x3)<eps:
            return x0
Using this for the example above is easy.
def f(x):
    return x**3-x

fmin(f,0) # -> 0.5773502652484417
Via lambda functions, an expression works just as well.
fmin(lambda x: x**3-x,0)
The main motivation for this posting is a more complicated issue: we want to pass additional parameters to the function.
Euler Math Toolbox has two ways of doing this. The first way is via args(). It works as follows.
>function f(x,a) := x^3-a*x
>xmin = fmin("f",0;1)
0.577350265248
>xmin = fmin("f",0;1.1)
0.60553006474
The parameters after the semicolon are passed from "fmin" to the function "f" by calling it as "f(x,args())".
The second way is with call collections. Those are collections of functions and parameters. They are available for expressions too, but need a named member for the additional parameters.
>fmin({{"f",1.1}},0)
0.60553006474
>fmin({{"x^3-a*x",a=1.1}},0)
0.60553006474
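In Python, a call collection could be imitated by pairing the function with a tuple of its extra parameters. The helper "call" below is a hypothetical sketch, not part of EMT or of any library:

```python
def call(fc, x):
    # fc is a pair (function, tuple of extra parameters)
    f, args = fc
    return f(x, *args)

def f(x, a):
    return x**3 - a*x

call((f, (1.1,)), 0.5)  # same as f(0.5, 1.1)
```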
To achieve the same in Python, the easiest way is to use lambda functions.
def f(x,a):
    return x**3-a*x

a = 1.1
fmin(lambda x: f(x,a),0) # -> 0.6055300647407058
If that is used within another function, the lambda will refer to the variable "a" in the enclosing scope. You need not worry about a clash with any variable "a" inside the function "fmin".
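Note that the lambda closes over the variable "a" itself, not over its value, while functools.partial from the standard library binds the value at creation time. A small sketch of the difference:

```python
from functools import partial

def f(x, a):
    return x**3 - a*x

a = 1.1
g = lambda x: f(x, a)    # closes over the variable a itself
h = partial(f, a=1.1)    # binds the value 1.1 once, at creation time

a = 2.0
g(1.0)  # -> -1.0, the lambda now sees a = 2.0
h(1.0)  # still evaluates f(1.0, 1.1), approximately -0.1
```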
Because Python does not separate the additional parameters with a semicolon, the "args" method feels a bit strange, but it is possible. Have a look at this code.
def h(x,a,b):
    return a*x**2+b

def evalf (f,x,*args):
    return f(x,*args)

evalf(h,3,4,1) # -> 37
The function "evalf" takes another function as an argument, just as "fmin" does. We want to pass more than one argument to this function "f". If we call "evalf(h,3,4,1)", those arguments are "(4,1)". They are passed as a tuple to the variable "args". Writing "*args" expands this tuple and passes "4,1" on to "f", i.e., to "h".
I wrote a private Python module with some functions like "fmin" where I did not use the "args" method. The reason is that I want to be able to override named parameters with positional arguments, e.g., calling "fmin(f,a,b)" with interval bounds, where the default for "b" is "None". This is not possible if "*args" is present.
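The conflict is easy to demonstrate. "fmin_args" below is a hypothetical signature for illustration, not the "fmin" from above:

```python
def fmin_args(f, a, b=None, *args):
    # any extra positional argument is captured by the named
    # parameter b first, never by args
    return (b, args)

def f(x, a):
    return x**3 - a*x

fmin_args(f, 0, 1.1)  # -> (1.1, ()): 1.1 lands in b, not in args
```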
Finally, an application of "fmin".
We minimize a multidimensional function with a gradient search. The search descends along the gradient until a minimum is found and repeats from that point. The handling of the accuracy during the algorithm needs to be improved for maximal performance; currently, it computes the accurate minimum in each step.
The method is best used for convex functions, or at least locally convex functions. It is then guaranteed to find a minimum, provided that the set of points where the function values are less than the current value is compact.
epsilon = 1e-14

def vmin (f,x,v,tmax=1,tmin=epsilon):
    """ Minimize f(x-t*v) for t>=0, assuming v is a direction
    of ascent (e.g., the gradient), so that -v points downhill.
    """
    t=fmin(lambda t: f(x-t*v),0)
    return (t,x-t*v)

def fminmult (f,Df,x):
    res = 1e20
    while True:
        t,xnew = vmin(f,x,Df(x))
        resnew = f(xnew)
        if resnew>=res:
            break
        res = resnew; x = xnew
    return x
def f(x,a):
    return (x[0]-a)**2+2*a*x[1]**2

def Df(x,a):
    return np.array([2*(x[0]-a),4*a*x[1]])

a = 4
fminmult(lambda x: f(x,a),lambda x: Df(x,a),np.array([1,1]))
# -> array([ 4.00000000e+00, -1.65436123e-24])
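For comparison, a plain gradient descent with a fixed step size also converges on this example. The sketch below is independent of the code above; the step size 0.1 is chosen by hand, which is exactly the tuning that the line search in "fminmult" avoids:

```python
import numpy as np

def f(x, a):
    return (x[0]-a)**2 + 2*a*x[1]**2

def Df(x, a):
    return np.array([2*(x[0]-a), 4*a*x[1]])

# fixed-step gradient descent; with a line search (as in fminmult)
# no manual step size would be needed
a = 4
x = np.array([1.0, 1.0])
for _ in range(200):
    x = x - 0.1*Df(x, a)
# x is now close to the minimum at (4, 0)
```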