Hello,
Is there a way to use fmincon to solve a least-squares problem like the following:
I have hundreds of x and y data points and I want to fit a 6th-order polynomial.
BR
Hello,
You don’t need fmincon, just the backslash operator. For example, if x and y are column vectors, you will get the coefficients, then the polynomial, with
a = [ones(x) x x.^2 x.^3 x.^4 x.^5 x.^6] \ y;
p = poly(a, "x", "coeff");
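For reference, here is a minimal end-to-end sketch of this backslash approach; the synthetic data below (and the variable names `A` and `yfit`) are made up for illustration:

```scilab
// Least-squares 6th-order polynomial fit via backslash (synthetic data)
x = linspace(-1, 1, 200)';                        // abscissae, column vector
y = 2 - x + 0.5*x.^3 + 0.01*rand(200, 1, "normal");  // noisy samples
A = [ones(x) x x.^2 x.^3 x.^4 x.^5 x.^6];         // design matrix
a = A \ y;                                        // least-squares coefficients
p = poly(a, "x", "coeff");                        // Scilab polynomial object
yfit = horner(p, x);                              // evaluate the fit
```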
S.
Hi Stephane,
Thank you for this trick for polynomials. Great learning for me.
I used the polynomial case to start with something simple.
Let’s say my problem is y = a + b*exp(x-c) or y = a + b*tanh(x-c) or … much more complex functions (x and y are vectors of data).
I usually use the lsqrsolve function for this kind of work, but I’m not happy with the final answers. I want to know if fmincon/Ipopt can give better/faster answers.
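For the record, a model like y = a + b*exp(x-c) can also be fed to leastsq directly. A minimal sketch, with synthetic data and an arbitrary starting point standing in for the real problem:

```scilab
// Sketch: fitting y = a + b*exp(x - c) with leastsq; data and p0 are made up
function r = residual(p, x, y)
    r = p(1) + p(2)*exp(x - p(3)) - y;   // vector of residuals
endfunction

x = linspace(0, 2, 100)';                               // synthetic abscissae
y = 1 + 0.5*exp(x - 1) + 0.01*rand(100, 1, "normal");   // noisy samples
p0 = [0; 1; 0];                                         // initial guess [a; b; c]
[fopt, popt] = leastsq(list(residual, x, y), p0);       // fopt = sum of squares
```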
BR
F
Yes, least-squares polynomial approximation is just a matter of linear algebra!
It’s not linear least squares anymore, hence it needs a nonlinear solver. You can solve this kind of data fitting problem with datafit, which internally uses leastsq, which itself uses optim. Using leastsq is more robust than lsqrsolve, which uses Levenberg-Marquardt (with trust region); the latter is sometimes faster (when it converges) but does not handle constraints on the parameters.
In fact, you should always check the return flag to see whether the solver converged. My experience on difficult/large/ill-posed data fitting problems is that interior-point solvers are the most robust ones. I use Ipopt and I often recommend fmincon to my colleagues who come from the Matlab world.
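As an illustration of checking the flag, here is a sketch with lsqrsolve (the toy model and data below are assumptions; see the lsqrsolve help page for the meaning of the info codes):

```scilab
// Toy model y = p(1)*exp(-p(2)*t) fitted to synthetic measurements
function r = fres(p, m)
    t = linspace(0, 2, m)';
    y = 2*exp(-1.5*t);                   // synthetic "measurements"
    r = p(1)*exp(-p(2)*t) - y;           // residual vector of length m
endfunction

m = 20;
[popt, v, info] = lsqrsolve([1; 1], fres, m);
if info < 1 | info > 3 then              // per the help, 1..3 mean convergence
    warning("lsqrsolve did not converge, info = " + string(info));
end
```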
There is a simple example of data fitting with IPopt and fmincon in the demos of each package.
S.
I have the start of an answer to my question.
There are examples of nonlinear data fitting in the FOT toolbox (FOSSEE Optimization Toolbox, using Ipopt). I had to use an old Scilab version, as it is not available for Scilab 2024. FOT_lsqnonlin seems to be much slower than leastsq or lsqrsolve. Is that possible?
For the demo, are you talking about the “fitting problem” demo?
This one fails to run in Scilab 2024.
It needs a “.” before the “^2” at line 47:
y = cos(2*t+1).*exp(-0.1*t.^2);
Almost there…
Yes, there is a problem because of the new strict rules of the power operator. I just updated the package this morning. You can uninstall, refresh the list of packages, then reinstall fmincon.
S.
I have compared the lsqnonlin example in the documentation of the FOT toolbox in Scilab 6.1 with Scilab 2024.0 fmincon:
function [y, dy]=myfun(x, tm, ym, wm)
y = wm.*( x(1)*exp(-x(2)*tm) - ym )
v = wm.*exp(-x(2)*tm)
dy = [v , -x(1)*tm.*v]
endfunction
m = 10;
tm = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5]';
ym = [0.79, 0.59, 0.47, 0.36, 0.29, 0.23, 0.17, 0.15, 0.12, 0.08]';
wm = ones(m,1);
x0 = [1.5 ; 0.8];
lb = [-5,-5];
ub = [10,10];
options = list("GradObj", "on")
[xopt,resnorm,residual,exitflag,output,lambda,gradient] = lsqnonlin(myfun,x0,lb,ub,options)
and the same problem with Scilab 2024.0 and fmincon 1.0.7 (reformulated as a general minimization problem):
function [y, dy]=myfun(x, tm, ym, wm)
y = wm.*( x(1)*exp(-x(2)*tm) - ym )
v = wm.*exp(-x(2)*tm)
dy = [v , -x(1)*tm.*v]
endfunction
function [f,g] = costf(x)
[y, dy]=myfun(x, tm, ym, wm);
f = sum(y.*y);
g = 2*dy'*y;
endfunction
m = 10;
tm = [0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5]';
ym = [0.79, 0.59, 0.47, 0.36, 0.29, 0.23, 0.17, 0.15, 0.12, 0.08]';
// measure weights (here all equal to 1...)
wm = ones(m,1);
opt = optimoptions("fmincon",...
"SpecifyObjectiveGradient",1);
problem = struct();
problem.x0 = [1.5;0.8];
problem.objective = costf;
problem.lb = [-5;-5];
problem.ub = [10;10];
problem.options = opt;
[x,fval,exitflag,output,lambda] = fmincon(problem);
On the same Windows PC, FOT lsqnonlin on Scilab 6.1 takes 5 seconds whereas the new fmincon on Scilab 2024.0 takes 0.1 seconds. This is not a matter of formulation, as FOT lsqnonlin also reformulates the least-squares problem as a general optimization problem given to Ipopt; it is rather a matter of implementation of the interface.
S.
Great … fmincon is fast, faster than FOT.
In parallel, I tried to compare “datafit”, “leastsq”, “lsqrsolve” and “fmincon” on a different problem called MGH17 (StRD dataset MGH17).
I was not able to make “datafit” (0.503 s) find the expected solution.
I tried many options without success.
Maybe you will find the issue.
“leastsq” (0.39 s) and “lsqrsolve” (0.002 s) work fine.
I used your example above to try “fmincon” (0.947 s) without the gradient.
It probably needs the gradient to be faster.
Did I miss something?
BR.
F
MGH17_datafit.sce (1.2 KB)
MGH17_fmincon.sce (1.3 KB)
MGH17_leastsq.sce (1.1 KB)
MGH17_lsqrsolve.sce (1.2 KB)
Hello,
As I said above, lsqrsolve is sometimes faster than classical (not least-squares dedicated) methods with good initial guesses, but it does not handle constraints on the parameters, and leastsq does not handle constraints other than bounds. Moreover, comparing methods with a single initial guess does not test the global convergence properties.
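One cheap way to probe global convergence is a crude multistart loop. A sketch (the model, bounds, and data below are made up for illustration):

```scilab
// Multistart over random initial guesses for y = p(1) + p(2)*exp(x - p(3))
function r = res(p, x, y)
    r = p(1) + p(2)*exp(x - p(3)) - y;
endfunction

x = linspace(0, 2, 50)';
y = 1 + 0.5*exp(x - 1);                  // synthetic data
lb = [-5; -5; -5];  ub = [5; 5; 5];      // assumed bounds on the parameters
best = %inf;
for k = 1:20
    p0 = lb + rand(3, 1) .* (ub - lb);   // random start inside the bounds
    [f0, p] = leastsq(list(res, x, y), p0);
    if f0 < best then best = f0; popt = p; end
end
```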
To me, fmincon is a kind of Swiss Army knife for all kinds (with or without linear or nonlinear constraints) and sizes of problems, and there will always be other methods showing better performance on specific and/or small-size problems (see e.g. https://scilab.discourse.group/t/large-scale-nonlinear-optimization-with-fmincon-in-scilab-vs-matlab which highlights a use case out of reach of lsqrsolve or leastsq).
S.
Thank you for your help
F.