Scilab can solve many kinds of optimization problems, with or without constraints, and with or without derivatives. The following chart on www.scilab.org gives the names of the Scilab functions for each category of optimization problem:
The sci_ipopt and fmincon toolboxes make it possible to handle general non-linear problems with non-linear constraints (interior-point method).
The available solvers, such as fminsearch, do a good job, but I’d like to see CMA-ES implemented as well.
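For illustration, here is a minimal fminsearch sketch (the Rosenbrock test function and starting point are just illustrative assumptions, not part of this thread):
// Derivative-free Nelder-Mead minimization with Scilab's fminsearch,
// applied to the Rosenbrock function from the classical start point.
function f = rosenbrock(x)
    f = 100*(x(2) - x(1)^2)^2 + (1 - x(1))^2;
endfunction
x0 = [-1.2 1];
[xopt, fopt] = fminsearch(rosenbrock, x0);
disp(xopt)   // converges near [1 1]
disp(fopt)   // cost close to 0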
I tried (some years ago) and got sidelined by the trouble. It exists in ATOMS, but probably doesn’t work:
If anyone is willing to help, I’d like to pick it up again (and give credit; we would share the job).
I have some ‘help’ or ‘demo’ code implemented (somewhere, maybe not on my latest computer).
Cheers,
Claus
Hello Claus,
Did you try to recompile the toolbox? As the package is of “noarch” type (no compiled gateways), there is a high probability that the toolbox works as is in the latest release of Scilab. Anyway, thanks for pinging us on this subject, as it helps focus the work of the community on packages that are still of interest.
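If it helps, a noarch toolbox can usually be tried directly from ATOMS; the module name below is an assumption, the actual technical identifier of the CMA-ES package may differ:
// Assumed ATOMS identifier; replace with the toolbox's actual technical name
atomsInstall("CMA-ES")
atomsLoad("CMA-ES")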
S.
Sometimes, the simplest linear least-squares fit is all that is needed. We are inverting a rectangular matrix, as below. Best greetings. Heinz
A=[0.0675954 0.1493283
0.3516595 0.7155661
0.5497236 1.0542451
0.5852641 1.0899078
0.5945036 1.1216345
0.6952329 1.2505237
0.7327987 1.2878655
0.7935976 1.3476958
0.8308286 1.4032977
0.9171937 1.474964 ];
x=A(:,1); y=A(:,2);
plot(x,y,'o');xgrid();
M=[x x.^3];    // design matrix for the model y = b(1)*x + b(2)*x^3
b=M\y          // least-squares solution of the overdetermined system M*b = y
plot(x,M*b,'-');
xlabel('x');ylabel('y');title('Linear Least-Squares Fit');
legend('data','fit y=b(1).x+b(2).x^3',2);
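To spell out the “inverting a rectangular matrix” remark: for an overdetermined system the backslash operator returns the least-squares solution, which coincides with applying the Moore-Penrose pseudo-inverse to y. A sketch reusing M and y from above:
// For full-rank rectangular M, the backslash solve and the pseudo-inverse
// give the same least-squares coefficients.
b_pinv = pinv(M)*y;
disp(b_pinv)   // matches b = M\y above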
Hi Stéphane
I would like to work on updating the CMA-ES ATOMS package, but only if someone with experience helps me out.
Best regards,
Claus
Hello Heinz,
Please have a look at this topic:
https://scilab.discourse.group/t/welcome-to-the-scilab-discourse-forum/7
You will learn how to format your code so that it can be copy-pasted by readers of your message. To include a screenshot (which would also be nice), click on the “upload” button.
S.
Sometimes, the simplest linear least-squares fit can be used: we are inverting a rectangular matrix, as below. Best greetings. Heinz
A=[0.0675954 0.1493283
0.3516595 0.7155661
0.5497236 1.0542451
0.5852641 1.0899078
0.5945036 1.1216345
0.6952329 1.2505237
0.7327987 1.2878655
0.7935976 1.3476958
0.8308286 1.4032977
0.9171937 1.474964 ];
x=A(:,1); y=A(:,2);
plot(x,y,'o');xgrid();
M=[x x.^3];    // design matrix for the model y = b(1)*x + b(2)*x^3
b=M\y          // least-squares solution of the overdetermined system M*b = y
plot(x,M*b,'-');
xlabel('x');ylabel('y');title('Linear Least-Squares Fit');
legend('data','fit y=b(1).x+b(2).x^3',2);
xstring(0.1,1.2,'b(1)=2.0756'); xstring(0.1,1.1,'b(2)=-0.5688');
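A small design note (not from the original post): the annotation strings can be built from the computed coefficients with msprintf instead of hardcoding the numbers:
// Build the annotation labels from b so they stay correct if the data changes
xstring(0.1, 1.2, msprintf('b(1)=%.4f', b(1)));
xstring(0.1, 1.1, msprintf('b(2)=%.4f', b(2)));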
version = "scilab-2023.0.0" on macOS 10.15.7 (19H2026)
options = "GCC" "x64" "release" "Mar 10 2023" "16:02:56"
OS = "Darwin", version = "19.6.0"
Hi,
You can edit your post (to include the graphics) by clicking on the pen icon.
S.
I am doing my linear least-squares fits without any subroutine, just by inversion of a rectangular array. How do I handle this when every y-value has its own error range?
Heinz
Dead simple: divide your measurement vector and your rectangular matrix (each row by the corresponding error) by the error vector. How could I forget?
Heinz
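A minimal sketch of that weighting, reusing M and y from the fit above; the error vector sig and its values are hypothetical, not from the thread:
// Weighted least squares: scale each equation by 1/sig(i) so that
// points with smaller errors carry more weight in the fit.
sig = 0.05*ones(y);                    // hypothetical per-point standard deviations
Mw  = M ./ (sig*ones(1, size(M,2)));   // divide each row of M by sig(i)
yw  = y ./ sig;                        // divide the measurements by sig(i)
bw  = Mw\yw;                           // weighted least-squares coefficients
disp(bw)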