Data=[   10     50     70    80     100.    170.    200.
  0.09   0.49   0.6   0.68   0.71   0.89   0.91];
function YP=Model(p,x)
 YP=p(1) +p(2)*(1-exp(-x/p(3)))
endfunction
function e=G(p, Data)
    e = Data(2,:) - FF(Data(1,:), p);  // vertical distance
endfunction
p0=[0;1;80];
[p, dmin] = datafit(G, Data, p0)
What is FF?
--> [p, dmin] = datafit(G, Data, p0)
at line     2 of function G       
at line     5 of function costf   
at line   -12 of function optim   
at line   243 of function datafit ( /Users/mottelet/Desktop/scilab-2024.0.0 (x86_64).app/Contents/share/scilab/modules/optimization/macros/datafit.sci line 255 )
Undefined variable: FF
Maybe FF() is Model(); in that case the arguments have to be swapped:
Data=[   10     50     70    80     100.    170.    200.
  0.09   0.49   0.6   0.68   0.71   0.89   0.91];
function YP=Model(p,x)
 YP=p(1) +p(2)*(1-exp(-x/p(3)))
endfunction
function e=G(p, Data)
    e = Data(2,:) - Model(p,Data(1,:));  // vertical distance
endfunction
p0=[0;1;80];
[p, dmin] = datafit(G, Data, p0)
yields
 p = [3x1 double]
  -0.0486532
   1.0097587
   66.657787
 dmin = 
   0.0139127
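As a cross-check outside Scilab (an illustration, not part of the thread), the same least-squares problem can be reproduced with SciPy's curve_fit, starting from the same initial guess. It should converge to essentially the same parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

# measurement data from the thread
x = np.array([10, 50, 70, 80, 100, 170, 200], dtype=float)
y = np.array([0.09, 0.49, 0.60, 0.68, 0.71, 0.89, 0.91])

def model(x, a, b, c):
    # same model as in the thread: a + b*(1 - exp(-x/c))
    return a + b * (1.0 - np.exp(-x / c))

# same initial guess as p0 = [0; 1; 80]
popt, pcov = curve_fit(model, x, y, p0=[0.0, 1.0, 80.0])
print(popt)  # close to [-0.0487, 1.0098, 66.66]
```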
As I already said in the arm64 issue, you should use the x86_64 build under macOS.
S.
Sorry, this was an early version with silly mistakes. After the correction, datafit worked fine.
The main problem is still the code below, with its indirect referencing of the predicted values. And I really need that feature.
Heinz
function YP=Model2(p,xc)
  x = 1:300;           // fixed evaluation grid
  df2 = exp(-x/p)/p;   // density values
  f2 = cumsum(df2);    // tabulated predicted values
  YP = f2(xc);         // indirect referencing at the measurement times
endfunction
time=[10     50     70    80     100  170  200];
value= [   0.09   0.49   0.6   0.68   0.71   0.89   0.91];
function DV=cost(pv, xv, yv); DV= Model2(pv,xv)-yv; endfunction;
[time; value; Model2(80,time); cost(80,time,value)]
p=80.8; [fopt,xopt,gopt]=leastsq(list(cost,time,value),p)
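The cumulative sum in Model2 is a unit-step Riemann sum of the density exp(-x/p)/p, so for p near 80 the tabulated values closely track the closed form 1 - exp(-x/p). A quick NumPy check (outside Scilab, just to illustrate the approximation):

```python
import numpy as np

p = 80.0
x = np.arange(1, 301)            # same grid as Model2: x = 1:300
df2 = np.exp(-x / p) / p         # density values
f2 = np.cumsum(df2)              # Model2's tabulated "predicted values"
closed_form = 1.0 - np.exp(-x / p)

# the step-1 Riemann sum stays within about 1% of the closed form
print(np.max(np.abs(f2 - closed_form)))
```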
For this kind of problem, I would use the code below:
tic()
data=[
   10.    0.09
   50.    0.49
   70.    0.60 
   80.    0.68
   100.   0.71
   170.   0.89
   200.   0.91]';
xx = data(1,:);
yy = data(2,:);
function f = model(par, x)
  f = par(1) +par(2)*(1-exp(-x./par(3)));
endfunction
function dist = cost(par, z)
    dist = yy - model(par, xx);
endfunction
eps=%eps;
level=1000
p_in =[0,1,80]
[p_out,residuals,info] = lsqrsolve(p_in, cost, size(xx,"*"),[eps,eps,eps,level,eps,level])
Rsq=1-(sum(residuals.^2)/sum((yy-mean(yy)).^2))
mprintf("\n\tpar(1):         %12.7f ",p_out(1))
mprintf("\n\tpar(2):         %12.7f ",p_out(2))
mprintf("\n\tpar(3):         %12.7f ",p_out(3))
mprintf("\n\tR²    :         %12.7f ",Rsq)
mprintf("\n\tValue at 80.8 : %12.7f ",model(p_out,80.8))
mprintf("\n\n\tDuration:         %8.5f ms",1000*toc())
which gives the following output:
	par(1):           -0.0486531 
	par(2):            1.0097587 
	par(3):           66.6578012 
	R²    :            0.9970972 
	Value at 80.8 :    0.6606484 
	Duration:          1.14920 ms
The output matches the answers given by "professional"/specialized software.
Response #1: Great, many thanks. How would you compute the standard errors of the fitted parameters? [Sorry, Stéphane told me two years ago, but I forgot.]
Best greetings
Heinz
Response #1a: Here is the solution providing the standard errors of the fitted parameters. Heinz.
// Function "model" is non-linear in parameters
function [y]=model(x, p)
  y = p(1) +p(2)*(1-exp(-x./p(3)));
endfunction
function e=distance(p, x, y)
   e=model(x,p)-y;
endfunction
// Measurement data
x=[10    50    70  80   100   170  200]';
y=[0.09   0.49   0.6   0.68   0.71   0.89   0.91]';
function g=jacobi(p, x, y)
 g1=ones(x);
 g2=1-exp(-x./p(3));
 g3= -((p(2).*x)/(p(3)^2)).*exp(-x./p(3));
 g= [g1 g2 g3];
endfunction
p_in =[0,1,80]';
// Non-linear least-squares fit
[f,popt, gropt] = leastsq(list(distance,x,y), p_in)
sigma2=1/(length(x)-length(popt))*f
g = jacobi(popt, x, y);  //Jacobian matrix of the fit formula
//covariance matrix of fitting parameters
pcovar=inv(g'*g)*sigma2
//  standard error (1σ) of each parameter
ci=(sqrt(diag(pcovar)));
// Fitted parameters and their standard errors
[popt ci]
 // -0.0486531   0.025962 
 //  1.0097587   0.0273949
 //  66.657801   4.6657479
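The same covariance computation can be cross-checked outside Scilab. A sketch with NumPy/SciPy, applying the thread's formula sigma² · inv(JᵀJ) with the analytic Jacobian from jacobi():

```python
import numpy as np
from scipy.optimize import least_squares

# measurement data from the thread
x = np.array([10, 50, 70, 80, 100, 170, 200], dtype=float)
y = np.array([0.09, 0.49, 0.60, 0.68, 0.71, 0.89, 0.91])

def resid(p):
    # distance function: model(x, p) - y
    return p[0] + p[1] * (1.0 - np.exp(-x / p[2])) - y

sol = least_squares(resid, [0.0, 1.0, 80.0])
p = sol.x

# analytic Jacobian, same columns as jacobi() above
J = np.column_stack([
    np.ones_like(x),
    1.0 - np.exp(-x / p[2]),
    -(p[1] * x / p[2]**2) * np.exp(-x / p[2]),
])

# residual variance: SSR / (n - number of parameters)
sigma2 = np.sum(resid(p)**2) / (len(x) - len(p))
se = np.sqrt(np.diag(sigma2 * np.linalg.inv(J.T @ J)))
print(se)  # close to [0.0260, 0.0274, 4.666]
```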