I have recently answered this question on Stack Overflow and I think it is worth sharing here. When the time derivative of the solution of an ODE is also needed, the usual way is to call the right-hand-side function after the solver call, as in this example:
// right-hand side of the ODE y' = A*y
function yd = rhs(t,y)
    yd = [0 1;-1 0]*y;
end

y0 = [1;0];
t = linspace(0,3,100);
y = ode(y0,0,t,rhs);
// recompute the derivative at each output time
dy = zeros(y);
for i = 1:size(y,2)
    dy(:,i) = rhs(t(i),y(:,i));
end
In that simple example, instead of iterating over the columns of `y`, we can call `rhs` with the whole matrix `y` as its second argument, since the "vectorized" computation makes sense here:
dy = rhs(t,y)
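As a side note for readers who use Python, the same post-processing idea can be sketched with SciPy's `solve_ivp` (this is only an analogue of the Scilab code above, not part of the original answer):

```python
import numpy as np
from scipy.integrate import solve_ivp

# ODE y' = A*y with A = [[0, 1], [-1, 0]]; exact solution is [cos t, -sin t]
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rhs(t, y):
    return A @ y

t_eval = np.linspace(0.0, 3.0, 100)
sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], t_eval=t_eval,
                rtol=1e-8, atol=1e-10)

# vectorized post-processing: one matrix product instead of a loop over columns
y = sol.y          # shape (2, 100)
dy = A @ y         # derivative at every output time
```

As in the Scilab case, the vectorized call works only because the rhs is a plain matrix product that broadcasts over the columns of `y`.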
However, in both cases extra computations are needed.
In fact, the fastest way to obtain both `y` and `dy` is to reformulate the original ODE as a DAE (differential-algebraic equation) and solve it with `dassl` (see dassl - Differential algebraic equation), like this:
// equivalent DAE residual
function [r,flag] = res(t,y,yd)
flag = 0;
r = yd-rhs(t,y);
end
// solve with dassl and extract y, dy
sol = dassl([y0 rhs(0,y0)],0,t,res);
y = sol(2:3,:);
dy = sol(4:5,:);
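SciPy has no DASSL-like DAE solver in its standard distribution, but a tiny backward-Euler stepper applied to the same residual can illustrate, in Python, how a DAE solver naturally produces both the solution and its derivative at each step (a minimal sketch for illustration only, with none of dassl's accuracy or step control):

```python
import numpy as np
from scipy.optimize import fsolve

# DAE residual r(t, y, yd) = yd - A*y (same toy system as before)
A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def res(t, y, yd):
    return yd - A @ y

# toy backward-Euler DAE stepper: at each step, solve
# res(t_n, y_n, (y_n - y_{n-1})/h) = 0 for y_n, then yd_n follows for free
def dae_solve(res, y0, yd0, t):
    y = np.zeros((len(y0), len(t)))
    yd = np.zeros_like(y)
    y[:, 0], yd[:, 0] = y0, yd0    # consistent initial values, as for dassl
    for n in range(1, len(t)):
        h = t[n] - t[n - 1]
        yp = y[:, n - 1]
        yn = fsolve(lambda v: res(t[n], v, (v - yp) / h), yp)
        y[:, n] = yn
        yd[:, n] = (yn - yp) / h
    return y, yd

t = np.linspace(0.0, 3.0, 1001)
y0 = np.array([1.0, 0.0])
y, yd = dae_solve(res, y0, A @ y0, t)
```

The point is structural: because the stepper works with the residual, the derivative `yd` is a by-product of the solve itself, exactly like the rows 4:5 of dassl's output.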
Hi Stephane,
thank you for sharing this clever approach.
Does this mean that it is systematically preferable to use dassl() rather than ode()?
David
No, an ODE should always be solved with an ODE scheme, because DAEs need more complex machinery. DAE solvers are always implicit, i.e. they solve a system of (generally nonlinear) equations at each step, whereas ODE schemes can be explicit (nothing to solve) or implicit (recommended when the ODE is stiff).
As I already explained, it is always possible to compute the rhs values afterwards, but if the computation of the rhs is not trivial (in the Stack Overflow question I mentioned, evaluating the rhs required solving a scalar nonlinear equation) and many time steps are needed, this post-processing may take as long as the ODE solve itself…
Then let me try to summarize the key message: when the rhs contains nonlinear equations to solve and the derivative is needed as an output, it is more relevant to use dassl than ode, since dassl already computes the derivative in its native code, whereas with ode it requires additional computations in a Scilab macro.
Thank you
David
The proposed solution holds for the current Scilab ode() API. This solver relies on the legacy lsoda/lsode steppers, which are now outperformed by Sundials' cvode. However, all of these solvers provide continuous interpolation formulas that can yield the solution and its derivative at arbitrary time points, so this is really just a question of API at the Scilab user level. The integration of the Sundials solvers in Scilab will focus particularly on this aspect (transferring as much of the solver's internal functionality as possible to the user level).
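As an illustration of the continuous-interpolation idea mentioned above, SciPy's `solve_ivp` exposes its interpolant via `dense_output=True`; the derivative at arbitrary times is then obtained by evaluating the rhs on the interpolated solution (a Python sketch on the same toy system as before, not the future Scilab/Sundials API):

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, 0.0]])

def rhs(t, y):
    return A @ y

# dense_output=True makes the solver build a continuous interpolant
sol = solve_ivp(rhs, (0.0, 3.0), [1.0, 0.0], dense_output=True,
                rtol=1e-8, atol=1e-10)

# solution and derivative at arbitrary time points, independent of the
# internal solver steps
t_new = np.linspace(0.0, 3.0, 7)
y_new = sol.sol(t_new)          # evaluate the interpolant
dy_new = rhs(t_new, y_new)      # plain matrix product, vectorized over columns
```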