I am wondering whether Scilab has a built-in function similar to Python's numpy.gradient, which takes an edge_order argument. I can manually create a first-order gradient function in Scilab like this:
function [grad]=dydx(y, x)
    k = length(x);
    grad = zeros(x); // initialize (same size as x; note that zeros(k) would only give a 1-by-1 zero in Scilab)
    // end points: first-order one-sided differences
    grad(1) = (y(2) - y(1))./(x(2) - x(1));
    grad(k) = (y(k) - y(k-1))./(x(k) - x(k-1));
    if k > 2 then
        // interior points: average of the forward and backward difference quotients,
        // vectorized version of the loop below
        // for i = 2:k-1 do
        //     grad(i) = ( (y(i+1) - y(i))./(x(i+1) - x(i)) + ..
        //                 (y(i) - y(i-1))./(x(i) - x(i-1)) ) / 2;
        // end
        grad(2:k-1) = ( (y(3:k) - y(2:k-1))./(x(3:k) - x(2:k-1)) + ..
                        (y(2:k-1) - y(1:k-2))./(x(2:k-1) - x(1:k-2)) ) / 2;
    end
endfunction
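For example, on a uniformly sampled sine wave (example data chosen just for illustration), the largest error shows up at the end points, where the scheme is only first order:

x = linspace(0, 2*%pi, 50);
y = sin(x);
g = dydx(y, x);
disp(max(abs(g - cos(x)))); // the error is dominated by the first-order end points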
What I am asking for is a more advanced gradient calculation in Scilab.
The output of splin (cubic spline interpolation) in “fast” mode will give you the same output as numpy.gradient with edge_order=2. From the documentation:
“fast”: in this case a sub-spline is also computed by using a simple local scheme for the d(i): d(i) is the derivative at x(i) of the interpolation polynomial through (x(i-1),y(i-1)), (x(i),y(i)), (x(i+1),y(i+1)), except for the end points (d(1) being computed from the 3 leftmost points and d(n) from the 3 rightmost points).
In “fast” mode the spline is not cubic but quadratic, hence it is only C_1 (the first derivative of the spline is continuous) instead of C_2 (second derivative continuous) as in the cubic case. There is a connection between interpolation and numerical differentiation/integration, because approximate differentiation/integration formulas always amount to first considering an interpolation polynomial through some of your data points and then differentiating/integrating that polynomial.
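In practice you do not even need to evaluate the spline with interp: the vector returned by splin already contains the nodal derivatives d(i). A minimal sketch, with example data of my own:

x = linspace(0, 2*%pi, 50);
y = sin(x);
d = splin(x, y, "fast"); // d(i) is the estimated dy/dx at x(i)
disp(max(abs(d - cos(x))));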
Since this post was referenced elsewhere, let me improve the dydx function a bit to make its operation more fail-safe, i.e. with a bit more input checking:
function [grad]=dydx(y, x)
    k = length(x);
    grad = zeros(x); // initialize (same size as x)
    if k >= 2 then // gradient calculation needs a minimum of two points
        grad(1) = (y(2) - y(1))./(x(2) - x(1));
        grad(k) = (y(k) - y(k-1))./(x(k) - x(k-1));
        if k > 2 then
            // for i = 2:k-1 do
            //     grad(i) = ( (y(i+1) - y(i))./(x(i+1) - x(i)) + ..
            //                 (y(i) - y(i-1))./(x(i) - x(i-1)) ) / 2;
            // end
            grad(2:k-1) = ( (y(3:k) - y(2:k-1))./(x(3:k) - x(2:k-1)) + ..
                            (y(2:k-1) - y(1:k-2))./(x(2:k-1) - x(1:k-2)) ) / 2;
        end
    end
endfunction
Also, although the idea by Stephane Mottelet that a spline can work is a good one, I have found out that it is not the same as numpy.gradient with edge_order=2. The numpy.gradient implementation is somewhat more generic and is based on compact finite differences:
In general you must solve a system of equations to evaluate a compact scheme; in other words, they are implicitly defined. These schemes are used in computational fluid dynamics because of certain desirable numerical advection properties.
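For completeness, here is a sketch of an explicit second-order gradient for a possibly non-uniform grid, using the standard three-point central formula in the interior and three-point one-sided formulas at the ends. As far as I can tell this matches what numpy.gradient does with edge_order=2, but treat that equivalence as an assumption; the name dydx2 and the code are mine, not taken from numpy:

function [grad]=dydx2(y, x)
    k = length(x);
    if k < 3 then
        error("dydx2: at least three points are needed");
    end
    grad = zeros(x);
    // interior points: three-point formula valid for non-uniform spacing
    hs = x(2:k-1) - x(1:k-2); // spacing to the left of each interior point
    hd = x(3:k) - x(2:k-1);   // spacing to the right
    num = hs.^2 .* y(3:k) + (hd.^2 - hs.^2).*y(2:k-1) - hd.^2 .* y(1:k-2);
    grad(2:k-1) = num ./ (hs .* hd .* (hs + hd));
    // left end point: one-sided three-point formula
    h1 = x(2) - x(1); h2 = x(3) - x(2);
    grad(1) = -(2*h1 + h2)/(h1*(h1 + h2))*y(1) + (h1 + h2)/(h1*h2)*y(2) - h1/(h2*(h1 + h2))*y(3);
    // right end point: mirror image of the left formula
    h1 = x(k-1) - x(k-2); h2 = x(k) - x(k-1);
    grad(k) = h2/(h1*(h1 + h2))*y(k-2) - (h1 + h2)/(h1*h2)*y(k-1) + (2*h2 + h1)/(h2*(h1 + h2))*y(k);
endfunction

For equally spaced points the end formulas reduce to the familiar (-3*y(1) + 4*y(2) - y(3))/(2*h) and its mirror image, and the interior formula reduces to the central difference, so the end points are second-order accurate as well, unlike in the simple dydx above.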