Calculating gradients

Hi

I am wondering whether Scilab has a built-in function similar to Python's numpy.gradient, which takes an edge_order argument … I can manually create a first-order gradient function in Scilab like this:

function [grad]=dydx(y, x)
    k = length(x);
    grad = zeros(x);    // initialize with the same shape as x (note: in Scilab, zeros(k) with scalar k is a 1x1 zero, not a vector of k zeros)
    grad(1) = (y(2) - y(1))./(x(2) - x(1));
    grad(k) = (y(k) - y(k-1))./(x(k) - x(k-1));
    if k>2 then
      //  for i = 2:k-1 do
      //      grad(i) = ( (y(i+1) - y(i))./(x(i+1) - x(i)) + ..
      //                  (y(i) - y(i-1))./(x(i) - x(i-1)) ) / 2;
      //  end
        grad(2:k-1) = ( (y(3:k) - y(2:k-1))./(x(3:k) - x(2:k-1)) + ..
                        (y(2:k-1) - y(1:k-2))./(x(2:k-1) - x(1:k-2)) ) / 2;
    end
endfunction
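For comparison, here is a pure-Python sketch of the same first-order scheme (no NumPy needed; the function name dydx simply mirrors the Scilab one). Note that averaging the two one-sided slopes, as done in the interior here, is only second-order accurate when the grid spacing is uniform, which is what motivates the rest of this thread:

```python
def dydx(y, x):
    """First-order gradient: one-sided differences at the ends,
    average of the two one-sided slopes in the interior."""
    k = len(x)
    grad = [0.0] * k
    grad[0] = (y[1] - y[0]) / (x[1] - x[0])
    grad[-1] = (y[-1] - y[-2]) / (x[-1] - x[-2])
    for i in range(1, k - 1):
        grad[i] = ((y[i+1] - y[i]) / (x[i+1] - x[i])
                 + (y[i] - y[i-1]) / (x[i] - x[i-1])) / 2
    return grad

print(dydx([1, 2, 4, 7, 11], [0, 2, 4, 5, 10]))
# → [0.5, 0.75, 2.0, 1.9, 0.8]
```

On the non-uniform grid used later in this thread, these interior values differ from numpy.gradient's second-order result.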

What I am asking for is a more advanced (e.g. second-order accurate) gradient calculation in Scilab.

With kind regards,
Claus

Hello,

The output of splin in “fast” mode will give you the same result as numpy.gradient with edge_order=2. From the splin help page:

“fast”
in this case a sub-spline is also computed by using a simple local scheme for the d(i): d(i) is the derivative at x(i) of the interpolation polynomial through (x(i-1),y(i-1)), (x(i),y(i)), (x(i+1),y(i+1)), except for the end points (d(1) being computed from the 3 leftmost points and d(n) from the 3 rightmost points).

>>> import numpy as np
>>> x = np.array([0, 2, 4, 5, 10], dtype=float)
>>> y = np.array([1, 2, 4, 7, 11], dtype=float)
>>> 
>>> np.gradient(y,x,edge_order=2)
array([ 0.25      ,  0.75      ,  2.33333333,  2.63333333, -1.03333333])
--> splin([0, 2, 4, 5, 10],[1, 2, 4, 7, 11],"fast")
 ans  =

   0.25   0.75   2.3333333   2.6333333  -1.0333333
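To make the local scheme concrete, here is a minimal pure-Python sketch (no NumPy): differentiate the Lagrange parabola through three neighbouring points and evaluate it at x(i), using the three outermost points at each end. It needs at least 3 points, and the helper names parabola_deriv and gradient2 are made up for this illustration:

```python
def parabola_deriv(t, x0, y0, x1, y1, x2, y2):
    # Derivative at t of the Lagrange parabola through (x0,y0), (x1,y1), (x2,y2)
    return (y0 * (2*t - x1 - x2) / ((x0 - x1) * (x0 - x2))
          + y1 * (2*t - x0 - x2) / ((x1 - x0) * (x1 - x2))
          + y2 * (2*t - x0 - x1) / ((x2 - x0) * (x2 - x1)))

def gradient2(y, x):
    """Second-order gradient on a possibly non-uniform grid (len(x) >= 3)."""
    k = len(x)
    d = [0.0] * k
    d[0] = parabola_deriv(x[0], x[0], y[0], x[1], y[1], x[2], y[2])
    d[-1] = parabola_deriv(x[-1], x[-3], y[-3], x[-2], y[-2], x[-1], y[-1])
    for i in range(1, k - 1):
        d[i] = parabola_deriv(x[i], x[i-1], y[i-1], x[i], y[i], x[i+1], y[i+1])
    return d

print(gradient2([1, 2, 4, 7, 11], [0, 2, 4, 5, 10]))
# ≈ [0.25, 0.75, 2.3333, 2.6333, -1.0333]
```

This reproduces both the numpy.gradient(y, x, edge_order=2) and the splin "fast" output above.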

S.


Hi Stéphane

The Python docs say: “The gradient is computed using second order accurate central differences in the interior points”,
following https://www.ams.org/journals/mcom/1988-51-184/S0025-5718-1988-0935077-0/S0025-5718-1988-0935077-0.pdf
It is surprising to me to see this connection with cubic spline interpolation. Simple, when you know…

Interesting! Thank you.

With kind regards,
Claus

In “fast” mode the spline is not cubic but quadratic, hence it is only C^1 (the first derivative of the spline is continuous) instead of C^2 (second derivative continuous) as in the cubic case. There is a connection between interpolation and numerical differentiation/integration because approximate differentiation/integration formulas always amount to first building an interpolation polynomial through some of your data points and then differentiating/integrating that polynomial.
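Concretely, differentiating the parabola through (x(i-1),y(i-1)), (x(i),y(i)), (x(i+1),y(i+1)) and evaluating at x(i) yields the weighted difference that numpy.gradient documents for interior points on a non-uniform grid (writing h_s = x_i - x_{i-1} and h_d = x_{i+1} - x_i):

```latex
f'(x_i) \approx \frac{h_s^2\, y_{i+1} + (h_d^2 - h_s^2)\, y_i - h_d^2\, y_{i-1}}{h_s\, h_d\,(h_s + h_d)}
```

For a uniform grid (h_s = h_d = h) this reduces to the familiar central difference (y_{i+1} - y_{i-1}) / (2h).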

S.


Hi

Since this post was referenced elsewhere, let me improve the dydx function a bit, making it a little more fail-safe by adding some input checking:

function [grad]=dydx(y, x)
    k = length(x);
    grad = zeros(x);    // initialize with the same shape as x (note: in Scilab, zeros(k) with scalar k is a 1x1 zero, not a vector of k zeros)
    if k>=2 then        // gradient calculation needs a minimum of two points
        grad(1) = (y(2) - y(1))./(x(2) - x(1));
        grad(k) = (y(k) - y(k-1))./(x(k) - x(k-1));
        if k>2 then
      //  for i = 2:k-1 do
      //      grad(i) = ( (y(i+1) - y(i))./(x(i+1) - x(i)) + ..
      //                  (y(i) - y(i-1))./(x(i) - x(i-1)) ) / 2;
      //  end
            grad(2:k-1) = ( (y(3:k) - y(2:k-1))./(x(3:k) - x(2:k-1)) + ..
                        (y(2:k-1) - y(1:k-2))./(x(2:k-1) - x(1:k-2)) ) / 2;
        end
    end
endfunction

Also, although Stéphane Mottelet's spline suggestion works for the example above, I have found that it is not in general the same as numpy.gradient with edge_order=2. The numpy.gradient implementation is somewhat more generic, and the implementation is based on compact finite differences:

In general you must solve a system of equations to evaluate a compact scheme; in other words, the derivatives are implicitly defined. These schemes are used in computational fluid dynamics because of certain desirable numerical advection properties.
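To illustrate the implicit nature of compact schemes, here is a sketch of the classical fourth-order Padé scheme on a uniform grid (this is an illustration of the compact idea, not of numpy.gradient's internals; for brevity the two boundary rows are filled with exact derivative values, where a real implementation would use one-sided closures):

```python
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a: sub-, b: main, c: super-diagonal."""
    n = len(b)
    b, d = b[:], d[:]           # work on copies
    for i in range(1, n):       # forward elimination
        w = a[i] / b[i-1]
        b[i] -= w * c[i-1]
        d[i] -= w * d[i-1]
    xs = [0.0] * n              # back substitution
    xs[-1] = d[-1] / b[-1]
    for i in range(n - 2, -1, -1):
        xs[i] = (d[i] - c[i] * xs[i+1]) / b[i]
    return xs

# Padé scheme: f'_{i-1}/4 + f'_i + f'_{i+1}/4 = 3 (f_{i+1} - f_{i-1}) / (4h)
n = 21
h = math.pi / (n - 1)
xg = [i * h for i in range(n)]
f = [math.sin(v) for v in xg]

a, b, c, d = [0.0]*n, [1.0]*n, [0.0]*n, [0.0]*n
d[0], d[-1] = math.cos(xg[0]), math.cos(xg[-1])   # exact boundary values (demo only)
for i in range(1, n - 1):
    a[i] = c[i] = 0.25
    d[i] = 3.0 * (f[i+1] - f[i-1]) / (4.0 * h)

dfdx = thomas(a, b, c, d)
err = max(abs(dfdx[i] - math.cos(xg[i])) for i in range(n))
print("max error:", err)   # small: the scheme is 4th-order accurate in the interior
```

The tridiagonal solve is exactly the "system of equations" mentioned above: every derivative value depends on its neighbours, so all of them must be computed at once.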

Cheers,
Claus