Discrete & Continuous Models & Applied Computational Science
ISSN 2658-7149 (Online), 2658-4670 (Print)
2024, 32 (2) 202-212
http://journals.rudn.ru/miph
Research article
UDC 519.21
PACS 52.25.Fi
DOI: 10.22363/2658-4670-2024-32-2-202-212
EDN: CRLKAJ
Clenshaw algorithm in the interpolation problem by the Chebyshev collocation method
Konstantin P. Lovetskiy, Anastasiia A. Tiutiunnik,
Felix Jose do Nascimento Vicente, Celmilton Teixeira Boa Morte
RUDN University, 6 Miklukho-Maklaya St, Moscow, 117198, Russian Federation
(received: October 14, 2023; revised: November 12, 2023; accepted: November 18, 2023)
Abstract. The article describes a method for calculating the interpolation coefficients of an expansion in Chebyshev polynomials. The method is valid when the desired function is bounded and has a finite number of maxima and minima in a finite interpolation domain. The essence of the method is that the interpolated function can be represented as an expansion in Chebyshev polynomials; the expansion coefficients are then determined using the collocation method by reducing the problem to a well-conditioned system of linear algebraic equations for the required coefficients. Using the well-known useful properties of Chebyshev polynomials can significantly simplify the solution of the function interpolation problem. A technique using the Clenshaw algorithm for summing the series and determining the expansion coefficients of the interpolated function, based on the discrete orthogonality of the Chebyshev polynomials of the first kind, is outlined.
Key words and phrases: interpolation of functions by the Chebyshev collocation method, Clenshaw algorithm for accelerating calculations
Citation: Lovetskiy K. P., Tiutiunnik A. A., do Nascimento Vicente F. J., Boa Morte C. T., Clenshaw algorithm in the interpolation problem by the Chebyshev collocation method. Discrete and Continuous Models and Applied Computational Science 32 (2), 202-212. doi: 10.22363/2658-4670-2024-32-2-202-212. edn: CRLKAJ (2024).
1. Introduction
The construction of efficient numerical methods for solving differential and integral equations is an important element in solving applied problems in various fields, such as aerospace engineering, meteorology, physical oceanography, mechanical engineering, and nuclear energy. Taking this into account, we will consider and analyze the efficiency of some spectral algorithms for function interpolation, which are often used when solving equations of mathematical physics.
Spectral methods are a class of methods used in applied mathematics for the numerical solution of certain differential and integral equations, sometimes employing the fast Fourier transform [1-4]. The idea is to represent the desired solution u(x) as a finite sum of "basis functions" φn(x), with a subsequent choice of the coefficients in the sum that satisfy the specified equations:
u(x) ≈ uN(x) = Σ_{n=0}^{N} an φn(x). (1)
© Lovetskiy K. P., Tiutiunnik A. A., do Nascimento Vicente F. J., Boa Morte C. T., 2024
This work is licensed under a Creative Commons "Attribution-NonCommercial 4.0 International" license.
Substituting this expression into equation
Lu(x) = f(x),
where L is the operator of the differential or integral equation, results in the appearance of the so-called residual function
R(x; a0, a1,..., aN) = LuN(x) - f(x). (2)
The residual function R(x; a0, a1,..., aN) is identically equal to zero, when uN(x) is an exact solution. Therefore, the main goal of an algorithm for solving the problem studied is to minimize the residual function by choosing the appropriate spectral coefficients an, n = 0,1, ...,N.
In the Petrov-Galerkin method, the solution is expanded in one basis (the coordinate functions), and the residual is required to be orthogonal to another basis (the projection functions).
The choice of the trial basis functions φn(x) in Eq. (1) and of the testing functions, the basis for minimizing the residual (2), is the key feature that distinguishes spectral methods from the finite difference and finite element methods. In the latter two, the trial/testing functions are local functions with compact support. On the contrary, spectral methods use globally smooth functions as trial/testing functions. The simplest basis functions are power functions, the monomials φn(x) = xn, such that
uN(x) = a0 + a1x + a2x2 + ... + aNxN.
The most frequently used trial/testing functions are trigonometric functions or orthogonal polynomials (usually, eigenfunctions of singular Sturm-Liouville problems), which comprise
Fourier spectral method: φk(x) = e^{ikx},
Chebyshev spectral method: φk(x) = Tk(x),
Legendre spectral method: φk(x) = Lk(x),
Laguerre spectral method: φk(x) = ℒk(x),
Hermite spectral method: φk(x) = Hk(x).
Here Tk(x), Lk(x), ℒk(x) and Hk(x) are the Chebyshev, Legendre, Laguerre and Hermite polynomials of the k-th degree, respectively.
It is exactly the choice of the testing functions for calculating the residual that determines the name of the methods for solving equations:
- Galerkin method. The main feature of the method is the coincidence of the trial basis and the testing one.
- Petrov-Galerkin method. The trial basis and the testing one are different.
- Collocation method. At grid points chosen in advance (collocation points) in the search domain, the residual function is required to vanish.
2. Variants of the collocation method for interpolation of functions
Let us study the collocation method for solving the interpolation problem, based on representing the approximating function as a finite sum (1) of its expansion in the Chebyshev polynomials of the first kind. In this case, the Chebyshev polynomials of the first kind φn(x) = Tn(x) serve as the orthogonal basis.
The approach to solving the problem of approximating f(x) by the collocation method consists in choosing not only the finite-dimensional space of possible solutions (usually, polynomials up to a certain degree), but also the number and positions of the grid points in the solution's search domain (called collocation points). Then it is necessary to choose the coefficients an, n = 0, 1, ..., N of the expansion of the solution in the basis polynomials such that (2) is exactly satisfied at the collocation points:
R(xi; a0, a1, ..., aN) = Σ_{n=0}^{N} an Tn(xi) - f(xi) = 0, i = 0, ..., N. (3)
The expansion coefficients an, n = 0, 1, ..., N are conventionally found from the solution of system (3); for its unambiguous solvability it is necessary that the determinant of the Chebyshev matrix be nonzero: det[Tj(xk)] ≠ 0, j, k = 0, ..., N. Choosing grid points that are pairwise distinct guarantees the nondegeneracy of the determinant and, thus, the uniqueness of the solution of (3) [3].
In the matrix form, the system of equations (3) can be written as
Ta = f,
where the elements Tj(xk) of the k-th row of matrix T are the Chebyshev polynomials of the first kind of the j-th degree evaluated at xk, aᵀ = (a0, a1, ..., aN) is the vector of interpolation parameters, and fᵀ = (f0, f1, ..., fN) is the vector of values of the interpolated function at the grid points (x0, x1, ..., xN).
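As an illustration of this matrix form, the system Ta = f can be assembled and solved with a generic linear solver. The following sketch (the function name and NumPy usage are ours, not the paper's) works for any set of pairwise distinct nodes in [-1, 1]:

```python
import numpy as np

def solve_collocation(x_nodes, f_vals):
    """Solve T a = f, where T[k, j] = T_j(x_k): row k collects the values
    of the Chebyshev polynomials of degrees j = 0..N at the node x_k."""
    n = len(x_nodes) - 1
    # T_j(x) = cos(j * arccos(x)) on [-1, 1]
    T = np.cos(np.outer(np.arccos(x_nodes), np.arange(n + 1)))
    return np.linalg.solve(T, f_vals)
```

For distinct nodes the matrix is nonsingular, so a general solver applies; the discrete orthogonality discussed below removes the need for this O(N³) step.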
Using the discrete orthogonality of the polynomials Tn(x) on the Chebyshev-Lobatto grid allows constructing an efficient algorithm for finding the interpolation coefficients. Let us transform the system of residual equations (3) so that the SLAE matrix becomes almost orthogonal. For this purpose, multiply the first and the last equations in (3) by the factor 1/√2 to obtain an equivalent "modified" system with a new matrix T̃ (instead of T) and a vector f̃ instead of f. The new system's advantage is that its matrix is "almost orthogonal": multiplying it from the left by the transposed matrix T̃ᵀ yields a diagonal matrix:
T̃ᵀ T̃ = diag(n, n/2, n/2, ..., n/2, n).
The multiplication of the modified system (3) from the left by the transposed matrix T̃ᵀ leads to a simple matrix equation with a diagonal matrix that determines the desired expansion coefficients:

T̃ᵀ T̃ a = T̃ᵀ f̃, (4)

where f̃ = (f0/√2, f1, ..., fn-1, fn/√2)ᵀ. The right-hand side of Eq. (4) is the vector g = T̃ᵀ f̃. In this notation, the coefficients of the expansion of f(x) in Chebyshev polynomials of the first kind are written explicitly as

a0 = g0/n,
ai = 2gi/n, i = 1, ..., n-1, (5)
an = gn/n.
Thus, relations (5) unambiguously determine the expansion coefficients of the approximating polynomial uN(x) = Σ_{n=0}^{N} an Tn(x).
The described approach to the solution of the interpolation problem allows a stable solution of both the problem of reconstructing the approximating polynomial expansion coefficients and the problem of calculating the interpolant values at an arbitrary point in the domain of definition of the desired function. However, the speed of executing these operations still leaves much to be desired, even though the use of the Gauss-Lobatto grid eliminates the need to solve a system of linear algebraic equations (3) with a completely filled matrix. The problem reduces to multiplying the matrix T̃ᵀ by the vector f̃ and dividing the components of the resulting vector by the corresponding elements of the diagonal matrix T̃ᵀT̃.
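As a sketch of how Eqs. (4)-(5) translate into code (the function name and NumPy usage are ours, not the paper's), the coefficients follow from weighted cosine sums on the Chebyshev-Gauss-Lobatto grid:

```python
import numpy as np

def cheb_coeffs(f_vals):
    """Expansion coefficients a_j of the interpolant from the values
    f_vals[k] = f(x_k) on the Chebyshev-Gauss-Lobatto grid
    x_k = cos(k*pi/n), using the discrete orthogonality of T_j."""
    n = len(f_vals) - 1
    theta = np.arange(n + 1) * np.pi / n
    w = np.ones(n + 1)
    w[0] = w[-1] = 0.5                  # combined 1/sqrt(2) scalings of endpoints
    # g_j = sum_k w_k * f_k * T_j(x_k), with T_j(cos t) = cos(j t)
    g = np.array([np.sum(w * f_vals * np.cos(j * theta)) for j in range(n + 1)])
    a = 2.0 * g / n                     # a_j = 2 g_j / n for 0 < j < n
    a[0] *= 0.5                         # a_0 = g_0 / n
    a[-1] *= 0.5                        # a_n = g_n / n
    return a
```

For example, sampling f(x) = x^2 on the grid with n = 4 recovers the exact expansion x^2 = (T0(x) + T2(x))/2.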
The use of various modifications of the Clenshaw algorithm can significantly speed up the solution of the interpolation problem.
3. Clenshaw algorithm - increasing the efficiency of calculating the Chebyshev series at an arbitrary point in the approximation interval
Having the coefficients of the polynomial expansion of the desired function makes it possible to calculate the values of the interpolating polynomial at arbitrary points of the approximation interval x ∈ [-1, 1] directly as
uN(x) = a0T0(x) + a1T1(x) + a2T2(x) + ... + aNTN(x). (6)
However, calculating the sum via Eq. (6) directly is not optimal [5]. Efficient and stable summation of this series is possible with the Clenshaw algorithm, based on the three-term recurrence relation

Tn(x) = 2xTn-1(x) - Tn-2(x).

This approach allows calculating the value of the next polynomial in the summed series from the values of the two preceding polynomials using only multiplication and addition operations. To start, it is only necessary to set

T0(x) = 1, T1(x) = x

and launch the iteration process [5]. Detailed information on the algorithm and the stability of the summation process is presented in the book by Fox and Parker [6].
Clenshaw's algorithm generalizes beyond the various kinds of Chebyshev polynomials: it applies to any class of functions that can be defined by a three-term recurrence relation. It calculates the weighted sum of a finite series of functions φk(x):

S(x) = Σ_{k=0}^{n} ak φk(x),

where φk(x), k = 0, 1, ..., n, is a sequence of functions satisfying the three-term recurrence

φk+1(x) = αk(x) φk(x) + βk(x) φk-1(x)

with known coefficients αk(x) and βk(x). The algorithm is most efficient when the φk(x) are hard to calculate directly, whereas the calculation of the coefficients αk(x) and βk(x) is relatively simple. In the most widespread applications, αk(x) is independent of k, and βk is a constant depending neither on x nor on k. In our case (6), αk(x) = 2x and βk(x) = -1.

To execute the series summation for a given sequence of coefficients a0, a1, ..., an, it is first necessary to calculate the values bk(x) of the auxiliary sequence using the "inverse" recurrence relation

bn+1(x) = bn+2(x) = 0,
bk(x) = ak + αk(x) bk+1(x) + βk+1(x) bk+2(x), k = n, n-1, ..., 1.

It is important to note that constructing the sequence bk(x), k = n, ..., 1 requires no calculation (or knowledge) of the values φk(x). After the coefficients b2(x) and b1(x) have been determined, only the two simplest values φ0(x) and φ1(x) are needed to obtain the desired sum:

S(x) = a0 φ0(x) + b1(x) φ1(x) + β1(x) b2(x) φ0(x).
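A minimal Python sketch of this general scheme (the function names and the convention of passing αk and βk as callables are ours; the paper itself stays with formulas and pseudocode):

```python
def clenshaw_general(a, alpha, beta, phi0, phi1):
    """Sum S(x) = sum_{k=0}^n a[k] * phi_k(x) for any family obeying
    phi_{k+1}(x) = alpha(k) * phi_k(x) + beta(k) * phi_{k-1}(x),
    without ever evaluating phi_k for k >= 2.
    phi0, phi1 are the values phi_0(x), phi_1(x) at the point of interest."""
    n = len(a) - 1
    b1 = b2 = 0.0                     # b_{n+1} = b_{n+2} = 0
    for k in range(n, 0, -1):         # b_k = a_k + alpha_k b_{k+1} + beta_{k+1} b_{k+2}
        b1, b2 = a[k] + alpha(k) * b1 + beta(k + 1) * b2, b1
    return a[0] * phi0 + b1 * phi1 + beta(1) * phi0 * b2

# Chebyshev case: alpha_k(x) = 2x, beta_k(x) = -1, T_0(x) = 1, T_1(x) = x
def cheb_sum(a, x):
    return clenshaw_general(a, lambda k: 2.0 * x, lambda k: -1.0, 1.0, x)
```

Here the dependence of αk on x is folded into a closure; for Chebyshev polynomials the closing line reduces exactly to formula (8) below, a0 + x·b1 - b2.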
Let us consider in more detail the Clenshaw summation method for fast and stable calculation of the sum of series (6) — the recursive method for calculating a linear combination of Chebyshev polynomials.
un(x) = a0 + a1T1(x) + a2T2(x) + ... + anTn(x).
Let us take into account that the coefficients in the recurrence relation for the Chebyshev polynomials are α(x) = 2x and β(x) = -1, and the initial polynomials are T0(x) = 1, T1(x) = x. Then the "reverse" recurrence sequence for calculating the coefficients bk(x) has the form
bk(x) = ak + 2xbk+1(x) - bk+2(x), k = n, n-1, ..., 1, (7)

with zero "initial" coefficients bn+1(x) = bn+2(x) = 0.
Now we calculate the value of the desired sum using the "reverse" recurrence relation
un(x) = a0 + Σ_{k=1}^{n} ak Tk(x) = a0 + Σ_{k=1}^{n} (bk - 2xbk+1 + bk+2) Tk(x) =

= a0 + b1x + b2(2xT1(x) - T0(x)) - 2xb2T1(x) + Σ_{k=3}^{n} bk (Tk(x) - 2xTk-1(x) + Tk-2(x)).
Each term of the last sum vanishes due to the recurrence relation, and the ultimate value of the sum of the series un(x) is determined by the formula

un(x) = a0 + xb1(x) - b2(x). (8)
Example 1. A program in pseudocode implementing the summation according to the Clenshaw algorithm.
The simplest program (using no Clenshaw algorithm) for calculating the sum of the series (6) at a fixed point x ∈ [-1, 1] can be written directly as
Sum := a[0] + x * a[1];
t := x; t1 := 1; c := 2 * x;
for k := 2 to n do
begin
  t2 := t1; t1 := t;
  t := c * t1 - t2;
  Sum := Sum + t * a[k]
end
On completion of the program operation, the variable Sum contains the desired S(x) value. The program uses 2n + 1 multiplications and 2n-1 additions.
Following Clenshaw and using the auxiliary "reverse" recurrence formula (7), we transform the calculation program so that the number of multiplications is reduced (almost by half) to n + 2, while the number of additions remains about 2n.
The subroutine pseudocode is

Sum := a[n]; b1 := 0; b2 := 0; c := 2 * x;
for k := n - 1 downto 0 do
begin
  b2 := b1; b1 := Sum;
  Sum := a[k] + c * b1 - b2
end;
Sum := 0.5 * (a[0] + Sum - b2)
Instead of the above "beautiful" version of the pseudocode, it is possible to use an alternative version with even fewer operations, namely, n + 1 multiplications and 2n additions
Sum := a[n]; b1 := 0; b2 := 0; c := 2 * x;
for k := n - 1 downto 1 do
begin
  b2 := b1; b1 := Sum;
  Sum := a[k] + c * b1 - b2
end;
Sum := a[0] + x * Sum - b1
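For checking, both summation strategies can be transcribed into Python (the transcription and names are ours): clenshaw_cheb mirrors the reverse recurrence (7) together with the closing formula (8), and direct_cheb mirrors the first, forward-recurrence program.

```python
def clenshaw_cheb(a, x):
    """Evaluate u_n(x) = sum_{k=0}^n a[k] T_k(x) by the reverse recurrence
    b_k = a_k + 2x b_{k+1} - b_{k+2} and the closing formula
    u_n = a_0 + x b_1 - b_2."""
    b1 = b2 = 0.0
    for k in range(len(a) - 1, 0, -1):   # k = n, n-1, ..., 1
        b1, b2 = a[k] + 2.0 * x * b1 - b2, b1
    return a[0] + x * b1 - b2

def direct_cheb(a, x):
    """Reference: direct summation with the forward recurrence
    T_k = 2x T_{k-1} - T_{k-2} (assumes n >= 1)."""
    s = a[0] + a[1] * x
    t1, t = 1.0, x                       # T_{k-2}, T_{k-1} for the next step
    for k in range(2, len(a)):
        t1, t = t, 2.0 * x * t - t1      # advance to T_k
        s += a[k] * t
    return s
```

Both routines return the same value; the Clenshaw version needs one multiplication per term instead of two.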
4. Expanding the scope of application of the Clenshaw method (calculation of interpolant expansion coefficients)
The calculation of the expansion coefficients {c0, c1, ..., cn} reduces to the solution of the system (4) of linear algebraic equations. The most laborious stage is multiplying the transposed Chebyshev matrix T̃ᵀ by the vector f̃ = (f0/√2, f1, ..., fn-1, fn/√2)ᵀ. It turns out that the structure of the Chebyshev matrix allows efficient use of the Clenshaw algorithm for a substantial reduction of the number of operations compared to the conventional matrix-vector multiplication algorithm.
Let us present the modification of the Clenshaw algorithm that simplifies the procedure of multiplying the transposed Chebyshev matrix by the right-hand side vector, T̃ᵀ f̃, when using the Gauss-Lobatto grid.
T̃ᵀ f̃ =

[ T0,0  T0,1  T0,2  ...  T0,n ] [ f̃0 ]   [ c0 ]
[ T1,0  T1,1  T1,2  ...  T1,n ] [ f̃1 ]   [ c1 ]
[ T2,0  T2,1  T2,2  ...  T2,n ] [ f̃2 ] = [ c2 ]
[  ...   ...   ...  ...   ... ] [ ...]   [ ...]
[ Tn,0  Tn,1  Tn,2  ...  Tn,n ] [ f̃n ]   [ cn ]

where Tj,k = Tj(xk).
To get the value of a particular j-th component of the vector of coefficients, it is necessary to multiply the j-th row of the matrix T̃ᵀ by the vector f̃.
cj = Σ_{k=0}^{n} f̃k Tj(xk) = f̃0 Tj(x0) + f̃1 Tj(x1) + f̃2 Tj(x2) + ... + f̃n Tj(xn).
To simplify the calculation of the desired sum (the product of a row of the transposed Chebyshev matrix by the right-hand side vector), we use the trigonometric representation of the Chebyshev polynomial of the first kind of the j-th order:
Tj(cos θ) = cos(jθ), xk = cos(kπ/n), k = 0, 1, ..., n.
Each j-th row of the transposed Chebyshev matrix contains the values of the Chebyshev polynomial of the j-th order at the Gauss-Lobatto grid points xk = cos(kπ/n), k = 0, 1, ..., n.
Let us execute the change of variables

yj = jπ/n, j = 0, 1, ..., n,

so that

Tj(xk) = cos(k yj).
This change allows using the identity Tj(xk) = cos(j(kπ/n)) = cos(k(jπ/n)) = Tk(xj) to pass to the conventional Clenshaw scheme for calculating the product of the transposed (symmetric) Chebyshev matrix by the right-hand side vector. That is, for calculating the interpolant expansion coefficient cj, the products of the coefficients f̃k, k = 0, 1, ..., n by the values of the polynomial of the j-th order Tj(xk) at the grid points xk, k = 0, 1, ..., n can be replaced with the products of the same coefficients by the values of the polynomials of the k-th order Tk(xj) at the single grid point xj whose index corresponds to that of the calculated coefficient.
cj = Σ_{k=0}^{n} f̃k Tj(xk) = f̃0 cos(0 · yj) + f̃1 cos(1 · yj) + f̃2 cos(2 · yj) + ... + f̃n cos(n · yj).
Denoting

θ = jπ/n,

we arrive at a formula for calculating the j-th expansion coefficient of the desired interpolant in the reduced form:

cj = Σ_{k=0}^{n} f̃k Tj(xk) = Σ_{k=0}^{n} f̃k cos(kθ), j = 0, 1, ..., n.
To calculate each expansion coefficient Cj, j = 0,1,..., n using the Clenshaw scheme, two approaches are possible.
In the first of them, one can use the three-term recurrence relation expressing the cosine of a multiple angle, cos(nθ), through the two cosines of the preceding multiplicities, cos((n-1)θ) and cos((n-2)θ), and the value of cos(θ):

cj = Σ_{k=0}^{n} f̃k cos(kθ), θ = jπ/n, j = 0, 1, ..., n.
The second approach leads to the variant of the Clenshaw scheme (7)-(8) already studied above, applied to

cj = Σ_{k=0}^{n} f̃k Tk(xj), j = 0, 1, ..., n.
It is known that the coefficients of the three-term recurrence relation for the Chebyshev polynomials of the first kind are

α(n, x) = α(x) = 2x, β(n, x) = β(x) = -1, Tn(x) = 2xTn-1(x) - Tn-2(x),

and the first two polynomials have the form

T0(x) = 1, T1(x) = x.
For the given coefficients c0, c1, c2, ..., cn, we calculate the values bk(x) using the "reverse" recurrence formula:

bn+1(x) = bn+2(x) = 0, bk(x) = ck + 2xbk+1(x) - bk+2(x), k = n, n-1, ..., 1.
Then

ck = bk - 2xbk+1(x) + bk+2(x), k = n, n-1, ..., 1.

Now we substitute these coefficients into the series sum formula:

pn(x) = Σ_{k=0}^{n} ck Tk(x) = c0 + b1x - b2.
To avoid using the value of the argument x in the ultimate formula for the sum of the truncated Chebyshev series, it is possible to continue the loop by one additional step and to calculate the zeroth coefficient of the reverse sequence by the formula
b0 = c0 + 2xb1 - b2,

which allows calculating the product b1x = (b0 - c0 + b2)/2.
Substituting the calculated value of b1 x into the sum expression, we get an alternative variant of the formula for the sum of products of indexed coefficients by the Chebyshev polynomials of the first kind of the appropriate degree:
pn(x) = c0 + b1x - b2 = c0 + (b0 - c0 + b2)/2 - b2 = (c0 + b0 - b2)/2.
The ultimate value of the desired sum depends only on the coefficient c0 of the series sought for and two coefficients b0, b2 obtained as a result of running the reverse recurrence sequence.
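Putting the pieces of this section together, a sketch (our naming and NumPy usage, not the paper's) of computing all interpolation coefficients by running the Clenshaw recurrence over the scaled values f̃k at each grid point xj:

```python
import numpy as np

def coeffs_via_clenshaw(f_vals):
    """Interpolation coefficients a_j on the Gauss-Lobatto grid
    x_k = cos(k*pi/n): each g_j = sum_k f~_k T_k(x_j) is evaluated by the
    Clenshaw reverse recurrence, using the symmetry T_j(x_k) = T_k(x_j)."""
    n = len(f_vals) - 1
    ft = np.asarray(f_vals, dtype=float).copy()
    ft[0] *= 0.5                        # the f-tilde endpoint scaling
    ft[-1] *= 0.5
    x = np.cos(np.arange(n + 1) * np.pi / n)
    g = np.empty(n + 1)
    for j in range(n + 1):
        b1 = b2 = 0.0
        for k in range(n, 0, -1):       # b_k = f~_k + 2 x_j b_{k+1} - b_{k+2}
            b1, b2 = ft[k] + 2.0 * x[j] * b1 - b2, b1
        g[j] = ft[0] + x[j] * b1 - b2   # closing formula (8)
    a = 2.0 * g / n                     # relations (5)
    a[0] *= 0.5
    a[-1] *= 0.5
    return a
```

Each of the n + 1 coefficients costs O(n) multiplications, and no explicit Chebyshev matrix is ever formed.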
5. Conclusions
Clenshaw described an algorithm that allows calculating partial sums of Fourier series in sines and cosines. The Clenshaw algorithm is a recursive method for summing a linear combination of Chebyshev polynomials [5, 6]; it was published by Charles William Clenshaw in 1955. It generalizes Horner's method for evaluating a linear combination of monomials. Although that method is named after William George Horner, it has been known for a long time: Horner himself attributed it to Joseph-Louis Lagrange, and it was described and used many hundreds of years earlier by Chinese and Persian mathematicians. After the advent of computers, this algorithm became fundamental for efficient calculations with polynomials.
The use of many well-known useful properties of Chebyshev polynomials can significantly improve the software implementation of the function interpolation algorithm based on the Clenshaw method. In future, the authors intend to use the outlined interpolation technique for a stable implementation of algorithms for calculating definite integrals, derivatives of functions using matrices of spectral Chebyshev differentiation and finding antiderivative functions using integration matrices.
For example, to calculate definite integrals, it may be useful to calculate the sums of modified series of Fourier type — only even or only odd terms of the series.
With the rapid development of modern technology, many types of interpolation methods have been proposed, including piecewise constant, linear, polynomial and spline interpolation [2, 7, 8]. Among them, interpolation based on Chebyshev polynomials is of particular interest and has proved to be one of the most important methods in the literature [9-15], since this type of interpolation polynomial avoids the Runge phenomenon [16, 17].
Function approximation based on Chebyshev polynomial interpolation and the discrete cosine transform is discussed in many papers [1, 2, 18-20]. In these methods, the points of a non-uniform grid corresponding to the roots or extrema of the Chebyshev polynomials are obtained first, and then the approximation coefficients are calculated at these points using collocation methods. The results show that the use of Chebyshev polynomials provides almost optimal accuracy for solving problems of interpolation, differentiation, and integration of smooth functions.
Author Contributions: Konstantin P. Lovetskiy—Conceptualization, investigation, writing—original draft preparation, supervision. Anastasiia A. Tiutiunnik—methodology, writing—review and editing, project administration, funding acquisition. Felix Jose do Nascimento Vicente, Celmilton Teixeira Boa Morte—software, validation. All authors have read and agreed to the published version of the manuscript. Data Availability Statement: Data sharing is not applicable.
Acknowledgments: The authors of the article are grateful to Prof. Sevastianov L. A. for help with the work. Conflicts of Interest: The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results. Funding: This research was funded by the RUDN University Scientific Projects Grant System, project No. 021934-0-000 (Konstantin P. Lovetskiy, Anastasiia A. Tiutiunnik).
References
1. Boyd, J. P. Chebyshev and Fourier Spectral Methods: Second Revised Edition. Dover Books on Mathematics (Courier Corporation, 2013).
2. Fornberg, B. A practical guide to pseudospectral methods. doi:10.1017/cbo9780511626357 (Cambridge University Press, 1996).
3. Mason, J. C. & Handscomb, D. C. Chebyshev Polynomials in Chebyshev Polynomials (Chapman and Hall/CRC Press, 2002).
4. Orszag, S. A. Comparison of Pseudospectral and Spectral Approximation. Studies in Applied Mathematics 51, 253-259. doi:10.1002/sapm1972513253 (1972).
5. Clenshaw, C. W. A note on the summation of Chebyshev series. Mathematics of Computation 9, 118-120. doi:10.1090/S0025-5718-1955-0071856-0 (1955).
6. Fox, L. & Parker, I. B. Chebyshev polynomials in numerical analysis (Oxford, 1968).
7. Shen, Z. & Serkh, K. Is polynomial interpolation in the monomial basis unstable? 2023. doi:10.48550/arXiv.2212.10519.
8. Zhang, X. & Boyd, J. P. Asymptotic Coefficients and Errors for Chebyshev Polynomial Approximations with Weak Endpoint Singularities: Effects of Different Bases 2021. doi:10.48550/arXiv.2103.11841.
9. Lovetskiy, K. P., Sevastianov, L. A. & Nikolaev, N. E. Regularized Computation of Oscillatory Integrals with Stationary Points. Procedia Computer Science 108, 998-1007. doi:10.1016/j.procs.2017.05.028 (2017).
10. Lovetskiy, K. P., Sevastianov, L. A., Kulyabov, D. S. & Nikolaev, N. E. Regularized computation of oscillatory integrals with stationary points. Journal of Computational Science 26, 22-27. doi:10.1016/j.jocs.2018.03.001 (2018).
11. Lovetskiy, K. P., Kulyabov, D. S. & Hissein, A. W. Multistage pseudo-spectral method (method of collocations) for the approximate solution of an ordinary differential equation of the first order. Discrete and Continuous Models and Applied Computational Science 30,127-138. doi:10.22363/2658-4670-2022-30-2-127-138 (2022).
12. Lovetskiy, K. P., Sevastianov, L. A., Hnatic, M. & Kulyabov, D. S. Numerical Integration of Highly Oscillatory Functions with and without Stationary Points. Mathematics 12, 307. doi:10.3390/math12020307 (2024).
13. Sevastianov, L. A., Lovetskiy, K. P. & Kulyabov, D. S. An Effective Stable Numerical Method for Integrating Highly Oscillating Functions with a Linear Phase in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 12138 LNCS (2020), 29-43. doi:10.1007/978-3-030-50417-5_3.
14. Sevastianov, L. A., Lovetskiy, K. P. & Kulyabov, D. S. Numerical integrating of highly oscillating functions: effective stable algorithms in case of linear phase 2021. doi:10.48550/arXiv.2104.03653.
15. Sevastianov, L. A., Lovetskiy, K. P. & Kulyabov, D. S. A new approach to the formation of systems of linear algebraic equations for solving ordinary differential equations by the collocation method. Izvestiya of Saratov University. Mathematics. Mechanics. Informatics 23, 36-47. doi:10.18500/1816-9791-2023-23-1-36-47 (2023).
16. Berrut, J. & Trefethen, L. N. Barycentric Lagrange Interpolation. SIAM Review 46, 501-517. doi:10.1137/S0036144502417715 (2004).
17. Epperson, J. F. On the Runge Example. The American Mathematical Monthly 94, 329. doi:10.2307/2323093 (1987).
18. Amiraslani, A., Corless, R. M. & Gunasingam, M. Differentiation matrices for univariate polynomials. Numerical Algorithms 83, 1-31. doi:10.1007/s11075-019-00668-z (2020).
19. Wang, Z. Interpolation using type I discrete cosine transform. Electronics Letters 26, 1170. doi:10.1049/el:19900757 (1990).
20. Wang, Z. Interpolation using the discrete cosine transform: reconsideration. Electronics Letters 29, 198. doi:10.1049/el:19930133 (1993).
Information about the authors
Lovetskiy, Konstantin P.—Candidate of Sciences in Physics and Mathematics, Associate Professor, Department of Computational Mathematics and Artificial Intelligence, Peoples' Friendship University of Russia named after Patrice Lumumba (RUDN University) (e-mail: [email protected], phone: +7 (495) 952-25-72, ORCID: 0000-0002-3645-1060)
Tiutiunnik, Anastasiia A.—Candidate of Sciences in Physics and Mathematics, Associate Professor, Department of Computational Mathematics and Artificial Intelligence, RUDN University (e-mail: [email protected], phone: +7 (495) 955-07-83, ORCID: 0000-0002-4643-327X)
do Nascimento Vicente, Felix Jose—student, Department of Computational Mathematics and Artificial Intelligence, RUDN University (e-mail: [email protected])
Teixeira Boa Morte, Celmilton—student, Department of Computational Mathematics and Artificial Intelligence, RUDN University (e-mail: [email protected])