Discrete & Continuous Models & Applied Computational Science 2022, 30 (2) 127-138
ISSN 2658-7149 (online), 2658-4670 (print) http://journals.rudn.ru/miph
UDC 519.872:519.217
DOI: 10.22363/2658-4670-2022-30-2-127-138
Multistage pseudo-spectral method (method of collocations) for the approximate solution of an ordinary differential equation of the first order
Konstantin P. Lovetskiy1, Dmitry S. Kulyabov1,2, Ali Weddeye Hissein1
1 Peoples' Friendship University of Russia (RUDN University), 6, Miklukho-Maklaya St., Moscow, 117198, Russian Federation 2 Joint Institute for Nuclear Research, 6, Joliot-Curie St., Dubna, Moscow Region, 141980, Russian Federation
(received: February 4, 2022; revised: April 18, 2022; accepted: April 19, 2022)
Abstract. The classical pseudospectral collocation method based on the expansion of the solution in a basis of Chebyshev polynomials is considered. A new approach to constructing systems of linear algebraic equations for solving ordinary differential equations with variable coefficients and with initial (and/or boundary) conditions makes possible a significant simplification of the structure of matrices, reducing it to a diagonal form. The solution of the system is reduced to multiplying the matrix of values of the Chebyshev polynomials on the selected collocation grid by the vector of values of the function describing the given derivative at the collocation points. The subsequent multiplication of the obtained vector by the two-diagonal spectral matrix, 'inverse' with respect to the Chebyshev differentiation matrix, yields all the expansion coefficients of the sought solution except for the first one. This first coefficient is determined at the second stage based on a given initial (and/or boundary) condition. The novelty of the approach is to first select a class (set) of functions that satisfy the differential equation, using a stable and computationally simple method of interpolation (collocation) of the derivative of the future solution. Then the coefficients (except for the first one) of the expansion of the future solution are determined in terms of the calculated expansion coefficients of the derivative using the integration matrix. Finally, from this set of solutions only those that correspond to the given initial conditions are selected.
Key words and phrases: initial value problems, pseudospectral collocation method, Chebyshev polynomials, Gauss-Lobatto sets, numerical stability
1. Introduction
Spectral methods are a class of methods used in applied mathematics and scientific computing to solve many differential equations numerically [1]-[4]. The main idea of the method is to represent the desired solution of a differential equation as a sum of certain 'basis functions' [5] (e.g., as an expansion into a sum in power functions — a Taylor series, or a sum of
sinusoids, which is a Fourier series), and then calculate the coefficients in the sum to satisfy the differential equation in the best possible way.
Spectral and finite element methods are closely related and are based on the same ideas. The main difference between them is that spectral methods use nonzero basis functions over the entire domain, while finite element methods use nonzero basis functions only on small subdomains. In other words, spectral methods use a global approach, while finite element methods use a local approach. It is for this reason that spectral methods provide excellent convergence, their 'exponential convergence' being the fastest possible when the solution is smooth.
Spectral methods for the numerical solution of ordinary differential equations with given initial conditions are often reduced to solving a system of linear algebraic equations (SLAE), which includes both the initial conditions and conditions that ensure the fulfillment of differential relations [6]. However, a priori embedding of the initial (boundary) conditions into the system of linear equations leads to a significant increase in the filling of the matrices and, consequently, to the complication of the algorithm and method for solving the problem [7].
A more interesting approach is to select a basis that automatically takes the boundary conditions into account [1], [5], [6]. This frequently used technique consists in incorporating the required initial/boundary conditions into the basis itself (a set of well-behaved, e.g., orthogonal, basis functions): a basis is chosen in which every basis function satisfies the initial conditions. The solution obtained with this approach is automatically sought in the class of functions satisfying the initial conditions. However, in this case it becomes much more difficult to work with the new basis functions.
The novelty of the approach proposed by the authors is that first, a class (set) of functions that satisfy the differential equation is selected using a stable and computationally simple method of interpolation (collocation) of the derivative of the future solution. Then the coefficients (except for some) of the expansion of the future solution are determined in terms of the calculated expansion coefficients of the derivative using the integration matrix. Only after that, from this set of solutions those that correspond to the given initial conditions are selected.
Here we propose to divide the main problem into independent subproblems and to calculate the solution components in parts: separately those that determine the behavior of the derivative of the solution, and separately those that are determined by the boundary conditions. Thus, the problem is divided into two independent subproblems, each admitting a stable and simple solution. The solution of the first problem in the simplest case reduces to multiplying the vector of the right-hand side by the matrix of Chebyshev polynomial values on the Gauss-Lobatto grid. At the next step, we solve the SLAE with a diagonal positive definite matrix and, multiplying the resulting vector on the left by the two-diagonal matrix, inverse (anti-derivative) with respect to the spectral Chebyshev differentiation matrix, we obtain all the expansion coefficients of the desired solution, except for the first one. At the second, 'most difficult' stage, we determine the first coefficient of the expansion of the solution in terms of basis polynomials by solving a single linear algebraic equation in this coefficient.
2. Numerical solution of ordinary differential equations
The exact solution of a trivial ordinary differential equation with a given initial (boundary) condition
$$y'(x) = f(x), \quad x > x_0, \quad y(x_0) = y_0, \qquad (1)$$
the right-hand side of which is independent of y, can be presented in the form
$$y(x) = y_0 + \int_{x_0}^{x} f(\tau)\, d\tau.$$
Since the numerical methods for integrating functions are well developed from theoretical and practical points of view, it seems natural to apply them to the numerical solution of ordinary differential equations of general form
$$y'(x) = f(x, y(x)), \quad x \geq x_0, \quad y(x_0) = y_0, \qquad (2)$$
and this is exactly the fact that naturally explains the development and wide use of the methods of the Runge-Kutta type.
Usually, the method implies obtaining the solution on the interval $[x_0, x_0 + c_n h]$. The coefficients $0 \leq c_1 < c_2 < \ldots < c_n \leq 1$ are chosen. Then, using the method of polynomial collocation, the solution is approximated by a polynomial $p$ of degree $n$, which satisfies two types of conditions:
— the initial condition: $p(x_0) = y_0$, and
— the differential equation, $p'(x_k) = f(x_k, p(x_k))$, at all the collocation points $x_k = x_0 + c_k h$, $k = 1, \ldots, n$.
Satisfying these $(n+1)$ conditions allows calculating the $(n+1)$ coefficients of the expansion of the sought polynomial $p$ of degree $n$.
Thus, the collocation methods are actually implicit Runge-Kutta methods [8].
It is important to note that to solve the approximation problem, it is not necessary to try solving Eq. (1) with simultaneous satisfaction of both the initial condition and the differential equation at the collocation points. In
some cases, a fast and stable result can be achieved in two stages. First, find those coefficients of the expansion of the sought solution that satisfy the differential equation at the collocation points. Then, determine the remaining coefficients of the expansion of the sought function using the initial (final or intermediate) value.
3. Approximation of derivative. Cauchy problem
First, consider the problem of determining (recovering) a function from its derivative and some (one) additional condition. In this formulation, the problem naturally splits into two sub-problems:
— polynomial interpolation of the derivative (calculating the coefficients of the expansion of the derivative into a finite series in basis functions) and
— calculation of the coefficients of the required function by the initial (boundary, etc.) condition and the coefficients of the derivative expansion.
Without loss of generality, we assume that the domain of definition of the solution is the interval [-1,1].
Most often, the approximation of continuous functions is obtained by discarding the terms of the Chebyshev series whose magnitude is small [9], [10]. In contrast to approximations obtained using other polynomial expansions, the approximation in Chebyshev polynomials (being nearly optimal) minimizes the number of terms required to approximate a function with a given accuracy. This is related to the property that the approximation based on the Chebyshev series turns out to be close to the best uniform approximation (among polynomials of the same degree), but is much easier to find. In addition, it allows avoiding the Gibbs effect with a reasonable choice of interpolation points.
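To illustrate this near-optimal behavior, the following minimal sketch (our own illustration, not part of the paper; it assumes NumPy's numpy.polynomial.chebyshev module) interpolates Runge's function 1/(1 + 25x^2) at Chebyshev points and prints how quickly the maximum error decays with the degree:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

f = lambda x: 1.0 / (1.0 + 25.0 * x**2)      # Runge's function on [-1, 1]

xx = np.linspace(-1.0, 1.0, 2001)            # dense grid for measuring the error
for deg in (8, 16, 32, 64):
    c = C.chebinterpolate(f, deg)            # interpolation at Chebyshev points
    err = np.max(np.abs(C.chebval(xx, c) - f(xx)))
    print(f"degree {deg:3d}: max error {err:.2e}")
```

Because the interpolation nodes cluster near the endpoints, the error decreases geometrically for this analytic function and no Runge- or Gibbs-type oscillations appear.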
Let us consider in more detail the problem of finding the derivative of the desired function, or rather the approximating polynomial p(x), satisfying condition (1) at a given number of points in the interval [-1,1]. The pseudospectral (collocation) method [11] for solving the problem consists in representing the desired approximating function in the form of an expansion in a finite series in Chebyshev polynomials
$$p(x) = \sum_{k=0}^{n} c_k T_k(x), \quad x \in [-1, 1], \qquad (3)$$
using the basis of Chebyshev polynomials of the first kind $\{T_k(x)\}_{k=0}^{n}$, defined in the Hilbert space of functions on the segment $[-1, 1]$.
Let us differentiate the function (3). The derivative is expressed as
$$p'(x) = \frac{d}{dx}\left(\sum_{k=0}^{n} c_k T_k(x)\right) = \sum_{k=0}^{n} c_k T_k'(x) = \sum_{k=0}^{n} b_k T_k(x), \quad x \in [-1, 1]. \qquad (4)$$
Using the recurrence relations, which are satisfied by the Chebyshev polynomials of the first kind and their derivatives [3], [12] and equating the coefficients at the same polynomials in (4), we come [3] to the following dependence of the coefficients ck on bk:
"1 0 -1/2 0
0 1/4 0 -1/4
0 0 1/6 0
0 0 0 1/8
0 0 0 0 .0 0 0 0
That is, the vector calculation of the coefficients {c^ ,C2, ...,cra} is the result of multiplying a simple tridiagonal matrix by a vector and it can be implemented by the following explicit formulas
0 0 0 0
1
2(rc- 1) 0
0 0 0 0
0
1/(2n).
x
" bo ' Ci
h C2
b2 Cs
bs — C4
bn-2 Cn—1
-bn-i - - cn -
(5)
$$c_1 = b_0 - \frac{b_2}{2}, \qquad k = 1,$$
$$c_k = \frac{b_{k-1} - b_{k+1}}{2k}, \qquad 1 < k < n - 1, \qquad (6)$$
$$c_k = \frac{b_{k-1}}{2k}, \qquad k = n-1,\, n.$$
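As a hedged illustration (our own code, not the authors'), relations (6) can be implemented in a few lines and cross-checked against NumPy's Chebyshev differentiation routine; the function name chebyshev_antiderivative_coeffs is ours:

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def chebyshev_antiderivative_coeffs(b):
    """Recover c_1..c_n of the solution from the derivative coefficients
    b_0..b_{n-1} via relations (6); c_0 is left undetermined (set to zero)."""
    n = len(b)                                   # the solution has degree n
    c = np.zeros(n + 1)
    b_ext = np.concatenate([b, [0.0, 0.0]])      # padding covers the cases k = n-1, n
    c[1] = b_ext[0] - b_ext[2] / 2.0
    for k in range(2, n + 1):
        c[k] = (b_ext[k - 1] - b_ext[k + 1]) / (2.0 * k)
    return c

# self-check against NumPy's Chebyshev differentiation
rng = np.random.default_rng(0)
c_true = rng.standard_normal(8)                  # some coefficients c_0..c_7
b = C.chebder(c_true)                            # derivative coefficients b_0..b_6
c_rec = chebyshev_antiderivative_coeffs(b)
assert np.allclose(c_rec[1:], c_true[1:])        # everything but c_0 is recovered
```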
Thus, knowing the expansion coefficients $b_k$ of the function $f(x)$ from problem (1) in Chebyshev polynomials of the first kind, we can recover the last $n$ expansion coefficients of the sought function in the same basis by formulas (2.1.3) from [3].
Therefore, the first part of the problem is to calculate the coefficients $\{b_0, b_1, \ldots, b_n\}$ of the representation of the right-hand side of (1) on the interval $[-1, 1]$:
$$\sum_{k=0}^{n} b_k T_k(x) = f(x).$$
The collocation method consists in choosing such coefficients $\{b_0, b_1, \ldots, b_n\}$ of the expansion of the interpolation polynomial $p'(x)$ that the equalities
$$\sum_{k=0}^{n} b_k T_k(x_j) = f(x_j), \quad j = 0, \ldots, n, \qquad (7)$$
are satisfied at the collocation points $\{x_0, x_1, \ldots, x_n\}$.
The last statement is equivalent to the fact that the coefficients bk, k = 0,... ,n must be a solution to the system of linear algebraic equations (7) of the collocation method. In matrix form, this can be written as
$$T \mathbf{b} = \mathbf{f}. \qquad (8)$$
The choice of collocation points should ensure the nondegeneracy of the system of Eqs. (7); for this it is sufficient that all grid points are distinct, and otherwise their choice is arbitrary, that is, the solution of system (7) on an arbitrary grid in the interval $[-1, 1]$ determines the required approximation. For an arbitrary grid, however, the matrix $T$ is completely filled, and the solution of such a system is rather laborious. To simplify the form of the matrix and speed up the search for the vector $\mathbf{b}$, we use the discrete orthogonality property of the Chebyshev matrix $T$ on the Gauss-Lobatto grid. Consider the set $x_j = \cos(\pi j / n)$, $j = 0, \ldots, n$ as collocation points. To further improve the properties of the system of linear equations, the solution of which will be the vector $\{b_0, b_1, \ldots, b_n\}$, we multiply the first and last equations of (7) by the factor $1/\sqrt{2}$. We obtain an equivalent 'modified' system with a new matrix $\tilde{T}$ (instead of $T$) and a vector $\tilde{\mathbf{f}}$ instead of $\mathbf{f}$. The advantage of the new system is that it possesses the property of discrete 'orthogonality': multiplying its matrix on the left by the transposed $\tilde{T}^T$ gives a diagonal matrix:
$$\tilde{T}^T \tilde{T} = \begin{pmatrix} n & 0 & 0 & \cdots & 0 \\ 0 & n/2 & 0 & \cdots & 0 \\ 0 & 0 & n/2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & n \end{pmatrix}.$$
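The following short check (an illustration of ours, assuming standard NumPy) builds the matrix on the Gauss-Lobatto grid, scales its first and last rows by $1/\sqrt{2}$, and confirms numerically that the product above is indeed the stated diagonal matrix:

```python
import numpy as np

n = 8                                            # polynomial degree; n + 1 grid points
theta = np.pi * np.arange(n + 1) / n
T = np.cos(np.outer(theta, np.arange(n + 1)))    # T[j, k] = T_k(x_j), x_j = cos(pi*j/n)

T_mod = T.copy()
T_mod[0, :] /= np.sqrt(2.0)                      # the 'modified' matrix: first and last
T_mod[-1, :] /= np.sqrt(2.0)                     # collocation rows scaled by 1/sqrt(2)

D = T_mod.T @ T_mod
expected = np.diag([float(n)] + [n / 2.0] * (n - 1) + [float(n)])
assert np.allclose(D, expected)                  # diagonal: diag(n, n/2, ..., n/2, n)
```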
We transform system (8), multiplying it on the left by the transposed matrix $\tilde{T}^T$. As a result, we obtain a simple matrix equation with a diagonal matrix for determining the required expansion coefficients $\{b_0, b_1, \ldots, b_n\}$:
$$\begin{pmatrix} n & 0 & 0 & \cdots & 0 \\ 0 & n/2 & 0 & \cdots & 0 \\ 0 & 0 & n/2 & \cdots & 0 \\ \vdots & & & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & n \end{pmatrix} \begin{pmatrix} b_0 \\ b_1 \\ b_2 \\ \vdots \\ b_n \end{pmatrix} = \tilde{T}^T \begin{pmatrix} f_0/\sqrt{2} \\ f_1 \\ f_2 \\ \vdots \\ f_n/\sqrt{2} \end{pmatrix}. \qquad (9)$$
Calculating the product of the matrix $\tilde{T}^T$ by the vector $(f_0/\sqrt{2}, f_1, \ldots, f_{n-1}, f_n/\sqrt{2})^T$ in the right-hand side of equation (9) and denoting its components by $\tilde{f}_0, \tilde{f}_1, \ldots, \tilde{f}_n$, we write down the required expansion coefficients of the derivative of the solution, i.e., of the function $f(x)$, in the explicit form
$$b_0 = \frac{\tilde{f}_0}{n}, \quad b_1 = \frac{2\tilde{f}_1}{n}, \quad b_2 = \frac{2\tilde{f}_2}{n}, \quad \ldots, \quad b_{n-1} = \frac{2\tilde{f}_{n-1}}{n}, \quad b_n = \frac{\tilde{f}_n}{n}. \qquad (10)$$
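A compact sketch of this first stage, i.e., of formulas (9)-(10), might look as follows (our illustration; the name derivative_cheb_coeffs is hypothetical, and the final check simply verifies that the interpolant matches the right-hand side at the nodes):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def derivative_cheb_coeffs(f, n):
    """Chebyshev coefficients b_0..b_n of the Gauss-Lobatto interpolant of f,
    obtained from the diagonal system (9) via formulas (10)."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)                                # Gauss-Lobatto collocation points
    T = np.cos(np.outer(theta, np.arange(n + 1)))    # T[j, k] = T_k(x_j)
    w = np.ones(n + 1); w[0] = w[-1] = 1.0 / np.sqrt(2.0)
    f_tilde = (T * w[:, None]).T @ (w * f(x))        # tilde(T)^T applied to scaled samples
    d = np.full(n + 1, n / 2.0); d[0] = d[-1] = float(n)
    return f_tilde / d                               # divide by diag(n, n/2, ..., n/2, n)

n = 16
b = derivative_cheb_coeffs(np.cos, n)
x = np.cos(np.pi * np.arange(n + 1) / n)
assert np.allclose(C.chebval(x, b), np.cos(x))       # the interpolant matches f at the nodes
```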
Consequently, relations (10), (6) uniquely determine the last $n$ coefficients $\{c_1, c_2, \ldots, c_n\}$ of the expansion of the sought function $p(x)$; to determine one more coefficient, $c_0$, at least one additional condition must be involved. This can be either a boundary condition at the left or right end of the interval on which the function is considered, or a condition that the desired function passes through a given point inside that interval.
That is, the considered method makes it possible to solve, depending on the type of the additional condition, both the Cauchy problem with initial conditions and problems with boundary conditions of a general form, requiring, for example, the use of the iterative shooting method [4].
In the case when the boundary condition is specified at the left end of the integration interval, the zero coefficient is determined from the equation
$$c_0 + \sum_{k=1}^{n} c_k T_k(-1) = y_0 \qquad (11)$$
by the formula (taking into account that $T_k(-1) = (-1)^k$ for any Chebyshev polynomial)
$$c_0 = y_0 - \sum_{k=1}^{n} c_k T_k(-1) = y_0 - \sum_{k=1}^{n} c_k (-1)^k. \qquad (12)$$
If the additional 'boundary' condition is specified at an arbitrary point of the integration interval, $y_b = y(x_b)$, $x_b \in [-1, 1]$, then the coefficient $c_0$ is determined by the formula
$$c_0 = y_b - \sum_{k=1}^{n} c_k T_k(x_b). \qquad (13)$$
At the right end of the integration interval, $y_r = y(1)$, $x_r = 1$, the Chebyshev polynomials of any order take the value 1 ($T_k(1) = 1$). Therefore, the coefficient $c_0$ is determined by the formula
$$c_0 = y_r - \sum_{k=1}^{n} c_k T_k(x_r) = y_r - \sum_{k=1}^{n} c_k. \qquad (14)$$
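All three cases (12)-(14) amount to one scalar correction and can be written uniformly; the helper below is our own sketch (chebval evaluates the Chebyshev series, so with $c_0$ temporarily zeroed it returns the sum over $k \geq 1$):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def fix_c0(c, x_b, y_b):
    """Choose c_0 so that the Chebyshev expansion passes through (x_b, y_b);
    this covers (12) for x_b = -1, (13) for interior points and (14) for x_b = 1."""
    c = np.asarray(c, dtype=float).copy()
    c[0] = 0.0
    c[0] = y_b - C.chebval(x_b, c)    # y_b minus the sum of c_k T_k(x_b) over k >= 1
    return c

# example: shift the series so that it vanishes at the left end of [-1, 1]
c = fix_c0(np.array([0.0, 1.0, 0.5]), -1.0, 0.0)
```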
4. Examples with simplest differential equations
Reconstructing a function from its derivative and a boundary condition. Comparison with the Runge-Kutta-Fehlberg method [13]
$$\frac{dy}{dx} = f(x), \quad y(0) = y_0, \quad x \in [a, b].$$
Let us compare the solutions obtained by the Runge-Kutta method (subroutine RKF45) and the solutions obtained as previously described.
Let us specify a grid in the interval [a,b], calculated by the formula
$$x_j = \frac{b - a}{2} \cos\left(\frac{\pi j}{N - 1}\right) + \frac{b + a}{2}, \quad j = 0, 1, \ldots, N - 1,$$
and related to the chosen Gauss-Lobatto grid in the interval [-1,1]. The number of grid points equals N, i.e., to recover the function from the given derivative and additional condition by our method, only N calculations of the function (the right-hand side) are needed, and the recalculation of these values into the expansion coefficients in Chebyshev polynomials will require only 2N divisions and 2N additions.
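For reference, a possible implementation of this mapped grid (our own sketch, assuming the Gauss-Lobatto angles $\pi j/(N-1)$ for $N$ points, as in the formula reconstructed above) is:

```python
import numpy as np

def mapped_gauss_lobatto(a, b, N):
    """N Gauss-Lobatto nodes mapped from [-1, 1] onto [a, b]."""
    t = np.cos(np.pi * np.arange(N) / (N - 1))      # nodes on [-1, 1], from 1 down to -1
    return 0.5 * (b - a) * t + 0.5 * (b + a)
```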
To solve the Cauchy problem by the Runge-Kutta-Fehlberg method, we applied the RKF45 algorithm on each subinterval of the grid calculated above on [a, b].
Algorithms are compared when looking for a solution to the simplest problem
$$\frac{dy}{dx} = \cos(x), \quad y(0) = 0, \quad x \in [-1, 1].$$
The calculation carried out by the Runge-Kutta method with automatic accuracy control (not worse than $10^{-9}$) required about 800 evaluations of the right-hand side over the entire interval.
For the two-stage method of separation of unknowns, the deviations of the calculated values from the exact ones at the grid points are given in Table 1.

Table 1. Deviation of the calculated values from the exact ones

Number of grid points:   11            13            15             30
Maximum deviation:       < 4·10^{-7}   < 5·10^{-9}   < 2·10^{-13}   < 10^{-19}
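For comparison, here is a self-contained sketch of the whole two-stage procedure applied to this test problem (our illustrative code, not the authors' implementation; exact error values depend on rounding and may differ slightly from the table):

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def solve_ivp_cheb(f, x_b, y_b, n):
    """Two-stage pseudospectral solution of y' = f(x) on [-1, 1] with y(x_b) = y_b."""
    theta = np.pi * np.arange(n + 1) / n
    x = np.cos(theta)                                    # Gauss-Lobatto nodes
    T = np.cos(np.outer(theta, np.arange(n + 1)))        # T[j, k] = T_k(x_j)
    w = np.ones(n + 1); w[0] = w[-1] = 1.0 / np.sqrt(2.0)
    d = np.full(n + 1, n / 2.0); d[0] = d[-1] = float(n)
    b = ((T * w[:, None]).T @ (w * f(x))) / d            # stage 1a: coefficients of f, (9)-(10)
    c = np.zeros(n + 2)                                  # solution coefficients c_0..c_{n+1}
    b_ext = np.concatenate([b, [0.0, 0.0]])
    c[1] = b_ext[0] - b_ext[2] / 2.0                     # stage 1b: integration relations (6)
    for k in range(2, n + 2):
        c[k] = (b_ext[k - 1] - b_ext[k + 1]) / (2.0 * k)
    c[0] = y_b - C.chebval(x_b, c)                       # stage 2: enforce y(x_b) = y_b, cf. (13)
    return c

coef = solve_ivp_cheb(np.cos, 0.0, 0.0, n=15)            # y' = cos x, y(0) = 0 on [-1, 1]
xx = np.linspace(-1.0, 1.0, 1001)
print(np.max(np.abs(C.chebval(xx, coef) - np.sin(xx))))  # deviation from the exact y = sin x
```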
Consider a few more model examples of solving the Cauchy problem, i.e., recovering functions from given derivatives and an initial condition. The model functions were taken from [14], where the accuracy of computing derivatives by means of Chebyshev differentiation matrices in physical space was studied. The selected examples systematically illustrate the accuracy of the computation as a function of the number of approximation points (see Figure 1).
Four functions of different smoothness are considered: $|x^3|$, $\exp(-x^{-2})$, $1/(1 + x^2)$, and $x^{10}$. The first function has a third derivative of bounded variation, the second is smooth but not analytic, the third is analytic in a neighborhood of $[-1, 1]$, and the fourth is a polynomial. The accuracy of the solutions obtained by us is 1.5-3 orders of magnitude better than Trefethen's results [14] for a similar number of collocation points.
5. Discussion and conclusion
There are methods based mainly on the local approximation of the solution, which primarily use the initial approximation (boundary conditions) when solving differential equations; these are the Euler, Runge-Kutta and similar methods. Other methods, based on the approximation of the solution using global functions (global collocation methods: Mason, Boyd, Fornberg, Iserles, Townsend), construct systems of equations that simultaneously include both the initial (boundary) conditions and the conditions specifying the behavior of the derivatives of the desired solution.
In our study (within the framework of the pseudospectral collocation method), the problem is divided into two independent subproblems. The first is to select a set of solutions that satisfy the differential equation; however, they do not necessarily satisfy the initial (boundary) conditions. The choice of suitable bases for representing the solution in the form of an expansion in polynomials, e.g., Jacobi ones, and of grids taking into account the discrete orthogonality of the considered bases, makes it possible to use highly efficient algorithms.

[Figure 1. The accuracy of derivative recovery for four functions of increasing smoothness, as a function of the number of approximation points: (a) $u(x) = |x^3|$, $u(-1) = 1$; (b) $u(x) = \exp(-x^{-2})$, $u(-1) = e^{-1}$; (c) $u(x) = 1/(1 + x^2)$, $u(-1) = 1/2$; (d) $u(x) = x^{10}$, $u(-1) = 1$.]
Perhaps the most promising choice, from the point of view of numerical methods, is a particular case of Jacobi polynomials, the Chebyshev polynomials, as the basis functions [15]. The Chebyshev polynomials provide an almost optimal approximation of the ODE solution, an easily computed Gauss-Lobatto grid on which the discrete orthogonality condition holds, and three-term recurrence relations that make it easy to construct the differentiation and integration matrices of the sought solutions [16].
The initial (boundary) conditions are considered at the second stage of solving the original problem, which is actually reduced to solving a linear equation with one unknown coefficient.
The solution of the first problem is reduced to multiplying the matrix of values of the Chebyshev functions on the Gauss-Lobatto grid by the vector of values of the function that defines the right-hand side of the original differential equation to determine the expansion coefficients of the solution derivative. Further, the multiplication of the two-diagonal integration matrix [3] by the vector of coefficients yields all the coefficients of the desired solution, except for the first one. At the second stage, the use of the initial (boundary) condition makes it possible to determine the first coefficient of the solution expansion.
The approach based on dividing the problem of solving first-order ODEs into two subproblems seems to be very promising. The authors will continue to develop approaches and algorithms for the application of the multistage pseudospectral method for solving initial and boundary value problems with differential equations of higher orders.
Acknowledgments
This paper has been supported by the RUDN University Strategic Academic Leadership Program.
References
[1] J. P. Boyd, Chebyshev and Fourier spectral methods, 2nd ed. Dover Books on Mathematics, 2013.
[2] J. C. Mason and D. C. Handscomb, Chebyshev polynomials. London: Chapman and Hall/CRC Press, 2002.
[3] B. Fornberg, A practical guide to pseudospectral methods. Cambridge, England: Cambridge University Press, 1996. DOI: 10.1017/cbo9780511626357.
[4] M. Planitz et al., Numerical recipes: the art of scientific computing, 3rd. New York: Cambridge University Press, 2007.
[5] J. Shen, T. Tang, and L.-L. Wang, Spectral methods. Berlin, Heidelberg: Springer, 2011, vol. 41.
[6] S. Olver and A. Townsend, "A fast and well-conditioned spectral method", SIAM Rev., vol. 55, no. 3, pp. 462-489, 2013. DOI: 10.1137/120865458.
[7] S. Chandrasekaran and M. Gu, "Fast and stable algorithms for banded plus semiseparable systems of linear equations", SIAM Journal on Matrix Analysis and Applications, vol. 25, no. 2, pp. 373-384, 2003. DOI: 10.1137/S0895479899353373.
[8] A. Iserles, A first course in the numerical analysis of differential equations, 2nd edition. Cambridge: Cambridge University Press, 2008. DOI: 10.1017/CBO9780511995569.
[9] X. Zhang and J. P. Boyd, "Asymptotic coefficients and errors for Chebyshev polynomial approximations with weak endpoint singularities: effects of different bases", Online. https://arxiv.org/pdf/2103.11841.pdf, 2021.
[10] J. P. Boyd and D. H. Gally, "Numerical experiments on the accuracy of the Chebyshev-Frobenius companion matrix method for finding the zeros of a truncated series of Chebyshev polynomials", Journal of Computational and Applied Mathematics, vol. 205, no. 1, pp. 281-295, 2007. DOI: 10.1016/j.cam.2006.05.006.
[11] D. Dutykh, "A brief introduction to pseudo-spectral methods: application to diffusion problems", Online. https://arxiv.org/pdf/1606.05432.pdf, 2016.
[12] A. Amiraslani, R. M. Corless, and M. Gunasingam, "Differentiation matrices for univariate polynomials", Numer. Algorithms, vol. 83, no. 1, pp. 1-31, 2020. DOI: 10.1007/s11075-019-00668-z.
[13] E. Hairer, G. Wanner, and S. P. Nørsett, Solving ordinary differential equations I. Berlin: Springer, 1993. DOI: 10.1007/978-3-540-78862-1.
[14] L. N. Trefethen, Spectral methods in MATLAB. Philadelphia: SIAM, 2000.
[15] K. P. Lovetskiy, L. A. Sevastianov, D. S. Kulyabov, and N. E. Nikolaev, "Regularized computation of oscillatory integrals with stationary points", Journal of Computational Science, vol. 26, pp. 22-27, 2018. DOI: 10. 1016/j.jocs.2018.03.001.
[16] L. A. Sevastianov, K. P. Lovetskiy, and D. S. Kulyabov, "An effective stable numerical method for integrating highly oscillating functions with a linear phase", Lecture Notes in Computer Science, vol. 12138, pp. 29-43, 2020. DOI: 10.1007/978-3-030-50417-5_3.
For citation:
K. P. Lovetskiy, D. S. Kulyabov, A. W. Hissein, Multistage pseudo-spectral method (method of collocations) for the approximate solution of an ordinary differential equation of the first order, Discrete and Continuous Models and Applied Computational Science 30 (2) (2022) 127-138. DOI: 10.22363/2658-4670-2022-30-2-127-138.
Information about the authors:
Lovetskiy, Konstantin P. — Candidate of Physical and Mathematical Sciences, Assistant Professor of Department of Applied Probability and Informatics of Peoples' Friendship University of Russia (RUDN University) (e-mail: [email protected], phone: +7(495)9522572, ORCID: https://orcid.org/0000-0002-3645-1060 , ResearcherID: A-5725-2017, Scopus Author ID: 18634692900) Kulyabov, Dmitry S. — Docent, Doctor of Sciences in Physics and Mathematics, Professor at the Department of Applied Probability and Informatics of Peoples' Friendship University of Russia (RUDN University) (e-mail: [email protected], phone: +7(495)9520250, ORCID: https://orcid.org/0000-0002-0877-7063, ResearcherID: I-3183-2013, Scopus Author ID: 35194130800)
Hissein, Ali Weddeye — student of Department of Applied Probability and Informatics of Peoples' Friendship University of Russia (RUDN University) (e-mail: [email protected], phone: +7(977)4294833, ORCID: https://orcid.org/0000-0003-1100-4966, ResearcherID: AAD-7566-2022)