
Discrete & Continuous Models & Applied Computational Science 2021, 29 (3) 271-284

ISSN 2658-7149 (online), 2658-4670 (print) http://journals.rudn.ru/miph

Research article

UDC 519.872, 519.217

PACS 07.05.Tp, 02.60.Pn, 02.70.Bf

DOI: 10.22363/2658-4670-2021-29-3-271-284

Richardson-Kalitkin method in abstract description

Ali Baddour¹, Mikhail D. Malykh¹,²

1 Peoples' Friendship University of Russia (RUDN University) 6, Miklukho-Maklaya St., Moscow, 117198, Russian Federation 2 Meshcheryakov Laboratory of Information Technologies Joint Institute for Nuclear Research, Dubna, Russia 6, Joliot-Curie St., Dubna, Moscow Region, 141980, Russian Federation

(received: August 26, 2021; accepted: September 9, 2021)

An abstract description of the Richardson-Kalitkin method is given for obtaining a posteriori estimates of the proximity between the exact solution and a found approximate solution of initial problems for ordinary differential equations (ODE). A problem P is considered whose solution is a real number u. To solve this problem, a numerical method is used, that is, a set H ⊂ ℝ and a mapping u_h : H → ℝ are given, the values of which can be calculated constructively. It is assumed that 0 is a limit point of the set H and that u_h can be expanded in a convergent series in powers of h: u_h = u + c_1 h^k + …. In this very general situation, the Richardson-Kalitkin method is formulated for obtaining estimates of u and c_1 from two values of u_h. The question of using a larger number of values of u_h to obtain such estimates is also considered. Examples illustrating the theory are given. It is shown that the Richardson-Kalitkin approach can be successfully applied to problems that are solved by methods other than the finite difference method.

Key words and phrases: finite difference method, ordinary differential equations, a posteriori errors

1. Introduction

A priori estimates for finding solutions to dynamical systems using the finite difference method predict an exponential growth of the error with increasing time [1]. Therefore, long-term computation requires such a small sampling step that cannot be accepted in practice. Nevertheless, calculations for long times are carried out and it is generally accepted that they reproduce not the coordinates themselves, but some average characteristics of the trajectories. In this case, a posteriori error estimates are used instead of huge a priori ones. As early as in the works of Richardson [2], for estimating the errors arising in the calculation of definite integrals by the method of finite differences, it was proposed to refine the grid, and in the works of Runge a similar technique

© Baddour A., Malykh M.D., 2021

This work is licensed under a Creative Commons Attribution 4.0 International License http://creativecommons.org/licenses/by/4.0/

was applied to the study of ordinary differential equations. This approach was systematically developed in the works of N.N. Kalitkin and his disciples [3]-[7] as the Richardson method, although, given the role of Kalitkin in its development, it would be more correct to call it the Richardson-Kalitkin method.

The method itself is very general and universal, so we set out to present it in general form, divorcing it from the concrete implementation of the finite difference method. However, it soon became clear that this method could be extended to methods that are not finite difference methods, for example, the method of successive approximations, and even problems that are not related to differential equations.

In our opinion, this method is especially simply described for a class of problems in mechanics and mathematical physics, when it is necessary to calculate a significant number of auxiliary quantities, although only one value of some combination of them is interesting.

Example 1. On the segment [0, T], we consider the initial problem

dx/dt = f(x, t), x(0) = x_0,

it is required to find the value of x at the end of this segment, i.e., x(T). To find this value numerically, we will have to calculate x approximately over the entire segment.

Example 2. On the segment [0, T], we consider the dynamical system

dx/dt = f(x, y, t), dy/dt = g(x, y, t),

with initial conditions x(0) = x0, y(0) = y0. It is required to find the value of the expression x + y at the point t = T. To find this value numerically, we also have to calculate approximately x and y over the entire segment, then add the final values.

Example 3. The problem of many bodies is considered, say, the solar system, and it is required to find out whether the bodies scatter in 10 thousand years, or not. To solve it, it is enough to calculate the sum of the squares of the distances between the bodies and the center of mass of the system in 10 thousand years. At the same time, the coordinates and velocities of the bodies themselves are of no interest to anyone exactly 10 thousand years later.

Example 4. Let K be a unit circle on the plane. Find the first eigenvalue of the problem

Δv + λv = 0, v|_{∂K} = 0.

Here the eigenvalue λ_1 is to be found. We cannot find it numerically without finding the eigenfunction or the roots of the determinant, i.e., other eigenvalues.

All these problems have one property in common: the result of the solution is a real number u. Various numerical methods are used to solve such

problems. To substantiate these methods, the errors that occur in intermediate calculations of auxiliary parameters are estimated and then summed up. The a priori error estimates obtained in this way turn out to be enormous. However, in many cases the real situation is much better than such forecasts. Using example 2, this can be explained as follows: the errors made in the calculation of x and y usually have different signs, and therefore their contributions to the expression x + y cancel. Having estimated the error in calculating x + y as the sum of the moduli of the errors in determining x and y, we inevitably and significantly overestimate the error. It is worth noting that problems whose solution is just a real number are considered in the topology of ℝ. This means that the numerical solution must be a number close to the exact solution in that topology. However, the topology of the space in which the auxiliary variables take values is not specified. Usually, numerical methods are constructed so that these auxiliary variables are found with high accuracy with respect to some Euclidean norm. For example, to find x + y at time T, one needs to find an approximation to the pair of functions x(t), y(t) with respect to the norm

sup_{0 ≤ t ≤ T} √(|x(t)|² + |y(t)|²).

In the situation under consideration, such requirements are unnecessarily stringent.

In this paper, we describe a method for obtaining estimates of errors made in solving problems of this class in general form based on the Richardson-Kalitkin method [3], [4], abstracting from the particular choice of numerical method. In our opinion, this approach makes it possible to clearly see the main ideas of the Kalitkin method, which usually turn out to be hidden behind the details of the numerical methods used. Half a century of using the Richardson-Kalitkin method in practice has shown that its correct application requires the calculation of not two, but a significantly larger number of approximate solutions to test the hypothesis of the dominance of the principal term in the error (see section 4 below). We will discuss one possible modification of the method for the simultaneous use of all of these solutions for evaluating solutions and errors.

2. Basic definitions

Let the problem P be given whose solution is a real number u. We will not concretize this problem; let it only be known that this problem has a solution and, moreover, a unique one.

We are not going to concretize the numerical method for solving this problem either. The use of any numerical method means replacing the problem P with another problem P_h, whose result is the mapping u_h : H → ℝ. The interpretation of the set H essentially depends on the numerical method used. In some cases this set is an interval (0, ∞), in other cases it consists of positive rational numbers. For example, for the finite difference method this set is formed by the admissible step lengths. In what follows this does not matter; what is important is that the set H is a subset of the real axis and that 0 is a limit point of H.

By analogy with the usual conventions, let us accept the following

Definition 1. Let u_h : H → ℝ be a solution to the problem P_h. If lim_{h→0} u_h = u, then we say that the problem P_h approximates the problem P. If u_h = u + O(h^k), then we say that the order of approximation of problem P by problem P_h is k.

In the overwhelming majority of cases, the value h has the meaning of the discretization step of the original problem, and the order k is known in advance. Here are some examples.

Example 5. Let the problem P consist in finding the value of the integral

u = ∫_0^1 dx / (1 + x²).

Its solution is the number u = π/4, which we do not know exactly. To calculate it, we cut the segment [0, 1] into N ∈ ℕ parts. Let us assume that H is formed by all inverse natural numbers. Let u_h map this set to ℝ, putting in correspondence to h = 1/N the number

u_h = Σ_{n=0}^{N−1} h / (1 + (nh)²).

Then u_h = π/4 + O(h), i.e., the order of approximation obtained by the rectangle rule is 1.
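The computation of example 5 is easy to reproduce; the following sketch (ours, plain Python rather than the paper's code) evaluates u_h for two step sizes and checks the first-order convergence by observing that halving the step roughly halves the error.

```python
import math

def u_h(N):
    """Left-rectangle approximation of the integral of 1/(1+x^2) over [0, 1]."""
    h = 1.0 / N
    return sum(h / (1.0 + (n * h) ** 2) for n in range(N))

u = math.pi / 4          # exact value of the integral
e1 = abs(u_h(100) - u)   # error at h = 1/100
e2 = abs(u_h(200) - u)   # error at h = 1/200
print(e1 / e2)           # close to 2: first-order convergence
```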

Remark 1. It should be noted that for some classes of integrals numerical methods are known in which the error depends on the step not linearly or quadratically, but exponentially [8], [9].

Example 6. Let us consider the problem from example 1. An explicit Euler scheme can be used to solve it. We cut the segment [0, T] into N ∈ ℕ parts and take h = T/N. Let us put this number in correspondence with the number u_h = x_N, which is calculated by the recurrent formulas

x_{n+1} = x_n + f(x_n, nh) h, n = 0, …, N − 1.

Moreover, it is possible to prove an a priori estimate for the error [1]:

|u − u_h| = |x(T) − x_N| ≤ C e^{aT} h,

where C, a are some constants depending only on f and the initial data x_0, but not on h and T. This immediately implies that u_h = x(T) + O(h), i.e., the order of approximation of the problem obtained using the Euler scheme is 1.
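The Euler recursion of example 6 takes a few lines of Python; the sketch below is ours, with a hypothetical right-hand side f(x, t) = x chosen only because the exact answer x(T) = e^T is known.

```python
import math

def euler_u_h(f, x0, T, N):
    """Explicit Euler scheme: returns u_h = x_N ~ x(T) for dx/dt = f(x, t), x(0) = x0."""
    h = T / N
    x = x0
    for n in range(N):
        x = x + f(x, n * h) * h
    return x

# hypothetical test problem: dx/dt = x, x(0) = 1, so x(1) = e
approx = euler_u_h(lambda x, t: x, 1.0, 1.0, 1000)
print(abs(approx - math.e))  # the error decreases linearly with h = T/N
```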

In what follows we mostly apply the finite difference method, but this is not at all necessary.

Example 7. To calculate u = x(T) from example 1, one can use the method of successive approximations (Picard's method). Let N ∈ ℕ be the number of iterations, take h = 1/N and assign to this number the value u_h computed as follows. First, N functions are calculated by the recurrent formulas

x_{n+1}(t) = x_0 + ∫_0^t f(x_n(τ), τ) dτ, n = 0, …, N − 1,

and then u_h = x_N(T). In this case u_h → u as h → 0, i.e., the problem P_h approximates the initial problem.
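A discretized sketch of Picard's method (ours, not from the paper): each iterate x_n(t) is stored on a fine auxiliary grid, and the integral is taken cumulatively by the trapezoid rule. The right-hand side f(x, t) = x is again a hypothetical test case with x(t) = e^t.

```python
import math

def picard_u_h(f, x0, T, N, M=1000):
    """N Picard iterations for dx/dt = f(x, t), x(0) = x0; returns x_N(T).

    Each iterate x_{n+1}(t) = x0 + integral_0^t f(x_n(tau), tau) dtau
    is evaluated on a uniform grid of M+1 points by the trapezoid rule.
    """
    dt = T / M
    ts = [j * dt for j in range(M + 1)]
    x = [x0] * (M + 1)               # initial approximation x_0(t) = x0
    for _ in range(N):
        g = [f(x[j], ts[j]) for j in range(M + 1)]
        new = [x0]
        s = 0.0
        for j in range(M):           # cumulative trapezoid integral
            s += 0.5 * (g[j] + g[j + 1]) * dt
            new.append(x0 + s)
        x = new
    return x[-1]

# hypothetical test problem: dx/dt = x, x(0) = 1; the iterates converge to e^t
print(picard_u_h(lambda x, t: x, 1.0, 1.0, N=20))
```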

The problem P_h should be simpler than the original one in the sense that it is possible to calculate the values of the mapping u_h : H → ℝ at all points of H. In practice, this possibility is limited both by the increase in computational complexity when approaching h = 0 and by the growing role of the round-off error.

Definition 2. The value of the function u_h at any point of the set H will be called an approximate solution to the problem P, and the modulus of the difference between this value and the solution to the problem P is the error made when solving problem P by method P_h.

3. A posteriori error estimates

The Richardson-Kalitkin method can be separated from the finite difference method by adopting the following definition.

Definition 3. Let u_h : H → ℝ be a solution to the problem P_h. If there exists a constant c ≠ 0 such that u_h = u + c h^k + O(h^{k+1}), then we will say that c h^k is the leading term of the error of the approximation of problem P by problem P_h.

Remark 2. In practice, it is usually assumed that the estimate u_h = u + O(h^k) implies the existence of a constant c such that u_h = u + c h^k + O(h^{k+1}). Usually this can be justified. However, definition 3 specifically requires that c ≠ 0. If c = 0, one speaks of superconvergence of the method, because the order of approximation turns out to be greater than predicted by theory. For difficulties in applying the Richardson-Kalitkin method in the case of superconvergence, see [10].

The essence of the Richardson-Kalitkin method is as follows. If we discard O(h^{k+1}), then u_h = u + c h^k. We do not know the values of u and c, but we can calculate u_h for any value of h. Taking two such values, say h_1 and h_2, we obtain a system of two linear equations

u_h(h_1) = u + c h_1^k, u_h(h_2) = u + c h_2^k,

solving which for u and c, we find some estimates for these quantities. We speak of estimates, not values, since they are obtained by discarding O(h^{k+1}).

Definition 4. Let u_h : H → ℝ be a solution to the problem P_h, and let there exist a constant c ≠ 0 such that u_h = u + c h^k + O(h^{k+1}). For any two h_1, h_2 ∈ H the solution to the system

u_h(h_1) = u + c h_1^k, u_h(h_2) = u + c h_2^k

with respect to u and c will be called the Richardson-Kalitkin estimate for the solution u to the problem P and for the coefficient c at the leading term of the approximation error. We will denote these estimates by û(h_1, h_2) and ĉ(h_1, h_2); below we will often omit the indication of their dependence on h_1, h_2 if this does not introduce ambiguity.

Example 8. Consider the initial problem

ẋ = −y, ẏ = x, x(0) = 1, y(0) = 0,

and let it be required to find u = x(1). We approximate it according to the explicit Euler scheme and calculate the approximate solution for h_1 = 0.1 and h_2 = 0.01 in Sage [11]:

u_h(h_1) = 0.5707904499, u_h(h_2) = 0.543038634332351.

The solution of the system

u_h(h_1) = u + c h_1, u_h(h_2) = u + c h_2

yields the estimate û = 0.539955099269280 for u = cos 1 = 0.540302305868140, and for the coefficient of the leading term of the error ĉ = 0.308353506307201.

The result looks very reasonable. With h = 0.1, we have the error estimate ĉh = 0.0308, while the error itself is 0.0304. With h = 0.01, we have the error estimate ĉh = 0.00308, while the error itself is 0.0027. The estimate for the solution differs from the solution by only 3.5 · 10^−4, which is an order of magnitude better than the result with the smallest step.
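Example 8 can be reproduced in a few lines of plain Python (our sketch; the paper used Sage). The explicit Euler scheme is run at h = 0.1 and h = 0.01, and the two-point estimate with k = 1 is assembled directly.

```python
import math

def euler_x1(h):
    """Explicit Euler for dx/dt = -y, dy/dt = x, x(0) = 1, y(0) = 0; returns x(1)."""
    n = round(1.0 / h)
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = x - h * y, y + h * x   # simultaneous update via tuple assignment
    return x

u1, u2 = euler_x1(0.1), euler_x1(0.01)
# two-point Richardson-Kalitkin estimate for order k = 1
c = (u1 - u2) / (0.1 - 0.01)
u = u1 - c * 0.1
print(u, abs(u - math.cos(1.0)))  # the estimate is much closer to cos 1 than u2
```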

Richardson-Kalitkin estimates can also be obtained for problems solved by other numerical methods; for specific methods such estimates are well known, but under different names. For example, this is how the error is estimated when determining eigenvalues by means of the finite element method (FEM) [12].

Example 9. Let it be required to find the smallest eigenvalue of the problem

Δv + λv = 0, v|_{∂K} = 0

in the unit circle K. Then the answer is the number u = λ_1. Let us apply the FEM implementation in the system FreeFem++ [13]. The parameter h will be the value of 1/N, where N is the number of points into which the circle is divided during triangulation. Then, when using linear elements, the smallest eigenvalue of the approximate problem is u_h = u + c h² + O(h³).

Two-sided estimates for the error were obtained in the PhD thesis by Panin [14]. Let us take h1 = 1/20 at random, and h2 = 1/100, then

u_h(h_1) = 6.0173, u_h(h_2) = 5.79292.

The solution of the system

u_h(h_1) = u + c h_1², u_h(h_2) = u + c h_2²

yields û = 5.78357083333333 against the exact value u = λ_1 = 5.783185962946785, and for the coefficient of the leading term of the error we get ĉ = 93.4916666666667.

The result looks very reasonable. For h = 0.01, we have the error estimate ĉh² = 9.34 · 10^−3, while the error itself is 9.73 · 10^−3. The estimate for the solution differs from the solution by only 3.84 · 10^−4, which is an order of magnitude better than the result for the smallest h.

4. Justification of the Richardson-Kalitkin method

Justification of the Richardson-Kalitkin method consists of two parts: first, it is necessary to prove that the used numerical method satisfies the asymptotic formula

u_h = u + c h^k + O(h^{k+1}).

Second, it is necessary to justify the possibility of omitting O(h^{k+1}). The first step essentially depends on the numerical method used, and its discussion is beyond the scope of this article. The second step, on the contrary, has nothing to do with the choice of a numerical method. Let us consider it in more detail. To discard the remainder O(h^{k+1}), it must be substantially less than the leading term c h^k. For this purpose, first of all, c must be nonzero, which is required in definition 4. Further, the considered values of h should be sufficiently small. We have no a priori data to know in advance how small the chosen h should be. Finally, in practice we cannot take h too small either, since then the round-off error becomes essential in the calculation of u_h.


In order to find a practically suitable interval of h values, N. N. Kalitkin and his disciples [5]-[7] have recommended carrying out calculations at least at 10 points rather than only two. Richardson's method can be applied only for those h for which the error versus the step, plotted in the log-log scale using these points, lies on a straight line with the slope k known from the theory. If the steps are too large, this plot deviates from the straight line because the discarded O(h^{k+1}) is still large; if the steps are too small, the rounding error becomes essential. If the slope of the straight line differs from k, then the phenomenon of superconvergence takes place (see remark 2).

Example 10. Let us return to example 8 and find an approximate solution by the fourth-order Runge-Kutta method with 15 steps, starting from the step Δt = 0.1 and each time decreasing the step by a factor of two. Taking the approximate value for x(1) = cos 1 obtained at the smallest step as exact, we can plot the dependence of the error Δx = x_n − x_15 on the step Δt, see figure 1. The plot clearly shows an inclined section with a slope of approximately 4, followed by a horizontal section, interpreted as a region where the round-off error prevents further refinement of the solution.
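The slope diagnostic described above is easy to automate. The following sketch of ours runs the classical fourth-order Runge-Kutta scheme for the system of example 8 at a step and its half, and reports the observed log-log slope, which should be close to the theoretical order k = 4.

```python
import math

def rk4_x1(h):
    """Classical RK4 for dx/dt = -y, dy/dt = x, x(0) = 1, y(0) = 0; returns x(1)."""
    def f(v):
        x, y = v
        return (-y, x)
    n = round(1.0 / h)
    v = (1.0, 0.0)
    for _ in range(n):
        k1 = f(v)
        k2 = f((v[0] + h/2*k1[0], v[1] + h/2*k1[1]))
        k3 = f((v[0] + h/2*k2[0], v[1] + h/2*k2[1]))
        k4 = f((v[0] + h*k3[0], v[1] + h*k3[1]))
        v = (v[0] + h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0]),
             v[1] + h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1]))
    return v[0]

# errors at h and h/2; the log-log slope should be close to the order k = 4
e1 = abs(rk4_x1(0.1) - math.cos(1.0))
e2 = abs(rk4_x1(0.05) - math.cos(1.0))
print(math.log(e1 / e2) / math.log(2))
```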

With this approach, several natural questions arise. First, the points never fall exactly on a straight line, so we need quantitative characteristics of the section that we will consider straight. How can we find them? Second, since approximate solutions were found not for two, but for many values of h, how can all of them be used to refine the solution? Third, the terms of a power series need not form a monotonic sequence; therefore, for large h, the leading term can be significantly less than the next term. Can this possibility be taken into account explicitly?

Figure 1. Dependence of the error on the step for example 10 (ln Δx versus ln Δt, log-log scale)

5. Usage of several terms in the expansion of u_h in powers of h

The simplest answers to these questions can be found if we take into account the subsequent terms in the expansion of u_h in powers of h. Suppose that u_h expands into a power series

u_h = u + c_1 h^k + c_2 h^{k+1} + … (1)

If we have performed calculations for N different values of h, say, for h = h_1, …, h_N, then we can estimate the value of u and N − 1 coefficients, discarding all terms starting with c_N h^{k+N−1}.

Definition 5. Let the solution u_h : H → ℝ to the problem P_h be expanded in a power series (1), and let there be nonzero coefficients among c_1, …, c_{N−1}. For any N values h_1, …, h_N ∈ H the solution to the system

u_h(h_n) = u + c_1 h_n^k + … + c_{N−1} h_n^{k+N−2}, n = 1, 2, …, N, (2)

with respect to u and c_1, …, c_{N−1} will be called an estimate for the solution u to the problem P and for the first coefficients c over N approximate solutions. We will denote these estimates by û and ĉ_1, …, ĉ_{N−1}.

As a result of solving system (2) we obtain: (i) the estimate û for the value of the exact solution, (ii) the estimate ĉ_1 h^k for the error, suitable for sufficiently small h, and additional information about how small those terms are that are not taken into account in the Richardson-Kalitkin method.

Of course, as in the previous section, discarding the terms whose order is equal to or greater than k + N − 1 requires certain conditions to be met. However, these conditions are noticeably less restrictive. First, the simultaneous vanishing of several expansion coefficients seems improbable. Second, we can consider sufficiently large values of h, for which the subsequent terms of the expansion are still noticeable.

6. Computer experiments

In our tests, we took N = 4 and h_1 ∈ ℚ ∩ H at random, and the remaining h_2, h_3, h_4 were obtained by dividing h_1 by 2, 3 and 4. To avoid introducing additional rounding errors, system (2) is solved exactly over the field ℚ.
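System (2) is linear in the unknowns u, c_1, …, c_{N−1}, so it can be solved exactly over ℚ by ordinary Gaussian elimination on fractions. A sketch of ours using Python's fractions module, checked on synthetic data with known coefficients:

```python
from fractions import Fraction

def solve_system2(hs, us, k):
    """Solve u_h(h_n) = u + c_1 h_n^k + ... + c_{N-1} h_n^{k+N-2} exactly over Q.

    hs, us: lists of N step values and N approximate solutions (exact rationals).
    Returns [u, c_1, ..., c_{N-1}] as Fractions.
    """
    N = len(hs)
    # matrix of the linear system (Vandermonde-like) and the right-hand side
    A = [[Fraction(1)] + [Fraction(h) ** (k + i - 1) for i in range(1, N)] for h in hs]
    b = [Fraction(v) for v in us]
    # Gaussian elimination; entries are exact, pivots are nonzero for distinct steps
    for col in range(N):
        piv = A[col][col]
        for row in range(col + 1, N):
            m = A[row][col] / piv
            A[row] = [a - m * p for a, p in zip(A[row], A[col])]
            b[row] -= m * b[col]
    x = [Fraction(0)] * N
    for row in reversed(range(N)):
        s = b[row] - sum(A[row][j] * x[j] for j in range(row + 1, N))
        x[row] = s / A[row][row]
    return x

# synthetic check: data generated exactly from u = 1, c1 = 2, c2 = 3, c3 = 4, k = 1
h1 = Fraction(1, 10)
hs = [h1, h1 / 2, h1 / 3, h1 / 4]
us = [1 + 2 * h + 3 * h**2 + 4 * h**3 for h in hs]
print(solve_system2(hs, us, 1))
```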

Let us start with the simplest linear example.

Example 11. We will solve the problem from example 8 by the fourth-order Runge-Kutta method with a uniform step h. With step h_1 = 0.1, we get

uh (0.1) = 0.540302967116884

against

cos 1 = 0.540302305868140 ...,

i.e., 6 correct decimal places. Calculating three more approximate solutions, we get an estimate for u = cos 1 coinciding with the exact value up to 13 digits (the penultimate one). The estimate for the expansion coefficients in (1) allows us to evaluate the error at h = 0.1 as

u_h − u = 0.007 · 10^−4 − 0.011 · 10^−5 + … ≈ 6 · 10^−7,

as it should be. It is interesting to compare the interpolation polynomials obtained at the initial steps Δt = 0.1 and Δt = 0.01: the estimate for u = cos 1 coincides with the one obtained earlier up to the last digit, ĉ_1 differs in the fifth digit, ĉ_2 differs by an order of magnitude, and ĉ_3 by two orders of magnitude. We increased the number of bits allocated to a real number and made sure that the noted effects are not related to round-off errors.

In the course of our experiments, we came across situations where the coefficients are monstrously overestimated.

Example 12. Consider the same system

ẋ = −y, ẏ = x, x(0) = 1, y(0) = 0,

but let it be required to find u = x(0.3). At the first step h_1 = 10^−4, we got a huge estimate ĉ = 5 · 10^13, while the scatter of the estimates is very high. However, the estimate for cos 0.3 itself coincides with the exact value with very high accuracy, and one can easily find values of the initial step at which the estimates for the coefficients look quite reasonable.

Application of the standard Richardson-Kalitkin method (N = 2) leads to even less pleasant results in this example. Take h_1 = 0.1 and h_2 = 0.05 and estimate u and c_1 using the Richardson-Kalitkin method. Then the estimate for the error u_h − u will be ĉ_1 · 0.1^4 ≈ 10^−10, which is much less than the actual error u_h(0.1) − cos 0.3, equal to 2 · 10^−2.

The simplest explanation for these effects is that in the series

u_h = cos 0.3 + c_1 h^4 + c_2 h^5 + …

the coefficient c_1 is very small, while the coefficient at some large power of h, on the contrary, is very large. Because of this, firstly, already for small steps of the order of h = 0.01 we have a value that coincides with the exact one, and, secondly, our estimates, which rest on the assumption that the higher-order terms can be discarded, do not work.

Now we proceed to the simplest nonlinear example.

Example 13. Let it be required to find u = x(1) for solving the initial Volterra-Lotka problem

ẋ = (1 − y)x, ẏ = −(1 − x)y, x(0) = 0.5, y(0) = 2

on the segment 0 ≤ t ≤ 1. We will solve this problem according to the explicit Runge-Kutta scheme of the 4th order and estimate the solution using four steps, starting with h_1 = 0.1. For u, we obtain the estimate

û = 0.302408337777406,

and for the error

u_h(h) − u = −0.002 · h^4 + 0.00001 · h^5 + ….

At the smallest step, we have an error of 10^−9, that is, we can rely on more than 9 decimal places. Starting with h_1 = 0.01, we get another estimate, in which û differs from the previously found value in the last two digits, and ĉ_1 differs in the fourth digit.

7. Discussion of experimental results

The experiments performed indicate, first of all, that the proposed generalization of the Richardson-Kalitkin method allows one, with a very modest number of steps, to obtain an estimate for the exact solution that coincides with it up to a round-off error. In this case, instead of 1 calculation we perform 4 independent ones, which costs almost no extra time, since the calculations can be performed in parallel.

The larger the power, the greater the discrepancy in determining the coefficients at the powers of h. This fact is not hard to explain. All formulas are derived under an assumption typical of various kinds of mean-value theorems: for any s there is a constant M_s such that

|x − c_0 − c_1 h^n − … − c_{s−1} h^{n+s−2}| ≤ M_s h^{n+s−1}.

When solving the interpolation problem, we actually solve the system

c_0 + c_1 h_j^n + … + c_{s−1} h_j^{n+s−2} = b_j + ξ_j h_j^{n+s−1}, j = 1, …, s,

where b_j are the values of x for h = h_j, and ξ_j are unknown quantities about which we only know that |ξ_j| ≤ M_s. Consider, for simplicity, s = 2:

c_0 + c_1 h_1^n = b_1 + ξ_1 h_1^{n+1}, c_0 + c_1 h_2^n = b_2 + ξ_2 h_2^{n+1}.

According to Cramer's formulas,

c_0 = (b_2 h_1^n − b_1 h_2^n)/(h_1^n − h_2^n) + (ξ_2 h_1^n h_2^{n+1} − ξ_1 h_2^n h_1^{n+1})/(h_1^n − h_2^n)

and

c_1 = (b_1 − b_2)/(h_1^n − h_2^n) + (ξ_1 h_1^{n+1} − ξ_2 h_2^{n+1})/(h_1^n − h_2^n).

For h_1 = h, h_2 = h/2, the error in c_0 will be of the order of O(h^{n+1}), while the error in c_1 is only of the order of O(h). As s grows, this divergence of orders becomes more and more noticeable.
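These orders are easy to check numerically. The sketch below (ours) feeds exact model data u_h = c_0 + c_1 h^n + c_2 h^{n+1} with n = 2 into the two-point estimate and measures how the errors in c_0 and c_1 shrink as h is halved: the observed rates are n + 1 = 3 for c_0 and 1 for c_1.

```python
import math

def two_point(h1, h2, b1, b2, n):
    """Solve c0 + c1*h^n = b at steps h1, h2 for the estimates (c0, c1)."""
    c1 = (b1 - b2) / (h1 ** n - h2 ** n)
    c0 = b1 - c1 * h1 ** n
    return c0, c1

n, c0, c1, c2 = 2, 1.0, 1.0, 1.0
uh = lambda h: c0 + c1 * h**n + c2 * h**(n + 1)   # exact model data

def errors(h):
    e0, e1 = two_point(h, h / 2, uh(h), uh(h / 2), n)
    return abs(e0 - c0), abs(e1 - c1)

a0, a1 = errors(0.1)
b0, b1 = errors(0.05)
# halving h reduces the error in c0 by 2^(n+1) = 8, but the error in c1 only by 2
print(math.log2(a0 / b0), math.log2(a1 / b1))
```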

Of course, the main problem is that we know neither s nor M_s. The example in which superconvergence manifested itself suggests that there are cases when s cannot be taken arbitrarily large. But in this case the problem of applicability of the described method reduces to the classical problem of the theory of power series: how many terms should be taken in the series in order to attain a given accuracy? It is not difficult to answer it if the recurrent formulas for the coefficients are known, rather than only estimates for the coefficients of the power series, which become the worse the greater the power.

In theory, this circumstance is obviously a serious problem. In practice, however, all problematic cases immediately manifested themselves in the form of inadequately large coefficients. Thus, as a practical recipe, the described generalization seems to be quite useful.

8. Conclusion

We described the Richardson-Kalitkin method as a means for evaluating numerical methods for solving any problem P whose result is a real number u. To specify a numerical method for solving the problem P means to specify a set H ⊂ ℝ, for which 0 is a limit point, and a mapping u_h : H → ℝ, the values of which can be calculated constructively. This method gives a solution to the problem P if lim_{h→0} u_h = u.

If there exist k ∈ ℕ and numbers c_1, …, c_N, among which there are nonzero ones, such that

u_h = u + c_1 h^k + … + c_N h^{k+N−1} + O(h^{k+N}),

then from N values of the mapping u_h it is possible to estimate the exact solution of the original problem and the coefficients c_1, …, c_N characterizing the error of the numerical method. The examples show that the higher the coefficient number, the worse these estimates are, but on the whole they characterize the numerical method quite accurately. The values of u_h are calculated independently, so the calculation of such problems can be naturally parallelized.

Acknowledgments

This work is supported by the Russian Science Foundation (grant no. 2011-20257).

References

[1] E. Hairer, G. Wanner, and S. P. Nørsett, Solving Ordinary Differential Equations, 3rd ed. New York: Springer, 2008, vol. 1.

[2] L. F. Richardson and J. A. Gaunt, "The deferred approach to the limit," Phil. Trans. A, vol. 226, pp. 299-349, 1927. DOI: 10.1098/rsta.1927.0008.

[3] N. N. Kalitkin, A. B. Al'shin, E. A. Al'shina, and B. V. Rogov, Calculations on quasi-uniform grids. Moscow: Fizmatlit, 2005, In Russian.

[4] N. N. Kalitkin, Numerical methods [Chislennyye metody]. Moscow: Nauka, 1979, In Russian.

[5] A. A. Belov, N. N. Kalitkin, and I. P. Poshivaylo, "Geometrically adaptive grids for stiff Cauchy problems," Doklady Mathematics, vol. 93, no. 1, pp. 112-116, 2016. DOI: 10.1134/S1064562416010129.

[6] A. A. Belov and N. N. Kalitkin, "Nonlinearity problem in the numerical solution of superstiff Cauchy problems," Mathematical Models and Computer Simulations, vol. 8, no. 6, pp. 638-650, 2016. DOI: 10.1134/S2070048216060065.

[7] A. A. Belov, N. N. Kalitkin, P. E. Bulatov, and E. K. Zholkovskii, "Explicit methods for integrating stiff Cauchy problems," Doklady Mathematics, vol. 99, no. 2, pp. 230-234, 2019. DOI: 10.1134/S1064562419020273.

[8] L. N. Trefethen and J. A. C. Weideman, "The exponentially convergent trapezoidal rule," SIAM Review, vol. 56, no. 3, pp. 385-458, 2014. DOI: 10.1137/130932132.

[9] A. A. Belov and V. S. Khokhlachev, "Asymptotically accurate error estimates of exponential convergence for the trapezoid rule," Discrete and Continuous Models and Applied Computational Science, vol. 3, pp. 251-259, 2021. DOI: 10.22363/2658-4670-2021-29-3-251-259.

[10] A. Baddour, M. D. Malykh, A. A. Panin, and L. A. Sevastianov, "Numerical determination of the singularity order of a system of differential equations," Discrete and Continuous Models and Applied Computational Science, vol. 28, no. 5, pp. 17-34, 2020. DOI: 10.22363/2658-4670-2020-28-1-17-34.

[11] The Sage Developers. "SageMath, the Sage Mathematics Software System (Version 7.4)." (2016), [Online]. Available: https://www.sagemath. org.

[12] O. C. Zienkiewicz, R. L. Taylor, and J. Z. Zhu, The finite element method: its basis and fundamentals, 7th ed. Elsevier, 2013.

[13] F. Hecht, "New development in FreeFem++," Journal of Numerical Mathematics, vol. 20, no. 3-4, pp. 251-265, 2012. DOI: 10.1515/jnum-2012-0013.

[14] A. A. Panin, "Estimates of the accuracy of approximate solutions and their application in the problems of mathematical theory of waveguides [Otsenki tochnosti priblizhonnykh resheniy i ikh primeneniye v zadachakh matematicheskoy teorii volnovodov]," in Russian, Ph.D. dissertation, MSU, Moscow, 2009.

For citation:

A. Baddour, M. D. Malykh, Richardson-Kalitkin method in abstract description, Discrete and Continuous Models and Applied Computational Science 29 (3) (2021) 271-284. DOI: 10.22363/2658-4670-2021-29-3-271-284.

Information about the authors:

Baddour, Ali — PhD student of Department of Applied Probability and Informatics of Peoples' Friendship University of Russia (RUDN University) (e-mail: alibddour@gmail.com, phone: +7(495)9550927, ORCID: https://orcid.org/0000-0001-8950-1781)

Malykh, Mikhail D. — Doctor of Physical and Mathematical Sciences, Assistant professor of Department of Applied Probability and Informatics of Peoples' Friendship University of Russia (RUDN University); Researcher in Meshcheryakov Laboratory of Information Technologies, Joint Institute for Nuclear Research (e-mail: malykh_md@pfur.ru, phone: +7(495)9550927, ORCID: https://orcid.org/0000-0001-6541-6603, ResearcherID: P-8123-2016, Scopus Author ID: 6602318510)
