
ISSN 2079-3316 PROGRAM SYSTEMS: THEORY AND APPLICATIONS vol. 11, No. 4(47), pp. 99-122

CSCSTI 27.41.17 UDC 004.421.2+519.652

Sergej V. Znamenskij. Local competing in interpolation problems

Abstract. A simple example illustrates the insufficiency of the known approaches to interpolation in the problem of recovering a function from a few given specific values that clearly convey its shape.

A local choice between polynomial and rational local interpolants, which minimizes the local interpolant's errors at the nearest external nodes on one side or on different sides, complements the known approaches. It combines the extreme computational simplicity of local interpolants with their thorough selection.

The principles of constructing the algorithm are formulated in general terms for mappings of metric spaces. They provide accurate (with rare exceptions) reconstruction of mappings that locally coincide with some of the given possible interpolants.

In the one-dimensional case, the two-stage algorithm guarantees the continuity of the interpolant and accurately reconstructs

(1) polynomials of small degree,

(2) simple rational functions with a linear denominator,

(3) broken lines of long links with knots at the ends

when these requirements do not contradict each other. An additional parameter allows you to replace the exact restoration of polylines with the required smoothness of interpolation.

Key words and phrases: polynomial interpolation, rational interpolation, spline interpolation, adaptive spline, local algorithm, metric space, explicit formula, a set of patterns.

2020 Mathematics Subject Classification: 41A05; 41A15, 41A20

Introduction

Various variants of the following formulation of the interpolation problem have been studied and used for centuries:

© S. V. Znamenskij, 2020

© Ailamazyan Program Systems Institute of RAS, 2020

© Program Systems: Theory and Applications (design), 2020

[Figure 1 compares, for the known data with lost intermediate values, the graphs of the original function and of the polynomial, monotonicity-preserving, convexity-preserving, and rational interpolants on the nodes -2, 0, 2, 4, 6, 8, 10.]

Figure 1. Problem areas of the graphs of various interpolants

Problem 1. Given the set $y = f|_X$ of values of an unknown function $f\colon \mathcal{X} \to \mathcal{Y}$ on a finite subset $X \subset \mathcal{X}$.

Assume that the values convey the character of the local behavior of the function on $\mathcal{X}$. Reconstruct $f$ qualitatively without involving additional information.

The incompleteness of the modern theory of interpolation within this problem statement is highlighted by the simple task of recovering lost intermediate data. Figure 1 illustrates the inapplicability of well-known interpolation methods to a problem with an intuitively obvious solution.

The polynomial of minimal degree spuriously oscillates at the edges. The oscillations unacceptably overestimate the value at node 0. For local polynomial splines, the effect is weaker but persists.

For a natural polynomial spline, the effect can be suppressed by the right choice of the boundary condition. Still, the valley between the two hills is wavy for all polynomial splines, except those that preserve monotonicity or convexity.

The loss of monotonicity and convexity during polynomial interpolation has caused a stream of studies on shape-preserving splines, see for example [2,5-7].

Monotonicity preservation is effective only when the extrema are located precisely at the nodes. This condition forces the spline, which must neither increase nor decrease there, to be constant on the segment [4,6], and so it sharply underestimates the value at node 5.

Convexity preservation is useful only when all inflection points are precisely at the nodes. Here this condition requires linear interpolation on [2,4] and on [6,8] (where convexity and concavity join), and the linearity overestimates the values at points 3 and 7. Any linear portion of the function graph in the figure generates sharp bends (not shown) at its bottom or at both ends.

Rational interpolation is applicable in the absence of discontinuities. A continuous rational interpolant of minimal degree may not exist, and the choice of a rational interpolant of excess degree can be ambiguous.

Note that neither the values of extrema or inflection points nor any other information about the function is assumed to be available. The figure shows that the known approaches to interpolation cannot reproduce the function's easily visible features, such as segments of monotonic increase or decrease. This inability prevents an acceptable recovery of lost values. When the data being recovered has no periodic repetitions, there is no known effective interpolation alternative for this task [8].

A generally accepted solution of the interpolation problem, minimizing the error energy in the space of square-summable functions, is presented, for example, in [1]. There the optimal solution is implemented by a linear operator, which gives a polynomial B-spline with unwanted edge effects.

Similar formulations of the interpolation problem (see, for example, [9]) differ in the choice of the objective functional (norm) and arrive at completely different optimal solutions for the same given values. In practical formulations of the interpolation problem, grounds for a clear choice of the objective functional are not always evident. In such situations there are many solutions, and the choice of the correct one is undefined. As a result, with few nodes, careful use of drawing patterns today often guarantees a better graph than any available algorithm. Hence, more accurate algorithms are possible.

Note that simplicity of expression is characteristic of fundamental natural-scientific laws, and the graphs of simple elementary functions look flawless. Probably the best way to evaluate the quality of interpolation algorithms is to check the reconstruction errors on the most easily computed expressions and the exact reconstruction of the simplest of them. The quality of interpolation is uniquely characterized by the set of precisely reconstructed functions.

Unpredictable poles of the simplest rational interpolants on the interpolation segment are the main obstacle to exact interpolation of the simplest rational functions. A special kind of interpolant (for example, [2,3]) or interpolation with fixed poles (for example, [4]) avoids the poles easily. The cost is a heavy loss of precision in reconstructing the simplest continuous rational functions.

What is required is an exact reconstruction of just simple rational functions on segments of continuity.

R. Schaback [10] was the first to understand and implement this. He constructed a spline that interpolates monotonic sets of values by a rational function with a linear denominator and non-monotonic ones by a polynomial, and showed plots of interpolation of the exponential and of probability density functions that effectively illustrate shape retention and manifold precision improvement.

Schaback's work did not become widespread due to an unsuccessful selection criterion: the polynomial interpolant was used only in the absence of monotonicity. For example, when restoring the function y = x² from its values at positive points, fractions with non-degenerate denominators were used instead of quadratic polynomials.

The purpose of this article is to construct a continuous interpolation that locally restores exactly both simple polynomials and simple continuous rational functions, and uses the locally closest such function to interpolate any data.

First, consider the more general task of finding a common efficient way to combine competing local interpolation procedures into a global one.

Drawing patterns (French curve sets, see Figure 2) have long been naturally associated with graphs of simple expressions for functions that may include numeric parameters. This association is mentioned in some early works on interpolation [11,13] and also in modern ones on anti-aliasing [12].

There is a non-trivial technique of using drawing patterns for plotting graphs by points that is not reflected in the computer literature. The pattern and its position are selected so that several given points, named drawn points (falling on the drawn area), fall precisely on the curve, and several others, named refining points (close to the drawn ones), are expected to lie as close as possible to the curve. This technique combines the selection of the pattern with the selection of drawn and refining points to ensure a perfect fit. A deep difference from the techniques previously implemented in interpolation algorithms is that the set of used nodes is sometimes selected from the better side only.

In Figure 2 the position of the pattern is selected to draw the section of the curve between the points B and C. We see that B is the inflection point of the curve being drawn, and therefore using the point A instead of D would give a significantly worse result, convex upward. Orienting this pattern to both A and D with equal weight can also degrade the result.

The example clarifies that, when building interpolants between the nearest nodes, it may be preferable to use additional nodes on only one side of the area being drawn. When working with drawing patterns, this situation often arises when one node is close to an inflection point or to a singular point of an algebraic function.

This technique poses two new independent problems:

(1) Calculate the parameters of the interpolants that minimize the errors at the refining nodes given exact values at the drawn nodes.

(2) Model the relation between the local interpolation error, the choice of nodes, and the errors at the refining nodes.

The first problem is solved below for the interpolant classes used in [10].

To solve the second problem, a model is constructed based on an estimate of the interpolation error of the minimal-degree polynomial. The estimate, in the form of a product of distances to the nodes, makes it possible to formalize the described technique in general terms of abstract metric spaces.

When convex combinations of functions of the interpolant class are defined, the interpolation operator can be made continuous and, as expected, more accurate. Combinations are calculated with weights inverse to the error. A special conflict zone of exact interpolants is singled out by setting a threshold value of the error, below which the interpolants are considered approximately equivalent. This excludes division by zero.

A two-stage sampling algorithm, focused on the construction of smooth interpolants and a reasonable damping of the jump in the next derivative, completes the article. For a long time, at the first stage, short sections of the graph (germs) were outlined, by a human or a program, in the vicinity of the given values, and then the germs were connected independently on each segment. This two-stage scheme was inherited from the algorithms of [13] half a century ago. The calculated values of the derivatives at the nodes complete the germs.

1. Formalization of competitive pattern selection

The described technique of using patterns is based only on minimizing distances. This allows it to be formulated in general terms of metric spaces.

For arbitrary metric spaces $(\mathcal{X}, \rho_{\mathcal{X}})$ and $(\mathcal{Y}, \rho_{\mathcal{Y}})$, consider the generic problem of interpolation over a finite set of nodes $X \subset \mathcal{X}$ with given values $y(x)$, $x \in X$. Suppose that only a finite set of available competing local interpolants $\varphi_j$ is known, given in closed subdomains $M_j \subset \mathcal{X}$, $j \in J$, and the problem is to optimally choose an interpolant for a particular point $x_c \in M_c$. This point can, for example, be the center of the set $M_c$ — a particular simplex or cell into which the space is divided; the case $M_c = \{x_c\}$ is also allowed. The problem is to choose the optimal local interpolant for $M_c$.

An appropriate choice requires modeling the unknown relation between the interpolation error and the sets of nodes.

1.1. Dependence of the interpolation error on the location of nodes

A simple formula appears for one-dimensional polynomial interpolation:

Theorem 1. The magnitude of one-dimensional polynomial interpolation error in the simplest case is proportional to the product of the distances to the interpolation nodes.

Proof. Let interpolation by $k$ nodes be considered. Then polynomials of degree below $k$ are interpolated exactly, and it suffices to consider the interpolation error for a polynomial of degree $k$. This error is a polynomial of degree $k$ that vanishes at the interpolation nodes. Therefore, it has the form $c_k \prod_{i=1}^{k} (x - x_i)$. □
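As an illustration (a minimal numeric sketch added here, assuming numpy; not part of the original text): interpolating the monic polynomial $x^k$ by a polynomial of degree $k-1$ over $k$ nodes gives exactly the error $\prod_{i=1}^{k}(x - x_i)$.

    # Numeric check of Theorem 1: the error of degree-(k-1) interpolation
    # of the monic polynomial x**k equals prod(x - x_i) (here c_k = 1).
    import numpy as np

    nodes = np.array([0.0, 1.0, 2.5, 4.0])        # k = 4 interpolation nodes
    k = len(nodes)
    f = lambda x: x**k                            # degree-k test polynomial

    coef = np.polyfit(nodes, f(nodes), k - 1)     # exact degree-(k-1) interpolant

    x = np.linspace(-1.0, 5.0, 7)
    err = f(x) - np.polyval(coef, x)
    model = np.prod([x - xi for xi in nodes], axis=0)
    print(np.allclose(err, model))                # True up to rounding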

Unfortunately, no simple convincing analogs of this theorem are foreseen for other formulations of interpolation problems. In the absence of alternatives, we can only rely on this simple formula heuristically.

1.2. Simple objective function

Let us exclude from consideration the unsuitable local interpolants, those for which $M_c \not\subset M_j$ or $\varphi_j(x) \ne y(x)$ for some node $x \in X \cap M_c$. If there are no candidates left, then the problem has no solution.

Denote $X_j = \{x \in X \cap M_j : \varphi_j(x) = y(x)\}$ and $\bar{X}_j = X \cap M_j \setminus X_j$. Define the degree $d(\varphi_j)$ of the interpolant $\varphi_j$ in terms of the cardinality of the set of coinciding values by the formula

$$d(\varphi_j) = \max_{B \subset M_j,\ B \cap \bar{X}_j = \varnothing} |B \cap X_j|.$$

We leave only the interpolants with the highest degree.

If the set of refining nodes $X_r$ consists of a single node $x$, then, in accordance with Theorem 1, the objective function is

(1) $\quad \delta_{j,x} = \rho_{\mathcal{Y}}\big(\varphi_j(x), y(x)\big) \prod_{x_i \in X_p} \dfrac{\rho_{\mathcal{X}}(x_c, x_i)}{\rho_{\mathcal{X}}(x, x_i)},$

where $X_p = B \cap X_j$ is the set of drawn nodes.

If $x_c$ coincides with one of the nodes, then the zero factor in the numerator should be excluded, since in this case we are speaking not about the node itself but about nearby points at a constant small distance.

If there are several nodes in $X_r$, then it is reasonable to consider their contributions independent and apply an unbiased error estimate:

(2) $\quad \delta_j = \dfrac{1}{|X_r|} \sum_{x \in X_r} \delta_{j,x}.$

Thus, the optimal local interpolants are calculated by simple enumeration using the objective formula (2).

If the probability of zero error is vanishingly small, then such an algorithm guarantees locally exact reconstruction of the interpolant from the used family in the absence of conflicts between them.
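To make the selection rule concrete, here is a minimal one-dimensional sketch of formulas (1) and (2) (an illustration with our own names; plain absolute values serve as both metrics, and coincidences of the arguments with drawn nodes are not guarded against):

    import math

    def delta_jx(phi, x, y_x, x_c, drawn):
        """Objective (1): the error of phi at a refining node x, rescaled to
        the evaluation point x_c by the ratio of distance products."""
        scale = math.prod(abs(x_c - xi) / abs(x - xi) for xi in drawn)
        return abs(phi(x) - y_x) * scale

    def delta_j(phi, refining, y, x_c, drawn):
        """Objective (2): the average of (1) over all refining nodes."""
        return sum(delta_jx(phi, x, y[x], x_c, drawn) for x in refining) / len(refining)

    def best_interpolant(interpolants, refining, y, x_c, drawn):
        # choose the optimal local interpolant by simple enumeration
        return min(interpolants, key=lambda phi: delta_j(phi, refining, y, x_c, drawn))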

1.3. Refinement for linear space

A finite set of interpolants forces unintended essential differences between possible interpolants. If, for example, there are approximately equivalent polynomial and rational interpolants, then the choice between them is not sufficiently justified and, by analogy with independent measurements, averaging may look like a more reasonable alternative. The possibility of such averaging appears when all $\varphi_j(x)$ lie in an ambient linear space.

Let $\varepsilon > 0$ be a positive number smaller than the error of the initial interpolation data. If only one of the interpolants is accurate enough, then the result should be approximately that one. These considerations can be combined by averaging with the weights $w_j = 1/(\delta_j + \varepsilon)$, inversely proportional to the errors at the refining nodes.

If all local interpolants have significant errors, then the weighted sum can be used as the value of the local interpolant:

(3) $\quad \varphi(x) = \dfrac{\sum_j w_j\,\varphi_j(x)}{\sum_j w_j}.$

The only accurate interpolant will be restored exactly.
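A minimal sketch of the weighted combination (3) with the threshold $\varepsilon$ (illustrative names, not the author's code):

    def blend(interpolants, errors, x, eps=1e-12):
        # w_j = 1/(delta_j + eps); eps guards against division by zero and
        # marks near-exact interpolants as approximately equivalent
        weights = [1.0 / (err + eps) for err in errors]
        total = sum(weights)
        return sum(w * phi(x) for w, phi in zip(weights, interpolants)) / total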

In the one-dimensional case, all linear functions are usually included in the set of basic interpolants, so any piecewise linear function with corner points at sufficiently sparse nodes will be reconstructed almost exactly.

Among the well-known one-dimensional interpolation algorithms, only linear interpolation has such a useful quality.

It should be noted that the described choice of the best interpolant defined in Mc is aimed exclusively at minimizing the error near xc.

2. Polynomial-rational interpolation


We denote:

$P_d$ — the set of all polynomials of one variable of degree at most $d$ on the segment;

$R^{d_{nom},\,d_q}$ — the set of all fractions with a polynomial of degree at most $d_{nom}$ in the numerator and of degree at most $d_q$ in the denominator; here $R^{d_{nom},\,d_q}_{[a,b]}$ is the subset consisting of all rational functions continuous on $[a,b]$ with the specified limitations on the degrees.

In the special case of one-dimensional rational interpolants, a weighted standard deviation is traditionally used. The weight function in this case is the modulus of the denominator $q(x)$ of the fraction $f(x)$, normalized by a unit leading coefficient:

(4) $\quad BR^{d_{nom},\,d_q}(X_p, X_r, y, f) = \displaystyle\sum_{x \in X_r} q^2(x)\,\big(f(x) - y(x)\big)^2.$

This linearizes the problem and makes it possible to dramatically simplify the calculation of the optimal interpolant.
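For instance, since $q^2 (f - y)^2 = (f_{num} - y\,q)^2$ for $f = f_{num}/q$, the estimate (4) can be computed without any division; a minimal sketch (our illustrative names):

    def br_estimate(f_num, q, refining, y):
        """Linearized estimate (4): sum of q(x)^2 (f(x) - y(x))^2 over the
        refining nodes, with f = f_num/q given by its numerator and denominator."""
        return sum((f_num(x) - y[x] * q(x)) ** 2 for x in refining)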

Let further $S = (R_M^{d_{nom},\,d_q}, X_p, X_r, y)$ be a local formulation of the interpolation problem on the segment $M \subset \mathcal{X} = [a, b] \subset \mathbb{R}$, with $d_{nom} + d_q = d$. Denote $d_p = |X_p| - 1$, $d_s = d_{nom} - d_p - 1$, $N_p = \{0, \dots, d_p\}$, $N_r = \{0, \dots, d_r\}$, $N_s = \{0, \dots, d_s\}$ and $N_{q'} = \{0, \dots, d_q - 1\}$. For simplicity, we restrict ourselves to the case $d_p + d_q \le d_{nom}$.

Theorem 2. Let $d_p \le d_{nom} - d_q$. Then the optimal solution $f = C(S)$ under the estimate (4) has the form

(5) $\quad f(x) = p(x) + \dfrac{\omega(x)\,s(x)}{q(x)},$

where $p$ is the polynomial of degree $d_p$ that interpolates over the nodes $X_p$, the polynomial $\omega(x) = \prod_{x_i \in X_p} (x - x_i)$, and the coefficients of the polynomials $s(x) = \sum_{i \in N_s} s_i x^i$ of degree $d_s$ and $q(x) = \sum_{i \in N_q} q_i x^i$ of degree $d_q$, normalized by $q_{d_q} = 1$, are determined by the system of linear equations

$\displaystyle \sum_{i \in N_{q'}} q_i \sum_{x \in X_r} (p(x) - y(x))\,\omega(x)\,x^{i+l} + \sum_{i \in N_s} s_i \sum_{x \in X_r} \omega^2(x)\,x^{i+l} = -\sum_{x \in X_r} (p(x) - y(x))\,\omega(x)\,x^{d_q+l}, \quad l \in N_s,$

$\displaystyle \sum_{i \in N_{q'}} q_i \sum_{x \in X_r} (p(x) - y(x))^2\,x^{i+l} + \sum_{i \in N_s} s_i \sum_{x \in X_r} (p(x) - y(x))\,\omega(x)\,x^{i+l} = -\sum_{x \in X_r} (p(x) - y(x))^2\,x^{d_q+l}, \quad l \in N_{q'}$

(the right-hand sides collect the terms with the fixed coefficient $q_{d_q} = 1$).

Proof. For the function (5) from the statement of the theorem, the right-hand side of (4) takes the form

(6) $\quad \displaystyle\sum_{x \in X_r} \big((p(x) - y(x))\,q(x) + \omega(x)\,s(x)\big)^2.$

Equating the derivatives of this expression with respect to the coefficients of $s$ and of $q$ to zero and canceling the common factors, we obtain a system of linear equations for the coefficients:

$0 = \displaystyle\sum_{x \in X_r} \big((p(x) - y(x))\,q(x) + \omega(x)\,s(x)\big)\,\omega(x)\,x^l, \quad l \in N_s,$

$0 = \displaystyle\sum_{x \in X_r} \big((p(x) - y(x))\,q(x) + \omega(x)\,s(x)\big)\,(p(x) - y(x))\,x^l, \quad l \in N_{q'}.$

Substituting the representations of $s$ and $q$, we obtain from this the system stated in the theorem. The existence and uniqueness of the solution are determined by the correctness of the statement of the problem $S$. □

Corollary 1. Let $S = (P_M, X_p, X_r, y)$ be a local statement of the interpolation problem. Then the optimal solution $\tilde{p} = C(S)$ under the estimate (4) has the form

$\tilde{p}(x) = p(x) + \omega(x)\,s(x),$

where $p$ is the interpolation polynomial over the nodes $X_p$, and the coefficients of the polynomial $s(x) = \sum_{i \in N_r} s_i x^i$ are determined by the system of linear equations

(7) $\quad \displaystyle\sum_{i \in N_r} s_i \sum_{x \in X_r} \omega^2(x)\,x^{i+j} = \sum_{x \in X_r} (y(x) - p(x))\,\omega(x)\,x^j, \quad j \in N_r.$
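A minimal numerical sketch of Corollary 1, assuming numpy and treating `p`, `omega` and `y` as vectorized callables (illustrative names, not the author's implementation):

    import numpy as np

    def correction_coeffs(p, omega, Xr, y, deg_s):
        # Assemble and solve system (7):
        #   sum_i s_i * sum_x w(x)^2 x^(i+j) = sum_x (y(x) - p(x)) w(x) x^j
        Xr = np.asarray(Xr, dtype=float)
        w = omega(Xr)                        # omega(x) = prod_{x_i in X_p} (x - x_i)
        A = np.array([[np.sum(w * w * Xr ** (i + j)) for i in range(deg_s + 1)]
                      for j in range(deg_s + 1)])
        b = np.array([np.sum((y(Xr) - p(Xr)) * w * Xr ** j)
                      for j in range(deg_s + 1)])
        return np.linalg.solve(A, b)         # coefficients s_0, ..., s_{d_s}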

3. Two-stage polynomial-rational interpolation

Let, in the notation of the previous section, there be exactly two nodes $x_1 < x_2$, but in addition to the values of the interpolated function, the values of its derivatives up to order $l_m > 0$ inclusive be given at each node $x_m$, $m = 1, 2$. The highest derivative specified at each node is assumed to be an approximation, while the remaining $l_m - 1$ give exact values for interpolation. Consider an arbitrary auxiliary polynomial $f$ whose values and derivatives at the nodes $x_1$ and $x_2$ coincide with the given ones, and denote

$\omega(x) = (x - x_1)^{l_1} (x - x_2)^{l_2}.$

Then the linearizing estimate (4) of the function (5) is replaced by the estimate

(8) $\quad BR^{d_{nom},\,1}(x_1, x_2, y, f) = \displaystyle\sum_{m=1}^{2} \left( \frac{d^{l_m}}{dx^{l_m}} \Big( (p(x) - f(x))\,q(x) + \omega(x)\,s(x) \Big) \Big|_{x = x_m} \right)^2$

in the values of the higher derivatives of $y$ under the condition

(9) $\quad 0 = \dfrac{d^{l}}{dx^{l}} \Big( (p(x) - f(x))\,q(x) + \omega(x)\,s(x) \Big) \Big|_{x = x_m} \quad \forall\, l < l_m, \ m = 1, 2,$

which, by virtue of the Leibniz formula for differentiating a product, is equivalent to the requirement

$f^{(l)}(x_m) = y^{(l)}(x_m) \quad \forall\, l < l_m, \ m = 1, 2,$

and convenient for calculations:

(10) $\quad p^{(l)}(x_m) = y^{(l)}(x_m) \quad \forall\, l < l_m, \ m = 1, 2.$

Theorem 3. Under the assumptions under consideration, for $d_q = 0, 1$, the optimal solution $f = C(S)$ under the estimate (8) has the form

$f(x) = p(x) + \omega(x)\,r(x),$

where $p$ is the Hermite polynomial in the exactly given values of the function and of the lower derivatives,

$r(x) = \begin{cases} s_0 + s_1 x, & d_q = 0, \\ \dfrac{s_0}{q_0 + x}, & d_q = 1; \end{cases} \qquad s_0 = \begin{cases} \dfrac{B_x C_x - B_{xx} C}{B_{xx} B - B_x^2}, & d_q = 0, \\ \dfrac{A_x C - A C_x}{A B - C^2}, & d_q = 1; \end{cases}$

$s_1 = \dfrac{B_x C - B C_x}{B_{xx} B - B_x^2}, \qquad q_0 = \dfrac{C C_x - A_x B}{A B - C^2};$

$A = \delta_1^2 + \delta_2^2, \qquad A_x = \delta_1^2 x_1 + \delta_2^2 x_2,$

$B = b_1^2 + b_2^2, \qquad B_x = b_1^2 x_1 + b_2^2 x_2, \qquad B_{xx} = b_1^2 x_1^2 + b_2^2 x_2^2,$

$C = \delta_1 b_1 + \delta_2 b_2, \qquad C_x = \delta_1 b_1 x_1 + \delta_2 b_2 x_2;$

$\delta_m = p^{(l_m)}(x_m) - y^{(l_m)}(x_m), \qquad b_m = l_m!\,(x_m - x_{\bar m})^{l_{\bar m}}, \qquad \bar m = 3 - m, \quad m = 1, 2.$

Proof. Expanding the right-hand side of (8), taking into account the vanishing of the lower derivatives of the function $\omega(x)$ and the identity (10), we reduce it to the form

$\displaystyle\sum_{m=1}^{2} \Big( \big(p^{(l_m)}(x_m) - y^{(l_m)}(x_m)\big)\,q(x_m) + \omega^{(l_m)}(x_m)\,s(x_m) \Big)^2.$

Taking into account the equality $\omega^{(l_m)}(x_m) = l_m!\,(x_m - x_{\bar m})^{l_{\bar m}} = b_m$, substituting the expressions for $q$ and for $s$, differentiating the obtained expression with respect to the unknown coefficients and equating the derivatives to zero, we obtain a system of equations for these coefficients. For $d_q = 0$, by definition $q(x) \equiv 1$, and this system has the form

$0 = (\delta_1 + b_1(s_0 + s_1 x_1))\,b_1 + (\delta_2 + b_2(s_0 + s_1 x_2))\,b_2,$
$0 = (\delta_1 + b_1(s_0 + s_1 x_1))\,b_1 x_1 + (\delta_2 + b_2(s_0 + s_1 x_2))\,b_2 x_2,$

that is, a linear system in $s_0$ and $s_1$ of the form

(11) $\quad 0 = C + B s_0 + B_x s_1, \qquad 0 = C_x + B_x s_0 + B_{xx} s_1.$

For $d_q = 1$, the function $s = s_0$ is constant, $q_1 = 1$, and the system has the form

$0 = (\delta_1(q_0 + x_1) + b_1 s_0)\,\delta_1 + (\delta_2(q_0 + x_2) + b_2 s_0)\,\delta_2,$
$0 = (\delta_1(q_0 + x_1) + b_1 s_0)\,b_1 + (\delta_2(q_0 + x_2) + b_2 s_0)\,b_2,$

or

$0 = A_x + A q_0 + C s_0, \qquad 0 = C_x + C q_0 + B s_0.$ □
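The closed forms of Theorem 3 are easy to package; the following minimal sketch (our names; the signs follow the reconstructed system (11) and its $d_q = 1$ analog) returns $(s_0, s_1)$ for $d_q = 0$ and $(s_0, q_0)$ for $d_q = 1$:

    def germ_correction(d1, d2, b1, b2, x1, x2, dq):
        # delta_m = p^(l_m)(x_m) - y^(l_m)(x_m); b_m = l_m! (x_m - x_{3-m})^{l_{3-m}}
        A,  Ax  = d1*d1 + d2*d2,      d1*d1*x1 + d2*d2*x2
        B,  Bx  = b1*b1 + b2*b2,      b1*b1*x1 + b2*b2*x2
        Bxx     = b1*b1*x1*x1 + b2*b2*x2*x2
        C,  Cx  = d1*b1 + d2*b2,      d1*b1*x1 + d2*b2*x2
        if dq == 0:                   # correction r(x) = s0 + s1*x
            det = Bxx*B - Bx*Bx
            return (Bx*Cx - Bxx*C) / det, (Bx*C - B*Cx) / det
        det = A*B - C*C               # dq == 1: correction r(x) = s0/(q0 + x)
        return (Ax*C - A*Cx) / det, (C*Cx - Ax*B) / det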

4. Sketch of the polynomial-rational interpolation algorithm

Polynomial-rational interpolation algorithm

Purpose of the algorithm

The algorithm approximately reconstructs an unknown mapping $f\colon \mathcal{X} \to \mathbb{R}$.

If $t_{out} = 0$, then on each $M_i$ the simple function closest to the specified values is chosen as the interpolant: either a polynomial of degree $d$, or a continuous rational function with a numerator of degree $d - 1$ and a linear ($d_q = 1$) denominator of the form $x - c$, so as to ensure

(1) matching of the values at the nodes, with smoothness $m$ in case $m > 0$;
(2) exact local recovery of each continuous function of the interpolant class on the convex hull of any $d + 2$ nodes, if the data allows.

For $t_{out} > 0$, the output family is one, but the optimization uses both families of local interpolants.


Initial data

$\mathcal{X} = [a, b] \subset \mathbb{R}$ — the interpolation segment;
$X = \{a = x_0 < \dots < x_n = b\} \subset \mathcal{X}$ — a discrete set of nodes dividing $[a, b]$ into $n$ segments $M_i = [x_{i-1}, x_i]$;
$y\colon X \to \mathbb{R}$ — the values specified at the nodes;
$d > 1$ — the degree of interpolation;
$t_{out}$ — the type of interpolant at the output: $t_{out} = 0$ — polynomial and rational, $t_{out} = 1$ — only polynomials, $t_{out} = 2$ — only rational (for $d < 4$ this entails preserving monotonicity);
$\alpha$ — the smoothing switch: $\alpha = 0$ if exact restoration of broken lines is required, otherwise the smoothness $\max(0, \lfloor (d-1)/2 \rfloor)$ is maintained;
$\varepsilon > 0$ — a lower error estimate.

When calculating the germ at a node, the values at this node and at no more than $d + 1$ nearest nodes on each side are used. Local calculations use short formulas based on divided differences, avoiding the need for high-precision arithmetic.

Divided differences and Newton's polynomials

    For ( l = 0, ..., d ) { For ( i = 0, ..., n - l ) {
        If ( l == 0 ) Then { K_{i,i} := y_i }
        Else { K_{i,i+l} := (K_{i,i+l-1} - K_{i+1,i+l}) / (x_i - x_{i+l}) } } }

    Function w_{i,j}(x) := Return ( prod_{l=i..j} (x - x_l) )

    Function p_{i,j}(x) := {                  // the Newton polynomial on [x_i, x_j]
        If ( i == j ) Then { Return ( y_i ) }
        Else { Return ( p_{i,j-1}(x) + K_{i,j} * w_{i,j-1}(x) ) } }

    // sequential enumeration of nodes is easy to parallelize; the locality radius is d + 1

Stage 1. Calculate derivatives at all the nodes

Differentiation of Newton's polynomial p_{l,j} at x_i:

    For ( v = 0, ..., m ) {                   // derivative order at the node x_i in [x_l, x_j], j - l <= d
        For ( l = max(0, i - d), ..., min(n, i + d) ) {
            If ( v == 0 ) Then { p^(0)_{l,l}(x_i) := y(x_l) } Else { p^(v)_{l,l}(x_i) := 0 }
            For ( j = l + 1, ..., min(n, l + d) ) {
                w^(v)_{l,j}(x_i) := w^(v)_{l,j-1}(x_i) * (x_i - x_j) + v * w^(v-1)_{l,j-1}(x_i);
                p^(v)_{l,j}(x_i) := p^(v)_{l,j-1}(x_i) + K_{l,j} * w^(v)_{l,j-1}(x_i) } } }
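A minimal Python sketch of these recurrences (our names; it returns the derivatives of a single Newton polynomial $p_{l,j}$ at a point $t$):

    def derivatives_at(K, xs, l, j, t, m):
        """Return [p_{l,j}(t), p'_{l,j}(t), ..., p^(m)_{l,j}(t)] using
        w^(v)_{l,j} = w^(v)_{l,j-1}*(t - x_j) + v*w^(v-1)_{l,j-1} and
        p^(v)_{l,j} = p^(v)_{l,j-1} + K[l,j]*w^(v)_{l,j-1}."""
        omega = [1.0] + [0.0] * m        # derivatives of the empty product
        p = [K[l, l]] + [0.0] * m
        for jj in range(l + 1, j + 1):
            new_omega = [0.0] * (m + 1)
            for v in range(m + 1):
                new_omega[v] = omega[v] * (t - xs[jj - 1]) + (v * omega[v - 1] if v else 0.0)
            # p is updated BEFORE omega absorbs the new node's factor:
            for v in range(m + 1):
                p[v] += K[l, jj] * new_omega[v]
            omega = new_omega
        return p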

For a competitive definition of the germ at $x_i$ through a weighted sum of germs of trial interpolants over successive sets of nodes containing $x_i$, we initialize

    D^v_± := 0;   // the one-sided derivatives being defined at the node x_i
    n_± := 0;     // the number of terms in the weighted sum, or minus the number of error-free trial interpolants
    W_± := 0;     // the total weight of the one-sided derivative, or minus the number of error-free terms

Loop over the trial segments [x_j, x_{j+d+1}]

    For ( j = max(0, i - d - 1), ..., min(n - d - 1, i) ) {

Determine the degree d_p of the polynomial p in (5), the number j_p of the beginning of the drawn segment, and the number j_r of the refining (far) node:

        If ( |x_j - x_i| == |x_{j+d+1} - x_i| ) Then { d_p := d - 1 } Else { d_p := d }
        If ( |x_j - x_i| < |x_{j+d+1} - x_i| ) Then { j_p := j; j_r := j + d + 1 }
        Else { j_p := j + 1; j_r := j }
        xi_r := x_{j_r};                              // the refining node
        Sign := sign(2 j_p + d_p + 1 - 2i);           // sign of the offset of the node set relative to x_i

Weighted sums of one-sided derivatives of trial interpolants

Let us start with trial interpolants of the form p(x) + s(x)w(x).

        If ( d == d_p )               // if the node farthest from x_i in the set is the refining one,
        Then {                        // then we use Newton's interpolant over the other nodes, i.e. s(x) = 0:
            err := | y(xi_r) - p_{j_p,j_p+d}(xi_r) | }
        Else {                        // otherwise the two far nodes are equidistant and the optimality
                                      // condition (7) takes the form of the system
                                      //     A_0 s_0 + B_0 s_1 = C_0,  A_1 s_0 + B_1 s_1 = C_1,  where j_p = j + 1,
            A_0 := w^2_{j_p,j+d}(x_j) + w^2_{j_p,j+d}(xi_r);
            A_1 := w^2_{j_p,j+d}(x_j) x_j + w^2_{j_p,j+d}(xi_r) xi_r;   B_0 := A_1;
            B_1 := w^2_{j_p,j+d}(x_j) x_j^2 + w^2_{j_p,j+d}(xi_r) xi_r^2;
            C_0 := (y(x_j) - p_{j_p,j+d}(x_j)) w_{j_p,j+d}(x_j) + (y(xi_r) - p_{j_p,j+d}(xi_r)) w_{j_p,j+d}(xi_r);
            C_1 := (y(x_j) - p_{j_p,j+d}(x_j)) w_{j_p,j+d}(x_j) x_j + (y(xi_r) - p_{j_p,j+d}(xi_r)) w_{j_p,j+d}(xi_r) xi_r;
            s_0 := (B_1 C_0 - B_0 C_1)/(A_0 B_1 - A_1 B_0);   s_1 := (A_0 C_1 - A_1 C_0)/(A_0 B_1 - A_1 B_0);
            err := sqrt( sum_{x in {x_j, xi_r}} ( y(x) - p_{j_p,j+d}(x) - w_{j_p,j+d}(x)(s_1 x + s_0) )^2 ) }
        If ( err > eps ) Then { w := 1/err } Else { w := -1 }

Averaging the derivatives of trial polynomial interpolants

        For ( v = 1, ..., m ) {        // the order of the weight-averaged derivative
            If ( d == d_p ) Then { D^v := p^(v)_{j_p,j_p+d}(x_i) }
            Else { D^v := p^(v)_{j_p,j+d}(x_i) + (s_0 + s_1 x_i) w^(v)_{j_p,j+d}(x_i) + v s_1 w^(v-1)_{j_p,j+d}(x_i) }
            If ( Sign <= 0 ) Then {    // the weighted sum for the derivative on the left
                If ( n_- >= 0 ) Then {
                    If ( w > 0 ) Then { D^v_- += w D^v; W_- += w; n_- ++ }
                    Else { D^v_- := D^v; W_- := 1; n_- := -1 } }
                Else { If ( w < 0 ) Then { D^v_- += D^v; W_- += 1; n_- -= 1 } } }
            If ( Sign >= 0 ) Then {    // the weighted sum for the derivative on the right
                If ( n_+ >= 0 ) Then {
                    If ( w > 0 ) Then { D^v_+ += w D^v; W_+ += w; n_+ ++ }
                    Else { D^v_+ := D^v; W_+ := 1; n_+ := -1 } }
                Else { If ( w < 0 ) Then { D^v_+ += D^v; W_+ += 1; n_+ -= 1 } } } }

A trial rational interpolant of the form (5):

    p(x) + s_0 w(x)/(x - c),   d_q = 1,  q_0 = -c,  q_1 = 1.

        If ( d == d_p ) Then {        // pure interpolation over the d + 1 drawn nodes:
            // from p(x) + s_0 w(x)/(x - c) = y(x) for x in {x_{j_p}, x_{j_p+d}},
            // where p := p_{j_p+1,j_p+d-1} and w := w_{j_p+1,j_p+d-1},
            // the coefficients are determined by the linear system
            //     w' s_0 + d' c = x_{j_p} d',    w'' s_0 + d'' c = x_{j_p+d} d'',    where
            w'  := w(x_{j_p});     d'  := y(x_{j_p}) - p(x_{j_p});
            w'' := w(x_{j_p+d});   d'' := y(x_{j_p+d}) - p(x_{j_p+d});
            s_0 := d' d'' (x_{j_p} - x_{j_p+d}) / (w' d'' - w'' d');
            c := (w' x_{j_p+d} d'' - w'' x_{j_p} d') / (w' d'' - w'' d');
            err := | y(xi_r) - p(xi_r) - s_0 w(xi_r)/(xi_r - c) | }
        Else {                        // otherwise the far nodes are equidistant, and the optimality
                                      // condition for interpolants of the form (5) from Theorem 2
                                      // is written as the system
                                      //     A_0 s_0 + B_0 c = C_0,    A_1 s_0 + B_1 c = C_1,    where
            A_0 := -w^2_{j+1,j+d}(x_j) - w^2_{j+1,j+d}(xi_r);
            A_1 := -(p_{j+1,j+d}(x_j) - y(x_j)) w_{j+1,j+d}(x_j) - (p_{j+1,j+d}(xi_r) - y(xi_r)) w_{j+1,j+d}(xi_r);
            B_0 := -A_1;
            B_1 := (p_{j+1,j+d}(x_j) - y(x_j))^2 + (p_{j+1,j+d}(xi_r) - y(xi_r))^2;
            C_0 := (p_{j+1,j+d}(x_j) - y(x_j)) w_{j+1,j+d}(x_j) x_j + (p_{j+1,j+d}(xi_r) - y(xi_r)) w_{j+1,j+d}(xi_r) xi_r;
            C_1 := (p_{j+1,j+d}(x_j) - y(x_j))^2 x_j + (p_{j+1,j+d}(xi_r) - y(xi_r))^2 xi_r;
            s_0 := (B_1 C_0 - B_0 C_1)/(A_0 B_1 - A_1 B_0);   c := (A_0 C_1 - A_1 C_0)/(A_0 B_1 - A_1 B_0);
            err := sqrt( sum_{x in {x_j, xi_r}} ( y(x) - p_{j_p,j+d}(x) - s_0 w_{j+1,j+d}(x)/(x - c) )^2 ) }

        If ( c not in [x_j, x_{j+d+1}] ) Then {   // only if the pole lies outside the interpolation segment
            If ( err > eps ) Then { w := 1/err } Else { w := -1 }
            For ( v = 1, ..., m ) {   // the order of the weighted derivative
                D^v := p^(v)(x_i) - s_0 * sum_{l=0..v} (v!/(v-l)!) w^(v-l)(x_i) / (c - x_i)^{l+1},
                    // with p and w of the current trial interpolant;
                // the left and right weighted sums D^v_±, W_±, n_± are then updated
                // from D^v and w exactly as for the polynomial trials above
            } }

    }   // end of the loop over the trial segments

    D^v_+ := D^v_+ / W_+ ;   D^v_- := D^v_- / W_- ;   // the one-sided derivatives at x_i
    If ( alpha > 0 ) Then { D^v_- := (W_- D^v_- + W_+ D^v_+) / (W_- + W_+) ;   D^v_+ := D^v_- }   // smoothing
    If ( i == 0 ) Then { Next i }

Stage 2. Calculate the interpolant between the nodes

Justification: explicit strict analogs of the divided-difference formulas for multiple Hermite interpolation over two nodes of arbitrary degree were not found in the literature.

Let us derive them directly from the linear relations

$p_{(-1)}(x) = 0,$

$p_{(2v+1)}(x) = p_{(2v-1)}(x) + (x - x_i)^v (x - x_{i-1})^v \big( A_v (x - x_{i-1}) + B_v (x - x_i) \big).$

From the last,

$A_v = \dfrac{D^v_{i,-} - p^{(v)}_{(2v-1)}(x_i)}{v!\,(x_i - x_{i-1})^{v+1}} \quad\text{and}\quad B_v = \dfrac{D^v_{i-1,+} - p^{(v)}_{(2v-1)}(x_{i-1})}{v!\,(x_{i-1} - x_i)^{v+1}},$

and its differentiation gives direct formulas for the derivatives of order $j \ge v$ at the nodes $i - 1$ and $i$, into which the divided differences pass when the nodes merge:

$p^{(j)}_{(2v+1)}(x_{i-1}) = p^{(j)}_{(2v-1)}(x_{i-1}) + \dfrac{j!\,(x_{i-1} - x_i)^{2v+1-j}}{(2v+1-j)!} \left( \dfrac{v!}{(j-v-1)!}\,A_v + \dfrac{(v+1)!}{(j-v)!}\,B_v \right),$

$p^{(j)}_{(2v+1)}(x_i) = p^{(j)}_{(2v-1)}(x_i) + \dfrac{j!\,(x_i - x_{i-1})^{2v+1-j}}{(2v+1-j)!} \left( \dfrac{(v+1)!}{(j-v)!}\,A_v + \dfrac{v!}{(j-v-1)!}\,B_v \right),$

with the convention $1/(j-v-1)! = 0$ for $j = v$.

Derivatives at the nodes and coefficients of the Hermite interpolant

    p^(j)_{(-1)±} := 0;   // start of an array of size 2 x (m + 1) x (m + 1)
    For ( v = 0, ..., m ) {
        A_v := (D^v_{i,-} - p^(v)_{(2v-1)+}(x_i)) / (v! (x_i - x_{i-1})^{v+1});
        B_v := (D^v_{i-1,+} - p^(v)_{(2v-1)-}(x_{i-1})) / (v! (x_{i-1} - x_i)^{v+1});
        If ( v < m ) Then {
            For ( j = v + 1, ..., min(m, 2v + 1) ) {   // update by the direct formulas above
                p^(j)_{(2v+1)-} := p^(j)_{(2v-1)-}
                    + (j! (x_{i-1} - x_i)^{2v+1-j}/(2v+1-j)!) ( v!/(j-v-1)! A_v + (v+1)!/(j-v)! B_v );
                p^(j)_{(2v+1)+} := p^(j)_{(2v-1)+}
                    + (j! (x_i - x_{i-1})^{2v+1-j}/(2v+1-j)!) ( (v+1)!/(j-v)! A_v + v!/(j-v-1)! B_v ) } } }

Selection of the local interpolant phi_i on the segment [x_{i-1}, x_i]

    If ( d mod 2 == 1 ) Then {   // if d = 2l + 1 is odd, then one of two Hermite interpolants is built:

        If ( t_out != 2 ) Then {   // either the polynomial interpolant — the local Hermite polynomial p_{(2l+1)},
                                   // calculated by the analog of Newton's formula for Hermite polynomials:
            phi_{(p)i}(x) := p_{(2l+1)}(x) = sum_{v=0..l} (x - x_i)^v (x - x_{i-1})^v (A_v (x - x_{i-1}) + B_v (x - x_i));
            If ( t_out == 1 ) Then { phi_i := phi_{(p)i} }
            Else { delta_{(p)i} := sqrt( sum_{j in {i-1,i}} | phi^(l)_{(p)i}(x_j) - D^l_j |^2 ) } }

        If ( t_out != 1 ) Then {   // or a rational interpolant of the form
            phi_{(r)i}(x) := p_{(2l-1)}(x) + s_0 ((x - x_i)(x - x_{i-1}))^l / (x - c);   // calculate s_0 and c:
            // the system of equations phi^(l)_{(r)i}(x_j) = D^l_j, j in {i-1, i}, is reduced to the linear system
            //     l! (x_j - x_j')^l s_0 + d_j c = x_j d_j,   d_j := D^l_j - p^(l)_{(2l-1)}(x_j),   j' the other end;
            // solve the 2 x 2 system for s_0 and c;
            If ( t_out == 2 ) Then { phi_i := phi_{(r)i} }
            Else { delta_{(r)i} := sqrt( sum_{j in {i-1,i}} | phi^(l)_{(r)i}(x_j) - D^l_j |^2 ) } }

        If ( t_out == 0 ) Then {   // choice between the two interpolants
            If ( delta_{(r)i} > delta_{(p)i} ) Then { phi_i := phi_{(p)i} } Else { phi_i := phi_{(r)i} } } }

    Else {   // d = 2l is even, and the least-squares method works

        If ( t_out != 2 ) Then {   // the polynomial interpolant has the form
            phi_{(p)i}(x) := p_{(2l-1)}(x) + s_0 ((x - x_i)(x - x_{i-1}))^l,
            // where the coefficient s_0, minimizing the squared error sum_j (phi^(l)_{(p)i}(x_j) - D^l_j)^2 -> min,
            // is determined from the vanishing of its derivative in s_0 by the formula
            s_0 := (d_i + (-1)^l d_{i-1}) / (2 l! (x_i - x_{i-1})^l),   d_j := D^l_j - p^(l)_{(2l-1)}(x_j);
            If ( t_out == 1 ) Then { phi_i := phi_{(p)i} }
            Else { delta_{(p)i} := sqrt( sum_{j in {i-1,i}} | phi^(l)_{(p)i}(x_j) - D^l_j |^2 ) } }

        If ( t_out != 1 ) Then {   // the rational interpolant is
            phi_{(r)i}(x) := p_{2l-3}(x) + (s_0 + s_1 x) ((x - x_i)(x - x_{i-1}))^{l-1} / (x - c),
            // where s_0, s_1 and c solve a conditional extremum problem similar to minimizing (4):
            // the conditions phi^(l)_{(r)i}(x_j) = D^l_j, j in {i-1, i}, are imposed exactly,
            // and the remaining degree of freedom minimizes the linearized error;
            delta_{(r)i} := sqrt( sum_{j in {i-1,i}} | phi^(l)_{(r)i}(x_j) - D^l_j |^2 );
            If ( t_out == 2 ) Then { phi_i := phi_{(r)i} } }

        If ( t_out == 0 ) Then {   // best choice of the two interpolants
            If ( c not in [x_{i-1}, x_i] and delta_{(r)i} < delta_{(p)i} ) Then { phi_i := phi_{(r)i} }
            Else { phi_i := phi_{(p)i} } } }

4.1. On numerical testing of the algorithm

The construction of the algorithm guarantees, for $t_{out} = 0$, exact reproduction of quadratic and linear-fractional functions from a sufficient number ($n > d + 1$) of values at interpolation nodes, regardless of the values of the other parameters. This quality significantly distinguishes it from the well-known interpolation algorithms. In particular, any linear-fractional function is reproduced by this algorithm more accurately than by polynomial interpolation or by interpolation with polynomial splines.

In addition, if smoothness is not required ($\alpha = 0$), then the function $y = \sqrt{x^2} = |x|$ is reconstructed exactly from a sufficient number of given values on both the positive and the negative side, and is therefore reproduced more accurately than by rational interpolation.

The construction gives reason to expect that the quality of interpolation in the examples considered by Schaback will be as high as in the graphs presented in his work.

On the other hand, a random polynomial of degree $n$ will be reconstructed exactly by polynomial interpolation from $n + 1$ values, but not by the described algorithm.

Thus, a demonstration of selected examples does not provide meaningful information about the possibilities of using the algorithm. The ultimate way to obtain useful information is to consider a representative independent sample. In our setting, such a sample can be formed by the sets of built-in mathematical functions of common programming languages with sufficiently representative sets of meshes. The proposed continuation naturally requires a separate publication.
