

Probl. Anal. Issues Anal. Vol. 9(27), No. 3, 2020, pp. 54-65

DOI: 10.15393/j3.art.2020.8690

UDC 517.988

I. K. Argyros, S. George

COMPARISON BETWEEN SOME SIXTH CONVERGENCE ORDER SOLVERS UNDER THE SAME SET OF CRITERIA

Abstract. Different sets of criteria, based on the seventh derivative, are used for the convergence of sixth-order methods. These methods are then compared using numerical examples. But we do not know: whether the results of those comparisons remain true if the examples change; the largest radii of convergence; error estimates on the distance between the iterate and the solution; and uniqueness results that are computable. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other methods of the same order.

Key words: Banach space, sixth convergence order methods, local convergence.

2010 Mathematics Subject Classification: 65J20, 49M15, 74G20, 41A25

1. Introduction. In this study, we compare some sixth-order methods for approximating a solution x* of the nonlinear equation

F (x) = 0.

Here F : Q ⊂ B1 → B2 is a continuously differentiable nonlinear operator between the Banach spaces B1 and B2, and Q stands for a nonempty open convex subset of B1. The sixth-order method we are interested in is defined as follows [1]:

yn = xn − (2/3)F'(xn)^{-1}F(xn),

zn = xn − A(Vn)F'(xn)^{-1}F(xn),   (1)

© Petrozavodsk State University, 2020

xn+1 = zn − 2(3Bn^{-1} − F'(xn)^{-1})F(zn),

where A : L(B1, B1) → L(B1, B1) is a given operator-valued map, Bn = F'(xn) + F'(yn) and Vn = Bn^{-1}F'(xn).
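For concreteness, the scheme above can be sketched in the scalar case B1 = B2 = R with the choice A(V) = I (Case I of Remark 1 below). This is our own illustrative code, not part of the paper; the helper names and the test equation e^x − 1 = 0 (with solution x* = 0) are ours.

```python
import math

# Sketch of one step of method (1) for a scalar equation f(x) = 0,
# with the choice A(V) = I; names and test problem are illustrative.
def step(f, df, x):
    fx, dfx = f(x), df(x)
    y = x - (2.0 / 3.0) * fx / dfx   # y_n = x_n - (2/3) f'(x_n)^{-1} f(x_n)
    z = x - fx / dfx                 # z_n with A(V_n) = I: a Newton step
    B = dfx + df(y)                  # B_n = f'(x_n) + f'(y_n)
    return z - 2.0 * (3.0 / B - 1.0 / dfx) * f(z)   # x_{n+1}

f = lambda x: math.exp(x) - 1.0      # solution x* = 0
df = lambda x: math.exp(x)

x = 0.5
for _ in range(4):
    x = step(f, df, x)
print(abs(x))  # very close to 0
```

Note that the correction 2(3Bn^{-1} − F'(xn)^{-1}) approximates F'(x*)^{-1}, since Bn → 2F'(x*) as xn, yn → x*.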

These methods use similar information and are derived by different techniques, but their convergence has been shown using Taylor expansions involving the seventh-order derivative of F, which does not appear in the methods themselves. The assumptions involving the seventh derivative limit the applicability of these methods. For example: let B1 = B2 = R, Q = [−1/2, 3/2]. Define f on Q by

f(s) = s³ log s² + s⁵ − s⁴ if s ≠ 0, and f(0) = 0.

Then, we get

f'(s) = 3s² log s² + 5s⁴ − 4s³ + 2s²,

f''(s) = 6s log s² + 20s³ − 12s² + 10s,

f'''(s) = 6 log s² + 60s² − 24s + 22.

Obviously, f'''(s) is not bounded on Q. Hence, the convergence of methods (1) is not guaranteed by the earlier analysis.

Moreover, in the case of the last three methods, no computable convergence radii, upper error estimates on ||xn — x*||, nor results on the uniqueness of x* are given. Furthermore, their performance is compared by numerical examples. Hence, we do not know in advance, having the same set of assumptions, which method provides the largest radius of convergence (i.e., more initial points x0); the tightest error estimates on ||xn — x*|| (i. e., needs fewer iterations to obtain a desired error tolerance); and the best information on the location of the solution.

In this paper, we address these concerns. The same convergence order is obtained using the COC or ACOC (to be made precise in Remark 1); these depend only on the first derivative and the iterates. Hence, we also extend the applicability of these methods. Our technique can be used to compare other methods [1-12] in the same way.

The rest of the paper is organized as follows. The convergence analysis of schemes (1) is given in Section 2, examples are given in Section 3, and the conclusion is in Section 4.

2. Local convergence. Let us define the real parameters and functions needed for our analysis. Assume that there exists a continuous increasing function ω0 that maps the interval S := [0, ∞) into itself and is such that the equation

ω0(s) − 1 = 0

has a least positive solution r0. Define the real functions g1 and h1 on (0, r0) as

g1(s) = (∫_0^1 ω((1 − τ)s) dτ + (1/3) ∫_0^1 ω1(τs) dτ) / (1 − ω0(s))

and

h1(s) = g1(s) − 1,

where ω and ω1 are continuous increasing functions on S0 := [0, r0). Assume that the equation

h1(s) = 0

has the least solution in (0,r0) denoted by Ri. Assume that the equation

p(s) — 1 = 0

has the least solution in (0,r0) denoted rp, where

p(s) = (1/2)(ω0(s) + ω0(g1(s)s)).

Define the functions g2 and h2 on [0, rp) as

g2(s) = g0(s) + (q(s) ∫_0^1 ω1(τs) dτ) / (1 − ω0(s))

and

h2(s) = g2(s) − 1,

where

g0(s) = (∫_0^1 ω((1 − τ)s) dτ) / (1 − ω0(s))

and q is a continuous increasing real function on [0,rp). Assume that the equation

h2(s) = 0

has the least solution in (0,rp) denoted R2.

Assume that the equation

ω0(g2(s)s) − 1 = 0 (2)

has the least solution in (0, rp) denoted r1. Define the functions g3 and h3 on [0, r1) as

g3(s) = [ g0(g2(s)s) + ( 2(ω0(g1(s)s) + ω0(s)) / (1 − ω0(s)) + (ω0(s) + ω0(g1(s)s) + 2ω0(g2(s)s)) / (1 − ω0(g2(s)s)) ) × (∫_0^1 ω1(τ g2(s)s) dτ) / (2(1 − p(s))) ] g2(s)

and

h3(s) = g3(s) − 1.

Assume that the equation

h3(s) = 0

has the least solution in (0, r1), denoted by R3. Let the radius of convergence R be defined as

R = min{Rm}, m = 1, 2, 3. (3)

Thus, for all s ∈ [0, R):

0 ≤ ω0(s) < 1, (4)

0 ≤ ω0(g2(s)s) < 1, (5)

0 ≤ p(s) < 1, (6)

0 ≤ gm(s) < 1. (7)

The following definitions are used: U(x, a) = {y ∈ B1 : ||x − y|| < a}, and let Ū(x, a) be its closure, for a > 0. Let us use the notation en = ||xn − x*|| for all n = 0, 1, 2, ...

The following assumptions (A) are used:

(A1) F : Q → B2 has a simple solution x* ∈ Q, and the inverse of F'(x*) exists.

(A2) There exists a continuous increasing function ω0 on S, such that for all x ∈ Q

||F'(x*)^{-1}(F'(x) − F'(x*))|| ≤ ω0(||x − x*||).

Set Q0 = Q ∩ U(x*, r0).

(A3) There exist continuous increasing functions ω and ω1 on S0, such that for each x, y ∈ Q0

||F'(x*)^{-1}(F'(y) − F'(x))|| ≤ ω(||y − x||),

||F'(x*)^{-1}F'(x)|| ≤ ω1(||x − x*||).

(A4) There exists a continuous increasing real function q defined on (0, rp), such that for all x ∈ Q0

||I − A((F'(x) + F'(y))^{-1}F'(x))|| ≤ q(||x − x*||),

where y = x − (2/3)F'(x)^{-1}F(x).

(A5) Ū(x*, R) ⊂ Q, and r0, rp, r1, R1, R2 and R3 exist, where R is defined by (3).

(A6) There exists R* ≥ R, such that

∫_0^1 ω0(τR*) dτ < 1.

Set Q1 = Q ∩ Ū(x*, R*).

Under these assumptions, we present the ball convergence for (1).

Theorem 1. Suppose that x0 ∈ U(x*, R) − {x*} and that the conditions (A) hold. Then the following assertions hold:

{xn} ⊂ U(x*, R),

lim_{n→∞} xn = x*, (8)

||yn − x*|| ≤ g1(en)en ≤ en < R, (9)

||zn − x*|| ≤ g2(en)en ≤ en, (10)

||xn+1 − x*|| ≤ g3(en)en ≤ en, (11)

and x* is the only solution of the equation F(x) = 0 in the set Q1.

Proof. Let us choose x ∈ U(x*, R) − {x*}. Then, by (3), (4), (A1) and (A2), we get

||F'(x*)^{-1}(F'(x) − F'(x*))|| ≤ ω0(||x − x*||) < 1,

leading to F'(x) being invertible,


||F'(x)^{-1}F'(x*)|| ≤ 1 / (1 − ω0(||x − x*||)) (12)

by the Banach perturbation lemma [8], and to the existence of y0 by method (1). Further, in view of

F(x) = F(x) − F(x*) = ∫_0^1 F'(x* + τ(x − x*)) dτ (x − x*),

(A1) and (A3), we have

||F'(x*)^{-1}F(x)|| ≤ ∫_0^1 ω1(τ||x − x*||) dτ ||x − x*||. (13)

Then it follows from (3), (7) (for m = 1), (A3), (12) (for x = x0) and (13) (for x = x0) that

||y0 − x*|| = ||x0 − x* − F'(x0)^{-1}F(x0) + (1/3)F'(x0)^{-1}F(x0)|| ≤

≤ ||F'(x0)^{-1}F'(x*)|| ||∫_0^1 F'(x*)^{-1}(F'(x* + τ(x0 − x*)) − F'(x0)) dτ (x0 − x*)|| +

+ (1/3)||F'(x0)^{-1}F'(x*)|| ||F'(x*)^{-1}F(x0)|| ≤

≤ (∫_0^1 ω((1 − τ)||x0 − x*||) dτ ||x0 − x*||) / (1 − ω0(||x0 − x*||)) +

+ ((1/3) ∫_0^1 ω1(τ||x0 − x*||) dτ ||x0 − x*||) / (1 − ω0(||x0 − x*||)) ≤ (14)

≤ g1(||x0 − x*||) ||x0 − x*|| ≤ ||x0 − x*|| < R,

leading to the estimate (9) for n = 0 and y0 ∈ U(x*, R).

Next, we need to show that F'(xo) + F'(yo) is an invertible operator. Indeed, using (3), (6) and (12), we have

||(2F'(x*))^{-1}(F'(x0) + F'(y0) − 2F'(x*))|| ≤

≤ (1/2)(||F'(x*)^{-1}(F'(x0) − F'(x*))|| + ||F'(x*)^{-1}(F'(y0) − F'(x*))||) ≤

≤ (1/2)(ω0(||x0 − x*||) + ω0(||y0 − x*||)) ≤

≤ (1/2)(ω0(||x0 − x*||) + ω0(g1(||x0 − x*||)||x0 − x*||)) = p(||x0 − x*||) < 1,

so

||(F'(x0) + F'(y0))^{-1}F'(x*)|| ≤ 1 / (2(1 − p(||x0 − x*||))) (15)

and z0 is well-defined. Using (3), (7) (for m = 2), (A4), (12) (for x = x0), (13), (14), and the method (1), we obtain

||z0 − x*|| = ||(x0 − x* − F'(x0)^{-1}F(x0)) + (I − A(V0))F'(x0)^{-1}F(x0)|| ≤

≤ ||x0 − x* − F'(x0)^{-1}F(x0)|| +

+ ||I − A(V0)|| ||F'(x0)^{-1}F'(x*)|| ||F'(x*)^{-1}F(x0)|| ≤

≤ [ g0(||x0 − x*||) + (q(||x0 − x*||) ∫_0^1 ω1(τ||x0 − x*||) dτ) / (1 − ω0(||x0 − x*||)) ] ||x0 − x*|| =

= g2(||x0 − x*||)||x0 − x*|| ≤ ||x0 − x*||, (16)

leading to the verification of (10), z0 ∈ U(x*, R), and so x1 is well-defined. We need the estimate

−2[F'(x0)^{-1} − 3B0^{-1}] = −2F'(x0)^{-1}(B0 − 3F'(x0))B0^{-1} =

= −2F'(x0)^{-1}(F'(x0) + F'(y0) − 3F'(x0))B0^{-1} =

= −2F'(x0)^{-1}(F'(y0) − F'(x0))B0^{-1} + 2B0^{-1}. (17)

Then, by (3), (5), (7) (for m = 3), (12) (for x = z0), (13) (for x = z0), and (15)-(17), we get

||x1 − x*|| = ||(z0 − x* − F'(z0)^{-1}F(z0)) + [2F'(x0)^{-1}(F'(y0) − F'(x0))B0^{-1} + (F'(z0)^{-1} − 2B0^{-1})]F(z0)|| ≤

≤ ||z0 − x* − F'(z0)^{-1}F(z0)|| + ||2F'(x0)^{-1}(F'(y0) − F'(x0)) +

+ F'(z0)^{-1}((F'(x0) − F'(z0)) + (F'(y0) − F'(z0)))|| ||B0^{-1}F'(x*)|| ||F'(x*)^{-1}F(z0)|| ≤

≤ [ g0(||z0 − x*||) + ( 2(ω0(||y0 − x*||) + ω0(||x0 − x*||)) / (1 − ω0(||x0 − x*||)) +

+ (ω0(||x0 − x*||) + ω0(||y0 − x*||) + 2ω0(||z0 − x*||)) / (1 − ω0(||z0 − x*||)) ) ×

× (∫_0^1 ω1(τ||z0 − x*||) dτ) / (2(1 − p(||x0 − x*||))) ] ||z0 − x*|| ≤

≤ g3(||x0 − x*||)||x0 − x*|| ≤ ||x0 − x*||,

leading to the completion of the induction for (9)-(11) for n = 0 and x1 ∈ U(x*, R). Supposing that they hold for all m = 0, 1, ..., n − 1, we complete the induction for (9)-(11) by replacing x0, y0, z0, x1 by xm, ym, zm, xm+1 in the previous calculations. In view of the estimate

||xm+1 − x*|| ≤ b||xm − x*|| < R,

where b = g3(||x0 − x*||) ∈ [0, 1), we obtain xm+1 ∈ U(x*, R) and lim_{m→∞} xm = x*, showing (8). Let u ∈ Q1 with F(u) = 0. Define

C = ∫_0^1 F'(u + τ(x* − u)) dτ.

Then, by (A2) and (A6), we get

||F'(x*)^{-1}(C − F'(x*))|| ≤ ∫_0^1 ω0((1 − τ)||x* − u||) dτ ≤ ∫_0^1 ω0(τR*) dτ < 1,

so C is invertible; then x* = u follows from the identity 0 = F(x*) − F(u) = C(x* − u). □

Remark 1.

(a) We can find the convergence order by resorting to the computational order of convergence (COC) defined by

ξ = ln( ||xn+1 − x*|| / ||xn − x*|| ) / ln( ||xn − x*|| / ||xn−1 − x*|| ),

or the approximate computational order of convergence (ACOC)

ξ1 = ln( ||xn+1 − xn|| / ||xn − xn−1|| ) / ln( ||xn − xn−1|| / ||xn−1 − xn−2|| ).
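Both quantities can be computed directly from stored iterates. The following sketch is our own code (helper names ours, not from the paper); it checks the formulas on classical Newton iterates for x² − 2 = 0, whose order should come out close to 2.

```python
import math

# COC: uses the known solution xstar; ACOC: uses only successive iterates.
def coc(xs, xstar):
    e = [abs(x - xstar) for x in xs]
    return math.log(e[-1] / e[-2]) / math.log(e[-2] / e[-3])

def acoc(xs):
    d = [abs(xs[k + 1] - xs[k]) for k in range(len(xs) - 1)]
    return math.log(d[-1] / d[-2]) / math.log(d[-2] / d[-3])

# Newton iterates for x^2 - 2 = 0, a quadratically convergent sequence
xs = [1.0]
for _ in range(4):
    x = xs[-1]
    xs.append(0.5 * (x + 2.0 / x))
print(coc(xs, math.sqrt(2)), acoc(xs))  # both close to 2
```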

This way, we obtain in practice the order of convergence without resorting to the computation of the higher-order derivatives appearing in the method or in the sufficient convergence criteria usually required in the Taylor expansions for the proofs of such results.

(b) Let us consider specializations of method (1). Case I: A(V) = I; then, clearly, q(s) = 0. Case II: A(V) = 2V. Then, from the estimate

I − A(V) = I − 2(F'(x) + F'(y))^{-1}F'(x) =

= (F'(x) + F'(y))^{-1}(F'(x) + F'(y) − 2F'(x)) =

= (F'(x) + F'(y))^{-1}(F'(y) − F'(x)),

so, by the proof of Theorem 1, we can choose

q(s) = (ω0(g1(s)s) + ω0(s)) / (2(1 − p(s))).

3. Numerical Examples.

Example 3.1 Let us consider a system of differential equations governing the motion of an object and given by

F1'(x) = e^x, F2'(y) = (e − 1)y + 1, F3'(z) = 1

with the initial conditions F1(0) = F2(0) = F3(0) = 0. Let F = (F1, F2, F3). Let B1 = B2 = R³, Q = Ū(0, 1), x* = (0, 0, 0)^T. Define the function F on Q for w = (x, y, z)^T by

F(w) = (e^x − 1, ((e − 1)/2) y² + y, z)^T.

The Fréchet derivative is

F'(w) =
| e^x   0              0 |
| 0     (e − 1)y + 1   0 |
| 0     0              1 |

Notice that, using the (A) conditions, we get ω0(s) = (e − 1)s, ω(s) = e^{1/(e−1)} s, ω1(s) = e^{1/(e−1)}. The radii are given in Table 1.
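As a cross-check (our own sketch, with our function names), the radius R1 of Table 1 can be recovered by bisection on h1(s) = g1(s) − 1, using the reconstructed g1 and the ω-functions of this example.

```python
import math

E = math.e
L = math.exp(1.0 / (E - 1.0))

w0 = lambda s: (E - 1.0) * s   # omega_0
w = lambda s: L * s            # omega
w1 = lambda s: L               # omega_1 (constant here)

def integral(f, n=2000):
    # midpoint rule on [0, 1]; exact for the linear integrands used here
    h = 1.0 / n
    return h * sum(f((k + 0.5) * h) for k in range(n))

def h1(s):
    num = integral(lambda t: w((1.0 - t) * s)) + integral(lambda t: w1(t * s)) / 3.0
    return num / (1.0 - w0(s)) - 1.0

# bisection on (0, r0), where r0 solves w0(s) = 1, i.e. r0 = 1/(e - 1)
a, b = 1e-12, 1.0 / (E - 1.0) - 1e-12
for _ in range(200):
    m = 0.5 * (a + b)
    if h1(m) < 0.0:
        a = m
    else:
        b = m
R1 = 0.5 * (a + b)
print(R1)  # close to 0.15440695..., the R1 of Table 1
```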

Example 3.2 Let B1 = B2 = C[0,1], the space of continuous functions defined on [0,1], be equipped with the max norm. Let Q = U(0,1). Define the function F on Q by

F(φ)(x) = φ(x) − 5 ∫_0^1 xθ φ(θ)³ dθ.

Radius | Case I                     | Case II
R1     | 0.1544069513571540708252   | 0.1544069513571540708252
R2     | 0.3826919122323857447298   | 0.1492777526031611734502
R3     | 0.2952384459889182410918   | 0.0798310047170279202255
R      | 0.1544069513571540708252   | 0.0798310047170279202255

Table 1: Example 3.1.

We have

F'(φ)(f)(x) = f(x) − 15 ∫_0^1 xθ φ(θ)² f(θ) dθ, for each f ∈ Q.

Then we get x* = 0, so ω0(s) = 7.5s, ω(s) = 15s, and ω1(s) = 2. The radii are given in Table 2.

Radius | Case I                     | Case II
R1     | 0.02222222222222222222222  | 0.022222222222222222222222
R2     | 0.06666666666666666666666  | 0.028225943743472654834381
R3     | 0.05665528918936485469615  | 0.133333333333333333333333
R      | 0.02222222222222222222222  | 0.022222222222222222222222

Table 2: Example 3.2.
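As a further check on Table 2 (our own computation, not from the paper): in Case I one has q(s) = 0, and all the functions involved are linear or constant, so R1 follows in closed form from g1(s) = 1:

```latex
g_1(s) = \frac{\int_0^1 15(1-\tau)s\,d\tau + \tfrac{1}{3}\int_0^1 2\,d\tau}{1 - 7.5\,s}
       = \frac{7.5\,s + \tfrac{2}{3}}{1 - 7.5\,s} = 1
\;\Longrightarrow\; 15\,s = \tfrac{1}{3}
\;\Longrightarrow\; R_1 = \tfrac{1}{45} = 0.0222\ldots,
```

in agreement with the first row of Table 2; similarly, g_0(s) = 1 gives R2 = 1/15 in Case I.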

Example 3.3 Returning to the motivational example in the introduction of this study, we have ω0(s) = ω(s) = 96.6629073s and ω1(s) = 2. The parameters for method (1) are given in Table 3.

Radius | Case I                       | Case II
R1     | 0.002298939980488484951387   | 0.002298939980488484951387
R2     | 0.006896819941465455287843   | 0.002471912165730189275131
R3     | 0.005208424091120038290636   | 0.01034522991219942976426
R      | 0.002298939980488484951387   | 0.002298939980488484951387

Table 3: Example 3.3.

4. Conclusions. Different techniques are used to develop iterative methods. Moreover, different sets of criteria, usually based on the seventh derivative, are needed in the ball convergence analyses of sixth-order methods. These methods are then compared using numerical examples. But we do not know: whether the results of those comparisons remain true if the examples change; the largest radii of convergence; error estimates on ||xn − x*||; and uniqueness results that are computable. We address these concerns using only the first derivative and a common set of criteria. Numerical experiments are used to test the convergence criteria and further validate the theoretical results. Our technique can be used to make comparisons between other methods of the same order.

References

[1] Alzahrani, A., Behl, R., Alshomrani, A.S.: Some higher order iteration functions for solving nonlinear models, Appl. Math. Comput., 2018, vol. 334, pp. 80-93. DOI: https://doi.org/10.1016/j.amc.2018.03.120

[2] Amat, S., Busquier, S., Gutierrez, J.M.: Geometrical constructions of iterative functions to solve nonlinear equations, J. Comput. Appl. Math., 2003, vol. 157, pp. 197-205. DOI: 10.1016/S0377-0427(03)00420-5

[3] Argyros, I.K., George, S., Mathematical modeling for the solution of equations and systems of equations with applications, Volume-IV, Nova Publishes, New York, 2019.

[4] Argyros, I.K., Magrenan, A.A., Iterative method and their dynamics with applications, CRC Press, New York, 2017.

[5] Cordero, A., Martinez, E., Torregrosa, J.R.: Iterative methods of order four and five for systems of nonlinear equations, J. Comput. Appl. Math., 2009, vol. 231, pp. 541-551. DOI: https://doi.org/10.1016/j.cam.2009.04.015

[6] Grau-Sanchez, M., Grau, A., Noguera, M.: On the computational efficiency index and some iterative methods for solving systems of nonlinear equations, J. Comput. Appl. Math., 2011, vol. 236, pp. 1259-1266. DOI: https://doi.org/10.1016/j.cam.2011.08.008

[7] Gutierrez, J.M., Hernandez, M.A.: A family of Chebyshev-Halley type methods in Banach spaces, Bull. Aust. Math. Soc., 1997, vol. 55, pp. 113-130. DOI: https://doi.org/10.1017/S0004972700030586

[8] Ortega, J.M., Rheinboldt, W.C.: Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

[9] Ozban, A.Y.: Some new variants of Newton's method, Appl. Math. Lett., 2004, vol. 17, pp. 677-682. DOI: https://doi.org/10.1016/S0893-9659(04)90104-8

[10] Traub, J.F.: Iterative Methods for the Solution of Equations. Prentice-Hall, Englewood Cliffs, 1964.

[11] Sharma, J.R., Guha, R.K., Sharma, R.: An efficient fourth order weighted-Newton method for systems of nonlinear equations, Numer. Algor., 2013, vol. 62, pp. 307-323. DOI: https://doi.org/10.1007/s11075-012-9585-7

[12] Weerakoon, S., Fernando, T.G.I.: A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett., 2000, vol. 13, pp. 87-93. DOI: https://doi.org/10.1016/S0893-9659(00)00100-2

Received June 28, 2020. In revised form, September 9, 2020. Accepted September 15, 2020. Published online October 12, 2020.

Ioannis K. Argyros,

Department of Mathematical Sciences, Cameron University, Lawton, OK 73505, USA. Email: iargyros@cameron.edu

Santhosh George,

Department of Mathematical and Computational Sciences, National Institute of Technology Karnataka, India-575 025. Email: sgeorge@nitk.edu.in
