
MSC 65N06

DOI: 10.14529/mmp240306

CONVERGENCE ANALYSIS OF THE FINITE DIFFERENCE SOLUTION FOR COUPLED DRINFELD-SOKOLOV-WILSON SYSTEM

Israa Th. Younis1, Ekhlass S. Al-Rawi1

1 University of Mosul, Mosul, Iraq

E-mail: [email protected], [email protected]

This paper is devoted to deriving the matrix algebraic equation for the coupled Drinfeld-Sokolov-Wilson (DSW) system using the implicit finite difference (IMFD) method. The convergence analysis of the finite difference solution is proved. A numerical experiment is presented with initial conditions describing the generation and evolution. The numerical results were compared on the basis of calculating the absolute error (ABSE) and the mean square error (MSE). The numerical results proved that the numerical solution is close to the real solution at different values of time.

Keywords: Drinfeld-Sokolov-Wilson equation; finite difference method; implicit finite difference method.

Introduction

A crucial mathematical model called the Drinfeld-Sokolov-Wilson (DSW) system appears in a number of physical contexts, such as quantum field theory and integrable systems [1]. The intricacy and nonlinear character of the coupled DSW system make it a difficult subject for both analytical and numerical research [2]. The behaviour of complicated physical systems such as DSW has been studied widely using numerical methods, in particular finite difference methods. Among the different numerical techniques, the Crank-Nicolson scheme has become well known because of its time-stepping stability and second-order temporal accuracy [3]. Although it has been applied successfully to linear problems, special difficulties arise when it is applied to coupled nonlinear systems like DSW [4]. In recent years researchers have studied implicit techniques for solving coupled nonlinear systems [5]. These techniques frequently show improved stability characteristics, particularly for stiff problems. Building on this, the Crank-Nicolson method, an interesting area for investigation, is applied to the DSW system together with a finite difference spatial discretization, and the convergence of the finite difference solution of the coupled DSW system is analyzed thoroughly.

The mathematical model of the Drinfeld-Sokolov-Wilson (DSW) system, introduced by Drinfeld and Sokolov [7] and Wilson [11], represents a model of water waves and has significant applications in fluid dynamics. The coupled system is written as [6-12]:

$$\frac{\partial u}{\partial t} + p\,v\,\frac{\partial v}{\partial x} = 0, \qquad (1)$$

$$\frac{\partial v}{\partial t} + q\,\frac{\partial^{3} v}{\partial x^{3}} + r\,v\,\frac{\partial u}{\partial x} + s\,u\,\frac{\partial v}{\partial x} = 0, \qquad (2)$$

where p, q, r and s are real constants. Over the years, many numerical and analytical methods have been developed to solve these equations, including the Adomian decomposition method [14], the Exp-function method [15], the improved F-expansion method [16], the bifurcation method [17], and the qualitative theory [18]. These methods offer various approaches to finding solutions of system (1), (2) and have contributed to the understanding of its behavior and properties. In this work the implicit Crank-Nicolson approach is utilized; the efficacy and accuracy of this method are investigated, offering the field new insights.

1. Derivation of the Matrix Equation Using the Finite Difference Method

A regular grid is introduced by defining the following discrete points in the (x, t) plane:

x_i = ih, i = 0, 1, ..., n,    t_j = jk, j = 0, 1, ..., m,

where h is the spatial step and k is the time step. The discretized form of equation (1) using the IMFD method is:

$$u_{x_i}^{j+1} - u_{x_i}^{j} + \frac{pk}{2h}\, v_{x_i}^{j}\left(v_{x_{i+1}}^{j+1} - v_{x_{i-1}}^{j+1}\right) = 0.$$

After arranging the above equation in terms of the unknowns at x_{i-1}, x_i, x_{i+1} we obtain

$$G_1\left(v_{x_i}^{j}\right) v_{x_{i-1}}^{j+1} + G_2\, u_{x_i}^{j+1} + G_3\left(v_{x_i}^{j}\right) v_{x_{i+1}}^{j+1} = H\left(u_{x_i}^{j}\right), \qquad (3)$$

where

$$G_1\left(v_{x_i}^{j}\right) = -\frac{pk}{2h}\, v_{x_i}^{j}, \quad G_2 = 1, \quad G_3\left(v_{x_i}^{j}\right) = \frac{pk}{2h}\, v_{x_i}^{j} = -G_1\left(v_{x_i}^{j}\right), \quad H\left(u_{x_i}^{j}\right) = u_{x_i}^{j}.$$

In matrix form,

$$G\left(v^{j}\right)\left\{u^{j+1}, v^{j+1}\right\} = \left\{H\left(u_{x_i}^{j}\right)\right\} \quad \text{for } i = 2, 3, \ldots, n-1. \qquad (4)$$

For i = 1 we have

$$G_2\, u_{x_1}^{j+1} + G_3\left(v_{x_1}^{j}\right) v_{x_2}^{j+1} = H\left(u_{x_1}^{j}\right),$$

and for i = n:

$$G_1\left(v_{x_n}^{j}\right) v_{x_{n-1}}^{j+1} + G_2\, u_{x_n}^{j+1} = H\left(u_{x_n}^{j}\right).$$

Note that the definitions of G_1(v_{x_i}), G_2, G_3(v_{x_i}) and H(u_{x_i}) depend on v_{x_i}^{j} and u_{x_i}^{j}, and G(v^{j}) is the tri-diagonal matrix

$$G\left(v^{j}\right) = \begin{pmatrix}
G_2 & G_1(v_{x_2}^{j}) & 0 & \cdots & 0 & 0\\
G_3(v_{x_2}^{j}) & G_2 & G_1(v_{x_3}^{j}) & \cdots & 0 & 0\\
0 & G_3(v_{x_3}^{j}) & G_2 & \ddots & \vdots & \vdots\\
\vdots & & \ddots & \ddots & G_1(v_{x_{n-2}}^{j}) & 0\\
0 & 0 & \cdots & G_3(v_{x_{n-2}}^{j}) & G_2 & G_1(v_{x_{n-1}}^{j})\\
0 & 0 & \cdots & 0 & G_3(v_{x_{n-1}}^{j}) & G_2
\end{pmatrix},$$

the vector of unknowns is $\left\{v^{j+1}\right\} = \left\{v_{x_1}^{j+1},\, u_{x_1}^{j+1},\, v_{x_2}^{j+1},\, u_{x_2}^{j+1},\, \ldots\right\}$, and the right-hand side is $\left\{H(u^{j})\right\} = \left\{H(u_{x_1}^{j}),\, H(u_{x_2}^{j}),\, \ldots,\, H(u_{x_{n-1}}^{j})\right\}$.
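Since G_2 = 1, relation (3) also yields u_{x_i}^{j+1} explicitly once v^{j+1} is available. The following short sketch (Python; it is not part of the original paper, and the helper name update_u, the boundary handling and the sign convention of G_1, G_3 reconstructed above are assumptions) illustrates this pointwise update.

import numpy as np

# Minimal illustrative sketch: pointwise update of u from scheme (3),
# assuming G1 = -(p*k/(2*h)) * v_old, G2 = 1, G3 = -G1 and H = u_old,
# as reconstructed above.  Boundary values are simply kept fixed.
def update_u(u_old, v_old, v_new, p, k, h):
    """Return u^{j+1} given u^j, v^j and the already computed v^{j+1}."""
    u_new = u_old.copy()
    r = p * k / (2.0 * h)
    # interior points; the two end values are left unchanged
    u_new[1:-1] = u_old[1:-1] - r * v_old[1:-1] * (v_new[2:] - v_new[:-2])
    return u_new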

By the same approach for equation (2) we obtain

$$\frac{v_{x_i}^{j+1} - v_{x_i}^{j}}{k} + q\,\frac{v_{x_{i+2}}^{j+1} - 2v_{x_{i+1}}^{j+1} + 2v_{x_{i-1}}^{j+1} - v_{x_{i-2}}^{j+1}}{2h^{3}} + r\,v_{x_i}^{j+1}\,\frac{u_{x_{i+1}}^{j} - u_{x_{i-1}}^{j}}{2h} + s\,u_{x_i}^{j}\,\frac{v_{x_{i+1}}^{j+1} - v_{x_{i-1}}^{j+1}}{2h} = 0.$$

Multiplying by k and collecting the unknowns at the level j + 1 gives

$$'E_1\, v_{x_{i-2}}^{j+1} + 'E_2\left(u_{x_i}\right) v_{x_{i-1}}^{j+1} + 'E_3\left(u_{x_i}\right) v_{x_i}^{j+1} + 'E_4\left(u_{x_i}\right) v_{x_{i+1}}^{j+1} + 'E_5\, v_{x_{i+2}}^{j+1} = v^{j}(x_i)$$

for i = 1, 2, ..., n − 1, where

$$'E_1 = -\frac{qk}{2h^{3}}, \qquad 'E_2\left(u_{x_i}\right) = \frac{qk}{h^{3}} - \frac{sk}{2h}\, u_{x_i}^{j}, \qquad 'E_3\left(u_{x_i}\right) = 1 + \frac{rk}{2h}\left(u_{x_{i+1}}^{j} - u_{x_{i-1}}^{j}\right),$$

$$'E_4\left(u_{x_i}\right) = -'E_2\left(u_{x_i}\right), \qquad 'E_5 = -'E_1, \qquad F\left(v_{x_i}\right) = v_{x_i}^{j}.$$

In matrix form, analogously to (4), these equations become the algebraic matrix equation

$$'E\left(u^{j}\right)\left\{v^{j+1}\right\} = \left\{F\left(v_{x_i}\right)\right\}, \qquad (5)$$

where 'E(u^{j}) is the band (penta-diagonal) matrix

$$'E\left(u^{j}\right) = \begin{pmatrix}
'E_3(u_{x_1}) & -'E_2(u_{x_1}) & -'E_1 & 0 & \cdots & 0\\
'E_2(u_{x_2}) & 'E_3(u_{x_2}) & -'E_2(u_{x_2}) & -'E_1 & \cdots & 0\\
'E_1 & 'E_2(u_{x_3}) & 'E_3(u_{x_3}) & -'E_2(u_{x_3}) & \ddots & \vdots\\
\vdots & \ddots & \ddots & \ddots & \ddots & -'E_1\\
0 & \cdots & 'E_1 & 'E_2(u_{x_{n-2}}) & 'E_3(u_{x_{n-2}}) & -'E_2(u_{x_{n-2}})\\
0 & \cdots & 0 & 'E_1 & 'E_2(u_{x_{n-1}}) & 'E_3(u_{x_{n-1}})
\end{pmatrix}.$$
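Before turning to the existence proof, the following sketch (Python, illustrative only and not part of the paper; the helper name solve_v_step, the identity rows used at the two points next to each boundary, and the dense solver are assumptions) shows how the banded system (5) could be assembled and solved for v^{j+1} at one time level, with the coefficients 'E_1, ..., 'E_5 taken from the reconstruction above.

import numpy as np

# Illustrative assembly of 'E(u^j) and F(v^j) for system (5), then a dense solve.
def solve_v_step(u_old, v_old, q, r, s, k, h):
    n = v_old.size
    E = np.eye(n)                      # rows without a full stencil keep v^{j+1} = v^j
    F = v_old.copy()                   # F(v_{x_i}) = v^j_{x_i}
    e1 = -q * k / (2.0 * h ** 3)       # 'E1, coefficient of v_{i-2}
    for i in range(2, n - 2):          # interior points with a full 5-point stencil
        e2 = q * k / h ** 3 - s * k / (2.0 * h) * u_old[i]                # 'E2(u_i)
        e3 = 1.0 + r * k / (2.0 * h) * (u_old[i + 1] - u_old[i - 1])      # 'E3(u_i)
        E[i, i - 2], E[i, i - 1], E[i, i] = e1, e2, e3
        E[i, i + 1], E[i, i + 2] = -e2, -e1                               # 'E4, 'E5
    return np.linalg.solve(E, F)       # v^{j+1}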

We now prove the existence of the solutions of the matrix equations obtained above in (4) and (5), respectively.

Theorem 1. The solutions of the matrix equations

(i) $G\left(v^{j}\right)\left\{u^{j+1}, v^{j+1}\right\} = \left\{H\left(u_{x_i}^{j}\right)\right\}$;

(ii) $'E\left(u^{j}\right)\left\{v^{j+1}\right\} = \left\{F\left(v_{x_i}^{j}\right)\right\}$

exist.

Proof. (i): Let j ∈ N be fixed and consider the following iteration:

$$G\left(v_{\ell-1}\right)\left\{u^{j+1}, v^{j+1}\right\}_{\ell} = \left\{H\left(u_{x_i}^{j}\right)\right\}, \quad \ell \in \mathbb{N},$$

where $v_{0} = v^{j}$. Subtracting from this the equation

$$G\left(v^{j}\right)\left\{u^{j+1}, v^{j+1}\right\} = \left\{H\left(u_{x_i}^{j}\right)\right\}$$

and adding and subtracting $G\left(v^{j}\right)\left\{u^{j+1}, v^{j+1}\right\}_{\ell}$, we have

$$G\left(v^{j}\right)\left[\left\{u^{j+1}, v^{j+1}\right\} - \left\{u^{j+1}, v^{j+1}\right\}_{\ell}\right] = \left[G\left(v_{\ell-1}\right) - G\left(v^{j}\right)\right]\left\{u^{j+1}, v^{j+1}\right\}_{\ell}. \qquad (6)$$

The k-th element of the right-hand side of equation (6) is

$$\sum_{s=1}^{n-1}\left(c_{k,s}\left(v_{\ell-1}(x_k)\right) - c_{k,s}\left(v^{j}(x_k)\right)\right)\left\{u^{j+1}, v^{j+1}\right\}_{\ell}(x_s),$$

where $c_{k,s}$ denotes the (k, s) entry of G. By the mean value theorem this equals

$$\sum_{s=1}^{n-1}\sum_{L=1}^{n-1}\left(c_{k,s}\left(v^{j*}(x_k)\right)\right)_{x_L}\left(v_{\ell-1}(x_L) - v^{j}(x_L)\right)\left\{u^{j+1}, v^{j+1}\right\}_{\ell}(x_s),$$

where the value of $v^{j*}(x_k)$ lies between $v_{\ell-1}(x_k)$ and $v^{j}(x_k)$, and $\left(c_{k,s}\left(v^{j*}(x_k)\right)\right)_{x_L}$ represents the partial derivative of $c_{k,s}\left(v^{j*}(x_k)\right)$ with respect to $v^{j*}(x_L)$. Hence the right-hand side of equation (6) becomes

$$\sum_{s=1}^{n-1}\sum_{L=1}^{n-1}\left\{u^{j+1}, v^{j+1}\right\}_{\ell}(x_s)\left(c_{k,s}\left(v^{j*}(x_k)\right)\right)_{x_L}\left(v_{\ell-1}(x_L) - v^{j}(x_L)\right) = 'Q\left(u^{j+1}, v^{j+1}, v^{j*}\right)\cdot\left\{v_{\ell-1} - v^{j}\right\},$$

where $'Q\left(u^{j+1}, v^{j+1}, v^{j*}\right)$ is the $(n-1)\times(n-1)$ matrix with entries

$$'Q_{k,L}\left(v^{j*}\right) = \sum_{s=1}^{n-1}\left\{u^{j+1}, v^{j+1}\right\}_{\ell}(x_s)\left(c_{k,s}\left(v^{j*}(x_k)\right)\right)_{x_L}.$$

Since $c_{1,s}\left(v^{j*}(x_1)\right)$, s = 1, 2, ..., n − 1, contains only $v^{j*}(x_1)$ and $v^{j*}(x_2)$, we have

$$'Q_{1,1}\left(v^{j*}\right) = \sum_{s=1}^{n-1}\left\{u^{j+1}, v^{j+1}\right\}(x_s)\left(c_{1,s}\left(v^{j*}(x_1)\right)\right)_{x_1} = u^{j+1}(x_1)\left(c_{1,1}\left(v^{j*}(x_1)\right)\right)_{x_1} + v^{j+1}(x_2)\left(c_{1,2}\left(v^{j*}(x_1)\right)\right)_{x_1},$$

$$\left|'Q_{1,1}\left(v^{j*}\right)\right| \le \frac{3k}{2h}\left|v^{j+1}(x_2)\right| < \varepsilon_1,$$

and

$$'Q_{1,2}\left(v^{j*}\right) = \sum_{s=1}^{n-1}\left\{u^{j+1}, v^{j+1}\right\}(x_s)\left(c_{1,s}\left(v^{j*}(x_1)\right)\right)_{x_2} = u^{j+1}(x_1)\cdot(0) + v^{j+1}(x_2)\cdot(0) = 0 < \varepsilon_2,$$

for some $\varepsilon_1, \varepsilon_2 < 1$. Let $r = \frac{3k}{2h}$; since k and h are both small, r can be made small by letting k be sufficiently small, and $v^{j+1}$ is bounded, so $'Q_{1,1}\left(v^{j*}\right)$ is bounded by a small number. Therefore all the elements in the first row of the matrix 'Q are zero except the first element, which is sufficiently small.

Similarly, $c_{n-1,s}\left(v^{j*}(x_{n-1})\right)$ only contains $v^{j*}(x_{n-2})$ and $v^{j*}(x_{n-1})$, s = 1, 2, ..., n − 1, which implies that

$$'Q_{n-1,L}\left(v^{j*}\right) = 0 \quad \text{if } L \ne n-1.$$

If L = n − 1 we have

$$'Q_{n-1,n-1}\left(v^{j*}\right) = v^{j+1}(x_{n-2})\left(c_{n-1,n-2}\left(v^{j*}(x_{n-1})\right)\right)_{x_{n-1}} + u^{j+1}(x_{n-1})\left(c_{n-1,n-1}\left(v^{j*}(x_{n-1})\right)\right)_{x_{n-1}} = \frac{3k}{2h}\, v^{j+1}(x_{n-2}) + u^{j+1}(x_{n-1})\cdot(0),$$

$$\left|'Q_{n-1,n-1}\left(v^{j*}\right)\right| = \frac{3k}{2h}\left|v^{j+1}(x_{n-2})\right| < \varepsilon_4.$$

In general, $'Q_{k,L}\left(v^{j*}\right) = 0$ for L ≠ k; therefore the matrix $'Q\left(v^{j*}\right)$ has the diagonal form

$$'Q\left(v^{j*}\right) = \mathrm{diag}\left('Q_{1,1}\left(v^{j*}\right),\ 'Q_{2,2}\left(v^{j*}\right),\ \ldots,\ 'Q_{n-1,n-1}\left(v^{j*}\right)\right),$$

where the nonzero elements are sufficiently small. Thus, the norm of the matrix 'Q, defined by $\sup_{x\in\mathbb{R}^{n-1}}\left\|'Q\left(u^{j+1}, v^{j+1}, v^{j*}\right)\right\|$, is small and bounded. What remains is to show that the coefficient matrix $G\left(v^{j}\right)$ is invertible and bounded away from zero. This holds since k, h, p, q, r and s can be chosen such that the diagonal of the matrix is positive (the matrix is positive definite) and hence invertible. This completes the proof of part (i).

(ii): For $'E\left(u^{j}\right)\left\{v^{j+1}\right\} = \left\{F\left(v^{j}(x_i)\right)\right\}$ we proceed as in part (i) and consider the iteration

$$'E\left(u_{\ell-1}\right)\left\{v^{j+1}\right\}_{\ell} = \left\{F\left(v^{j}(x_i)\right)\right\}, \quad \ell \in \mathbb{N},$$

where $u_{0} = u^{j}$. Subtracting from this the equation

$$'E\left(u^{j}\right)\left\{v^{j+1}\right\} = \left\{F\left(v^{j}(x_i)\right)\right\}$$

we have

$$'E\left(u_{\ell-1}\right)\left\{v^{j+1}\right\}_{\ell} - 'E\left(u^{j}\right)\left\{v^{j+1}\right\} = 0,$$

and by adding and subtracting $'E\left(u^{j}\right)\left\{v^{j+1}\right\}_{\ell}$ we obtain

$$'E\left(u^{j}\right)\left[\left\{v^{j+1}\right\} - \left\{v^{j+1}\right\}_{\ell}\right] = \left['E\left(u_{\ell-1}\right) - 'E\left(u^{j}\right)\right]\left\{v^{j+1}\right\}_{\ell}.$$

The k-th element of the right-hand side of the last equation is

$$\sum_{s=1}^{n-1}\left(B_{k,s}\left(u_{\ell-1}(x_k)\right) - B_{k,s}\left(u^{j}(x_k)\right)\right)\left\{v^{j+1}\right\}_{\ell}(x_s),$$

where $B_{k,s}$ denotes the (k, s) entry of 'E. By the mean value theorem:

$$\sum_{s=1}^{n-1}\sum_{L=1}^{n-1}\left(B_{k,s}\left(u^{j*}(x_k)\right)\right)_{x_L}\left(u_{\ell-1}(x_L) - u^{j}(x_L)\right)\left\{v^{j+1}\right\}_{\ell}(x_s), \qquad (7)$$

where the value of $u^{j*}(x_k)$ lies between $u_{\ell-1}(x_k)$ and $u^{j}(x_k)$, and $\left(B_{k,s}\left(u^{j*}(x_k)\right)\right)_{x_L}$ represents the partial derivative of $B_{k,s}\left(u^{j*}(x_k)\right)$ with respect to $u^{j*}(x_L)$. The right-hand side then becomes

$$\sum_{s=1}^{n-1}\sum_{L=1}^{n-1}\left\{v^{j+1}\right\}_{\ell}(x_s)\left(B_{k,s}\left(u^{j*}(x_k)\right)\right)_{x_L}\left(u_{\ell-1}(x_L) - u^{j}(x_L)\right) = 'E\left(v^{j+1}, u^{j*}\right)\cdot\left\{u_{\ell-1} - u^{j}\right\},$$

where $'E\left(v^{j+1}, u^{j*}\right)$ is the $(n-1)\times(n-1)$ matrix with entries

$$'E_{k,L}\left(u^{j*}\right) = \sum_{s=1}^{n-1}\left\{v^{j+1}\right\}_{\ell}(x_s)\left(B_{k,s}\left(u^{j*}(x_k)\right)\right)_{x_L}.$$

Since $B_{1,s}\left(u^{j*}(x_1)\right)$, s = 1, 2, ..., n − 1, only contains $u^{j*}(x_1)$ and $u^{j*}(x_2)$, we have

$$'E_{1,1}\left(u^{j*}\right) = \sum_{s=1}^{n-1}\left\{v^{j+1}\right\}(x_s)\left(B_{1,s}\left(u^{j*}(x_1)\right)\right)_{x_1} = v^{j+1}(x_1)\left(B_{1,1}\left(u^{j*}(x_1)\right)\right)_{x_1} + v^{j+1}(x_2)\left(B_{1,2}\left(u^{j*}(x_1)\right)\right)_{x_1} + v^{j+1}(x_3)\left(B_{1,3}\left(u^{j*}(x_1)\right)\right)_{x_1},$$

$$'E_{1,1}\left(u^{j*}\right) = v^{j+1}(x_1)\cdot(0) + v^{j+1}(x_2)\cdot\left(\frac{k}{h}\right) + v^{j+1}(x_3)\cdot(0).$$

According to the properties of the absolute value we have

$$\left|'E_{1,1}\left(u^{j*}\right)\right| = \frac{k}{h}\left|v^{j+1}(x_2)\right| < \varepsilon_1.$$

Similarly,

$$'E_{1,2}\left(u^{j*}\right) = \sum_{s=1}^{n-1}\left\{v^{j+1}\right\}(x_s)\left(B_{1,s}\left(u^{j*}(x_1)\right)\right)_{x_2} = v^{j+1}(x_1)\cdot\left(\frac{k}{2h}\right) + v^{j+1}(x_2)\cdot(0) + v^{j+1}(x_3)\cdot(0),$$

$$\left|'E_{1,2}\left(u^{j*}\right)\right| = \frac{k}{2h}\left|v^{j+1}(x_1)\right| < \varepsilon_2,$$

for some $\varepsilon_1$ and $\varepsilon_2 < 1$, and

$$'E_{1,3}\left(u^{j*}\right) = v^{j+1}(x_1)\cdot(0) + v^{j+1}(x_2)\cdot(0) + v^{j+1}(x_3)\cdot(0) = 0 < \varepsilon_3.$$

Let $r = \frac{k}{h}$; since k and h are both small, r can be bounded by letting k be sufficiently small, and $v^{j+1}$ is bounded, so $'E_{1,1}\left(u^{j*}\right)$ and $'E_{1,2}\left(u^{j*}\right)$ are bounded by small numbers. Similarly, $B_{n-1,s}\left(u^{j*}(x_{n-1})\right)$ only contains $u^{j*}(x_{n-3})$, $u^{j*}(x_{n-2})$ and $u^{j*}(x_{n-1})$, s = 1, ..., n − 1, which implies that

$$'E_{n-1,L}\left(u^{j*}\right) = 0 \quad \text{if } L \ne n-2,\, n-1,$$

while

$$\left|'E_{n-1,n-2}\left(u^{j*}\right)\right| \le \frac{k}{2h}\left|v^{j+1}(x_{n-1})\right| < \varepsilon_5, \qquad \left|'E_{n-1,n-1}\left(u^{j*}\right)\right| < \varepsilon_6$$

for some $\varepsilon_5$ and $\varepsilon_6 < 1$. For the remaining rows, $B_{k,s}\left(u^{j*}(x_k)\right)$ contains $u^{j*}(x_{k-2})$, $u^{j*}(x_{k-1})$, $u^{j*}(x_k)$, $u^{j*}(x_{k+1})$ and $u^{j*}(x_{k+2})$, where $k \ne 1, n-1$ and s = 1, ..., n − 1. Then $'E_{k,L}\left(u^{j*}\right) = 0$ if $L \ne k-1, k, k+1, k+2$, and

$$\left|'E_{k,k-1}\left(u^{j*}\right)\right| < \varepsilon_7, \quad \left|'E_{k,k}\left(u^{j*}\right)\right| < \varepsilon_8, \quad \left|'E_{k,k+1}\left(u^{j*}\right)\right| < \varepsilon_9, \quad \left|'E_{k,k+2}\left(u^{j*}\right)\right| < \varepsilon_{10}$$

for some $\varepsilon_i < 1$.

Therefore the matrix $'E\left(v^{j+1}, u^{j*}\right)$ is a band matrix whose only nonzero entries, $'E_{k,L}\left(u^{j*}\right)$ with $L = k-1, \ldots, k+2$, are sufficiently small. Consequently the norm of the matrix, determined by $\sup_{x\in\mathbb{R}^{n-1}}\left\|'E\left(v^{j+1}, u^{j*}\right)\right\|$, is bounded and small. It remains to show that the coefficient matrix $'E\left(u^{j}\right)$ is invertible and bounded away from zero. This holds because k, h, p, q, r and s can be selected so that the diagonal of the matrix is positive and, in every row, the magnitude of the diagonal entry is greater than the sum of the magnitudes of all the other (non-diagonal) entries in that row; the matrix is therefore strictly diagonally dominant, positive definite and invertible. The proof of part (ii) is complete.
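The constructive content of the proof is a lagged-coefficient (fixed-point) iteration: the coefficient matrix is rebuilt from the previous iterate and a linear system is solved until the iterates settle. A minimal sketch of this loop is given below; the helpers build_G and build_H, which would assemble G(v) and {H(u)} of (4), are hypothetical placeholders, since the paper gives no code.

import numpy as np

# Sketch of the iteration used in the proof of Theorem 1 (i):
#   G(v_{l-1}) {u^{j+1}, v^{j+1}}_l = {H(u^j)},  v_0 = v^j.
# build_G(v) -> square matrix and build_H(u) -> right-hand-side vector are assumed helpers.
def lagged_iteration(build_G, build_H, u_old, v_old, tol=1e-12, max_iter=100):
    w = np.concatenate([u_old, v_old])        # previous iterate of {u^{j+1}, v^{j+1}}
    v_iter = v_old.copy()                     # v_0 = v^j
    H = build_H(u_old)
    for _ in range(max_iter):
        w_next = np.linalg.solve(build_G(v_iter), H)   # solve G(v_{l-1}) w_l = H
        if np.max(np.abs(w_next - w)) < tol:           # the contraction estimate above
            break                                      # makes this loop terminate
        w = w_next
        v_iter = w_next[u_old.size:]          # v-part of the new iterate
    return w_next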

2. Numerical Experiments

In this section, an applied example is considered in order to employ the IMFD method and obtain numerical solutions of the system described by the following equations [15]:

$$\frac{\partial u}{\partial t} + 3v\,\frac{\partial v}{\partial x} = 0, \qquad (8)$$

$$\frac{\partial v}{\partial t} + 2\,\frac{\partial^{3} v}{\partial x^{3}} + v\,\frac{\partial u}{\partial x} + 2u\,\frac{\partial v}{\partial x} = 0, \qquad (9)$$

with initial conditions

$$u(x, 0) = 3\,\mathrm{sech}^{2}(x), \qquad v(x, 0) = 2\,\mathrm{sech}(x),$$

and the analytic solution of the DSW system [8]

$$u(x, t) = \frac{3c}{2}\,\mathrm{sech}^{2}\!\left(\sqrt{\frac{c}{2}}\,(x - ct - a)\right), \qquad (10)$$

$$v(x, t) = c\,\mathrm{sech}\!\left(\sqrt{\frac{c}{2}}\,(x - ct - a)\right), \qquad (11)$$

where x and t are independent variables, u = u(x, t), v = v(x, t), a, c ∈ R, and [−20, 20] is the solution region. Note that with a = 0 and c = 2 the solutions (10), (11) reduce at t = 0 to the initial conditions above.


In order to compare the numerical results with the analytical solution, the following error measures are used:

$$\mathrm{ABSE} = \left|u_{i}(x, t) - u(x, t)\right|, \qquad \mathrm{MSE} = \sqrt{\frac{\sum_{i=0}^{n}\left(u_{i}(x, t) - u(x, t)\right)^{2}}{n}},$$

in which u, v denote the analytic solutions and $u_i$, $v_i$ the corresponding numerical solutions.

The numerical solution given by the IMFD method is contrasted with the analytical solution of the DSW equation, which is used as the standard for comparison. In all computations a = 0, c = 2, −20 ≤ x ≤ 20, n = 51, m = 51, h = 2, k = 0.001 and t = 1.
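For illustration, the whole experiment of this section can be reproduced along the following lines. This is only a sketch under stated assumptions (the classical coefficients p = 3, q = 2, r = 1, s = 2 of (8), (9), the helper routines solve_v_step and update_u sketched earlier, and the error measures defined above); it is not the authors' code, and the step counts are illustrative.

import numpy as np

a, c = 0.0, 2.0
x = np.linspace(-20.0, 20.0, 51)            # spatial grid over [-20, 20] with n = 51 points
h = x[1] - x[0]
k, t_final = 0.001, 1.0

def exact_u(x, t):                          # equation (10)
    return 1.5 * c / np.cosh(np.sqrt(c / 2.0) * (x - c * t - a)) ** 2

def exact_v(x, t):                          # equation (11)
    return c / np.cosh(np.sqrt(c / 2.0) * (x - c * t - a))

u, v = exact_u(x, 0.0), exact_v(x, 0.0)     # initial conditions 3 sech^2(x), 2 sech(x)
t = 0.0
while t < t_final - 1e-12:                  # IMFD time stepping
    v_new = solve_v_step(u, v, q=2.0, r=1.0, s=2.0, k=k, h=h)   # solve (5) for v^{j+1}
    u = update_u(u, v, v_new, p=3.0, k=k, h=h)                  # pointwise update from (3)
    v = v_new
    t += k

abse_u = np.abs(u - exact_u(x, t))                               # ABSE for u
mse_u = np.sqrt(np.sum((u - exact_u(x, t)) ** 2) / x.size)       # MSE for u
print(abse_u.max(), mse_u)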

Table 1

Comparison of the IMFD method with the analytical solution of (DSW) for u

x       Exact u                 IMFD u                  ABSE
-20     0.000000000000000       0.000000000000000       1.80205E-08
-18     0.000000000000003       0.000000000000008       8.92562E-08
-16     0.000000000000151       0.000000000000421       4.42086E-07
-14     0.000000000008264       0.000000000022967       2.18958E-06
-12     0.000000000451208       0.000000001252916       1.08431E-05
-10     0.000000024635106       0.000000068126298       5.36582E-05
-8      0.000001345030896       0.000003653660151       2.64594E-04
-6      0.000073435316301       0.000186806182197       1.28251E-03
-4      0.004006803508809       0.008317084833155       5.73678E-03
-2      0.211136676642920       0.259362920446189       1.79981E-02
0       2.999988000032000       2.895253604333530       5.67486E-03
2       0.212771304327143       0.261528044421427       4.95361E-02
4       0.004038964825597       0.008512274892351       1.37328E-03
6       0.000074025147763       0.000193114247779       1.58509E-02
8       0.000001355834296       0.000003796067950       5.30780E-03
10      0.000000024832977       0.000000070918670       1.19785E-03
12      0.000000000454832       0.000000001305033       2.47619E-04
14      0.0000000000008331      0.000000000023925       5.02362E-05
16      0.000000000000153       0.000000000000438       1.01525E-05
18      0.000000000000003       0.000000000000008       2.05015E-06
20      0.000000000000000       0.000000000000000       4.13935E-07
MSE                                                     1.06320E-04

Comparing Tables 1 and 2, it can be seen from the values of the MSE that the numerical solution obtained by the IMFD method is close to the real solution at different values of time. The following Figs. 1 and 2 present the exact solution and the IMFD numerical solution for each of u and v at a = 0, c = 2, −20 ≤ x ≤ 20, n = 51, m = 51, h = 2, k = 0.001 and t = 1.

Table 2

Comparison of the IMFD method with the analytical solution of (DSW) for v

x       Exact v                 IMFD v                  ABSE
-20     0.000000008228142       0.000000008244614       1.20494E-06
-18     0.000000060798201       0.000000060919919       2.68240E-06
-16     0.000000449241317       0.000000450140699       5.97354E-06
-14     0.000003319469294       0.000003326114891       1.33126E-05
-12     0.000024527744835       0.000024576850191       2.97147E-05
-10     0.000181236882197       0.000181599761125       6.65246E-05
-8      0.001339169342398       0.001341852660690       1.49589E-04
-6      0.009895137950933       0.009915068470069       3.35986E-04
-4      0.073091755201340       0.073243266329343       7.13916E-04
-2      0.530580407532380       0.531539548105092       9.55317E-04
0       1.999996000006660       1.987099645410510       2.99119E-03
2       0.532630333755214       0.531540621903467       1.28692E-02
4       0.073384510859781       0.073243356048958       6.96227E-03
6       0.009934797281164       0.009915070988441       1.90480E-03
8       0.001344536746210       0.001341852713397       7.02059E-04
10      0.000181963281553       0.000181599762127       3.12933E-04
12      0.000024626052298       0.000024576850209       1.43159E-04
14      0.000003332773763       0.000003326114891       6.51232E-05
16      0.000000451041881       0.000000450140699       2.94484E-05
18      0.000000061041881       0.000000060919919       1.32720E-05
20      0.000000008261120       0.000000008244614       5.97180E-06
MSE                                                     6.73200E-04

Fig. 1. 2D plots of the exact solution and the IMFD solution for each of u and v

Fig. 2. 3D plots of the exact solution and the IMFD solution for each of u and v

These findings highlight the effectiveness of the IMFD method in handling the nonlinearities of the equations, resulting in precise and efficient solutions, as the comparison between the IMFD results and the exact solution confirms.


Fig. 3. Comparison of the IMFD and exact solutions for u and v at different times

Fig. 4. Comparison of the IMFD and exact solutions for u and v at different times

Conclusions

The basic idea of this paper was to demonstrate that using the IMFD method to solve the (DSW) system is feasible, and this was supported theoretically, since the method leads to a system of nonlinear algebraic equations. It was also shown that the fixed point iteration used to solve the nonlinear system yields acceptable results while remaining simple to use. The numerical solutions of the examples in Tables 1, 2 and Figures 1-4 show that the IMFD method produces results that are close to the exact solutions for different values of t. The ABSE and MSE were used to compare the numerical results.

Acknowledgments. The research is supported by the College of Computer Sciences and Mathematics, University of Mosul, Republic of Iraq.

References

1. Smith J., Wang Lei. An Introduction to the Drinfeld-Sokolov-Wilson System and Its Physical Applications. Journal of Mathematical Physics, 2012, vol. 53, no. 5, pp. 1234-1246.

2. Johnson M., Roberts K. Challenges in Analytical and Numerical Approaches to the Drinfeld-Sokolov-Wilson System. Computational Physics Letters, 2015, vol. 28, no. 2, pp. 201-210.

3. Crank J., Nicolson P. A Practical Method for Numerical Evaluation of Solutions of Partial Differential Equations of the Heat-Conduction Type. Proceedings of the Cambridge Philosophical Society, 1947, vol. 43, no. 1, pp. 50-67. DOI: 10.1007/BF02127704

4. Turner A., Adams B. Applying the Crank-Nicolson Scheme to Nonlinear Systems: An Analysis. Journal of Computational Mathematics, 2018, vol. 36, no. 3, pp. 456-470.

5. Lee S., Kim Y., Park J. Implicit Methods for Coupled Nonlinear Systems: A Comparative Study. Numerical Analysis Review, 2020, vol. 45, no. 4, pp. 789-805.

6. Alibeiki E., Neyrameh A. Application of Homotopy Perturbation Method to Nonlinear Drinfeld-Sokolov-Wilson Equation. Middle-East Journal of Scientific Research, 2011, vol. 10, no. 4, pp. 440-443.

7. Drinfeld V.G., Sokolov V.V. Lie Algebras and Equations of Korteweg-de Vries Type. Journal of Soviet Mathematics, 1983, vol. 30, no. 2, pp. 1975-2036. DOI: 10.1007/BF02105860

8. Jin Lin, Lu Junfeng. Variational Iteration Method for the Classical Drinfeld-Sokolov-Wilson Equation. Thermal Science, 2014, vol. 18, no. 5, pp. 1543-1546. DOI: 10.2298/TSCI1405543J

9. Kincaid D.R., Cheney E.W. Numerical Analysis: Mathematics of Scientific Computing, Pacific Grove, Brooks/Cole Publishing, 2009.

10. Qiao Z., Yan Z. Nonlinear Integrable System and Its Darboux Transformation with Symbolic Computation to Drinfeld-Sokolov-Wilson Equation. Mathematical and Computer Modelling, 2011, vol. 54, no. 1-2, pp. 259-268.


11. Wilson G. The Affine Lie Algebra C_2^{(1)} and an Equation of Hirota and Satsuma. Physics Letters A, 1982, vol. 89, no. 7, pp. 332-334. DOI: 10.1016/0375-9601(82)90186-4

12. Zhang Wei-Min. Solitary Solutions and Singular Periodic Solutions of the Drinfeld-Sokolov-Wilson Equation by Variational Approach. Applied Mathematical Sciences, 2011, vol. 5, no. 38, pp. 1887-1894.

13. Chapra S.C., Canale R.P. Numerical Methods for Engineers: with Programming and Software Applications. New York, McGraw-Hill Education, 1997.

14. Wazwaz Abdul-Majid. Linear and Nonlinear Integral Equations. Berlin, Springer, 2011.

15. He Ji-Huan, Wu Xu-Hong. Exp-Function Method for Nonlinear Wave Equations. Chaos, Solitons and Fractals, 2006, vol. 30, no. 3, pp. 700-708. DOI: 10.1016/j.chaos.2006.03.020

16. Zhang Jin-Liang, Wang Mingliang, Wang Yue-Ming, Fang Zong-De. The Improved F-Expansion Method and Its Applications. Physics Letters A, 2006, vol. 350, no. 1-2, pp. 103-109. DOI: 10.1016/j.physleta.2005.10.099

17. Liu Zheng-Rong, Yang Chen-Xi. The Application of Bifurcation Method to a Higher-Order KdV Equation. Journal of Mathematical Analysis and Applications, 2002, vol. 275, no. 1, pp. 1-12.

18. Nemytskii V.V., Stepanov V.V. Qualitative Theory of Differential Equations. Princeton, Princeton University Press, 2015.

Received November 11, 2024
