
UDC 519.622

E.A. Novikov

Institute of Computational Modeling SB RAS, 50 Akademgorodok, 660036 Krasnoyarsk, Russia

A Second-Order Method for Additive Stiff Problems


A second-order accuracy method for additive stiff systems of ordinary differential equations is developed. Inequalities for accuracy control are obtained. Numerical results are presented. STIFF ADDITIVE PROBLEM, (M,K)-METHOD, ERROR ESTIMATION.


I. Introduction

For the numerical solution of the Cauchy problem

y' = f(t, y),  y(t0) = y0,  t0 ≤ t ≤ tk,   (1)

for a stiff system of ordinary differential equations, L-stable methods are usually applied. Here y and f are real vector functions of dimension N, and t is the independent variable.

For a large-scale problem (1), the total computational cost of a method with an unbounded stability domain is in fact determined almost entirely by the time spent computing and decomposing the Jacobi matrix of the system (1). In a number of algorithms, the Jacobi matrix is frozen, i.e. one and the same matrix is used over several integration steps, which decreases the computational cost considerably. This approach is widely used when implementing semi-implicit and implicit methods of Runge-Kutta type and multistep methods of Adams and Gear type (see, for example, Ref. [9]). However, for iteration-free methods [2, 5, 8] the problem of freezing, or otherwise approximating, the Jacobi matrix is considerably more complicated. On the other hand, the problem

(1) can be written in the form [1, 6]:

y' = [f(t, y) − By] + By,  y(t0) = y0,  t0 ≤ t ≤ tk,   (2)

where B is an approximation of the Jacobi matrix.

Assuming that the term By is completely responsible for the stiffness, the expression in brackets can be considered as a non-stiff part. Taking this into account when constructing an iteration-free method makes it possible, in particular, to freeze the Jacobi matrix, which may be computed either analytically or numerically, within the integration algorithm. For some problems, the symmetric part of the Jacobi matrix or its diagonal approximation can be taken as the matrix B. Here we propose a second-order accuracy method which admits different types of approximation of the Jacobi matrix. An error estimate and an inequality for the calculation accuracy control are obtained. Numerical results are presented.

II. A Numerical Scheme for Autonomous Problems

We consider the Cauchy problem for an autonomous system of the form

y' = φ(y) + g(y),  y(t0) = y0,  t0 ≤ t ≤ tk,   (3)

where y, φ, and g are real vector functions of dimension N, and t is an independent variable.

We assume that the function g(y) is completely responsible for the stiffness and φ(y) is a non-stiff part. For (3), we consider a method of the form

yn+1 = yn + p1k1 + p2k2 + p3k3 + p4k4,  Dn = E − ahg'n,
k1 = hφ(yn),  Dnk2 = h[φ(yn) + g(yn)],   (4)
Dnk3 = k2,  k4 = hφ(yn + p41k1 + p42k2 + p43k3),

p1 + p4 = 0,  (p42 + p43)p4 = 0.5,   (5)

where E is the identity matrix; g'n = ∂g(yn)/∂y; ki, 1 ≤ i ≤ 4, are the stages of the method; a, pi and p4j are numerical coefficients which define the accuracy and stability properties of (4).

To study the scheme (4), we substitute the Taylor expansions of the stages ki, 1 ≤ i ≤ 4, into the first formula of (4). This gives

yn+1 = yn + (p1 + p2 + p3 + p4)hφn + (p2 + p3)hgn + (p41 + p42 + p43)p4h²φ'nφn + (p42 + p43)p4h²φ'ngn + a(p2 + 2p3)h²g'nφn + a(p2 + 2p3)h²g'ngn + O(h³),

where the elementary differentials φn, gn, φ'nφn, φ'ngn, g'nφn and g'ngn are calculated at the approximate solution yn.

The Taylor expansion of the exact solution y(tn+1) is of the form

y(tn+1) = y(tn) + h(φ + g) + 0.5h²(φ'φ + φ'g + g'φ + g'g) + O(h³),

where the elementary differentials φ, g, φ'φ, φ'g, g'φ and g'g are calculated at the exact solution y(tn). Comparing the above expansions under the condition yn = y(tn), we arrive at the second-order conditions for the scheme (4), i.e.

p1 + p2 + p3 + p4 = p2 + p3 = 1,
a(p2 + 2p3) = 0.5,
(p41 + p42 + p43)p4 = 0.5,  (p42 + p43)p4 = 0.5.

This results in

p41 = 0,  p2 = (4a − 1)/(2a),  p3 = (1 − 2a)/(2a).

Now we study the stability of the scheme (4). In this case we cannot use the test equation y' = λy with a complex value λ, Re(λ) < 0, since there is no sense in splitting its right-hand side into stiff and non-stiff parts. Hence, in (3) we put φ(y) = λ1y and g(y) = λ2y, where λ1 and λ2 are arbitrary complex numbers. Here λ1 and λ2 stand for eigenvalues of the Jacobi matrices of the functions φ(y) and g(y), respectively. Applying (4) to the scalar test problem

y' = λ1y + λ2y,  y(0) = y0,  t ≥ 0,   (6)

with the notations x = hλ1 and z = hλ2, we have yn+1 = Q(x, z)yn, where

Q(x, z) = {1 + (1 − 2a)z + x + [−2ap1 − ap2 + (p42 + p43 − 2a)p4]xz + 0.5x² − ap42p4x²z + [a²p1 + a²p4 − ap42p4]xz² + (a² − ap2)z²} / (1 − az)².

The necessary L-stability condition for the numerical formula (4) with respect to the function g(y) = λ2y is the relation Q(x, z) → 0 as z → −∞. From the form of Q(x, z), it follows that this relation holds provided that p2 = a and p42 = 0. As a result, taking into account (5), we obtain the set of coefficients

p41 = p42 = 0,  p2 = a,  p3 = 1 − a,  p4 = −p1 = 0.5/p43

for the second-order accuracy scheme (4), where p43 is a free parameter and a is a root of the equation a² − 2a + 0.5 = 0. Then the stability function Q(x, z) of the scheme (4) takes the form

Q(x, z) = [1 + x + 0.5x² + (1 − 2a)z + (1 − 2a)xz] / (1 − az)².

For φ(y) = 0, the scheme (4) coincides with the L-stable (2,1)-method

yn+1 = yn + ak2 + (1 − a)k3 [7]

with the stability function Q(0, z) of the form

Q(0, z) = [1 + (1 − 2a)z] / (1 − az)²

and the local error δn,L of the form

δn,L = (a − 1/3)h³g'²g + h³g''g²/6 + O(h⁴).

The equation

a² − 2a + 0.5 = 0

has two roots, a1 = 1 − 0.5√2 and a2 = 1 + 0.5√2. We take a = a1 since in this case the coefficient of the principal term of the local error of the (2,1)-scheme is smaller. For g(y) = 0, (4) degenerates into an explicit method of Runge-Kutta type of the form

yn+1 = yn + (1 − 0.5/p43)k1 + (0.5/p43)k4.

Notice that the local error δn,RK of this scheme can be written in the form

δn,RK = h³[φ'²φ/6 + (1/6 − 0.25p43)φ''φ²] + O(h⁴).

Hence, the local error of the explicit formula is minimal provided that p43 = 2/3. Finally, we have the coefficients of the second-order accuracy scheme (4), i.e.

a = 1 − √2/2,  p41 = p42 = 0,  p43 = 2/3,
p4 = −p1 = 3/4,  p2 = a,  p3 = 1 − a.
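A minimal sketch of one integration step of the scheme (4) with these coefficients is given below (Python with NumPy; the callables phi, g and g_jac for φ(y), g(y) and the approximation of the Jacobi matrix used in Dn are assumed to be supplied by the user, and the function name step_scheme4 is illustrative, not from the paper):

```python
import numpy as np

A = 1.0 - np.sqrt(2.0) / 2.0               # a = 1 - sqrt(2)/2
P1, P2, P3, P4 = -0.75, A, 1.0 - A, 0.75   # p4 = -p1 = 3/4, p2 = a, p3 = 1 - a
P43 = 2.0 / 3.0                            # p41 = p42 = 0, p43 = 2/3

def step_scheme4(y, h, phi, g, g_jac):
    """One step of scheme (4) for the autonomous problem y' = phi(y) + g(y)."""
    Dn = np.eye(y.size) - A * h * g_jac(y)           # D_n = E - a*h*g'_n
    k1 = h * phi(y)                                  # k1 = h*phi(y_n)
    k2 = np.linalg.solve(Dn, h * (phi(y) + g(y)))    # D_n*k2 = h*[phi(y_n) + g(y_n)]
    k3 = np.linalg.solve(Dn, k2)                     # D_n*k3 = k2
    k4 = h * phi(y + P43 * k3)                       # k4 = h*phi(y_n + p43*k3)
    return y + P1 * k1 + P2 * k2 + P3 * k3 + P4 * k4
```

In an actual implementation, the matrix Dn would be factorized once per step (or kept frozen over several steps, as discussed in Section V) and the factorization reused for both linear solves.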

III. Calculation Accuracy Control

Calculation control for the scheme (4) is performed with a first-order accuracy method. With the help of the stages of (4), we can construct a family of first-order numerical formulae of the form

yn+1,1 = yn + b1k1 + b2k2 + b3k3 + b4k4 + b5k5,   (7)

where k5 = hg(yn) and bi are numerical coefficients. Using the Taylor expansions of the stages, we see that (7) is of first-order accuracy provided that

b1 + b2 + b3 + b4 = 1

and

b2 + b3 + b5 = 1.

Then the error estimate εn for the scheme (4) can be calculated by the formula

εn = yn+1 − yn+1,1.

When choosing the coefficients bi, one can be guided by different considerations. For instance, if the function g(y) is completely responsible for the stiffness of the problem (3), which is the case for many problems (2) with B = ∂f(y)/∂y, then it makes sense to take the set of coefficients

b1 + b3 + b4 + b5 = 0 and b2 = 1

or

b1 + b2 + b4 + b5 = 0 and b3 = 1.

This technique of error estimation works well when implementing the (2,1)-method with analytical calculation of the Jacobi matrix. However, if, for instance, a diagonal approximation of the Jacobi matrix is used in the problem (2), then for many problems (3) we may not consider the function φ(y) as a non-stiff part. In this case, such an estimate may result in a loss of calculation accuracy due to instability of the explicit part of the numerical formula (4). For this reason, the coefficients b1 = b5 = 1 and b2 = b3 = b4 = 0 are taken in (7). In this case, (7) takes the form

yn+1,1 = yn + h[φ(yn) + g(yn)].

Numerical results show that using this scheme in the error estimate gives more reliable control of the calculation accuracy.

We point out an important feature of the proposed error estimate. From the L-stability of the scheme (4), it follows that for the stability function Q(x, z) we have Q(x, z) → 0 as z → −∞. For the exact solution

y(tn+1) = exp(x + z)y(tn)

of the problem (6) a similar property holds; hence, it is natural to require that the error estimate εn approach zero as z → −∞. However, for the proposed estimate we have εn = O(z). Thus, to improve the asymptotic behaviour, instead of εn we consider the estimates εn(jn) of the form

εn(jn) = Dn^(1−jn) εn,  1 ≤ jn ≤ 3.

Observe that, in the sense of the principal term, i.e. the first term of the Taylor expansion of the error in powers of h, the estimates εn and εn(jn) coincide for any value of jn; besides, εn(3) → 0 as z → −∞. Now, for calculation accuracy control we can use the inequality ||εn(jn)|| ≤ ε, 1 ≤ jn ≤ 3, where ε is the required accuracy of the calculations. Notice that the use of εn(jn) instead of εn does not result in a considerable increase of the computational cost. For z → 0, the estimate εn(1) = εn is in agreement with the behaviour of the error, and there is no need to check the inequality for other values of jn. With a drastic increase of the step, the behaviour of εn may become inadequate, which manifests itself in an unreasonable decrease of the step and repeated recalculations of the solution. Thus, when implementing the integration algorithm, the inequality for accuracy control is used as follows. For each n, the smallest value of jn for which the inequality holds is taken. If it does not hold for any value of jn, then the step is decreased and the solution is recalculated.
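The following sketch illustrates this step-acceptance rule (the names accept_step and weighted_norm are illustrative; weighted_norm is assumed to be a one-argument callable implementing the norm defined in Section V, with the solution y and the parameter r already bound):

```python
import numpy as np

def accept_step(Dn, eps_n, weighted_norm, tol):
    """Return True if ||eps_n(j)|| <= tol for some j in {1, 2, 3}."""
    e = eps_n.copy()                        # j = 1: eps_n(1) = eps_n
    for j in (1, 2, 3):
        if weighted_norm(e) <= tol:
            return True                     # smallest j providing the inequality
        if j < 3:
            e = np.linalg.solve(Dn, e)      # eps_n(j+1) = D_n^{-1} * eps_n(j)
    return False                            # decrease the step and recompute
```

If the test fails for all three values of jn, the step is decreased and the solution is recalculated, exactly as described above.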

IV. A Numerical Scheme for Non-Autonomous Problems

We consider the Cauchy problem for a non-autonomous system of the form

y' = φ(t, y) + g(t, y),  y(t0) = y0,  t0 ≤ t ≤ tk,

where y, φ, and g are real vector functions of dimension N and t is an independent variable.

Further, we again assume that the function g(t, y) is completely responsible for the stiffness, and φ(t, y) is the non-stiff part. For the numerical solution of a non-autonomous problem we consider a method of the form

yn+1 = yn + p1k1 + ... + p4k4,  Dn = E − ahg'n,
k1 = hφ(tn, yn),  Dnk2 = h[φ(tn, yn) + g(tn + ch, yn)],   (8)
Dnk3 = k2,  k4 = hφ(tn + [p41 + p42 + p43]h, yn + p41k1 + p42k2 + p43k3),

where E is the identity matrix; g'n = ∂g(tn, yn)/∂y; ki are the stages of the method; c, a, pi, p4j are numerical coefficients.

To study the scheme (8), we use the Taylor expansions of the stages ki in powers of h up to terms of order h². Substituting these series into the first formula of (8), we get

yn+1 = yn + (p1 + p2 + p3 + p4)hφn + (p2 + p3)hgn + (p41 + p42 + p43)p4h²φ'tn + c(p2 + p3)h²g'tn + (p41 + p42 + p43)p4h²φ'ynφn + (p42 + p43)p4h²φ'yngn + a(p2 + 2p3)h²g'ynφn + a(p2 + 2p3)h²g'yngn + O(h³),

where the elementary differentials are calculated at an approximate solution.

Taylor expansion of the exact solution y(tn+1) in the neighbourhood of the point tn has the form

y(tn+1) = y(tn) + h(φ + g) + 0.5h²(φ't + g't + φ'yφ + φ'yg + g'yφ + g'yg) + O(h³),

where the elementary differentials are calculated at the exact solution. Comparing the series under the condition yn = y(tn), we arrive at the second-order accuracy conditions for the scheme (8), i.e.

p1 + p2 + p3 + p4 = p2 + p3 = 1,
(p41 + p42 + p43)p4 = 0.5,  c(p2 + p3) = 0.5,
(p42 + p43)p4 = 0.5,  a(p2 + 2p3) = 0.5.

This yields c = 0.5. With considerations similar to those for the scheme (4), we obtain the coefficients for the numerical formula (8) of the form

a = 1 − √2/2,  p41 = p42 = 0,  c = 0.5,  p43 = 2/3,
p4 = −p1 = 3/4,  p2 = a,  p3 = 1 − a.
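The non-autonomous stages differ from the autonomous ones only in the time arguments; a sketch analogous to the one in Section II (same assumptions, with the user-supplied callables now of the form phi(t, y), g(t, y), g_jac(t, y)) might look as follows:

```python
import numpy as np

A, C, P43 = 1.0 - np.sqrt(2.0) / 2.0, 0.5, 2.0 / 3.0
P1, P2, P3, P4 = -0.75, A, 1.0 - A, 0.75

def step_scheme8(t, y, h, phi, g, g_jac):
    """One step of scheme (8) for y' = phi(t, y) + g(t, y)."""
    Dn = np.eye(y.size) - A * h * g_jac(t, y)
    k1 = h * phi(t, y)
    k2 = np.linalg.solve(Dn, h * (phi(t, y) + g(t + C * h, y)))   # c = 0.5
    k3 = np.linalg.solve(Dn, k2)
    k4 = h * phi(t + P43 * h, y + P43 * k3)    # p41 = p42 = 0, so p41+p42+p43 = p43
    return y + P1 * k1 + P2 * k2 + P3 * k3 + P4 * k4
```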

An inequality for calculation accuracy control is constructed similarly to that for the scheme (4): in the estimate εn, the approximation to the solution obtained with the second-order accuracy method (8) is compared with an approximate solution calculated by the first-order method

yn+1,1 = yn + h[φ(tn, yn) + g(tn + 0.5h, yn)].

The choice of the step with respect to the accuracy is performed in the same way as in the case of an autonomous system.

V. The Analysis of Numerical Results

In what follows, the proposed algorithm is called ASODE2. Freezing the Jacobi matrix, i.e. the use of the matrix

Dn = E − ahg'n

over several integration steps, is performed by the following rule. If the Jacobi matrix is not recalculated, then the integration step remains the same in order to preserve the stability of the numerical scheme. An attempt to reuse the old matrix is made after each successful integration step. The following three reasons result in unfreezing:

1) violation of an inequality for calculation accuracy control;

2) the number of steps with a frozen matrix exceeds ih;

3) the forecasted integration step exceeds the last successful step by the factor of qh.

The parameters ih and qh can be used to adjust the method to a specific problem. If ih → ∞ and qh → ∞, then the number of integration steps with one and the same Jacobi matrix increases. If ih = 0 and qh = 0, then the matrix is not frozen. Hence, for a large-scale system of ordinary differential equations, it makes sense to take ih and qh sufficiently large. In the numerical results presented below, ih = 20 and qh = 2.
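A sketch of this unfreezing test (the variable names are illustrative, not taken from the ASODE2 code):

```python
def must_refresh_jacobian(accuracy_ok, steps_frozen, h_new, h_last, ih=20, qh=2.0):
    """Decide whether the frozen Jacobi matrix has to be recomputed."""
    if not accuracy_ok:          # 1) accuracy-control inequality violated
        return True
    if steps_frozen > ih:        # 2) too many steps with the frozen matrix
        return True
    if h_new > qh * h_last:      # 3) forecasted step exceeds the last one by more than qh
        return True
    return False
```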

All the examples below are rearranged to the form (2). The required calculation accuracy is ε = 10⁻². The calculations were performed on a PC with an Intel(R) Core i7-3770S processor in double precision. The scheme (4) is of second-order accuracy; hence, there is no sense in requiring higher accuracy in this case. The norm ||Z|| in the inequalities for accuracy control is calculated by the formula

||Z|| = max_{1≤i≤N} |Zi| / (|yi| + r),

where i is the component number and r is a positive parameter. If the inequality |yi| < r holds for the i-th component of the solution, then the absolute error εr is controlled; otherwise, the relative error ε is controlled. In the calculations, the parameter r is taken so that the actual accuracy of all components of the solution is not lower than the required one. Below, is, if, idec and isol denote the total numbers of integration steps, computations of the right-hand side of the system (1), decompositions of the Jacobi matrix, and backward substitutions, respectively.
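A one-line sketch of this norm; with y and r bound (e.g. via functools.partial or a lambda) it can also serve as the one-argument weighted_norm callable assumed in the accuracy-control sketch of Section III:

```python
import numpy as np

def weighted_norm(Z, y, r):
    """max_i |Z_i| / (|y_i| + r): absolute error for |y_i| < r, relative otherwise."""
    return np.max(np.abs(Z) / (np.abs(y) + r))
```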

A. Example 1 [4]

y1' = −0.013y1 − 1000y1y3,  y2' = −2500y2y3,
y3' = −0.013y1 − 1000y1y3 − 2500y2y3,   (9)

t ∈ [0, 50],  y1(0) = 1,  y2(0) = 1,  y3(0) = 0,  h0 = 2.9·10⁻⁴.

The problem (9) is solved by the method (4) with a diagonal approximation of the Jacobi matrix, i.e. in the numerical formula (4) the diagonal matrix with the diagonal entries bii of the form

b11 = −0.013 − 1000y3;  b22 = −2500y3;  b33 = −1000y1 − 2500y2

is used.
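For illustration, the rearrangement of problem (9) to the additive form with this diagonal matrix B can be sketched as follows (g(y) = By is the stiff part, φ(y) = f(y) − By the non-stiff one; the helper names are not from the paper):

```python
import numpy as np

def f(y):
    y1, y2, y3 = y
    return np.array([-0.013 * y1 - 1000.0 * y1 * y3,
                     -2500.0 * y2 * y3,
                     -0.013 * y1 - 1000.0 * y1 * y3 - 2500.0 * y2 * y3])

def B_diag(y):                        # diagonal entries b11, b22, b33 given above
    y1, y2, y3 = y
    return np.array([-0.013 - 1000.0 * y3,
                     -2500.0 * y3,
                     -1000.0 * y1 - 2500.0 * y2])

g = lambda y: B_diag(y) * y           # stiff part g(y) = B*y
phi = lambda y: f(y) - g(y)           # non-stiff part phi(y) = f(y) - B*y
g_jac = lambda y: np.diag(B_diag(y))  # matrix B used in D_n

# Hypothetical usage with the step sketch from Section II:
# y_next = step_scheme4(np.array([1.0, 1.0, 0.0]), 2.9e-4, phi, g, g_jac)
```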

Since in this case the computational cost per step of the method (4) is close to that of explicit methods, this method is compared in terms of efficiency with the well-known fourth-order Merson method [3]. Calculating an approximate solution with accuracy ε = 10⁻² by the ASODE2 algorithm requires 687 steps; the remaining costs follow from the form of the scheme (4). The solution of the problem by the Merson method requires 400,627 calculations of the right-hand side. In the case of the full Jacobi matrix of the system (9), the ASODE2 algorithm without freezing the Jacobi matrix requires 38 steps, 38 decompositions of the matrix, and 108 backward substitutions; the remaining costs follow from the form of the scheme (4). In the calculations with freezing the Jacobi matrix, the computational cost is as follows:

is = if = 98,  idec = 15,  isol = 288.

B. Example 2 [4]

y1' = −55y1 + 65y2 − y1y3,  y2' = 0.0785(y1 − y2),
y3' = 0.1y1,   (10)

t ∈ [0, 500],  y1(0) = y2(0) = 1,  y3(0) = 0,  h0 = 2·10⁻².

The problem (10) is solved by the method (4) with the diagonal approximation of the Jacobi matrix, where

b11 = −55 − y3;  b22 = −0.0785;  b33 = 0.

An approximate solution with accuracy ε = 10⁻² is calculated by the ASODE2 algorithm in 4,953 steps. The solution of the problem by the Merson method requires 80,713 calculations of the right-hand side. In the case of the full Jacobi matrix of the system (10), the ASODE2 algorithm without freezing the Jacobi matrix requires 81 steps, 81 decompositions of the Jacobi matrix, and 388 backward substitutions. In the calculations with freezing the matrix, the computational cost is as follows:

is = if = 338,  idec = 24,  isol = 1,124.

C. Example 3 [8]

y1' = 77.27[y1(1 − 8.375·10⁻⁶y1 − y2) + y2],
y2' = [y3 − (1 + y1)y2] / 77.27,   (11)
y3' = 0.161(y1 − y3),

t ∈ [0, 360],  y1(0) = 1,  y2(0) = 2,  y3(0) = 3,  h0 = 10⁻⁶.

The problem (11) is solved by the method (4) with the diagonal approximation of the Jacobi matrix, where

b11 = 77.27(1 − 1.675·10⁻⁵y1 − y2);  b22 = −(1 + y1)/77.27;  b33 = −0.161.

An approximate solution with accuracy ε = 10⁻² is calculated by the ASODE2 algorithm in 19,964 steps. Solving the problem by the Merson method requires 23,700,664 calculations of the right-hand side. In the case of the full Jacobi matrix of the system (11), the ASODE2 algorithm without freezing the Jacobi matrix requires 2,449 steps, 2,652 decompositions of the Jacobi matrix, and 6,964 backward substitutions. The computational cost of the calculations with freezing the matrix is as follows:

is = if = 19,807,  idec = 3,431,  isol = 50,924.

VI. Conclusion

The proposed integration algorithm is intended for the numerical solution of problems of continuum mechanics after space discretization by the finite element or finite difference method. In this case, the splitting of the problem (3) into the functions g(y) and φ(y) is natural: g(y) is a symmetric part related to the second-order differentiation operator, and φ(y) is a nonsymmetric part (convective terms) related to the first-order differentiation operator. The implementation of the numerical formula (4) requires solving a linear system of algebraic equations twice. In problems of continuum mechanics, the efficiency of the integration algorithm can be improved by means of special methods for linear systems with a symmetric matrix, which in many cases is positive definite.
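As an illustration of this splitting (not taken from the paper), a 1-D convection-diffusion semi-discretization u_t = ν·u_xx − c·u_x on a uniform grid with homogeneous boundary conditions can be split so that the symmetric diffusion matrix goes into g(y) and the nonsymmetric convective part into φ(y); the grid size and coefficients below are arbitrary:

```python
import numpy as np

N, L, nu, c = 100, 1.0, 1.0e-2, 1.0
hx = L / (N + 1)

off = np.ones(N - 1)
A_diff = nu / hx**2 * (-2.0 * np.eye(N) + np.diag(off, 1) + np.diag(off, -1))  # symmetric
A_conv = -c / (2.0 * hx) * (np.diag(off, 1) - np.diag(off, -1))                # nonsymmetric

g = lambda y: A_diff @ y         # stiff, symmetric part
phi = lambda y: A_conv @ y       # non-stiff, convective part
g_jac = lambda y: A_diff         # constant Jacobi matrix: a natural candidate for freezing
```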

The scheme (4) can also be applied to locally unstable problems. In this case φ(y) is responsible for the eigenvalues of the Jacobi matrix with a positive real part. Contrary to A-stable or L-stable methods, which usually have only a small instability domain and are stable not only in the left half-plane but in the right half-plane of the plane {hλ} as well, explicit methods of Runge-Kutta type are unstable practically in the whole right half-plane; hence, they are preferable for determining an unstable solution. For locally unstable problems, splitting the right-hand side of a system of ordinary differential equations into the functions φ(y) and g(y) from physical considerations usually does not involve difficulties.

The presented numerical results are not oriented to problems of continuum mechanics or to locally unstable problems; they serve to study the capabilities of the integration algorithm on some conventional test examples.

The test examples are taken in order to demonstrate different features of the integration algorithm. In the case of similar behavior of several test problems, the simplest example is taken.

The purpose of the calculations is to verify the working efficiency of the algorithm with a variable step and a frozen Jacobi matrix, to assess the reliability of the inequality for computation accuracy control, and to study the possibility of calculations with a diagonal approximation of the Jacobi matrix. In the last case, the computational cost per step of the proposed algorithm is close to that of explicit methods. In particular, the analysis of the numerical results for stiff problems shows that, in cases where methods with an unbounded stability domain cannot be applied, the algorithm (4) is considerably more efficient than the Merson method, the most popular among explicit numerical schemes of Runge-Kutta type.

The work is supported by the Russian Foundation for Basic Research (projects 11-01-00106 and 11-01-00224).

References

1. Cooper G.J., Sayfy A. Additive Runge-Kutta methods for stiff ordinary differential equations. Mathematics of Computation, 1983, Vol. 40, No. 161, pp. 207-218.

2. Kaps P., Rentrop P. Generalized Runge -Kutta methods of order four with step size control for stiff ordinary differential equations. Numerical Math., 1979. No. 33, pp. 55-68.

3. Merson R.H. An operational method for integration processes. Proc. Symp. on Data Proc. Weapons Research Establishment, Salisbury, Australia, 1957, pp. 329-330.

4. Novikov E.A. Explicit methods for stiff systems. Novosibirsk: Nauka, 1997. (rus)

5. Novikov E.A., Shitov Yu.A., Shokin Yu.I. One-step iteration-free methods for solving stiff systems. Soviet Math. Dokl., 1989, Vol. 38, No. 1, pp. 212-216. (rus)

6. Novikov E.A. An additive third-order method for solving stiff nonautonomous problems. Journal of Applied and Industrial Mathematics, 2010, Vol. 4, No. 4, pp. 1-12.

7. Novikov E.A., Shornikov Yu.V. Computer simulation of stiff hybrid systems. Novosibirsk: Publishing house NGTU, 2012, 450 p.

8. Rosenbrock H.H. General implicit processes for the numerical solution of differential equations. Computer J. 1963, No. 5, pp. 329-330.

9. Hairer E., Wanner G. Solving ordinary differential equations. Stiff and differential-algebraic problems. Berlin: Springer-Verlag, 1991. 528 p.


NOVIKOV Evgenii A., Dr. Sci. (Phys.-Math.), Professor, Chief Researcher of the Department of Computational Mathematics, Institute of Computational Modeling, Siberian Branch of the Russian Academy of Sciences.

660036, Russia, Krasnoyarsk, Akademgorodok 50; [email protected]

© St. Petersburg State Polytechnical University, 2013
