Computer tools in education, 2021 № 3: 29-40 http://cte.eltech.ru
doi:10.32603/2071-2340-2021-3-29-40
NEW FEATURES IN MATHPARTNER 2021
Malaschonok G. I.¹, D.Sc., [email protected], orcid.org/0000-0002-9698-6374
Seliverstov A. V.², PhD, [email protected], orcid.org/0000-0003-4746-6396

¹National University of Kyiv-Mohyla Academy, Skovorody vul. 2, Kyiv 04070, Ukraine
²Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), 19 Bolshoy Karetny pereulok, bld. 1, 127051, Moscow, Russia

Abstract
We introduce new features in the MathPartner service that have recently become available to users. We highlight the functions for calculating both the arithmetic-geometric mean and the geometric-harmonic mean. They allow calculating complete elliptic integrals of the first kind. They are useful for solving many physics problems; for example, one can calculate the period of a simple pendulum. Next, one can calculate the modified arithmetic-geometric mean proposed by Semjon Adlaj. Consequently, one can calculate the complete elliptic integrals of the second kind as well as the circumference of an ellipse. Furthermore, one can also calculate the Sylvester matrices of the first and the second kind. Thus, with a few lines, one can calculate the resultant of two polynomials as well as the discriminant of a binary form. Some new matrix functions are also added. So, today the list of matrix functions includes the transpose, adjugate, conjugate, inverse, generalized inverse, and pseudo inverse of a matrix, the matrix determinant, the kernel, the echelon form, the characteristic polynomial, the Bruhat decomposition, the triangular LDU decomposition, which is an exact block recursive LU decomposition, the QR block recursive decomposition, and the singular value decomposition. In addition, two block-recursive functions have been implemented for calculating the Cholesky decomposition of symmetric positive-definite matrices: one function for sparse matrices with the standard multiplication algorithm and another function for dense matrices with multiplication according to the Winograd-Strassen algorithm. Linear programming problems can be solved too. Thus, the MathPartner service has become better and more convenient. It is freely available at http://mathpar.ukma.edu.ua/ as well as at http://mathpar.com/.
Keywords: computer algebra, arithmetic-geometric mean, geometric-harmonic mean, complete elliptic integral, pendulum, Sylvester matrix, Bruhat decomposition, LDU decomposition, QR decomposition, Cholesky decomposition, modern teaching technologies.
Citation: G. I. Malaschonok and A. V. Seliverstov, "New features in MathPartner 2021," Computer tools in education, no. 3, pp. 29-40, 2021; doi: 10.32603/2071-2340-2021-3-29-40
1. INTRODUCTION

The MathPartner service is useful at school, at university, and at work [15, 16]. It can help you solve problems in mathematical analysis, algebra, geometry, physics, and more. You can operate with functions and functional matrices and obtain exact numerical and analytical solutions, as well as solutions in which the numerical coefficients have a required accuracy. Today it is available at http://mathpar.ukma.edu.ua/ as well as at http://mathpar.com/.
We present some new features and improvements. In particular, you can calculate the arithmetic-geometric mean and its modification, and thereby the complete elliptic integrals of the first and the second kind [1, 2]. Thus, you can calculate the circumference of an ellipse as well as the period of a pendulum. Elliptic integrals also arise in computing packing properties [13, 20, 23]. One can also calculate the Sylvester matrix [3, 4] as well as the resultant of two polynomials. Some new matrix functions have been implemented too [8, 14, 17, 18].
2. SIX MEANS AND THE COMPLETE ELLIPTIC INTEGRALS
Given two non-negative numbers x and y, one can define their arithmetic, geometric, and harmonic means as (x + y)/2, √(xy), and 2xy/(x + y), respectively. Moreover, AGM(x, y) denotes the arithmetic-geometric mean of x and y. It was defined by Johann Carl Friedrich Gauss at the end of the 18th century. GHM(x, y) denotes the geometric-harmonic mean of x and y. At last, MAGM(x, y) denotes the modified arithmetic-geometric mean of x and y. It is defined by Semjon Adlaj [1, 2]. Every mean is a symmetric homogeneous function of its two variables x and y. In contrast to the well-known means, AGM(x, y), GHM(x, y), and MAGM(x, y) are calculated iteratively.
The arithmetic-geometric mean AGM(x, y) is equal to the limit of both sequences xₙ and yₙ, where x₀ = x, y₀ = y, xₙ₊₁ = (xₙ + yₙ)/2, and yₙ₊₁ = √(xₙyₙ).
In the same way, the geometric-harmonic mean GHM(x, y) is equal to the limit of both sequences xₙ and yₙ, where x₀ = x, y₀ = y, xₙ₊₁ = √(xₙyₙ), and yₙ₊₁ = 2xₙyₙ/(xₙ + yₙ). Note that AGM(x, y)·GHM(x, y) = xy.
The modified arithmetic-geometric mean MAGM(x, y) is equal to the limit of the sequence xₙ, where x₀ = x, y₀ = y, z₀ = 0, xₙ₊₁ = (xₙ + yₙ)/2, yₙ₊₁ = zₙ + √((xₙ - zₙ)(yₙ - zₙ)), and zₙ₊₁ = zₙ - √((xₙ - zₙ)(yₙ - zₙ)).
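The iterations above are straightforward to program. The following Python sketch is only an illustration (the helper names agm, ghm, and magm are ours, not MathPartner functions); it repeats each step until the two sequences agree to the requested tolerance.

from math import sqrt

def agm(x, y, tol=1e-15):
    # x_{n+1} = (x_n + y_n) / 2,  y_{n+1} = sqrt(x_n * y_n)
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, sqrt(x * y)
    return x

def ghm(x, y, tol=1e-15):
    # x_{n+1} = sqrt(x_n * y_n),  y_{n+1} = 2 * x_n * y_n / (x_n + y_n)
    while abs(x - y) > tol * max(x, y):
        x, y = sqrt(x * y), 2 * x * y / (x + y)
    return x

def magm(x, y, tol=1e-15):
    # x_{n+1} = (x_n + y_n) / 2,  y_{n+1} = z_n + r,  z_{n+1} = z_n - r,
    # where r = sqrt((x_n - z_n) * (y_n - z_n)) and z_0 = 0
    z = 0.0
    while abs(x - y) > tol * max(x, y):
        r = sqrt((x - z) * (y - z))
        x, y, z = (x + y) / 2, z + r, z - r
    return x

print(agm(1, 5), ghm(1, 5), magm(1, 5))   # approx. 2.604, 1.920, 2.611

For (x, y) = (1, 5) it prints values close to 2.604, 1.920, and 2.611, matching the MathPartner output below.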
For example, let us run the following commands, where functions begin with the symbol \, SPACE denotes the ring of coefficients, and FLOATPOS denotes the number of decimal places:

SPACE=R64[]; FLOATPOS=3; a=\AGM(1,5); g=\GHM(1,5); m=\MAGM(1,5); [a,g,m];
The output is equal to [2.604, 1.920, 2.611].
These means are applicable, in particular, to calculate the complete elliptic integrals of the first and second kind. Let us use the parameter 0 < k < 1.
The complete elliptic integral of the first kind K(k) is defined as

K(k) = ∫₀¹ dt / √((1 - t²)(1 - k²t²)).

It can be computed in terms of the arithmetic-geometric mean:

K(k) = π / (2 AGM(1, √(1 - k²))).

On the other hand, for k < 1, it can be computed in terms of the geometric-harmonic mean:

K(k) = (π/2) GHM(1, 1/√(1 - k²)).
The complete elliptic integral of the second kind E(k) is defined as

E(k) = ∫₀¹ √((1 - k²t²)/(1 - t²)) dt.

It can be computed in terms of the modified arithmetic-geometric mean:

E(k) = K(k) · MAGM(1, 1 - k²).

The circumference of an ellipse is equal to
2π · MAGM(a², b²) / AGM(a, b),
where the semi-major and semi-minor axes are denoted by a and b. On the other hand, π can be expressed as

π = (AGM(1, √2))² / (MAGM(1, 2) - 1).

So, to calculate π one can run the commands

SPACE = R[]; FLOATPOS = 24; w = \sqrt(2); (\AGM(1,w))^2/(\MAGM(1,2)-1);

Every argument of the functions AGM(), MAGM(), and GHM() must be either a number or a variable, i.e., compound expressions cannot be used as arguments.
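These formulas are easy to check numerically. The following Python sketch is an illustration only, not MathPartner code; it repeats the agm and magm iterations from the previous sketch so that the snippet is self-contained, and evaluates K(k), E(k), the circumference of an ellipse, and the above expression for π.

from math import sqrt, pi

def agm(x, y, tol=1e-15):
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, sqrt(x * y)
    return x

def magm(x, y, tol=1e-15):
    z = 0.0
    while abs(x - y) > tol * max(x, y):
        r = sqrt((x - z) * (y - z))
        x, y, z = (x + y) / 2, z + r, z - r
    return x

def ellip_k(k):
    # K(k) = pi / (2 * AGM(1, sqrt(1 - k^2)))
    return pi / (2 * agm(1.0, sqrt(1.0 - k * k)))

def ellip_e(k):
    # E(k) = K(k) * MAGM(1, 1 - k^2)
    return ellip_k(k) * magm(1.0, 1.0 - k * k)

def ellipse_circumference(a, b):
    # 2 * pi * MAGM(a^2, b^2) / AGM(a, b)
    return 2 * pi * magm(a * a, b * b) / agm(a, b)

print(ellip_k(0.5), ellip_e(0.5))        # approx. 1.6858 and 1.4675
print(ellipse_circumference(2.0, 1.0))   # approx. 9.6884
print(agm(1.0, sqrt(2.0)) ** 2 / (magm(1.0, 2.0) - 1.0))  # approx. 3.14159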
Let a point mass be suspended from a pivot with a massless cord. The length of the pendulum is denoted by L. It swings under gravitational acceleration g = 9.80665 m/s². The maximum angle that the pendulum swings away from the vertical, called the amplitude, is denoted by θ₀. One can find the period T of the pendulum using the arithmetic-geometric mean:

T = 2π √(L/g) / AGM(1, cos(θ₀/2)).
If L = 1 m and θ₀ = 120°, then T = 2.7546 s. To calculate the period one can run the commands

SPACE = R64[]; FLOATPOS = 4; L = 1; g = 9.80665; T = \value(2*\pi*\sqrt{L/g}/(\AGM(1, 0.5)));

On the other hand, if θ₀ is small, then the period is equal to 2.0064 s. Note that we use the value() command to evaluate π.
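The same computation can be reproduced outside MathPartner with a few lines of Python; the function names below are ours, and the AGM helper is repeated so that the snippet runs on its own.

from math import sqrt, pi, cos, radians

def agm(x, y, tol=1e-15):
    while abs(x - y) > tol * max(x, y):
        x, y = (x + y) / 2, sqrt(x * y)
    return x

def pendulum_period(length, theta0_deg, g=9.80665):
    # T = 2*pi*sqrt(L/g) / AGM(1, cos(theta0/2))
    return 2 * pi * sqrt(length / g) / agm(1.0, cos(radians(theta0_deg) / 2))

print(pendulum_period(1.0, 120.0))   # approx. 2.7546 s
print(pendulum_period(1.0, 1.0))     # approx. 2.006 s, close to the small-amplitude value 2*pi*sqrt(L/g)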
3. THE SYLVESTER MATRICES, THE RESULTANT, AND THE DISCRIMINANT
Let us consider two univariate polynomials f (x) and g (x), where deg( f) = n, deg(g) = m, and m < n hold. James Joseph Sylvester introduced two matrices associated to f (x) and g(x). Please, refer to [3, 4]. More precisely, there are two different Sylvester matrices associated with two
univariate polynomials. Let us denote f(x) = fₙxⁿ + ⋯ + f₁x + f₀ and g(x) = gₘxᵐ + ⋯ + g₁x + g₀.
The Sylvester matrix of the first kind was introduced in 1840 [21]. It is an (n + m) × (n + m) matrix. Its determinant is called the resultant of f and g. For example, if f = x³ + px + q and g = 3x² + p, then the Sylvester matrix of the first kind is equal to

[[1, 0, p, q, 0], [0, 1, 0, p, q], [3, 0, p, 0, 0], [0, 3, 0, p, 0], [0, 0, 3, 0, p]],

and its determinant equals 4p³ + 27q², i.e., it is the opposite of the discriminant of f.
The Sylvester matrix of the second kind was introduced in 1853 as an improvement of the Sturm theory [22]. It is a (2n) × (2n) matrix, where n > m. The first row contains the coefficients of f followed by zeros, and the second row contains n - m zeros followed by the coefficients of g and then zeros:

(fₙ, fₙ₋₁, ..., f₀, 0, ..., 0) and (0, ..., 0, gₘ, gₘ₋₁, ..., g₀, 0, ..., 0).

The next pair is the first pair shifted one column to the right; the first elements in the two rows are zero. The remaining rows are obtained in the same way. For example, if f = x³ + px + q and g = 3x² + p, then the Sylvester matrix of the second kind is equal to

[[1, 0, p, q, 0, 0], [0, 3, 0, p, 0, 0], [0, 1, 0, p, q, 0], [0, 0, 3, 0, p, 0], [0, 0, 1, 0, p, q], [0, 0, 0, 3, 0, p]].
Of course, if the resultant vanishes, then the determinant of the Sylvester matrix of the second kind vanishes too.
The Sylvester matrix of the first kind can be calculated in MathPartner by a ternary function called sylvester(·, ·, 0), where the third argument is equal to zero. In the same way, the Sylvester matrix of the second kind can be calculated in MathPartner by the ternary function sylvester(·, ·, 1), where the third argument is not equal to zero. The first and the second arguments are univariate polynomials, for example, f(x) and g(x). The variable must be the last one in the list of variables. For example, if the polynomials over the ring of integers depend on parameters p and q, then the declaration in MathPartner can be SPACE = Z[p, q, x].
The resultant of two univariate polynomials can be calculated as resultant( f, g). The variable must be the last one in the list of variables. For example, let us run
SPACE = Z[a, b, c, x]; f = a*x^2+b*x+c; g = 2*a*x+b; \resultant(f, g);

The output is equal to 4ca² - b²a.
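To illustrate how the Sylvester matrix of the first kind is assembled, here is a Python sketch (our own illustration, not MathPartner's internals). It builds the matrix from two coefficient lists, written from the leading coefficient down to the constant term, and computes the resultant as an exact determinant; for f = x² + 3x + 2 and g = f' = 2x + 3 it reproduces the value 4ca² - b²a = -1.

from fractions import Fraction

def sylvester1(f, g):
    """Sylvester matrix of the first kind for coefficient lists f (degree n) and g (degree m)."""
    n, m = len(f) - 1, len(g) - 1
    size = n + m
    rows = []
    for i in range(m):                      # m shifted copies of f
        rows.append([0] * i + f + [0] * (size - n - 1 - i))
    for i in range(n):                      # n shifted copies of g
        rows.append([0] * i + g + [0] * (size - m - 1 - i))
    return rows

def det(a):
    """Exact determinant by Gaussian elimination over Fraction."""
    a = [[Fraction(x) for x in row] for row in a]
    n, sign, d = len(a), 1, Fraction(1)
    for i in range(n):
        p = next((r for r in range(i, n) if a[r][i] != 0), None)
        if p is None:
            return Fraction(0)
        if p != i:
            a[i], a[p] = a[p], a[i]
            sign = -sign
        d *= a[i][i]
        for r in range(i + 1, n):
            factor = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= factor * a[i][c]
    return sign * d

a, b, c = 1, 3, 2                                # f = x^2 + 3x + 2, g = f' = 2x + 3
print(sylvester1([a, b, c], [2 * a, b]))         # [[1, 3, 2], [2, 3, 0], [0, 2, 3]]
print(det(sylvester1([a, b, c], [2 * a, b])))    # -1, i.e. 4*c*a^2 - b^2*a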
The discriminant of a univariate polynomial f(x) = f_d x^d + ⋯ + f₀ is equal to

discriminant(f) = ((-1)^(d(d-1)/2) / f_d) · resultant(f, f').
The discriminant can be calculated immediately. For example,

SPACE = Z[a, b, c, x]; f = a*x^2+b*x+c; \discriminant(f);

The output is equal to -4ca + b². There exists another way to calculate the discriminant of the univariate polynomial x² + bx + c, where b and c are parameters.

SPACE = Z[b, c, x]; f = x^2+b*x+c; -\det(\sylvester(f, \D(f, x), 0));

The output is equal to -4c + b². Of course, D(f, x) calculates the first derivative of f.
4. SYSTEMS OF ALGEBRAIC EQUATIONS

Let us show an application of the resultant of two univariate polynomials. For this purpose, we consider a system of two polynomial equations in two variables and eliminate a variable. Of course, variable elimination can be done by computing a Gröbner basis, so there exists another way to solve a system of algebraic equations. Unfortunately, the Gröbner basis approach is sometimes very complicated. Contrariwise, the approach based on the resultant is often more effective. Let us consider the system
x² + y² = 1, 2x² + xy + y² = 1.
In this case, solutions to the system correspond to intersection points of the circle and the ellipse. Let us consider two univariate polynomials in x depending on one parameter y:

f(x) = x² + y² - 1, g(x) = 2x² + xy + y² - 1.

Their resultant with respect to x is equal to 2y⁴ - 3y² + 1. On the other hand, the Gröbner basis for the reverse lexicographical ordering consists of two polynomials x - 2y³ + 2y and 2y⁴ - 3y² + 1. The second polynomial is equal to the resultant. Thus, every solution to the system satisfies the equation 2y⁴ - 3y² + 1 = 0. So, one can eliminate the variable x. There exist four solutions to this equation: y₁ = -1, y₂ = -√2/2, y₃ = √2/2, and y₄ = 1. The corresponding values of x are x₁ = 0, x₂ = √2/2, x₃ = -√2/2, and x₄ = 0. So, there are four intersection points.
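A quick plain-Python check of this elimination: substitute each root of 2y⁴ - 3y² + 1 into x = 2y³ - 2y (obtained from the first polynomial of the Gröbner basis) and verify that both equations of the system are satisfied.

from math import sqrt

ys = [-1.0, -sqrt(2) / 2, sqrt(2) / 2, 1.0]      # roots of 2y^4 - 3y^2 + 1
for y in ys:
    x = 2 * y**3 - 2 * y                         # back-substitution
    r1 = x**2 + y**2 - 1                         # residual of the circle equation
    r2 = 2 * x**2 + x * y + y**2 - 1             # residual of the ellipse equation
    print(f"x = {x:+.4f}, y = {y:+.4f}, residuals: {r1:.1e}, {r2:.1e}")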
Next, let us show the corresponding program in MathPartner. The Gröbner basis of a polynomial ideal can be obtained using the algorithm due to Bruno Buchberger. The algorithm is implemented as groebnerB(). The same basis can be calculated using a matrix algorithm that is similar to the F4 algorithm; it is implemented as groebner(). The ordering is reverse lexicographical. Note that functions should begin with the symbol \.
SPACE = Z[y, x]; f = x^2+y^2-1; g = 2*x^2+x*y+y^2-1; \groebner(f, g);

The output consists of two polynomials [x - 2y³ + 2y, 2y⁴ - 3y² + 1]. Another way to calculate the resultant is to run the commands

SPACE = Z[y, x]; f = x^2+y^2-1; g = 2*x^2+x*y+y^2-1; \det(\sylvester(f, g, 0));

The output consists of one univariate polynomial 2y⁴ - 3y² + 1. Of course, one can run

SPACE = Z[y, x]; f = x^2+y^2-1; g = 2*x^2+x*y+y^2-1; \resultant(f, g);

To calculate roots of a polynomial one can run solve().
SPACE = Q[y]; \solve(2*y^4-3*y^2+1 = 0);

The output consists of four numbers expressed in radicals: [-1, √2/2, ((-1)·√2/2), 1]. Let us run the same command over the field of real numbers. We recommend using the option SPACE = R64[y]. It denotes the set of 64-bit floating-point numbers with a 52-bit mantissa, an 11-bit exponent, and one sign bit.

SPACE = R64[y]; \solve(2*y^4-3*y^2+1 = 0);

The output consists of four floating-point numbers [1.00, -1.00, 0.71, -0.71]. Of course, systems of linear algebraic equations can be solved with solve(). For example,

SPACE = Q[]; M = [[1, 2], [3, 1]]; b = [5, 5]; \solve(M, b);

The output is equal to [1, 2]ᵀ. There exists another way:

SPACE = Q[x, y]; \solve([x+2*y = 5, 3*x+y = 5]);

The output is equal to [1, 2]. Moreover, one can solve a system of inequalities in one variable. For example,

SPACE = Q[x]; \solve([x^2+4*x-5 > 0, x^2-2*x-8 < 0]);

The output is equal to (1, 4). In the next example

SPACE = Q[x]; \solve([x < 0, x > 2]);

the output is equal to the empty set ∅.
5. THE GREATEST COMMON DIVISOR OF TWO POLYNOMIALS
In this section we shall consider polynomials over either the field of rational numbers or the ring of integers. The problem of calculating the greatest common divisor of two polynomials is important for symbolic computations, in particular, over a finite extension of the field of rational numbers [5, 7, 10]. Unfortunately, the bit complexity of the Euclidean algorithm is exponential. There exists a polynomial upper bound on the number of arithmetic operations. But the size of a product of integers at intermediate steps can be very large. For some discussion about the computational complexity of powers of integers refer to [11].
A modified algorithm based on subresultant residues was proposed by J. J. Sylvester [22] and later improved by Walter Habicht [9] and Alkiviadis Akritas [3]. The main result was obtained by Brown, who found a way to compute the subresultant PRS without using matrix reduction. He proposed to modify the Euclidean algorithm, reducing all coefficients by common factors so that they coincide with the subresultant PRS [6]. This algorithm is applied in MathPartner to compute the GCD of two polynomials. This approach was further developed in [4].
To calculate the greatest common divisor one can run GCD( f, g); for example,
SPACE = Z[x]; \GCD(9*x, 6*x+6);
The output is equal to 3. To calculate Bezout coefficients one can run extendedGCD(f,g). The least common multiple can be calculated too.
SPACE = Z[x]; \LCM(9*x, 6*x+6);
The output is equal to 18x² + 18x.
6. MATRIX FUNCTIONS
Today the list of matrix functions includes the transpose, adjugate, conjugate, inverse, generalized inverse, and pseudo inverse of a matrix, the matrix determinant, the kernel, the matrix echelon form, the characteristic polynomial, the Bruhat decomposition, the triangular LDU decomposition, which is an exact block recursive LU decomposition, the QR block recursive decomposition, and the singular value decomposition. In addition, two block-recursive functions have been implemented for calculating the Cholesky decomposition of symmetric positive definite matrices: one function for sparse matrices with the standard multiplication algorithm and another function for dense matrices with multiplication according to the Winograd-Strassen algorithm. The linear programming problems can be solved too.
For a given matrix A, the pseudo inverse of A is a matrix A⁻ satisfying both equalities AA⁻A = A and A⁻AA⁻ = A⁻. Furthermore, the Moore-Penrose generalized inverse A⁺ satisfies four equalities: AA⁺A = A, A⁺AA⁺ = A⁺, (A⁺A)ᵀ = A⁺A, and (AA⁺)ᵀ = AA⁺. If A is a square non-degenerate matrix, then these three inverses of A coincide, i.e., A⁻¹ = A⁻ = A⁺. If an n × m matrix A can be decomposed as A = BC, where B is an n × k matrix, C is a k × m matrix, and rank(A) = rank(B) = rank(C) = k, then A⁺ = Cᵀ(CCᵀ)⁻¹(BᵀB)⁻¹Bᵀ. This idea was expressed by Vera Nikolaevna Kublanovskaya [12]. About big matrices, refer to [19].
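Kublanovskaya's formula is easy to verify numerically. The following NumPy sketch (an illustration with an arbitrarily chosen rank-2 example, not MathPartner code) builds A = BC and compares the formula with numpy.linalg.pinv.

import numpy as np

B = np.array([[1.0, 0.0], [2.0, 1.0], [0.0, 3.0]])          # 3 x 2 matrix of rank 2
C = np.array([[1.0, 2.0, 0.0, 1.0], [0.0, 1.0, 1.0, 2.0]])  # 2 x 4 matrix of rank 2
A = B @ C                                                    # 3 x 4 matrix of rank 2

# A+ = C^T (C C^T)^{-1} (B^T B)^{-1} B^T
A_plus = C.T @ np.linalg.inv(C @ C.T) @ np.linalg.inv(B.T @ B) @ B.T
print(np.allclose(A_plus, np.linalg.pinv(A)))                # True
# Two of the Penrose conditions, checked directly:
print(np.allclose(A @ A_plus @ A, A), np.allclose(A_plus @ A @ A_plus, A_plus))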
For a given matrix A, one can calculate:
• The transpose transpose(A) or Aᵀ;
• The conjugate conjugate(A) or A*;
• The matrix echelon form toEchelonForm(A);
• The kernel kernel(A);
• The determinant det(A);
• The inverse inverse(A) or A⁻¹;
• The adjugate adjoint(A) or A*;
• The Moore-Penrose generalized inverse genInverse(A) or A⁺;
• The pseudo inverse pseudoInverse(A);
• The closure closure(A) or A^×. The closure of a matrix A is equal to the sum of matrices I + A + A² + A³ + ⋯. For the classical algebras it is equivalent to (I - A)⁻¹.
To calculate the characteristic polynomial of a matrix A, you should work over the ring of polynomials in some new variable and run charPolynom(A). For example, let us run the commands

SPACE=Z[x]; M=[[1, 2], [3, 5]]; f=\charPolynom(M);

The output is equal to f = x² - 6x - 1. Let us take a closer look at some types of decomposition.
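In passing, such characteristic polynomials can also be cross-checked numerically: numpy.poly applied to a square matrix returns the coefficients of its characteristic polynomial (in floating point).

import numpy as np

M = np.array([[1, 2], [3, 5]])
print(np.poly(M))   # approx. [ 1., -6., -1.], i.e. x^2 - 6x - 1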
6.1. The Bruhat decomposition
To calculate the Bruhat decomposition of a matrix A one can run BruhatDecomposition(A). The result consists of three matrices [V, D, U], where both V and U are upper-triangular matrices, and D is a permutation matrix multiplied by the inverse of a diagonal matrix [14]. If all entries of the matrix A are elements of a commutative domain R, then all entries of the matrices V, D⁻¹, and U belong to the same domain R. Let us consider a 2 × 2 matrix over Z. For example,
M = [[-1, 2], [1, 1]].

Let us run the commands

SPACE = Z[]; M = [[-1, 2], [1, 1]]; \BruhatDecomposition(M);

The output consists of three matrices

[[3, -1], [0, 1]], [[0, 1/3], [1, 0]], [[1, 1], [0, 3]].

An entry of the middle matrix D is not an integer, but the inverse matrix has integer entries:

D⁻¹ = [[0, 1], [3, 0]].
6.2. The LDU decomposition
The LDU decomposition of a matrix A can be calculated by means of the command LDU(A). The result consists of three matrices [L, D, U], where L is a lower-triangular matrix, U is an upper-triangular matrix, D is a permutation matrix multiplied by the inverse of a diagonal matrix. If all entries of the matrix A belong to a commutative domain R, then all entries of matrices L, D-1, and U belong to the same domain R, refer to [18]. Let us consider an example, where M is a 2 x 2 matrix.
M = [[1, 2], [3, 1]].

Let us run the commands

SPACE = Z[]; M = [[1, 2], [3, 1]]; \LDU(M);

The output consists of three matrices

[[1, 0], [3, -5]], [[1, 0], [0, -1/5]], [[1, 2], [0, -5]].

Both the first and the third matrices are triangular matrices over Z. The middle matrix D has a rational entry, but the inverse matrix is defined over Z:

D⁻¹ = [[1, 0], [0, -5]].
To calculate the LDU decomposition of A together with a decomposition of the pseudo inverse A⁻ = WDM, one can run the command LDUWMdet(A). The result consists of five matrices and the determinant of the largest non-degenerate corner block, [L, D, U, W, M, det], where L and U are lower and upper triangular matrices, D is a truncated weighted permutation matrix, and DM and WD are lower and upper triangular matrices. Moreover, A = LDU and A⁻ = WDM. If entries of the matrix A belong to a commutative domain, then all matrices, except for D, also belong to this domain. Let us run the commands
SPACE = Z[]; M = [[1, 2], [3, 1]]; \LDUWMdet(M);
The output consists of
[[1, 0], [3, -5]], [[1, 0], [0, -1/5]], [[1, 2], [0, -5]], [[-5, 10], [0, -5]], [[-5, 0], [15, -5]], [[-5]].
Of course, three of these matrices coincide with three matrices in the previous example. Next, let us consider a matrix over the ring Z [x, y]
M = [[y, x], [x, y]]

and run the commands

SPACE = Z[x, y]; M = [[y, x], [x, y]]; \LDU(M);

The output consists of three matrices

[[y, 0], [x, y² - x²]], [[1/y, 0], [0, 1/(y³ - yx²)]], [[y, x], [0, y² - x²]].

Entries of the middle matrix D are rational functions. All entries of the matrices L, D⁻¹, and U are polynomials over Z.
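This symbolic example can be verified with a few lines of SymPy (an independent check, not MathPartner code).

from sympy import symbols, Matrix, simplify

x, y = symbols('x y')
M = Matrix([[y, x], [x, y]])
L = Matrix([[y, 0], [x, y**2 - x**2]])
D = Matrix([[1/y, 0], [0, 1/(y**3 - y*x**2)]])
U = Matrix([[y, x], [0, y**2 - x**2]])

print((L * D * U - M).applyfunc(simplify))   # the zero matrix
print(D.inv().applyfunc(simplify))           # diagonal matrix with polynomial entries y and y**3 - y*x**2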
6.3. The QR block recursive decomposition
Let us consider a 2ᵏ × 2ᵏ matrix A over the field of reals. The QR decomposition of A can be calculated by means of the command QR(A). Note that if the order is not equal to 2ᵏ for any integer k, then the algorithm does not work, because it is based on block recursion [17]. Let us consider an example, where M is a 2 × 2 matrix.
M = [[1, 2], [3, 1]].

Let us run the commands

SPACE = R64[]; M = [[1, 2], [3, 1]]; \QR(M);

The output consists of two matrices

[[0.32, -0.95], [0.95, 0.32]], [[3.16, 1.58], [0, -1.58]].
The first matrix is orthogonal. The second matrix is upper-triangular. Their product is equal to the initial matrix M.
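A floating-point cross-check with NumPy is shown below. Note that a QR decomposition is unique only up to sign choices, so numpy.linalg.qr may return Q and R whose corresponding columns and rows differ in sign from MathPartner's output.

import numpy as np

M = np.array([[1.0, 2.0], [3.0, 1.0]])
Q, R = np.linalg.qr(M)
print(Q)                      # columns proportional to (1, 3)/sqrt(10) and (-3, 1)/sqrt(10), up to sign
print(R)                      # upper triangular, |r11| = sqrt(10) ~ 3.16, |r22| ~ 1.58
print(np.allclose(Q @ R, M))  # True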
6.4. The singular value decomposition
To calculate the singular value decomposition (SVD) of a matrix A, one can run SVD(A). As a result, three matrices [U, D, V] will be calculated. The matrices U and V are unitary, the matrix D is diagonal, and A = UDV holds. Let us consider an example, where M is a 2 x 2 matrix.
M = [[2, 3], [1, 0]].

Let us run the commands

SPACE = R64[]; FLOATPOS = 3; M = [[2, 3], [1, 0]]; \SVD(M);

The output is equal to

[[-0.987, -0.16], [-0.16, 0.987]], [[3.65, 0], [0, 0.822]], [[-0.585, -0.811], [0.811, -0.585]].
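Again, NumPy provides a cross-check: numpy.linalg.svd returns U, the vector of singular values s, and a matrix Vh such that A = U·diag(s)·Vh, which matches the A = UDV convention above up to the signs of the singular vectors.

import numpy as np

M = np.array([[2.0, 3.0], [1.0, 0.0]])
U, s, Vh = np.linalg.svd(M)
print(s)                                      # approx. [3.650, 0.822]
print(np.allclose(U @ np.diag(s) @ Vh, M))    # True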
6.5. The Cholesky decomposition
In general, the Cholesky decomposition is a decomposition of a Hermitian positive-definite matrix into the product of a lower-triangular matrix and its conjugate transpose, which is useful for efficient numerical solutions. It was discovered by André-Louis Cholesky for real symmetric matrices [8]. Here we also suppose that matrices are real. So, every real symmetric positive-definite matrix is equal to the product LLᵀ, where L is a lower-triangular matrix.
The Cholesky decomposition can be calculated for a symmetric and positive definite matrix A by means of the command cholesky(A). The result consists of two lower triangular matrices L and S such that A = LLᵀ and SL = I. Let us consider an example, where M is a 2 × 2 matrix.
M = [[3, 2], [2, 4]].

Let us run the commands

SPACE = R64[]; FLOATPOS = 2; M = [[3, 2], [2, 4]]; \cholesky(M);

The output is equal to

[[1.73, 0], [1.15, 1.63]], [[0.58, 0], [-0.41, 0.61]].
For large dense matrices, whose size is greater than or equal to 128 x 128, one can use a fast algorithm cholesky(A, 1) that uses multiplication of blocks by the Winograd-Strassen algorithm.
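For readers who want to see the underlying computation, the following Python sketch implements the ordinary (non-blocked) Cholesky algorithm together with S = L⁻¹ and reproduces the example above. It is only an illustration; MathPartner's block-recursive and Winograd-Strassen variants are not shown here.

import numpy as np

def cholesky_lower(A):
    """Lower-triangular L with A = L L^T for a real symmetric positive-definite A."""
    n = len(A)
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1):
            s = A[i][j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

A = np.array([[3.0, 2.0], [2.0, 4.0]])
L = cholesky_lower(A)
S = np.linalg.inv(L)
print(np.round(L, 2))   # [[1.73, 0.], [1.15, 1.63]]
print(np.round(S, 2))   # [[0.58, 0.], [-0.41, 0.61]]
print(np.allclose(L @ L.T, A), np.allclose(S @ L, np.eye(2)))  # True True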
7. MODULAR ARITHMETIC
The current version of the MathPartner service supports operations over a finite field Z/pZ, where p is a prime number. One should use either SPACE = Zp[] or SPACE = Zp32[]. The prime number p is equal to the constant MOD or MOD32, respectively. In the second case, p satisfies the inequality p < 2³¹. The default value is 268435399. For example, working over the field Z/5Z one can run
SPACE = Zp32[x]; MOD32 = 5; \GCD(x+2,x-3);
The output is equal to x - 3 because -3 ≡ 2 (mod 5). On the other hand, the same example over Z/7Z leads to another answer.
SPACE = Zp32[x]; MOD32 = 7; \GCD(x+2,x-3);
The output is equal to 1 .
All functions using only rational operations on input data can be calculated over finite fields. In particular, for two polynomials over Z/pZ one can calculate the greatest common divisor GCD() as well as the Sylvester matrix sylvester(). One can calculate the Gröbner basis of an ideal in a polynomial ring using either groebner() or groebnerB(). One can also calculate the determinant det(), echelon form toEchelonForm(), characteristic polynomial charPolynom(), Bruhat decomposition BruhatDecomposition(), LDU decomposition LDU(), and LDUWMdet() of a matrix.
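As an illustration of such computations over Z/pZ (our own sketch, not MathPartner code), the following Python functions implement the Euclidean algorithm for univariate polynomials modulo a prime and reproduce both GCD examples above. Polynomials are coefficient lists from the leading term down, e.g. x + 2 is [1, 2].

def poly_mod(a, b, p):
    """Remainder of a divided by b over Z/pZ."""
    a = [c % p for c in a]
    inv_lead = pow(b[0], p - 2, p)                # inverse of the leading coefficient of b
    while len(a) >= len(b) and any(a):
        factor = (a[0] * inv_lead) % p
        for i in range(len(b)):
            a[i] = (a[i] - factor * b[i]) % p
        a.pop(0)                                  # the leading term is now zero
    while len(a) > 1 and a[0] == 0:
        a.pop(0)
    return a if a else [0]

def poly_gcd(a, b, p):
    """Monic greatest common divisor of a and b over Z/pZ."""
    while any(c % p for c in b):
        a, b = b, poly_mod(a, b, p)
    inv = pow(a[0], p - 2, p)                     # normalize to a monic polynomial
    return [(c * inv) % p for c in a]

print(poly_gcd([1, 2], [1, -3], 5))   # [1, 2]: x + 2, which equals x - 3 modulo 5
print(poly_gcd([1, 2], [1, -3], 7))   # [1]: the polynomials are coprime modulo 7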
8. CONCLUSION
Now the MathPartner service has become even better and allows us to solve new problems in geometry and physics. In particular, new functions make it possible to calculate the period of a simple pendulum as well as the circumference of an ellipse in terms of the arithmetic-geometric and the modified arithmetic-geometric means. The resultant of two univariate polynomials is a basic tool of computer algebra because it allows solving systems of polynomial equations. Matrix functions are also widely used to solve applied problems.

We recommend that the reader compute examples of the considered quantities using the MathPartner service. These exercises will help you remember and understand the computer algebra methods better. On the other hand, new algorithms can be implemented by the user through the branch and loop operators. Moreover, the MathPartner service opens up the possibility of distance learning.
References
1. S. Adlaj, "An eloquent formula for the perimeter of an ellipse," Notices of the American Mathematical Society, vol. 59, no. 8, pp. 1094-1099, 2012; doi: 10.1090/noti879
2. S. Adlaj, "An Arithmetic-Geometric Mean of a Third Kind!," in Proc. of Computer Algebra in Scientific Computing. 21st Int. Workshop, CASC 2019, Moscow, Russia, Aug. 26-30, 2019, vol. 11661, 2019, pp. 37-56.
3. A. G. Akritas, Elements of Computer Algebra with Applications. NY: John Wiley and Sons, 1989.
4. A. G. Akritas, G. I. Malaschonok, and P. S. Vigklas, "Subresultant polynomial remainder sequences obtained by polynomial divisions in Q[x] or in Z[x]," Serdica Journal of Computing, vol. 10, no. 3-4, pp. 197-217, 2016.
5. P. E. Alaev and V. L. Selivanov, "Fields of algebraic numbers computable in polynomial time. I," Algebra and Logic, vol. 58, no. 6, pp. 447-469, 2020; doi: 10.1007/s10469-020-09565-0
6. W. S. Brown, "The Subresultant PRS Algorithm," ACM Transactions on Mathematical Software, vol. 4, no. 3, pp. 237-249, 1978; doi: 10.1145/355791.355795
7. D. A. Dolgov, "Polynomial greatest common divisor as a solution of system of linear equations," Lobachevskii Journal of Mathematics, vol. 39, no. 7, pp. 985-991, 2018; doi: 10.1134/S1995080218070090
8. J. F. Grcar, "How ordinary elimination became Gaussian elimination," Historia Mathematica, vol. 38, pp. 163-218, 2011; doi: 10.1016/j.hm.2010.06.003
9. W. Habicht, "Eine Verallgemeinerung des Sturmschen Wurzelzahlverfahrens," Commentarii Mathematici Helvetici, vol. 21, pp. 99-116, 1948 (in German); doi: 10.1007/BF02568028
10. J. van der Hoeven and G. Lecerf, "Fast computation of generic bivariate resultants," Journal of Complexity, vol. 62, article 101499, 2021; doi: 10.1016/j.jco.2020.101499
11. A. M. Kotochigov and A. I. Suchkov, "A method for reducing iteration in algorithms for building minimal additive chains," Computer Tools in Education, no. 1, pp. 5-18, 2020 (in Russian); doi: 10.32603/2071-2340-2020-1-5-18
12. V. N. Kublanovskaya, "Evaluation of a generalized inverse matrix and projector," USSR Computational Mathematics and Mathematical Physics, vol. 6, no. 2, pp. 179-188,1966; doi: 10.1016/0041-5553(66)90064-4
13. F. Lamarche and C. Leroy, "Evaluation of the volume of intersection of a sphere with a cylinder by elliptic integrals," Computer Physics Communications, vol. 59, no. 2, pp. 359-369, 1990.
14. G. Malaschonok, "Generalized Bruhat decomposition in commutative domains," in Proc. of Computer Algebra in Scientific Computing. 15th Int. Workshop, CASC 2013, Berlin, Germany, Sep. 9-13, 2013, vol. 8136, 2013, pp. 231-242.
15. G. I. Malaschonok, "Application of the MathPartner service in education," Computer Tools in Education, 2017, no. 3, pp. 29-37 (in Russian).
16. G. I. Malaschonok, "MathPartner computer algebra," Programming and Computer Software, vol. 43, no. 2, pp. 112-118, 2017; doi: 10.1134/S0361768817020086
17. G. Malaschonok, "Recursive matrix algorithms, distributed dynamic control, scaling, stability," in Proc. of 12th Int. Conf. on Comp. Sci. and Information Technologies (CSIT-2019). Sep. 23-27, Yerevan, pp. 175-178, 2019.
18. G. I. Malaschonok, "LDU-factorization," in ArXiv e-print, no. 2011.04108, 2020. [Online]. Available: https://arxiv. org/abs/2011.04108
19. G. Malaschonok and I. Tchaikovsky, "About big matrix inversion," in Computer algebra: 4th International Conference Materials. Moscow, Jun. 28-29,2021, Moscow, 2021, pp. 81-84; doi: 10.29003/m2019.978-5-317-06623-9
20. N. J. Mariani, G. D. Mazza, O. M. Martinez, and G. F. Barreto, "Evaluation of radial voidage profiles in packed beds of low-aspect ratios," The Canadian Journal of Chemical Engineering, vol. 78, no. 6, pp. 1133-1137, 2000; doi: 10.1002/cjce.5450780614
21. J. J. Sylvester, "A method of determining by mere inspection the derivatives from two equations of any degree," Philosophical Magazine, vol. 16, pp. 132-135,1840.
22. J. J. Sylvester, "On the theory of syzygetic relations of two rational integral functions, comprising an application to the theory of Sturm's functions, and that of the greatest algebraical common measure," Philosophical Transactions, vol. 143, pp. 407-548, 1853.
23. B.-X. Xu, Y. Gao, and M.-Z. Wang, "Particle packing and the mean theory," Physics Letters A, vol. 377, no. 3-4, pp. 145-147, 2013; doi: 10.1016/j.physleta.2012.11.022
Received 30-05-2021, the final version — 22-07-2021.
Malaschonok Gennadi, Doctor of Physical and Mathematical Sciences, Professor, Department of Informatics, National University of Kyiv-Mohyla Academy, [email protected]
Alexandr Seliverstov, PhD, Institute for Information Transmission Problems of the Russian Academy of Sciences (Kharkevich Institute), [email protected]