
Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki

[J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2022, vol. 26, no. 2, pp. 311-321. ISSN: 2310-7081 (online), 1991-8615 (print). https://doi.org/10.14498/vsgtu1930

MSC: 65F10, 65F22

Implicit iterative algorithm for solving regularized total least squares problems

D. V. Ivanov1,2, A. I. Zhdanov3

1 Samara National Research University, 34, Moskovskoye shosse, Samara, 443086, Russian Federation.

2 Samara State University of Transport, 2 B, Svobody str., Samara, 443066, Russian Federation.

3 Samara State Technical University, 244, Molodogvardeyskaya st., Samara, 443100, Russian Federation.

Abstract

The article considers a new iterative algorithm for solving total least squares problems. A new version of the implicit method of simple iterations based on singular value decomposition is proposed for solving a biased normal system of algebraic equations. The use of the implicit method of simple iterations based on singular value decomposition makes it possible to replace an ill-conditioned problem with a sequence of problems with a smaller condition number. This significantly increases the computational stability of the algorithm and, at the same time, ensures its high rate of convergence. Test examples show that the proposed algorithm has a higher accuracy compared to the solutions obtained by non-regularized total least squares algorithms, as well as to the total least squares solution with Tikhonov regularization.

Keywords: implicit regularization, total least squares, singular value decomposition, ill-conditioning, iterative regularization methods.

Received: 15th May, 2022 / Revised: 6th June, 2022 / Accepted: 7th June, 2022 / First online: 30th June, 2022

Mathematical Modeling, Numerical Methods and Software Complexes

Research Article

© Authors, 2022

© Samara State Technical University, 2022 (Compilation, Design, and Layout)

The content is published under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/).

Please cite this article in press as:

Ivanov D. V., Zhdanov A. I. Implicit iterative algorithm for solving regularized total least squares problems, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2022, vol. 26, no. 2, pp. 311-321. EDN: NFBOXC. DOI: 10.14498/vsgtu1930.

Authors' Details:

Dmitriy V. Ivanov https://orcid.org/0000-0002-5021-5259

Cand. Phys. & Math. Sci.; Associate Professor; Dept. of Information Systems Security1; Dept. of Mechatronics2; e-mail: dvi85@list.ru

Aleksandr I. Zhdanov © https://orcid.org/0000-0001-6082-9097

Dr. Phys. & Math. Sci.; Professor; Dept. of Applied Mathematics & Computer Science3;

e-mail: zhdanovaleksan@yandex.ru

Introduction. The total least squares (TLS) method is widely used for solving systems of linear algebraic equations whose left-hand and right-hand sides both contain inaccurate data.

Total least squares is applied in many fields [1], including system identification [2-5], image restoration [6, 7], tomography [8, 9], and speech processing [10, 11].

There are many algorithms for solving total least squares problems. The classical algorithm for solving the total least squares problem is based on the singular value decomposition (SVD) [12]. The solution of the total least squares problem based on augmented systems is considered in [13, 14]. To solve large-scale linear systems of equations or linear systems with a sparse matrix, iterative total least squares algorithms are used: the Newton method [15, 16], Rayleigh iterations [17], and Lanczos iterations [18].

Various regularization methods are used to solve very ill-conditioned total least squares problems. Today, there are two main approaches to the regularization of total least squares problems: truncated SVD [19] and Tikhonov regularization [20], as well as their modifications [21-24].

One way to improve the accuracy of the solution is to use iterative methods of regularization [25]. In [26], an implicit iterative algorithm for ordinary least squares based on SVD was proposed.

The condition number for total least squares is always greater than the condition number for ordinary least squares. Tikhonov regularization for total least squares makes it possible to bring the condition number close to that of ordinary least squares [20].

This article proposes an implicit iterative algorithm to solve total least squares problems. When using the proposed algorithm, the condition numbers at each iteration turn out to be less than the condition numbers of ordinary least squares. This makes it possible to find the total least squares solution for very ill-conditioned problems.

It is proposed to use a restriction on the length of the solution vector as a stopping criterion for the iterative algorithm. The simulation results demonstrate the high accuracy of the solutions produced by the proposed implicit iterative algorithm for regularized total least squares problems.

1. Problem Statement. Let the overdetermined system of equations be defined as

Ax = f, (1)

where A ∈ R^{m×n}, f ∈ R^m, m > n.

We will assume that the matrix A and the vector f contain errors:

A = A_0 + Ξ, f = f_0 + ξ,

where A_0 and f_0 denote the exact (unknown) matrix and right-hand side, and Ξ and ξ are the perturbations.

It is required to find a solution for the overdetermined system (1) using perturbed data A and f.

To find an approximate solution vector from the erroneous data, total least squares can be applied [12]. The total least squares approach minimizes the squared errors in the values of both the dependent and the independent variables:

min_{ΔA, Δf, x} ‖(ΔA Δf)‖_F, s.t. (A + ΔA)x = f + Δf,

where (ΔA Δf) is the augmented correction matrix and ‖·‖_F is the Frobenius norm.

Solving the total least squares problem reduces to finding the minimum of the objective function

min_{x∈R^n} ‖Ax − f‖² / (1 + ‖x‖²), (2)

where ‖·‖ = ‖·‖₂ is the Euclidean norm.

This article proposes an implicit iterative algorithm for computing a regularized total least squares solution of system (1) from the data with errors.

2. Implicit iterative algorithm for solving regularized TLS problems.

Using the SVD, an arbitrary matrix A can be represented as follows:

A = UΣV^T, (3)

where U = (u_1 ⋯ u_n) ∈ R^{m×n} and V = (v_1 ⋯ v_n) ∈ R^{n×n} are matrices with orthonormal columns; Σ = diag(σ_1(A), …, σ_n(A)); σ_1(A) ≥ ⋯ ≥ σ_n(A) are the singular values of the matrix A; u_i and v_i are, respectively, the left and right singular vectors of the matrix A.

Let the augmented matrix of the system of equations be defined as

Ā = (A, f).

A solution to the total least squares problem exists and is unique if the following condition is satisfied [27]:

σ = σ_{n+1}(Ā) < σ_n(A). (4)

When condition (4) is satisfied, the solution to problem (2) can be obtained from a biased normal system of equations [27]:

(A^T A − σ²E_n)x = A^T f, (5)

where E_n is the identity matrix of order n.
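For orientation, the following sketch computes the direct TLS solution through the biased normal system (5): σ is taken as the smallest singular value of the augmented matrix Ā = (A, f), condition (4) is checked, and (5) is solved with a dense solver. The sketch is in Python/NumPy (the paper's own experiments use Matlab [31]); the function name tls_biased_normal is illustrative, not from the paper.

```python
import numpy as np

def tls_biased_normal(A, f):
    """Direct TLS solution via the biased normal system (5).

    sigma is the smallest singular value of the augmented matrix (A, f);
    the system (A^T A - sigma^2 E_n) x = A^T f is then solved directly.
    """
    m, n = A.shape
    # smallest singular value of the augmented matrix (A, f)
    sigma = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
    # solvability condition (4): sigma_{n+1}(A, f) < sigma_n(A)
    sigma_n = np.linalg.svd(A, compute_uv=False)[-1]
    if not sigma < sigma_n:
        raise ValueError("TLS solution does not exist or is not unique")
    return np.linalg.solve(A.T @ A - sigma**2 * np.eye(n), A.T @ f)
```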

Let μ be a positive constant. Equation (5) is equivalent to the following equation:

μA^T Ax + x = μσ²x + x + μA^T f. (6)

The implicit iterative algorithm for equation (6) has the following form:

(μ^{-1}E_n + A^T A)x_{k+1} = (σ² + μ^{-1})x_k + A^T f. (7)

We write (7) as

x_{k+1} = (μ^{-1}E_n + A^T A)^{-1}((σ² + μ^{-1})x_k + A^T f),

or

x_{k+1} = F_μ x_k + g_μ, (8)

where F_μ = (σ² + μ^{-1})(μ^{-1}E_n + A^T A)^{-1} and g_μ = (μ^{-1}E_n + A^T A)^{-1}A^T f.

Using the SVD of the matrix A (3), let us perform the following transformations:

F_μ = (σ² + μ^{-1})(μ^{-1}E_n + A^T A)^{-1} = (σ² + μ^{-1})V(Σ² + μ^{-1}E_n)^{-1}V^T = (σ² + μ^{-1}) ∑_{i=1}^{n} v_i v_i^T / (σ_i² + μ^{-1});

g_μ = (μ^{-1}E_n + A^T A)^{-1}A^T f = [V(Σ² + μ^{-1}E_n)V^T]^{-1}VΣU^T f = V(Σ² + μ^{-1}E_n)^{-1}V^T VΣU^T f = V(Σ² + μ^{-1}E_n)^{-1}ΣU^T f = ∑_{i=1}^{n} v_i σ_i u_i^T f / (σ_i² + μ^{-1}).

Then the implicit scheme (8) can be written, based on the singular value decomposition, in the following form:

x_{k+1} = (σ² + μ^{-1}) ∑_{i=1}^{n} (v_i^T x_k)/(σ_i² + μ^{-1}) v_i + ∑_{i=1}^{n} (σ_i u_i^T f)/(σ_i² + μ^{-1}) v_i, k = 0, 1, … (9)
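A minimal sketch of the implicit scheme (9) in Python/NumPy, under the same assumptions as above (the names implicit_tls_svd, mu_inv, n_iter, tol are illustrative, not from the paper): the SVD of A and the value σ are computed once, after which each iteration costs only O(n²) operations.

```python
import numpy as np

def implicit_tls_svd(A, f, mu_inv, n_iter=100, tol=1e-12):
    """Implicit iterative scheme (9) based on the SVD of A.

    mu_inv is the parameter mu^{-1} > 0. Convergence requires condition (4),
    i.e. the smallest singular value of (A, f) is below the smallest one of A.
    """
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)          # A = U diag(s) V^T
    sigma = np.linalg.svd(np.column_stack([A, f]), compute_uv=False)[-1]
    d = s**2 + mu_inv                                          # sigma_i^2 + mu^{-1}
    g = Vt.T @ (s * (U.T @ f) / d)                             # the vector g_mu
    x = np.zeros(n)
    for _ in range(n_iter):
        # x_{k+1} = F_mu x_k + g_mu, applied through the singular vectors
        x_new = (sigma**2 + mu_inv) * (Vt.T @ ((Vt @ x) / d)) + g
        if np.linalg.norm(x_new - x) <= tol * np.linalg.norm(x_new):
            return x_new
        x = x_new
    return x
```

The fixed point of this iteration satisfies (E_n − F_μ)x = g_μ, which is exactly the biased normal system (5), so on convergence the iterate coincides with the total least squares solution.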

3. Convergence and conditioning of the implicit iterative algorithm.

The spectral radius of the transition matrix F_μ is

ρ(F_μ) = (μσ² + 1) λ_max[(E_n + μA^T A)^{-1}] = (μσ² + 1)/λ_min(E_n + μA^T A) = (μσ² + 1)/(1 + μσ_n²(A)),

where λ_max and λ_min are the maximum and minimum eigenvalues of the corresponding matrix.

The convergence condition of the implicit method of simple iterations (7) can be written as follows:

ρ(F_μ) = (μσ² + 1)/(1 + μσ_n²(A)) < 1. (10)

If condition (4) is satisfied and μ > 0, condition (10) is always satisfied. This means that the iterative algorithm (8) converges in all cases where the biased normal system of equations has a unique solution.

It can be shown that the larger the value of μ, the higher the rate of convergence of the algorithm.
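A quick numerical illustration of this monotonicity (the values of σ and σ_n(A) below are illustrative assumptions, not taken from the paper's experiment):

```python
import numpy as np

# rho(F_mu) = (mu*sigma^2 + 1) / (1 + mu*sigma_n(A)^2); illustrative values
sigma, sigma_n = 1e-3, 5e-1            # assumed to satisfy sigma < sigma_n(A)
for mu in (1e0, 1e2, 1e4, 1e6):
    rho = (mu * sigma**2 + 1.0) / (1.0 + mu * sigma_n**2)
    print(f"mu = {mu:8.0e}   rho(F_mu) = {rho:.3e}")
# rho decreases monotonically towards sigma^2 / sigma_n(A)^2 as mu grows
```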

Let us show that algorithms (8) and (9) have different condition numbers. The simple iteration method can be written as follows:

x_{k+1} = argmin_{x∈R^n} ‖ [A; √(μ^{-1})E_n] x − [f; √μ(μ^{-1} + σ²)x_k] ‖². (11)

Formula (11) can be represented in the following form:

x_{k+1} = A_μ^+ f^{(k)},

where A_μ = [A; √(μ^{-1})E_n], f^{(k)} = [f; √μ(μ^{-1} + σ²)x_k] (here [X; Y] denotes the matrix X stacked on top of Y), and A_μ^+ is the Moore–Penrose pseudoinverse of A_μ.

Since rank(A_μ) = n, A_μ^+ can be calculated by the formula


A_μ^+ = (A_μ^T A_μ)^{-1} A_μ^T.

In this case, the problem corresponds to the classical form of the implicit method of simple iterations:

x_{k+1} = (A_μ^T A_μ)^{-1} A_μ^T f^{(k)} = (A^T A + μ^{-1}E_n)^{-1}(A^T f + (μ^{-1} + σ²)x_k),

κ(A_μ^T A_μ) = κ₂(A^T A + μ^{-1}E_n) = λ_max(A^T A + μ^{-1}E_n)/λ_min(A^T A + μ^{-1}E_n) = (σ₁²(A) + μ^{-1})/(σ_n²(A) + μ^{-1}).

For the implicit method based on the SVD, the condition number is equal to the condition number of the matrix A_μ:

κ₂(A_μ) = ((σ₁²(A) + μ^{-1})/(σ_n²(A) + μ^{-1}))^{1/2}.
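The following sketch (hypothetical test matrix and parameter values, not the paper's data) checks this relationship numerically: the condition number of A_μ is the square root of the condition number of the matrix A^T A + μ^{-1}E_n that appears in the normal-equation form.

```python
import numpy as np

rng = np.random.default_rng(0)
# hypothetical ill-conditioned test matrix: Gaussian columns with prescribed scaling
A = rng.standard_normal((200, 4)) @ np.diag([1e-3, 1e0, 1e1, 1e2])
mu_inv = 1e-4
A_mu = np.vstack([A, np.sqrt(mu_inv) * np.eye(4)])        # A_mu = [A; sqrt(mu^{-1}) E_n]

k_normal = np.linalg.cond(A.T @ A + mu_inv * np.eye(4))   # kappa_2(A^T A + mu^{-1} E_n)
k_svd = np.linalg.cond(A_mu)                               # kappa_2(A_mu)
print(k_normal, k_svd, k_svd**2 / k_normal)                # the last ratio should be close to 1
```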

4. Stopping rule for the implicit iterative algorithm. There is a large number of stopping rules for iterative regularized algorithms [28-30]. In this article, to stop the proposed iterative algorithm for solving (5), we use a restriction on the norm of the solution:

‖x_{k+1}‖ ≤ δ, (12)

where δ is the maximum allowable value of the Euclidean norm of the solution vector.

In contrast to Tikhonov total least squares regularization [20], condition (12) is verified directly, without computing auxiliary parameters.
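A sketch of how rule (12) can be checked directly inside a fixed-point loop; the wrapper iterate_with_norm_bound, the step callable, and the value of δ are illustrative assumptions rather than part of the paper.

```python
import numpy as np

def iterate_with_norm_bound(step, x0, delta, max_iter=1000):
    """Run a fixed-point iteration x_{k+1} = step(x_k) and stop as soon as
    the norm constraint ||x_{k+1}|| <= delta from (12) would be violated."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        x_next = step(x)
        if np.linalg.norm(x_next) > delta:   # condition (12) violated: keep the last admissible iterate
            return x
        x = x_next
    return x

# toy usage with a contractive step (for illustration only)
x = iterate_with_norm_bound(lambda x: 0.9 * x + 0.5, np.zeros(3), delta=10.0)
```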

5. Simulation results. The Regularization Toolbox [31] was used to generate test cases. A matrix A_{2000×4} with singular values σ(A) = (5·10^{-4}, 10^4, 10^6, 10^7) was generated. The true vector is x_true = (1, 1, 1, 1)^T. The vector f is f = A_{2000×4} x_true.

Gaussian white noise with zero mean and standard deviation σ_f = σ_A = 10^{-2} was added to the matrix A_{2000×4} and to the vector f.

The proposed algorithm for solving (5) was compared with the classical SVD-based TLS algorithm [12], the solution based on augmented systems [13], and regularized total least squares (RTLS) [20]:

(A^T A − σ²E_n + αE_n)x = A^T f. (13)

The condition number of the matrix A^T A − σ²E_n + αE_n is

κ₂(A^T A − σ²E_n + αE_n) = (σ₁²(A) − (σ² − α))/(σ_n²(A) − (σ² − α)).

The parameter α was selected from the interval (0, σ²) with step 10^{-4}σ²: α_i = 10^{-4}σ²·i, i = 0, 1, …, 10000.

The algorithms were compared by the relative mean square error (RMSE) of the solution:

δ_{x_k} = (‖x_k − x_true‖₂ / ‖x_true‖₂) · 100%.
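For reproduction purposes, a sketch of an analogous test setup in NumPy (the paper uses the Matlab Regularization Toolbox [31]; the construction of A from random orthonormal factors and the random seed are assumptions, while the sizes, singular values, noise level, and error measure follow the text):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 2000, 4
s_true = np.array([5e-4, 1e4, 1e6, 1e7])          # prescribed singular values from the text

# build A with the prescribed singular values via random orthonormal factors (assumed construction)
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A_exact = U @ np.diag(s_true) @ V.T

x_true = np.ones(n)
f_exact = A_exact @ x_true

sigma_noise = 1e-2                                  # sigma_f = sigma_A = 10^{-2}
A = A_exact + sigma_noise * rng.standard_normal((m, n))
f = f_exact + sigma_noise * rng.standard_normal(m)

def rmse(x):
    """Relative error of a computed solution, in percent."""
    return 100.0 * np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```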

The simulation results are presented in Table 1. Figure 1 shows the relative root mean square error of the solution (8) at the k-th iteration for various values of the parameter μ^{-1}. Figure 2 shows the relative root mean square error of the solution (13) depending on the choice of the parameter α_i.

Table 1. RMSE of the solution

Algorithm for estimating parameters        δx, %          κ₂
Algorithm (5) with μ^{-1} = 10^{-1}σ       7.53·10^{-2}   2.02·10^{7}
Algorithm (5) with μ^{-1} = 10^{-2}σ       0.2045         2.20·10^{7}
Algorithm (5) with μ^{-1} = 10^{-5}σ       8.63·10^{-2}   2.23·10^{7}
TLS [12]                                   49.51          4.75·10^{9}
TLS [13]                                   49.51          6.34·10^{10}
RTLS [20]                                  17.73          5.32·10^{16}

Figure 1. RMSE of the solution (8) at the k-th iteration for the three values of the parameter μ^{-1} listed in Table 1

Figure 2. RMSE of the solution (13) for various values of the parameter α_i = 10^{-4}σ²·i

Conclusion. The paper proposes a new implicit iterative algorithm for solving regularized total least squares problems. The simulation showed that the proposed algorithm has a higher accuracy compared to the solutions obtained by total least squares algorithms, as well as the total least squares solution with Tikhonov regularization.

The proposed implicit iterative algorithm makes it possible to implement a constraint on the length of the solution vector without solving additional nonlinear equations.

The condition number of the problems solved at each iteration is less than the condition number of the systems with Tikhonov regularization.

Competing Interests. We have no competing interests.

Authors' Responsibilities. Each author has participated in developing the concept of the article and in writing the manuscript. The authors take full responsibility for submitting the final manuscript for publication. Each author has approved the final version of the manuscript.

Funding. This work was supported by the Federal Agency of Railway Transport (projects nos. 122022200429-8, and 122022200432-8).

Acknowledgments. The authors thank the referees for careful reading of the paper and valuable suggestions and comments.

References

1. Markovsky I. Bibliography on total least squares and related methods, Stat. Interface, 2010, vol.3, no. 3, pp. 329-334. DOI: https://doi.org/10.4310/SII.2010.v3.n3.a6.

2. Pintelon R., Schoukens J. System Identification: A Frequency Domain Approach. Piscataway, NJ, IEEE Press, 2012, xliv+743 pp. DOI: https://doi.org/10.1002/9781118287422.

3. Pillonetto G., Chen T., Chiuso A., De Nicolao G., Ljung L. Regularized System Identification. Learning Dynamic Models from Data, Communications and Control Engineering. Cham, Springer, 2022, xxiv+377 pp. DOI: https://doi.org/10.1007/978-3-030-95860-2.

4. Markovsky I., Willems J. C., Van Huffel S., Bart De Moor, Pintelon R. Application of structured total least squares for system identification and model reduction, IEEE Trans.

Autom. Control, 2005, vol.50, no. 10, pp. 1490-1500. DOI: https://doi.org/10.1109/TAC.2005.856643.

5. Ivanov D. V. Identification of linear dynamic systems of fractional order with errors in variables based on an augmented system of equations, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2021, vol.25, no. 3, pp. 508-518. EDN: RCYACI. DOI: https://doi.org/10.14498/vsgtu1854.

6. Fu H., Barlow J. A regularized structured total least squares algorithm for high-resolution image reconstruction, Linear Algebra Appl., 2004, vol.391, pp. 75-98. DOI: https://doi.org/10.1016/S0024-3795(03)00660-8.

7. Mesarovic V. Z., Galatsanos N. P., Katsaggelos A. K. Regularized constrained total least squares image restoration, IEEE Trans. Image Process., 1995, vol.4, no. 8, pp. 1096-1108. DOI: https://doi.org/10.1109/83.403444.

8. Zhu W., Wang Y., Yao Y., Chang J., Graber H. L., Barbour R. L. Iterative total least-squares image reconstruction algorithm for optical tomography by the conjugate gradient method, J. Opt. Soc. Am. A, 1997, vol.14, no.4, pp. 799-807. DOI: https://doi.org/10.1364/josaa.14.000799.

9. Zhu W., Wang Y., Zhang J. Total least-squares reconstruction with wavelets for optical tomography, J. Opt. Soc. Am. A, 1998, vol.15, no. 10, pp. 2639-2650. DOI: https://doi.org/10.1364/josaa.15.002639.

10. Lemmerling P., Mastronardi N., Van Huffel S. Efficient implementation of a structured total least squares based speech compression method, Linear Algebra Appl., 2003, vol. 366, pp. 295-315. DOI: https://doi.org/10.1016/S0024-3795(02)00465-2.

11. Khassina E. M., Lomov A. A. Audio files compression with the STLS-ESM method, St. Petersburg State Polytechnical University Journal. Computer Science. Telecommunications and Control Systems, 2015, vol.229, no. 5, pp. 88-96. EDN: VAWFWT. DOI: https://doi.org/10.5862/JCSTCS.229.9.

12. Golub G. H., Van Loan C. An analysis of the total least squares problem, SIAM J. Numer. Anal., 1980, vol.17, no.6, pp. 883-893. DOI: https://doi.org/10.1137/0717073.

13. Zhdanov A. I., Shamarov P. A. The direct projection method in the problem of complete least squares, Autom. Remote Control, 2000, vol.61, no.4, pp. 610-620. EDN: LGBGAF.

14. Ivanov D., Zhdanov A. Symmetrical augmented system of equations for the parameter identification of discrete fractional systems by generalized total least squares, Mathematics, 2021, vol.9, no. 24, 3250. EDN: QFMGJB. DOI: https://doi.org/10.3390/math9243250.

15. Björck Å. Newton and Rayleigh quotient methods for total least squares problem, In: Recent Advances in Total Least Squares Techniques and Errors in Variables Modeling, Proceedings of the Second Workshop on Total Least Squares and Errors-in-Variables Modeling (Leuven, Belgium, August 21-24, 1996). Philadelphia, PA, USA, SIAM, 1997, pp. 149-160.

16. Björck Å., Heggernes P., Matstoms P. Methods for large scale total least squares problems, SIAM J. Matrix Anal. Appl., 2000, vol.22, no. 2, pp. 413-429. DOI: https://doi.org/10.1137/S0895479899355414.

17. Fasino D., Fazzi A. A Gauss-Newton iteration for total least squares problems, BIT Numer. Math., 2018, vol.58, no.2, pp. 281-299. DOI: https://doi.org/10.1007/s10543-017-0678-5.

18. Mohammedi A. Rational-Lanczos technique for solving total least squares problems, Kuwait J. Sci. Eng., 2001, vol.28, no. 1, pp. 1-12.

19. Fierro R. D., Golub G. H., Hansen P. C., O'Leary D. P. Regularization by truncated total least squares, SIAM J. Sci. Comp., 1997, vol. 18, no.4, pp. 1223-1241. DOI: https://doi.org/10.1137/S1064827594263837.

20. Golub G. H., Hansen P. C., O'Leary D. P. Tikhonov regularization and total least squares, SIAM J. Matrix Anal. Appl., 1999, vol.21, no. 1, pp. 185-194. DOI: https://doi.org/10.1137/S0895479897326432.

21. Lampe J., Voss H. Solving regularized total least squares problems based on eigenproblems, Taiwanese J. Math., 2010, vol.14, no. 3A, pp. 885-909. DOI: https://doi.org/10.11650/twjm/1500405873.

22. Sima D. M., Van Huffel S., Golub G. H. Regularized total least squares based on quadratic eigenvalue problem solvers, BIT Numer. Math., 2004, vol.44, no. 4, pp. 793-812. DOI: https://doi.org/10.1007/s10543-004-6024-8.

23. Lampe J., Voss H. Efficient determination of the hyperparameter in regularized total least squares problems, Appl. Numer. Math., 2012, vol.62, no. 9, pp. 1229-1241. DOI: https://doi.org/10.1016/j.apnum.2010.06.005.

24. Zhdanov A. I. Direct recurrence algorithms for solving the linear equations of the method of least squares, Comput. Math. Math. Phys., 1994, vol. 34, no. 6, pp. 693-701. EDN: VKRSPF.

25. Vainiko G. M., Veretennikov A. Yu. Iteratsionnye protsedury v nekorrektno postavlennykh zadachakh [Iteration Procedures in Ill-Posed Problems]. Moscow, Nauka, 1986, 177 pp.

26. Zhdanov A. I. Implicit iterative schemes based on singular decomposition and regularizing algorithms, Vestn. Samar. Gos. Tekhn. Univ., Ser. Fiz.-Mat. Nauki [J. Samara State Tech. Univ., Ser. Phys. Math. Sci.], 2018, vol.22, no. 3, pp. 549-556. EDN: PJITAX. DOI: https://doi.org/10.14498/vsgtu1592.

27. Zhdanov A. I. The solution of ill-posed stochastic linear algebraic equations by the maximum likelihood regularization method, USSR Comput. Math. Math. Phys., 1988, vol.28, no. 5, pp. 93-96. DOI: https://doi.org/10.1016/0041-5553(88)90014-6.

28. Gfrerer H. An a posteriori parameter choice for ordinary and iterated Tikhonov regularization of ill-posed problems leading to optimal convergence rates, Math. Comp., 1987, vol. 49, no. 180, pp. 507-522. DOI: https://doi.org/10.1090/S0025-5718-1987-0906185-4.

29. Hamarik U., Tautenhahn U. On the monotone error rule for parameter choice in iterative and continuous regularization methods, BIT Numer. Math., 2001, vol.41, no. 5, pp. 1029-1038. DOI: https://doi.org/10.1023/A:1021945429767.

30. Tautenhahn U., Hamarik U. The use of monotonicity for choosing the regularization parameter in ill-posed problems, Inverse Probl., 1999, vol. 15, no. 6, pp. 1487-1505. DOI: https://doi.org/10.1088/0266-5611/15/6/307.

31. Hansen P. C. Regularization tools version 4.0 for Matlab 7.3, Numer. Algorithms, 2007, vol.46, no. 2, pp. 189-194. DOI: https://doi.org/10.1007/s11075-007-9136-9.
