
PROGRAMMING

MSC 93C05 DOI: 10.14529/mmp140306

SOLVING OF A MINIMAL REALIZATION PROBLEM IN MAPLE

V.M. Adukov, South Ural State University, Chelyabinsk, Russian Federation, adukovvm@susu.ac.ru,

A.S. Fadeeva, South Ural State University, Chelyabinsk, Russian Federation, fadeevaas@susu.ac.ru

In the computer algebra system Maple, we have created a package MinimalRealization to solve the minimal realization problem for a discrete-time linear time-invariant system. The package makes it possible to construct the minimal realization of a system starting with either a finite sequence of Markov parameters of the system, or a transfer function, or any non-minimal realization. It is designed as a user library and consists of 11 procedures: ApproxEssPoly, ApproxNullSpace, Approxrank, ExactEssPoly, FractionalFactorizationG, FractionalFactorizationMP, MarkovParameters, MinimalityTest, MinimalRealizationG, MinimalRealizationMP, Realization2MinimalRealization. The realization algorithm is based on sequentially solving the following problems: (1) determination of indices and essential polynomials (procedures ExactEssPoly, ApproxEssPoly), (2) construction of a right fractional factorization of the transfer function (FractionalFactorizationG, FractionalFactorizationMP), (3) construction of the minimal realization from the given fractional factorization (MinimalRealizationG, MinimalRealizationMP, Realization2MinimalRealization). We can solve the problem both in the case of exact calculations (in rational arithmetic) and in the presence of rounding errors, or for input data disturbed by noise. In the latter case the problem is ill-posed because it requires finding the rank and the null space of a matrix. We use the singular value decomposition as the most accurate method for calculating the numerical rank (Approxrank) and the numerical null space (ApproxNullSpace). Numerical experiments with the package MinimalRealization demonstrate good agreement between the exact and approximate solutions of the problem.

Keywords: discrete-time linear time-invariant systems; fractional factorization; minimal realization; algorithms for solving the realization problem.

Introduction

In this paper we describe the package MinimalRealization, designed for solving the minimal realization problem. This is one of the fundamental problems of linear system theory. Our package is a Maple implementation of the results of [1], where a new algorithm for solving the minimal realization problem was suggested. The algorithm is based on the notions of indices and essential polynomials of a matrix sequence [2].

The computer algebra system Maple was chosen for the implementation of the algorithm because it allows us to solve the problem exactly if the input data are rational numbers. This enables us to construct model examples for testing the approximate algorithms. All routines of the package are designed so that the problem is solved exactly if the input data are rational numbers, and approximate algorithms are used if the data are floating-point numbers.
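As a small illustration of this design (our sketch, not code from the package), a routine can choose between the two branches by testing whether all entries of the input data are rational; Maple's structured type 'Matrix'(rational) is assumed here for that test:

# Sketch: decide whether the exact branch (Rank, NullSpace in rational
# arithmetic) or the approximate SVD-based branch (Approxrank,
# ApproxNullSpace) should be used for the given data.
UseExactArithmetic := proc(data::Matrix)
    # true if every entry is a rational number
    type(data, 'Matrix'(rational));
end proc:

> UseExactArithmetic(Matrix([[1/2, 3], [0, -2]]));      # true:  exact branch
> UseExactArithmetic(Matrix([[0.5, 3.0], [0., -2.]]));  # false: approximate branch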

In the presence of rounding errors or for noisy input data, the minimal realization problem is ill-posed because it requires finding ranks and null spaces of matrices. We apply the singular value decomposition (SVD) as the most accurate method for calculating the numerical rank and the numerical null space. An application of the SVD to the problem is described in the review [3]. There are other approximate methods for solving the realization problem; for example, a series of works by S.G. Pushkov and coauthors is devoted to interval computations in the realization problem (see [4]).

Numerical experiments with the package MinimalRealization showed a good correspondence between the exact and approximate solutions of the problem.

1. Statement of the Problem

There are various statements of the minimal realization problem (see, e.g., [3, 5, 6]). From a formal point of view, they can be reduced to the following three problems.

1.1. Construction of the Minimal Realization by the Impulse Response (by the Sequence of the Markov Parameters) of a System

Given a sequence of real or complex p × q matrices {G_k}_{k=1}^∞ (the impulse response, or the sequence of the Markov parameters, of a system). Find a triple Σ = (A, B, C) of matrices such that

    C A^{k-1} B = G_k,  k = 1, 2, ... .    (1)

We call Σ the system. The sequence {G_k}_{k=1}^∞ is said to be realizable if the problem has a solution; the system (A, B, C) is called a realization of the sequence {G_k}_{k=1}^∞, and the order n of the matrix A is called the order of the realization. The sequence is realizable if and only if the infinite Hankel matrix

    \begin{pmatrix} G_1 & G_2 & G_3 & \cdots \\ G_2 & G_3 & G_4 & \cdots \\ G_3 & G_4 & G_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{pmatrix}    (2)

has finite rank ρ.

A realization is minimal if its order is minimal. The minimal realization is unique up to a similarity transformation. The order δ of the minimal realization coincides with the rank ρ of the Hankel matrix (2). To solve the problem we need knowledge of an upper bound δ_0 for the order of the realization: δ ≤ δ_0. If such an estimate is known, then the finite sequence G_1, G_2, ..., G_{2δ_0} of the Markov parameters is required for the construction of the realization.
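The rank criterion above can be checked directly on a finite segment of the sequence. The following Maple sketch (our illustration, not a procedure of the package) builds the δ_0 × δ_0 block Hankel matrix from a list of Markov parameters and computes its rank, which for exact rational data equals the order δ of the minimal realization:

with(LinearAlgebra):

# GList is the list [G_1, ..., G_m] of p x q Matrices, m >= 2*delta0 - 1.
BlockHankelRank := proc(GList::list, delta0::posint)
    local H, R, i, j;
    for i from 1 to delta0 do
        R := GList[i];                         # block row i: G_i, G_(i+1), ...
        for j from 2 to delta0 do
            R := <R | GList[i + j - 1]>;
        end do;
        if i = 1 then H := R; else H := <H, R>; end if;
    end do;
    Rank(H);   # rank of the block Hankel matrix = order of the minimal realization
end proc:

For floating-point or noisy data, Rank would have to be replaced by a numerical rank computed from the singular value decomposition, as discussed in Section 2.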

1.2. Construction of the Minimal Realization by the Transfer Function of a System

Given a p × q matrix function G(z) (the transfer function of a system). Find a triple Σ = (A, B, C) of matrices such that

    G(z) = C(zI - A)^{-1}B.    (3)

The problem is solvable if and only if G(z) is a strictly proper rational matrix function. The system (A, B, C) is called a realization of G(z), and the order n of the matrix A is the order of the realization. A realization of G(z) is minimal if its order is minimal. The order δ of the minimal realization of G(z) is said to be the McMillan degree of the matrix function G(z). There is an upper bound for the McMillan degree [5, Sec. 3.2]:

    δ ≤ δ_0 = min{ Σ_{i=1}^{p} α_i, Σ_{i=1}^{q} β_i }.    (4)

Here α_i (β_i) is the degree of the least common denominator of the entries of the i-th row (column) of the rational matrix function G(z).

Let G(z) be a realizable matrix function. Let us expand it as a Laurent series about z = ∞:

    G(z) = Σ_{k=1}^{∞} G_k z^{-k}.

Hence, the Laurent coefficients of G(z) are the Markov parameters of the system. The matrix function G(z) = Σ_{k=1}^{∞} G_k z^{-k} is a strictly proper rational matrix function if and only if the Hankel matrix (2) has finite rank. Thus, realization Problems 1.1 and 1.2 are equivalent.

Informally, the realization problem is to reconstruct a discrete-time linear time-invariant system S

    x_{j+1} = A x_j + B u_j,
    y_j = C x_j

that generates the given impulse response {G_k}_{k=1}^∞. Note that the impulse response of a system can be measured and, in practice, the measured data can be disturbed by noise.
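Indeed, a short calculation (with the standard assumption of the zero initial state x_0 = 0) links the system S to condition (1): for the unit impulse input u_0 = I, u_j = 0 for j ≥ 1, the state evolves as

    x_1 = B,  x_2 = AB,  ...,  x_j = A^{j-1}B,

so the output is y_j = C x_j = C A^{j-1} B = G_j, j = 1, 2, ...; that is, the Markov parameters are exactly the samples of the impulse response.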

We identify the system S with the triple (A, B, C). The system Sm = (Am, Bm, Cm) that corresponds to the minimal realization generates the same impulse response as S. However, Sm has important additional properties: it is observable and controllable. A system (A, B, C) of order n is called observable if the observability matrix

    O_n(C, A) = \begin{pmatrix} C \\ CA \\ \vdots \\ CA^{n-1} \end{pmatrix}    (5)

has full rank. Similarly, the system is called controllable if the controllability matrix

    C_n(B, A) = \begin{pmatrix} B & AB & \cdots & A^{n-1}B \end{pmatrix}    (6)

has full rank.

By the Kalman theorem [6, Th. 6.2-3], a realization is minimal if and only if it is observable and controllable. The condition that O_n and C_n have full rank is used as a minimality test.
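This test is easy to express directly in Maple. The sketch below is our illustration of the criterion (not the code of the procedure MinimalityTest itself): it builds the matrices (5) and (6) and checks that both have full rank n.

with(LinearAlgebra):

MinimalityCheck := proc(A::Matrix, B::Matrix, C::Matrix)
    local n, k, Obs, Con;
    n := RowDimension(A);
    Obs := C;     # observability matrix (5): C, CA, ..., CA^(n-1) stacked
    Con := B;     # controllability matrix (6): B, AB, ..., A^(n-1)B side by side
    for k from 1 to n - 1 do
        Obs := <Obs, C . A^k>;
        Con := <Con | A^k . B>;
    end do;
    # the realization is minimal iff both matrices have full rank n
    evalb(Rank(Obs) = n and Rank(Con) = n);
end proc:

> MinimalityCheck(Matrix([[0, 1], [-1, -2]]), Matrix([[0], [1]]), Matrix([[1, 0]]));  # true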

1.3. Construction of the Minimal Realization by Reduction of a Non-Minimal Realization

Given a non-minimal system Σ = (A, B, C) of order n. Find a minimal realization Σ_m = (Am, Bm, Cm) of the system. Here we have the upper bound δ ≤ δ_0 = n.

2. Algorithms and Procedures

In this section we describe the algorithms for solving these problems with the package MinimalRealization.

The package is designed as a user library and consists of 11 procedures: ApproxEssPoly, ApproxNullSpace, Approxrank, ExactEssPoly, FractionalFactorizationG, FractionalFactorizationMP, MarkovParameters, Minimality Test, MinimalRealizationG, MinimalRealizationMP, Realization2MinimalRealization.

The main tools for solving the realization problem are the indices and essential polynomials of a finite matrix sequence [1, 2]. The procedures ExactEssPoly, ApproxEssPoly allow us to find these characteristics of the sequence. ExactEssPoly is used for computation in rational arithmetic, and ApproxEssPoly is applied if the input data are floating-point numbers. In order to find the indices and the essential polynomials we should be able to calculate the ranks and null spaces of matrices. In the procedure ExactEssPoly we use the commands Rank and NullSpace of the LinearAlgebra package. In the presence of rounding errors, finding ranks or null spaces is an ill-posed problem. The most widely used method for solving such problems is the singular value decomposition (SVD). We also use the SVD in the procedures Approxrank and ApproxNullSpace. To determine the numerical ε-rank of a matrix it is necessary to choose the tolerance epsilon. This is a critical point of the problem, and many works are devoted to this issue (see, e.g., [7]).

In the procedures Approxrank, ApproxNullSpace the tolerance epsilon is a natural number, and the singular values of a matrix are treated as zero if they are less than 10^(-epsilon).
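The idea behind Approxrank can be sketched in a few lines (our illustration only, not the actual code of the procedure): compute the singular values and count those exceeding 10^(-epsilon).

with(LinearAlgebra):

NumericalRank := proc(A::Matrix, epsilon::posint)
    local sv, tol;
    sv  := convert(SingularValues(evalf(A)), list);   # singular values as floats
    tol := 10.0^(-epsilon);
    # numerical epsilon-rank: number of singular values above the threshold
    nops(select(s -> evalb(s > tol), sv));
end proc:

> NumericalRank(Matrix([[1., 2.], [2., 4.0000001]]), 5);   # 1: numerically rank deficient

The numerical null space can be obtained in the same spirit from the right singular vectors that correspond to the discarded singular values.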

The indices and essential polynomials allow us to construct a right fractional factorization of the transfer matrix of a system [1, Theorem 2], [8]. We can do this both by the matrix function G(z) (the procedure FractionalFactorizationG) and by the sequence of the Markov parameters (the procedure FractionalFactorizationMP). We note that the construction of the fractional factorizations has an independent significance because these factorizations also play an important role in the problem of generalized inversion of Toeplitz and Hankel matrices [9] and in the Wiener-Hopf factorization problem [8].

The minimal realization can be constructed if we know the fractional factorization [1, Theorem 3]. An implementation of the algorithm from this article is carried out in the procedures MinimalRealizationMP, MinimalRealizationG and Realization2MinimalRealization. They solve Problems 1.1, 1.2, and 1.3, respectively. The procedure MinimalityTest is applied to test the minimality of the realization. For this we check that the matrices (5) and (6) have full rank.

We now describe algorithms that are used in the procedures.

2.1. The Algorithm for Calculation of the Indices and Essential Polynomials.

The Procedures ExactEssPoly, ApproxEssPoly

The parameters that are passed to ExactEssPoly are M, N, cMN. Here M < N are integers, and cMN is a sequence c_M, c_{M+1}, ..., c_N of complex p × q matrices written in the form of the block column

    cMN = \begin{pmatrix} c_M \\ c_{M+1} \\ \vdots \\ c_N \end{pmatrix}.

For ApproxEssPoly we must also pass the tolerance epsilon that is used in Approxrank and ApproxNullSpace for the numerical determination of the ε-rank of a matrix.

We impose a technical restriction on the sequence cMN: ω = 0 (the definition of the defect ω of the sequence is given below in the description of Algorithm 1). We use the Maple packages LinearAlgebra, PolynomialTools, numapprox.

The procedures return the indices μ_1, ..., μ_{p+q} and the matrix Rho := (R_1(z) ... R_{p+q}(z)) consisting of the right essential polynomials of the sequence cMN.

The algorithm for the exact calculation of μ_i and Rho is as follows (for details, see [1, 2]).

Algorithm 1. Exact calculation of indices and essential polynomials

1: procedure ExactEssPoly(M, N, cMN)
2: Formation of the sequence c_M, c_{M+1}, ..., c_N from the matrix cMN;
3: for k from M to N do T_k := ||c_{i-j}||, i = k, k+1, ..., N, j = 0, 1, ..., k-M;
4: end do;
5: d_{M-1} := 0, d_{N+1} := (N - M + 2)q;
6: for k from M to N do rank := Rank(T_k), d_k := (k - M + 1)q - rank, Δ_{k+1} := d_{k+1} - d_k;
7: end do;
8: α := d_M, ω := p + q - Δ_{N+1};
9: if ω ≠ 0 then
10: Info: "error"; STOP;
11: end if;
12: Determination of the distinct indices μ_{l_1}, ..., μ_{l_s} and their multiplicities;
13: Formation of the sequence μ_1, ..., μ_{p+q} of the indices taking into account their multiplicities;
14: if Σ_{i=1}^{p+q} μ_i ≠ (N + 1)p + (M - 1)q then
15: Info: "error"; STOP;
16: end if;
17: for m from 1 to s do kern_m := NullSpace(T_{μ_{l_m}+1});
18: end do;
19: Formation of the matrix A^{(k_1)} consisting of the first k_1 columns of the matrix A from the criterion of essentialness (see [2], Theorem 4.1);
20: if Rank(A^{(k_1)}) ≠ k_1 then
21: Info: "error"; STOP;
22: end if;
23: A := A^{(k_1)}, Rho := (R_1(z) ... R_{k_1}(z)), R_j ∈ kern_1;
24: for m from 2 to s do
25: for k from 1 to dim kern_m do
26: Formation of the column A_k corresponding to the vector U_k ∈ kern_m;
27: if Rank((A, A_k)) > Rank(A) then A := (A, A_k), Rho := (Rho, U_k(z));
28: end if;
29: end do;
30: end do;
31: if Rank(A) ≠ p + q then
32: Info: "error"; STOP;
33: end if;
34: end procedure

In the procedure ApproxEssPoly we use Approxrank and ApproxNullSpace instead of the functions Rank and NullSpace.

The following example shows that we have to use ApproxEssPoly in the presence of rounding errors or for noisy input data.

Example 1. Let us consider the matrix sequence from Example 1 of the article [1]:

    G_1 = \begin{pmatrix} 1 & 0 \\ 2 & 0 \\ 0 & 1 \end{pmatrix},  G_2 = \begin{pmatrix} 0 & 0 \\ 0 & -2 \\ 0 & 1 \end{pmatrix},  G_3 = \begin{pmatrix} 0 & -1 \\ 0 & 0 \\ 0 & 1 \end{pmatrix},

    G_4 = \begin{pmatrix} 0 & 1 \\ 0 & -4 \\ 0 & 1 \end{pmatrix},  G_5 = \begin{pmatrix} 0 & -3 \\ 0 & 4 \\ 0 & 1 \end{pmatrix},  G_6 = \begin{pmatrix} 0 & 5 \\ 0 & -12 \\ 0 & 1 \end{pmatrix}.

We form the matrix cMN and call the procedure ExactEssPoly (hereinafter, for brevity, we omit some lines from the Maple worksheets):

> ExactEssPoly(1, 6, cMN);

Indicies

mu[1]=1, mu[2]=2, mu[3]=6, mu[4]=6, mu[5]=6,

"alpha, omega, mu, Rho, L are now available"

> Rho;

[[1,-1/2 z^2,0,z^6,0], [0,-1/2-1/2 z+z^2,z^6,0,z^5]]

The procedure returns the indices μ_i and the matrix Rho. Since the stability condition for the indices is not satisfied, the indices of the sequence are unstable under small perturbations.

Let us perturb the matrix cMN by a random matrix generated with RandomMatrix:

> RandG := cMN + RandomMatrix(18, 2, density=1, generator=-10^(-6)..10^(-6));

Now we call the procedure ExactEssPoly again. The following result is obtained:

> ExactEssPoly(1, 6, RandG);

Indicies

mu[1]=4, mu[2]=4, mu[3]=4, mu[4]=4, mu[5]=5,

"alpha, omega, mu, Rho, L are now available"

Thus, the use of the exact algorithm does not allow us to find the correct values of the indices.

Now we call the procedure ApproxEssPoly with the tolerance epsilon = 5:

> ApproxEssPoly(1, 6, RandG, 5);

"Indicies of the sequence:" mu[1]=1, mu[2]=2, mu[3]=6, mu[4]=6, mu[5]=6,

"alpha, omega, mu, dmu, Rho are now available"

> Rho;

[[0.99999999999983568,
-0.186319344532183046+0.389874976941426200 z-0.340856515264002624 z^2, 0.,0.,1. z^6],
[0.,-0.340856623124891012-0.340856551772813099 z+0.681713292957929928 z^2, 1.,1. z,0.]]

We obtain for the perturbed sequence the same indices as for the unperturbed one. Note that we do not compare the matrices Rho consisting of right essential polynomials because a basis of the null space of a matrix is not uniquely determined.

2.2. The Algorithm for Construction of the Fractional Factorization.

The Procedures FractionalFactorizationG, FractionalFactorizationMP

The procedure FractionalFactorizationG is applied for the construction of a right fractional factorization

    G(z) = NR(z) DR^{-1}(z)

of the transfer function G(z) by Theorem 2 of the work [1]. For this we need the indices and the essential polynomials of the sequence G_1, ..., G_m, m ≥ 2δ, consisting of the Laurent coefficients of G(z).

The Markov parameters G_j are found by the procedure MarkovParameters. We pass to MarkovParameters two parameters: the matrix function G := G(z) and the number mM := m of the required parameters. To estimate the McMillan degree δ, inequality (4) is applied. The procedure MarkovParameters returns the block column GMarkov consisting of the Markov parameters.
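Since the Markov parameters are just the Laurent coefficients of G(z) at z = ∞, they can be computed entry-wise by a series expansion. The following sketch (our illustration under this definition, not the code of MarkovParameters itself) returns them stacked into a block column, which is the format the other procedures expect:

with(LinearAlgebra):

# G is a p x q Matrix of strictly proper rational functions in z,
# mM is the number of Markov parameters G_1, ..., G_mM to compute.
MarkovParametersSketch := proc(G::Matrix, mM::posint)
    local Gw, Gk, GM, k, w;
    # after the substitution z = 1/w the parameter G_k is the coefficient
    # of w^k in the Taylor expansion of G(1/w) at w = 0
    Gw := map(g -> normal(subs(z = 1/w, g)), G);
    Gk := k -> map(g -> coeff(convert(series(g, w = 0, mM + 2), polynom), w, k), Gw);
    GM := Gk(1);
    for k from 2 to mM do
        GM := <GM, Gk(k)>;    # stack the next Markov parameter below
    end do;
    GM;
end proc:

Applied, for instance, to the matrix function of Example 2 below with mM = 6, this sketch reproduces the six Markov parameters of Example 1.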

In order to find the indices and the essential polynomials we use the procedure ExactEssPoly if all entries of GMarkov are rational numbers, and we use ApproxEssPoly if there are floating-point numbers.

To the procedure FractionalFactorizationG we pass two parameters: G := G(z) and the tolerance epsilon that is used in ApproxEssPoly. The procedure returns the matrix polynomials DRight := DR(z) and NRight := NR(z).

The algorithm for the procedure is given below (see Algorithm 2).

Algorithm 2. Construction of the fractional factorization

1: procedure FractionalFactorizationG(G, epsilon)
2: Finding the upper bound δ_0 of the McMillan degree, mM := 2*δ_0;
3: GMarkov := MarkovParameters(G, mM);
4: if all entries of GMarkov are rational numbers then
5: ExactEssPoly(1, mM, GMarkov);
6: else
7: ApproxEssPoly(1, mM, GMarkov, epsilon);
8: end if;
9: Selection of the submatrix (R_1(z) ... R_q(z)) from Rho;
10: DRight := (z^{μ_1} R_1(z^{-1}) ... z^{μ_q} R_q(z^{-1}));
11: NRight1 := (Σ_{j=1}^{mM} G_j z^{-j}) * DRight;
12: NRight := the polynomial part of NRight1;
13: end procedure

The procedure FractionalFactorizationMP is used to construct the right fractional factorization of G(z) by the sequence G_1, ..., G_m, m ≥ 2δ_0, of the Markov parameters. The upper bound δ_0 of the McMillan degree δ must be known.

To the procedure FractionalFactorizationMP we pass three parameters: mM := m, GMarkov, epsilon. The procedure returns the matrices DRight := DR (z), NRight := NR(z). The algorithm for the FractionalFactorizationMP is the part of Algorithm 2 beginning with Line 4.

Example 2. Consider the matrix function

    G(z) = \begin{pmatrix} 1/z & -1/(z(z-1)(z+2)) \\ 2/z & -2(z+1)/(z(z-1)(z+2)) \\ 0 & 1/(z-1) \end{pmatrix}

from Example 1 of the article [1]. In that work the fractional factorization was found by the sequence of the Markov parameters. Now we construct the factorization directly from G(z):

> FractionalFactorizationG(G, 5);

Indicies

mu[1]=1, mu[2]=2, mu[3]=19, mu[4]=19, mu[5]=19,

"alpha, omega, mu, Rho, L are now available"
DRight=[[z,-1/2],[0,-1/2 z^2-1/2 z+1]]
NRight=[[1,0],[2,1],[0,-1-1/2 z]]

We obtained the same result as in [1].

2.3. The Algorithms for Construction of the Minimal Realization. The Procedures MinimalRealizationMP, MinimalRealizationG and Realization2MinimalRealization

The minimal realization (Am, Bm, Cm) of a system is found by Theorem 3 of the work [1]. For this we need the matrix polynomials DR(z), NR(z) from the right fractional factorization of the transfer function G(z) of the system. Let us denote by Dcol the invertible matrix consisting of the leading coefficients of the columns of DR(z). In Theorem 3 of [1] we showed that the matrices Am, Bm, Cm can be constructed from the entries of Dcol^{-1} DR(z), Dcol^{-1}, and NR(z), respectively.

We pass to the procedure MinimalRealizationMP (MinimalRealizationG) the same parameters as to FractionalFactorizationMP (FractionalFactorizationG). To call Realization2MinimalRealization we pass the matrices A, B, C of the system Σ and the tolerance epsilon.

All procedures return the matrices Am, Bm, Cm that define the minimal realization of the system. The procedure MinimalityTest is used for checking the minimality of the system. Algorithm 3 is the algorithm for MinimalRealizationMP.

The algorithm for MinimalRealizationG is the same as the previous one. The only difference is that we use the procedure FractionalFactorizationG instead of FractionalFactorizationMP.


Algorithm 3. Construction of the minimal realization by the Markov parameters

1: procedure MinimalRealizationMP(mM, GMarkov, epsilon)
2: FractionalFactorizationMP(mM, GMarkov, epsilon);
3: Finding of the matrix DRightCol := Dcol;
4: if Approxrank(DRightCol) ≠ q then
5: Info: "error"; STOP;
6: end if;
7: DRightNorm := (DRightCol)^{-1} * DRight;
8: Formation of Am by DRightNorm;
9: Formation of Bm by (DRightCol)^{-1};
10: Formation of Cm by NRight;
11: Test := MinimalityTest(Am, Bm, Cm);
12: if Test = 0 then
13: Info: "the system (Am, Bm, Cm) is minimal";
14: else
15: Info: "a minimal realization has not been constructed";
16: end if;
17: end procedure

The algorithm for a reduction of a non-minimal realization to the minimal one has the following form (see Algorithm 4).

Algorithm 4. Reduction of a non-minimal realization to the minimal one

1: procedure Realization2MinimalRealization(A, B, C, epsilon)
2: Test := MinimalityTest(A, B, C);
3: if Test = 0 then
4: Info: "the system (A, B, C) is minimal", STOP;
5: else
6: Info: "the system (A, B, C) is not minimal";
7: end if;
8: n := RowDimension(A);
9: for j from 1 to 2*n do G_j := C A^{j-1} B;
10: end do;
11: Formation of the matrix GMarkov;
12: MinimalRealizationMP(2*n, GMarkov, epsilon);
13: end procedure

3. Numerical Experiments

Example 3. Let us find the minimal realization of the transfer function

    G := \begin{pmatrix} z/(z+1)^2 & 1/z^2 \\ 1/(z+1) & 1/(z+2) \end{pmatrix}

from the article [5] with the help of MinimalRealizationG:

> MinimalRealizationG(G,4);

Indicies

mu[1]=2, mu[2]=3, mu[3]=11, mu[4]=12,
Am=[[0,1,0,0,0],[-1,-2,0,0,0],[0,0,0,1,0],[0,0,0,0,1],[0,0,0,0,-2]]
Bm=[[0,0],[1,0],[0,0],[0,0],[0,2]]
Cm=[[0,1,1,1/2,0],[1,1,0,0,1/2]]

Testing by formula (3) shows that (Am, Bm, Cm) is a realization of G(z); the minimality test is also fulfilled.
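Such a check is easy to script; the sketch below (ours, with the hypothetical name CheckRealization) evaluates C(zI - A)^{-1}B symbolically and verifies that it coincides with G(z) entry-wise.

with(LinearAlgebra):

CheckRealization := proc(A::Matrix, B::Matrix, C::Matrix, G::Matrix)
    local n, Gdiff;
    n := RowDimension(A);
    # formula (3): transfer function of (A, B, C) minus the given G(z)
    Gdiff := map(normal, C . MatrixInverse(z*IdentityMatrix(n) - A) . B - G);
    # the difference must vanish identically in z
    evalb(convert(Gdiff, set) = {0});
end proc:

> CheckRealization(Am, Bm, Cm, G);   # expected: true for the realization above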

Now we find the minimal realization for the perturbed matrix function

    G1 := \begin{pmatrix} 0.9999999786 z/(z + 1.0000000361)^2 & 1.0/z^2 \\ 1.0/(z + 1.0000000361) & 0.999999941/(z + 1.9999999234) \end{pmatrix}

with epsilon = 4:

> MinimalRealizationG(G1, 4);

Indicies

mu[1]=2, mu[2]=3, mu[3]=11, mu[4]=12,

Am=[[0,1,0,0,0],[-1.00000007699999993,-2.00000007700000015,-0.,-0.,-0.], [0,0,0,1,0], [0,0,0,0,1],[-0.,-0.,-0.,-0.,-1.99999991899999996]]
Bm=[[0,0],[-2.44948983676636000,0.],[0,0],[0,0],[0.,2.23606790799958155]]
Cm=[[-1.90000000000000006 10^(-9),-0.408248266099999980, 0.894427182499999973, 0.447213609399999990,0.],
[-0.408248291300000010,-0.408248274799999978,2.99999999999999998 10^(-9), -1.90000000000000006 10^(-9),0.447213583000000026]]

The matrices Am, Bm, Cm obtained for G and G1 cannot be compared directly because the minimal realization is unique up to a similarity transformation only. However, we see that the entries of the matrices Am for G and G1 are close. By formula (3) we checked that we really had obtained an approximate minimal realization of G1. Hence, the algorithm is robust.

Example 4. To verify the procedure MinimalRealizationMP we consider the example from the work [10, pp. 433, 434]. Let us take the first 6 Markov parameters and put x_1 = 17/3, x_2 = 1/37. Since the input data are cumbersome, we do not reproduce them here.

For the initial sequence the procedure returns the following values of the indices:

mu[1]=2, mu[2]=3, mu[3]=4, mu[4]=5,

Now we perturb the input data by RandomMatrix with entries in the range [-10^(-7), 10^(-7)]. For epsilon = 6 we get the following approximate minimal realization:

mu[1]=2, mu[2]=3, mu[3]=4, mu[4]=5,

Am=[[0,1,0,0,0],[-2.00000856399999982,3.00000255500000002, -4.95945338499999976,0.749432599000000032,-0.929948897000000052], [0,0,0,1,0],[0,0,0,0,1] ,

[-0.0000157399999999999984,0.00000549999999999999986,-6.97297860000000026, -3.33332246000000021,2.99999742000000014]]

Bm=[[0,0], [-15.3882633401427551,-20.6797652569520914], [0,0], [0,0],

[-41.9701472234983654,-41.9701418934536434]]

Cm=[[0.188982356099999998,-3.43999999999999970 10^(-8),
0.143975022300000005,-0.0931164741999999934,-0.0238264474300000016],
[-3.09695591100000002 10^(-8),-1.38333411999999998 10^(-9),
-0.0238265404300000012,1.84852747800000018 10^(-8),
-1.56090803199999994 10^(-9)]]

In order to verify the result we calculated the Markov parameters G_k for the initial sequence and for the perturbed one. It turns out that the entries of the first 6 Markov parameters coincide up to the first 5 digits after the decimal point.

Example 5. Here we test the procedure Realization2MinimalRealization. Let us consider the system (A, B,C), where

A := Matrix([[2/3, 0, -1, 3/7], [-2, 1, 1/2, 0], [0, 0, -2, 1/2], [0, 0, 0, 0]]);

B := Matrix( [[1/3, 0, -1], [-1, 1/2, 0], [-1/2, 0, 1], [0, 0, 0]]); C := Matrix([[1, -3, -1, 1], [-1/2, 1, 1/2, -1/2]]);

We obtain the following result

> Realization2MinimalRealization(A, B, C, 5);

"The system (A,B,C) isn't minimal"

Am=[[9,0,-25/6],[49,1,-49/2], [22,0,-31/3]] Bm=[[25/6,0,-9],[49/2,-1,-49], [31/3,0,-22]] Cm=[[-11,3/2,5/4], [27/8,-1/2,-5/16]] "The system (Am,Bm,Cm) is minimal"

Now we find the minimal realization if the initial data are disturbed by noise.

A1 := A+RandomMatrix(4, 4, density = 1, generator = -10^(-6) .. 10^(-6));
B1 := B+RandomMatrix(4, 3, density = 1, generator = -10^(-6) .. 10^(-6));
C1 := C+RandomMatrix(2, 4, density = 1, generator = -10^(-6) .. 10^(-6));
Realization2MinimalRealization(A1, B1, C1, 5);

Am=[ [0.807667894000000052,-0.330770221899999994,0.0205231110000000002], [0.0257488389999999992,-1.16876715800000008,-0.975788304000000050], [0.230421724999999994,-1.75904463600000006,0.0277649500000000000]] Bm=[ [5.62275499172367521,-0.876606563327936184,-9.13991031580920499], [10.8115998415014688,0.446659061140488090,-20.8163076602667552], [24.0189824469517284,-1.01594201125095229,-49.4999576454217234]] Cm=[ [2.31639664900000009,0.156871677099999996,-0.453276107799999995], [-0.755875162299999958,-0.0473018757999999970,0.139258362200000014]]

For checking we calculated the first 10 Markov parameters for (A, B, C) and (A1, B1, C1). The entries of these matrices coincide up to the first 4 digits after the decimal point.
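This kind of comparison can also be scripted; the sketch below (ours, with the hypothetical name MarkovDeviation) returns the largest entry-wise difference between the first m Markov parameters C A^(j-1) B of two realizations.

with(LinearAlgebra):

MarkovDeviation := proc(A, B, C, A1, B1, C1, m::posint)
    local P, P1, d, dev, j;
    P   := IdentityMatrix(RowDimension(A));
    P1  := IdentityMatrix(RowDimension(A1));
    dev := 0;
    for j from 1 to m do
        # j-th Markov parameters C.A^(j-1).B and C1.A1^(j-1).B1
        d   := map(abs, evalf(C . P . B - C1 . P1 . B1));
        dev := max(dev, max(convert(d, list)));
        P   := P . A;
        P1  := P1 . A1;
    end do;
    dev;
end proc:

> MarkovDeviation(A, B, C, A1, B1, C1, 10);   # small, consistent with the agreement reported above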

References

1. Adukov V.M. On a Problem of Minimal Realization. Bulletin of the South Ural State University. Series: Mathematical Modelling, Programming & Computer Software, 2013, vol. 6, no. 3, pp. 5-17. (in Russian)

2. Adukov V.M. Generalized Inversion of Block Toeplitz Matrices. Linear Algebra and Its Applications, 1998, vol. 274, pp. 85-124. DOI: 10.1016/S0024-3795(97)00304-2

3. De Schutter B. Minimal State-Space Realization in Linear System Theory: An Overview. Journal of Computational and Applied Mathematics, Special Issue on Numerical Analysis in the 20th Century. Vol. I: Approximation Theory, 2000, vol. 121, no. 1-2, pp. 331-354.

4. Pushkov S.G., Krivoshapko S.Y. On the Problem of Realization in the State-Space Model for Interval Dynamic Systems. Computing Technologies, 2004, vol. 9, no. 1, pp. 75-85. (in Russian)

5. Sinha N.K. Minimal Realization of Transfer Function Matrices. A Comparative Study of Different Methods. International Journal of Control, 1975, vol. 22, no. 5, pp. 627-639. DOI: 10.1080/00207177508922109

6. Kailath T. Linear Systems. N.J., Prentice-Hall, Inc., Englewood Cliffs, 1980.

7. Foster L.V., Davis T.A. Algorithm 933: Reliable Calculation of Numerical Rank, Null Space Bases, Pseudoinverse Solutions, and Basic Solutions Using SuitesparseQR. ACM Transactions on Mathematical Software (TOMS), 2013, vol. 40, issue 1, Article no. 7 (23 pages).

8. Adukov V.M. Fractional and Wiener-Hopf Factorizations. Linear Algebra and Its Applications, 2002, vol. 340, no. 1-3, pp. 199-213.

9. Adukov V.M. Generalized Inversion of Finite Rank Toeplitz and Hankel Operators with Rational Matrix Symbols. Linear Algebra and Its Applications, 1999, vol. 290, pp. 119-134. DOI: 10.1016/S0024-3795(98)10216-1

10. Tether A.J. Construction of Minimal Linear State-Variable Models from Finite Input-Output Data. IEEE Transactions on Automatic Control, 1970, vol. AC-15, no. 4, pp. 427-436. DOI: 10.1109/TAC.1970.1099514

Received August 13, 2014


Viktor Mikhailovich Adukov, Doctor of Physical and Mathematical Sciences, Professor, Department of Mathematical and Functional Analysis, South Ural State University (Chelyabinsk, Russian Federation), adukovvm@susu.ac.ru.

Anna Sergeevna Fadeeva, Assistant Lecturer, Department of Mathematical and Functional Analysis, South Ural State University (Chelyabinsk, Russian Federation), fadeevaas@susu.ac.ru.

