
TIME-SERIES RATE OF CONVERGENCE TO QUASI-PERIODIC OSCILLATIONS

A. V. Bespalov, E. V. Vilkova

ITMO University, Saint Petersburg, Russia, [email protected]

PACS 05.45.Tp

We propose three algorithms that can fairly accurately estimate the degree of convergence to the limit cycle using time-series generated by systems that converge to a quasi-periodic oscillation, and consider their applicability ranges. As a proof of concept, a trivial two-dimensional case is studied. A practically important three-dimensional case is considered. A generalization of the algorithm to a space of any number of dimensions is presented. As an example, these algorithms are used to estimate the convergence of the Van der Pol system.

Keywords: time-series, self-oscillatory modes, Lyapunov exponents, convergence rate.

Received: 1 April 2014 Revised: 3 May 2014

1. Introduction

It is quite normal that a real technical system in its operating mode converges not to a stable equilibrium, but to some self-oscillatory process (quasi-periodic limit cycle) [1, 2]. This is not always a positive feature of the system, and self-oscillating regimes are usually considered harmful [1, 3, 4]. In most cases, such behavior results from system nonlinearities. Nevertheless, a certain class of systems uses this regime as its normal operating mode [1, 2, 5, 6].

The Lyapunov exponent method allows one to estimate the convergence rate for processes in steadily operating systems. Exponents can be evaluated analytically from a system of differential equations [7-9], or estimated from the time-series generated by the system dynamics [9-12]. The largest Lyapunov exponent is a criterion for the convergence of the system's trajectory to the steady state [7, 9, 12]. However, in self-oscillatory modes, the largest exponent vanishes and does not provide estimates of the convergence accurate enough to characterize changes in system parameters [9]. Moreover, the main approach to the analysis of self-oscillatory processes is frequency domain analysis, such as different modifications of harmonic balance techniques and Fourier analysis [13-15]. Thus, we still face the problem of creating effective tools for time domain analysis of such processes.

In this paper, we present three time domain algorithms based on similar principles which are able to estimate the degree of convergence to the limit cycle using time-series. The first algorithm can be used for time-series generated by simple oscillations with a sine or cosine limit cycle. The second algorithm can be used for three-dimensional phase space reconstruction with unusual waveforms. The third algorithm can be used with any-dimensional phase space.

2. Algorithms inputs and assumptions

Let us consider a time-series x(t) corresponding to some system trajectory in phase space, which is the solution to some differential equation:

x(t) = {x(t₀), x(t₀ + Δt), ..., x(t₀ + nΔt)}, (1)

where t₀ > 0 is the initial time moment, Δt > 0 is the time-series discretization step, and n > 0, n ∈ ℕ is the time-series length. We assume the considered trajectory to meet the conditions of continuity; the system attractor (repeller) [9] not to be strange; the phase space structure to be regular in the neighborhood of each trajectory point; and the solution to be periodic relative to the focus F₀ ∈ ℝ up to a space compression (or expansion) operator H(t), i.e.:

x(t + T) = H(t + T) x(t), (2)

where T is the solution period. It is also assumed that the trajectory converges or diverges exponentially, i.e. H(t) = Ae^{μt} + r, with A, r, μ ∈ ℝ, A > 0, r > 0, within the phase space area of interest. The convergence coefficient μ is to be found. In the case of F₀ ≠ 0, the periodic solution focus is shifted to 0 by the following transformation:

y(t) = x(t) - x(t + T), (3)

where t ≫ T. This follows from the transformation:

x(t) - x(t + T) = (Ae^{μt} + r) f(t) + F₀ - ((Ae^{μ(t+T)} + r) f(t + T) + F₀) =

= (Ae^{μt} + r) f(t) - (Ae^{μ(t+T)} + r) f(t + T), (4)

where f(t) is the periodic function with some period. Obviously, as t → ∞, the resulting function tends to the limit cycle "radius" r of the function f(t). If the condition on t is met, the contraction operator changes very little from one period to another. Furthermore, we assume that F₀ = 0 for all trajectories.
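To make transformation (3) concrete, here is a minimal numerical sketch (our own illustration rather than the authors' implementation; the function name shift_focus, the synthetic signal and all parameter values are assumptions): subtracting a sampled series from itself one period ahead cancels a constant focus offset F₀.

```python
import numpy as np

def shift_focus(x, period_samples):
    """Transformation (3): subtract the series from itself one period ahead,
    y(t) = x(t) - x(t + T), so that a constant focus offset F0 cancels."""
    m = int(round(period_samples))
    return x[:-m] - x[m:]

# Tiny check on a synthetic converging oscillation with focus F0 = 2:
dt = 0.01
t = np.arange(0.0, 50.0, dt)
x = (np.exp(-0.05 * t) + 1.0) * np.sin(t) + 2.0   # H(t) * f(t) + F0
y = shift_focus(x, 2.0 * np.pi / dt)              # shift by one period of sin(t)
print(round(float(np.mean(y)), 4))                # close to 0: the offset is removed
```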

3. First algorithm (trivial)

In the case of simple oscillations:

y(t) = H(t) sin(ωt) = (Ae^{μt} + r) sin(ωt), (5)

where ω > 0 is the unknown oscillation frequency, and μ can be estimated using the following procedure:

For the trajectory generated by equation (5), a delay reconstruction is performed. With delay τ satisfying 0 < τ < T, the delayed coordinate ỹ(t) = y(t + τ) is:

ỹ(t) = (Ae^{μ(t+τ)} + r) sin(ω(t + τ)). (6)

The resulting reconstructed trajectory looks like a spiral which tends to the limit cycle or unwinds from the focus. The plane that contains the spiral can be found using two arbitrary trajectory points and the focus. Since the focus point has zero coordinates, we can assume that x₀ = y₀ = z₀ = 0. In the case of first-order reconstruction, the coordinates of trajectory points are determined by the vectors p₁ = (y(t₁), ỹ(t₁), 0) and p₂ = (y(t₂), ỹ(t₂), 0), with t₁, t₂ being arbitrary time moments chosen so that p₁, p₂ and the focus are not collinear. The plane equation will be:

z (y(t₁) ỹ(t₂) - ỹ(t₁) y(t₂)) = 0. (7)

The equation of a plane contains a vector which is perpendicular to this plane, in this case N = (0, 0, A). Having the three points N = (0, 0, A), F₀ = (0, 0, 0) and p₁ = (y(t₁), ỹ(t₁), 0), we can build a section plane for the spiral:

x (y(t₁) - A) + y (ỹ(t₁) - A) = 0. (8)

The intersection of this section plane with the reconstructed trajectory does not depend on H(t) and depends only on the system period, so the frequency ω can be found from the equation

sin(ωt₁) = y(t₁) - A. (9)

Thereafter:

ω = (arcsin(y(t₁) - A) + πk) / t₁, (10)

where k ∈ ℤ. However, we only need one solution, the case k = 0, and the system period T = 2π/ω. Since we have the period, we can now make a new series consisting of points of the original expression (5):

Y(t) = {y(t₁), y(t₁ + T), ..., y(t₁ + NT)}, N ∈ ℕ. (11)

In this series, the periodic part takes the same values, and all points of the series satisfy the expression:

Y(t) = C₁e^{μt} + C₂. (12)

For the series described by expression (11), there are several well-developed tools for μ estimation [10-12]. The basic concept of this solution is shown graphically in Fig. 1.
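As a sketch of how this step can be carried out in practice (our own illustration, not the authors' code: the period is taken from upward zero crossings, a pragmatic substitute for solving (9)-(10) explicitly; the series (11) is then sampled once per period, and μ comes from a crude log-linear fit of (12) using first differences; helper names and all parameter values are assumptions):

```python
import numpy as np

def period_from_zero_crossings(y, dt):
    """Period T of a converging sine-like series (5) estimated from the spacing
    of successive upward zero crossings (a substitute for solving (9)-(10))."""
    s = np.sign(y)
    up = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
    return float(np.mean(np.diff(up))) * dt

def mu_from_period_slice(y, dt, T, start=0):
    """Series (11): sample y once per period starting at index `start`, then fit
    Y(t) = C1*exp(mu*t) + C2 (expression (12)). The first differences Y_k - Y_{k+1}
    remove C2 and decay purely as exp(mu*t); [10-12] describe more careful estimators."""
    m = int(round(T / dt))
    Y = y[start::m]
    tY = np.arange(len(Y)) * m * dt
    r = np.clip(np.abs(np.diff(Y)), 1e-12, None)
    mu, _ = np.polyfit(tY[:-1], np.log(r), 1)
    return float(mu)

dt = 0.01
t = np.arange(0.0, 80.0, dt)
y = (np.exp(-0.05 * t) + 1.0) * np.sin(0.5 * np.pi * t)   # mu = -0.05, T = 4 s
T = period_from_zero_crossings(y, dt)
print(T, mu_from_period_slice(y, dt, T, start=25))         # T ~ 4.0, mu ~ -0.05
```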

Fig. 1. Phase trajectory y(t) with oscillations tending to the limit cycle. The highlighted horizontal plane contains the spiral; the vertical plane is the section plane. The marked points are the starting point x₀, the focus F₀, and the trajectory points Y(t) lying a period apart

If we initially have a discrete series, we can build the series (11) for a set of reconstructed trajectory points describing the full period of the spiral and estimate μ by averaging over all the intermediate results.

Obviously, for expression (5), μ could be obtained by less complicated considerations. However, this example is a perfect illustration of the method that can be applied to more complex dynamics.

However, when the process is quasi-periodic, like this:

x(t) = H(t) f(t) = H(t) f(t + nT), (13)

where n ∈ ℕ, the following difficulties arise:

(1) Self-intersections of the reconstructed trajectory appear.

(2) The spatial trajectory does not lie in one plane.

(3) The perpendicular section plane may intersect trajectory points that are not a period or half-period apart.

To address these difficulties, we propose the following algorithm.

4. Second algorithm (three-dimensional)

First, we set out the conditions that the reconstructed phase trajectory should satisfy:

(1) The reconstructed trajectory should have a number of dimensions sufficient to avoid self-intersections.

(2) If at some point x(t₀) the derivative vector along the trajectory has direction A, then the section plane S̃ that is perpendicular to S and contains the focus and x(t₀) has no other intersections with the trajectory codirectional to A near the plane S, except the points described by x(t₀ + kT), k ∈ ℕ.

If these conditions are met for a three-dimensional reconstruction of the series (1), described by expression (13), μ can be estimated by the following algorithm:

The attractor is reconstructed in a phase space with K = 3 dimensions and reconstruction delay τᵣ. The start time is set to zero. We select three points on the trajectory: the first point x(t₀) = x₀, where t₀ = τ, and two points x(t₀ + τ) and x(t₀ - τ). Now we find the equation of the plane S using the points [x(t₀ - τ), x(t₀ + τ), F₀], and of the section plane S̃ using the points [x(t₀), p₁, F₀], where p₁ is a point taken along the vector perpendicular to S.
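A minimal sketch of this reconstruction step, assuming a simple fixed-delay embedding (the helper name delay_embed and the parameter values are ours, not the authors'):

```python
import numpy as np

def delay_embed(x, K=3, delay_samples=10):
    """Delay reconstruction of a scalar series x into a K-dimensional phase
    space: row i is (x[i], x[i + d], ..., x[i + (K - 1) d]) with d = delay_samples."""
    d = delay_samples
    n = len(x) - (K - 1) * d
    return np.column_stack([x[i * d : i * d + n] for i in range(K)])

# Example: embed a decaying two-harmonic signal like (15) into three dimensions.
t = np.arange(0.0, 40.0, 0.01)
x = (np.sin(3 * t) + 0.7 * np.cos(5 * t)) * (np.exp(-0.05 * t) + 1.0)
traj = delay_embed(x, K=3, delay_samples=30)
print(traj.shape)   # (len(x) - 2 * 30, 3)
```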

Then, passing through the points of the time-series (1) starting with x₀, we seek a point x(t₁) = x_min, which is the time-series value one period after x₀. This point is sought as follows: first, the reconstructed vector function x(t) is substituted into the equation of the section plane and the sign of the resulting number is extracted. This gives us a function s₀(t) that shows on which side of the plane a trajectory point lies. Then, passing through all the series points, we take all the points where s₀(t₀ + kΔt) ≠ s₀(t₀ + (k + 1)Δt) and s₀(t₀ + (k + 1)Δt) = s₀(t₀), thus obtaining a series of points h(t) that crosses the section plane in a certain direction.
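A sketch of this sign-tracking step under our own conventions (planes are passed as (normal, point) pairs, and reading the "certain direction" as the sign of the side just after the crossing is our interpretation of the text):

```python
import numpy as np

def side_sign(traj, normal, point):
    """s0(t): the sign obtained by substituting each trajectory point into the
    section-plane equation n . (x - point) = 0."""
    return np.sign((traj - point) @ normal)

def directed_crossing_indices(traj, normal, point, direction_sign):
    """Indices of the series h(t): samples just after the trajectory crosses the
    section plane so that the new side equals direction_sign."""
    s = side_sign(traj, normal, point)
    return np.where((s[:-1] != s[1:]) & (s[1:] == direction_sign))[0] + 1
```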

Then we seek the point of the series h(t) with minimum distance from both the section plane S̃ and the plane S (in the two-dimensional case, the second condition is satisfied automatically). The distance from a point to a plane can be estimated as the absolute value obtained by substituting the point's coordinates into the equation of the plane.

Therefore, for a single period the following minimization problem can be solved:

J(t) = L(x(t), S̃) + L(x(t), S) + α(x(t), S̃) → min, (14)

where L(x(t), S) ∈ [0; 1] is the distance from the point x(t) to the plane S, normalized per unit, and α(x(t), S̃) ∈ [0; 1] is the weight function which has a minimum value at the intersection of the trajectory with the plane S̃ in the starting-point direction. The cost function J(t) is the representation of the second condition which must be applied to the trajectory. J(t) = 0 only when the trajectory simultaneously intersects S and S̃ with x(t) codirectional to the starting point. The trajectory might never intersect S near the "period point", but there is only one point where the trajectory can come close to S, so that L(x(t), S) is small and α(x(t), S̃) + L(x(t), S̃) = 0 simultaneously: the period point. This ensures that J(t) has only one minimum per period. In practice, the condition J(t) < ε is used to find the minimum of J(t) over the entire series, where ε is small and ε > 0.

As a result, we obtain an array of trajectory points which are a period apart, and from it we select the point with minimal time.
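Putting these pieces together, a sketch of the cost (14) and the ε-threshold search might look as follows (again our own illustration: normalizing L by its maximum over the series and using a precomputed binary weight α are simplifying assumptions, and the helper names are ours):

```python
import numpy as np

def normalized_distance(traj, normal, point):
    """L(x(t), plane) in [0, 1]: |plane-equation value|, scaled to unit maximum."""
    d = np.abs((traj - point) @ normal) / np.linalg.norm(normal)
    return d / d.max()

def find_period_point(traj, S, S_tilde, alpha, eps=1e-2):
    """J(t) = L(x, S~) + L(x, S) + alpha(t), as in (14); returns J and the first
    index where J < eps (the candidate period point). S and S_tilde are
    (normal, point) pairs; alpha is a precomputed weight array, e.g. 0 at the
    directed crossings of S~ found above and 1 elsewhere."""
    J = normalized_distance(traj, *S_tilde) + normalized_distance(traj, *S) + alpha
    hits = np.where(J < eps)[0]
    return J, (int(hits[0]) if hits.size else None)
```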

An example of this search algorithm applied to the reconstructed series generated by the expression

x(t) = (sin(3t) + 0.7 cos(5t))(e^{-0.05t} + 1), (15)

is shown in Figs. 2 and 3.

As shown in Fig. 2, all J(t) components tend to zero in the area outlined by the circle. The trajectory intersects the plane S̃ and is tangential to the plane S, and the required side change occurs.

Also, we can follow the trajectory only up to T/2 by using -α(x(t), S̃) in the condition. Then, trajectory point evaluation ends after the first sign change of s₀(t). However, this only makes sense if the trajectory is symmetric with respect to the focus.

Knowing the period, one can construct a number of series (11) for the selected set of trajectory points from the beginning to the end of the first period. Then, for each of these series, μ is estimated by the known formulas and the results are averaged.

This algorithm still has some problems. One has to raise the reconstruction dimension to avoid trajectory self-intersections. However, if the reconstruction dimension is higher than 3, the task of seeking the section planes becomes more complicated, since additional dimensions increase the number of equations in the system which defines the plane. The third presented algorithm helps to avoid this complication.

5. Third algorithm (generalization)

This algorithm is almost the same as the second one. The main difference is that we propose to change J (t). The new weight function J(t) is described by the following equation:

J(t) = F(t) + F̃(t) + α₁(t, F(t)) → min, (16)

where F(t) = -(cos∠(x₀, x(t)) - 1), with cos∠(x₀, x(t)) the cosine of the angle between the vectors formed by the points F₀, x₀ and F₀, x(t); F̃(t) = -(cos∠(V₀, V(t)) - 1), where cos∠(V₀, V(t)) is the cosine of the angle between the vectors V₀ and V(t) along the trajectory at the points x₀ and x(t); α₁(t, F(t)) ∈ [0; 1] is the weight function that has a minimum value when the sign of the derivative of F(t) changes from negative to positive. For example, α₁ can be described by the following equation:

α₁(t, F(t)) = 0,   if F′(t) = 0 and F″(t) > 0,
              1,   if F′(t) = 0 and F″(t) < 0,   (17)
              0.5, if F′(t) ≠ 0.

Analytically, this function works well for all F(t) for which F″ ≠ 0 whenever F′ = 0. In practice, we can always see in which direction the function changes right after F′ = 0.
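A discrete-sample sketch of (16)-(17) is given below (our own approximation: velocities and the derivative of F are taken by finite differences, the local-extremum tests stand in for the exact conditions on F′ and F″, and the helper names and the small regularizer are assumptions):

```python
import numpy as np

def one_minus_cos(u, v):
    """1 - cos(angle(u, v)): zero only when u and v are codirectional."""
    return 1.0 - (u @ v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-15)

def cost_third_algorithm(traj, focus, i0):
    """J(t) of (16) for a reconstructed trajectory traj (n, K) and start index i0:
    F compares the direction focus->x(t) with focus->x0, F~ compares velocity
    directions, and alpha1 follows (17) with finite-difference derivatives."""
    V = np.gradient(traj, axis=0)                     # velocity along the trajectory
    F = np.array([one_minus_cos(traj[i0] - focus, p - focus) for p in traj])
    Ft = np.array([one_minus_cos(V[i0], v) for v in V])
    dF = np.gradient(F)
    alpha1 = np.full(len(F), 0.5)
    alpha1[1:-1][(dF[:-2] < 0) & (dF[2:] > 0)] = 0.0  # F' changes - to +: local minimum
    alpha1[1:-1][(dF[:-2] > 0) & (dF[2:] < 0)] = 1.0  # F' changes + to -: local maximum
    return F + Ft + alpha1
```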


Fig. 2. Reconstructed phase trajectory x(t) generated by expression (15). Asterisks denote the starting point x₀ and the focus F₀; the line shows the series Y(t)


Fig. 3. J(t) components for one period: L(x(t), S̃) is the trajectory distance from the S̃ plane, L(x(t), S) is the trajectory distance from the S plane, and α(x(t), S̃) is the weight function which has a minimum value at the intersection of the trajectory with the plane S̃ in the starting-point direction

The function F(t) = 0 only at points situated on the virtual plane S which contains x₀ and F₀. Moreover, these points are situated on the virtual section plane S̃, which also contains x₀ and F₀. It is hard to form the equations of these planes, but we know that they exist and can each be formed by two vectors. The function F̃(t) = 0 only if the vectors V₀ and V(t) are codirectional. Due to the second condition on the trajectory, F(t) = 0 and F̃(t) = 0 simultaneously only if x(t) and x₀ are a period apart. This ensures that J(t) has only one minimum per period. The function α₁(t, F(t)) is not analytically necessary, but it is helpful in the practical implementation, where we use the condition J(t) < ε to seek the minimum. The constant ε limits our assumption on how near the trajectory can come to the virtual planes' intersection while still not being the period point.

An example of this search algorithm, applied to the reconstructed series generated by expression (15), is shown in Fig. 4.


Fig. 4. J(t) components for one period: F(t) (solid line) is the weight function based on the cosine of the angle between the vector x₀ and the vector function x(t); F̃(t) (dashed line) is the weight function based on the cosine of the angle between the vector V₀ and the vector function V(t); α₁(t, F(t)) is the weight function that has a minimum value when the sign of the F(t) derivative changes from negative to positive

The problem with this evaluation of α₁(t, F(t)), as well as with the whole proposed algorithm, is that for a noisy process such an approach would give a large error due to the frequent change of the derivative sign, and α₁(t, F(t)) should be evaluated differently. In three dimensions this would not happen, because the trajectory (usually) intersects the section plane only once per period, regardless of the noise.

As an example, let us evaluate μ for a convergent Van der Pol system using the second and the third algorithm:

ẋ₁ = x₂,

ẋ₂ = x₂(1 - x₁²) - x₁, (18)

y = x₁(e^{-0.05t} + 1).


The system is simulated with a step of 0.01 s for 100 s. The third algorithm gives a period value of T₁ = 6.68 s; the second algorithm estimates the period to be T₂ = 6.67 s. Knowing T, we build the series (11) for every 30th point of the first period. For each slice, the exponential convergence to the limit cycle is computed, and the result is averaged over all slices. The resulting values obtained with the two algorithms coincide and equal μ = -0.054. Simulation results are shown in Figs. 5 and 6.
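For reference, the example can be reproduced along the following lines (our own script: scipy's solve_ivp, the initial condition [2, 0] and the zero-crossing period estimate, used here instead of the full section-plane search, are all assumptions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def vdp(t, x):
    # Van der Pol core of system (18)
    return [x[1], x[1] * (1.0 - x[0] ** 2) - x[0]]

dt = 0.01
t = np.arange(0.0, 100.0, dt)
sol = solve_ivp(vdp, (0.0, 100.0), [2.0, 0.0], t_eval=t, rtol=1e-8, atol=1e-10)
y = sol.y[0] * (np.exp(-0.05 * t) + 1.0)      # observed signal of (18)

# Crude period estimate from upward zero crossings of y; it should land close to
# the ~6.7 s values reported above.
s = np.sign(y)
up = np.where((s[:-1] < 0) & (s[1:] >= 0))[0]
print("estimated period:", np.mean(np.diff(up)) * dt)
```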


Fig. 5. a) Phase trajectory x(t) of the reconstructed Van der Pol system (18). Straight lines indicate the sections Yᵢ(t) for which μ is evaluated; x₀ is the starting point, F₀ is the focus. b) J(t) components for one period: F(t) (solid line) is the weight function based on the cosine of the angle between x₀ and x(t); F̃(t) (dashed line) is the weight function based on the cosine of the angle between V₀ and V(t); α₁(t, F(t)) is the weight function that has a minimum value when the sign of the F(t) derivative changes from negative to positive


Fig. 6. Estimated μ for each trajectory slice (section index i on the horizontal axis)

6. Conclusion

Three new algorithms to estimate the rate of convergence μ to a limit cycle are presented in this paper. The first proposed algorithm can be used to estimate μ for converging sine waves, but has problems with periodic processes with unusual waveforms. The second algorithm, which can be used for three-dimensional reconstructions, is useful for the majority of practical cases, but it cannot work in the case of self-intersecting trajectories. Such self-intersections can happen if a three-dimensional reconstruction is not sufficient, so the third proposed algorithm solves this problem and can be used for reconstruction dimensions higher than 3. Still, the third algorithm has problems with accurate evaluation of μ for noisy time-series, due to the frequent change of the derivative sign in noisy processes. Nevertheless, all three algorithms used together allow sufficiently accurate μ estimation and can be used for quite a large number of real processes. The main advantage of the proposed methods is that they make accurate estimates of such process characteristics as the period and the convergence rate without using frequency domain techniques.

References

[1] Jenkins A., Self-oscillation, Phys. Rep., 525, P. 167-222 (2013).

[2] Lazarus A., Barois T., Perisanu S., Poncharal P. Simple modeling of self-oscillations in nanoelectromechanical systems. Applied Physics Letters, 96(19), P. 193114 (2010).

[3] Hodges D.H., Pierce A. Introduction to structural dynamics and aeroelasticity. Cambridge (2002).

[4] Hoque M. E. Active flutter control, LAP Lambert Academic Publishing. Germany (2010).

[5] Lukin K. A., Maksymov P. P. Terahertz self-oscillations in reverse biased P-N junctions, in Proc Physics and Engineering of Microwaves, Millimeter and Submillimeter Waves and Workshop on Terahertz Technologies, 2007. MSMW '07. The Sixth International Kharkov Symposium, Kharkov, 25-30 June, 2007, P. 201-203.

[6] Muljadi E., Sallan J., Sanz M. Butterfield C. P. Investigation of self-excited induction generators for wind turbine applications, in Proc. IEEE Industry Applications Conference 1999, vol. 1, P. 509-515.

[7] Benettin G., Galgani L., Giorgilli A., Strelcyn J.M. Lyapunov characteristic exponents for smooth dynamical systems and for hamiltonian systems; a method for computing all of them, part 2: numerical application. Meccanica, 15(1), P. 21-30 (1980).

[8] Leonov G. A. Chaotic dynamics and classical theory of the dynamic stability, Research center "Regular and chaotic dynamics", Izhevsk, 186 p.

[9] Malinetsky G., Potapov A., Podlazov A. Nonlinear dynamics: approaches, results, hopes (synergetics: "from past to future"). KomKniga, Moscow (2006), 280 p.

[10] Bespalov A., Chistyakov Y. On the local stability estimation using first Lyapunov exponent calculation, in Proc. Eurocon 2009, Saint Petersburg, 2009, P. 1985-1990.

[11] Rosenstein M. T., Collins J. J., De Luca C. J. A practical method for calculating largest Lyapunov exponents from small data sets. Physica D, 65(1-2), P. 117-134 (1993).

[12] Wolf A., Swift J. B., Swinney H. L. Determining Lyapunov exponents from a time series. Physica D, 16, P. 285-317 (1985).

[13] Gilbert J., Kergomard J., Ngoya E. Calculation of the steady-state oscillations of a clarinet using the harmonic balance technique. J. Acoust. Soc. Amer., 86(1), P. 35-41 (1989).

[14] Nakhla M., Vlach J. A piecewise harmonic balance technique for determination of periodic response of nonlinear systems. IEEE Transactions on Circuits and Systems, CAS-23, P. 85-91 (1976).

[15] Smirnov D. Characterization of weak coupling between self-oscillation systems from short time series: technique and applications. Journal of Communications Technology and Electronics, 51(6), P. 534-544 (2006).
