
ISSN 2072-5981

Volume 16, Issue 2, Paper No. 14203, 1-19 pages, 2014

http://mrsej.kpfu.ru http://mrsej.ksu.ru

Established and published by Kazan University. Sponsored by the International Society of Magnetic Resonance (ISMAR). Registered by the Russian Federation Committee on Press on August 2, 1996; the first issue appeared on July 25, 1997.


© Kazan Federal University (KFU)

"Magnetic Resonance in Solids. Electronic Journal" (MRSej) is a peer-reviewed, all-electronic journal publishing articles which meet the highest standards of scientific quality in the field of basic research on magnetic resonance in solids and related phenomena. MRSej is free for authors (no page charges) as well as for readers (no subscription fee). The language of MRSej is English. All exchanges of information take place via the Internet. Articles are submitted in electronic form and the refereeing process uses electronic mail. All accepted articles are immediately published by being made publicly available on the Internet (http://mrsej.kpfu.ru).

Editors-in-Chief Jean Jeener (Universite Libre de Bruxelles, Brussels) Boris Kochelaev (KFU, Kazan) Raymond Orbach (University of California, Riverside)

Executive Editor Yurii Proshin (KFU, Kazan) [email protected]

[email protected]

Editors

Vadim Atsarkin (Institute of Radio Engineering and Electronics, Moscow), Yurij Bunkov (CNRS, Grenoble), Mikhail Eremin (KFU, Kazan), David Fushman (University of Maryland, College Park), Hugo Keller (University of Zürich, Zürich), Yoshio Kitaoka (Osaka University, Osaka), Boris Malkin (KFU, Kazan), Alexander Shengelaya (Tbilisi State University, Tbilisi), Jörg Sichelschmidt (Max Planck Institute for Chemical Physics of Solids, Dresden), Haruhiko Suzuki (Kanazawa University, Kanazawa), Murat Tagirov (KFU, Kazan), Dmitrii Tayurskii (KFU, Kazan)

Electron paramagnetic resonance (EPR) was discovered at Kazan University by E.K. Zavoisky in 1944.

How to reduce reproducible measurements to an ideal experiment?†

R.R. Nigmatullin*, R.M. Rakhmatullin, S.I. Osokin Kazan Federal University, Kremlevskaya 18, 420008 Kazan, Russia * E-mail: [email protected]

(Received: February 26, 2014; accepted: April 19, 2014)

Is it possible to suggest a general theory for the consideration of reproducible data that are measured in many experiments? One can prove that successive measurements have a memory, and this important fact allows us to separate all data into two large classes: ideal experiments without memory and experiments with memory. We introduce the concept of an intermediate model (IM) that helps to describe quantitatively a wide class of reproducible data. Experiments with memory require Prony's decomposition for their description, while experiments without memory need only the Fourier decomposition for their presentation. In other words, a measured function extracted from reproducible data can have a universal description in the form of the amplitude-frequency response (AFR) that belongs to the generalized Prony's spectrum (GPS). It is also shown how real data are distorted by the experimental equipment and how to eliminate these uncontrollable factors in order to reproduce approximately the conditions corresponding to an ideal experiment. A new and elegant solution for the elimination of the apparatus (instrument) function is suggested. In the ideal case the decomposition coefficients belong to the Fourier transform, and the presentation of reproducible data in this form leads to the IM for this case. The suggested general algorithm allows many experiments to be considered from a unified point of view. A real example based on available electron paramagnetic resonance (EPR) data confirms this general concept. The unified "bridge" between the treated experimental data and a set of competitive hypotheses that pretend to describe them is discussed. The results obtained in this paper help to put forward a new paradigm in data/signal processing.

PACS: 89.75.-k, 06.30.-k, 02.50.

Keywords: Fourier transform, Prony's decomposition, intermediate model, data/signal processing, EPR measurements with/without memory, apparatus function

List of acronyms:

AFR: amplitude-frequency response. AF: apparatus (instrumental) function. IM: intermediate model. GPS: generalized Prony's spectrum. LLSM: linear least square method. QP: quasi-periodic.

REMV: reduced experiment to its mean values. SRA: sequence of the ranged amplitudes. EPR: electron paramagnetic resonance.

†This paper was written by the authors on the occasion of the eightieth birthday of Professor Boris I. Kochelaev.

1. Introduction

The section of experimental physics associated with the treatment of different data is considered to be well developed. For a newcomer it seems impossible to suggest some new and rather general idea that can touch the basics of this field. Many books written by prominent scientists (mathematicians, experimentalists, specialists in various branches of statistics, etc.) [1-10] created the main stream in this area. This branch of physics and mathematics is very general, and many researchers who deal with signal/noise processing should understand the basics of this science. Fresh information related to recent achievements in fractal signal processing is collected in books [11-14]. This information "explosion" creates a certain trend, and it definitely extends the limits of applicability of many methods developed in this area for the analysis of different random sequences and signals. Chaotic and random phenomena originate from a variety of causes, and their specificity dictates different methods for their quantitative description. One of the authors of this paper (RRN) has also developed different methods that proved their efficiency in the solution of many complex problems [15-20] where the conventional methods do not work properly.

All data can be divided into two large classes: reproducible and unrepeatable data. In the first case an experimentalist is able to reproduce relatively stable conditions of his experiment and can measure the response of the system (object) studied again with some accuracy. For the second type of data (economic, meteorological, geological, medical, etc.) the repetition of the same initial conditions becomes impossible, and many special methods for the analysis of different time series were suggested [6-10]. In the second case the control variable x is random, and the response created by the action of this variable on the object studied is also random, so all responses in the second case cannot be reproducible. Nevertheless, in spite of the modern tendency that exists in this area, one can formulate the following general question: is it possible to develop a general theory (or IM) that allows considering all reproducible data in the frame of a unified and verified concept? This theory should satisfy the following requirements:

R1. It should make it possible to express quantitatively a set of measured functions by means of a unified and common set of fitting parameters.

R2. This set of fitting parameters should form a unified model, so that many data can be compared in terms of one quantitative "language". It means that there is a possibility to create a general metrological standard for the consideration of reproducible data from a unified point of view.

R3. All calculations contained in this general theory should be error-controllable.

R4. It should make it possible to eliminate the apparatus (instrumental) function and reduce reproducible measurements to an ideal experiment.

A further and attentive analysis of the recent results obtained in [21] allows finding a positive answer to the general question posed above.

2. The basic positions and conclusions of the suggested theory

Let us recall some important points that are necessary for the understanding of the new theory. By an ideal experiment we understand the measured response from the object studied (during the period of time T) that is reproduced in each measurement with the same accuracy. If Pr(x) is chosen as the response (measured) function, then from the mathematical point of view it implies that the following relationship is satisfied

y_m(x) = Pr(x + m·Tx) = Pr(x + (m − 1)·Tx),  m = 1, 2, ..., M.   (1)

Here x is the external (control) variable, and Tx is a "period" of the experiment expressed in terms of the control variable x. In expression (1) we suppose that the properties of the object studied do not change during the period of "time" Tx. If x = t coincides with the temporal variable, then Tx = T coincides with the conventional definition of a period. The solution of this functional equation is well known and (in the case of a discrete distribution of the given data points x = x_j, j = 1, 2, ..., N) coincides with a segment of the Fourier series

Pr(x) = A_0 + Σ_{k=1}^{K} [ Ac_k·cos(2πk·x/Tx) + As_k·sin(2πk·x/Tx) ].   (2)
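For the memoryless case the 2K + 1 coefficients of (2) are linear and can be found by the linear least square method once Tx and K are fixed. The following sketch (our own illustration in Python/NumPy; the function name and the synthetic signal are not from the paper) shows this step:

```python
import numpy as np

def fourier_im_fit(x, y, Tx, K):
    """Fit the truncated Fourier series (2) to y(x) by linear least squares.

    Returns the 2K + 1 coefficients (A0, Ac_1, As_1, ..., Ac_K, As_K)
    that form the IM parameter set of an ideal (memoryless) experiment."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols.append(np.cos(2 * np.pi * k * x / Tx))
        cols.append(np.sin(2 * np.pi * k * x / Tx))
    B = np.column_stack(cols)                 # design matrix, shape (N, 2K+1)
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coeffs, B @ coeffs                 # AFR coefficients, fitted curve

# synthetic check: a two-mode signal is recovered exactly
x = np.linspace(0.0, 10.0, 500)
y = 1.0 + 0.5 * np.cos(2 * np.pi * x / 10) - 0.2 * np.sin(2 * np.pi * 3 * x / 10)
coeffs, fit = fourier_im_fit(x, y, Tx=10.0, K=3)
```

The returned coefficient vector is exactly the discrete AFR discussed below.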

We deliberately show only a segment of the Fourier series because in reality all data points are discrete and the number of "modes" (coinciding with the coefficients of the Fourier decomposition) is limited. Here and below the finite mode is denoted by the capital letter K. This final mode K is chosen from the requirement that it is sufficient to fit the experimental data with the given (or acceptable) accuracy. As we will see below, the value of K can be calculated from expression (8) for the relative error located in the given interval [1%-10%]. This interval provides the desired fit of the measured function y(x) to Pr(x) with the initially chosen number of modes k figuring in (2). An important conclusion follows from these relationships. For an ideally reproducible experiment, which satisfies condition (1), the F-transform (2) can be used as the intermediate model (IM), and the 2K + 1 decomposition coefficients (A_0, Ac_k, As_k) can be used as the set of fitting parameters belonging to the IM. The meaning of these coefficients is well known; this set approximately defines the well-known amplitude-frequency response (AFR) associated with the recorded "signal" y(x) ≅ Pr(x) coinciding with the measured function. Here we only widen the limits of interpretation of the conventional F-transform with respect to any variable x (including frequency, if the control variable x coincides with some current frequency ω) and show that a segment of this transformation can be used for the description of an ideal experiment. Let us consider the more general functional equation

F(x + Tx) = aF(x) + b.   (3)

This functional equation was considered for the first time in paper [22] by the first author (RRN). The solution of this equation can be written in the following form [22]

a ≠ 1:  F(x) = exp(λx/Tx)·Pr(x) + c_0,  λ = ln(a),  c_0 = b/(1 − a);
a = 1:  F(x) = Pr(x) + b·x/Tx.   (4)
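Solution (4) can be verified numerically: substituting it into (3) must give an identity for any Tx-periodic function Pr(x). A short sanity check (our own illustration, with arbitrarily chosen constants):

```python
import numpy as np

# Numerical check that (4) solves the functional equation (3):
# F(x + Tx) = a*F(x) + b, for an arbitrary Tx-periodic Pr(x).
Tx, a, b = 2.0, 0.7, 1.5
lam = np.log(a)                 # lambda = ln(a)
c0 = b / (1.0 - a)              # constant term from (4), case a != 1
Pr = lambda x: 1.0 + 0.3 * np.cos(2 * np.pi * x / Tx)   # arbitrary periodic part

F = lambda x: np.exp(lam * x / Tx) * Pr(x) + c0          # solution (4)

x = np.linspace(0.0, 5.0, 101)
lhs = F(x + Tx)
rhs = a * F(x) + b              # the two sides of (3) must coincide
```

Indeed exp(λ(x + Tx)/Tx) = a·exp(λx/Tx) and c_0 = a·c_0 + b, so both sides agree identically.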

The interpretation of this equation was considered in [22]. From equation (3) the obvious conclusion follows

F(x + m·Tx) = aF(x + (m − 1)·Tx) + b,  m = 1, 2, ..., M.   (5)

It can be interpreted as the repetition of a set of successive measurements corresponding to an ideal experiment with memory. Again, the supposition about stable properties of the object studied during the period Tx used for the measurements is conserved (it means that the constants a and b in (5) do not depend on time). This situation, in spite of its initial attractiveness, cannot be realized in practice because a set of uncontrollable factors (as we will see below on real data) can change the fixed slope (a) and intercept (b). In reality we should expect that all these constant parameters, including the period Tx, will depend on the current number m of a measurement

y_{m+1}(x) = a_m·y_m(x) + b_m,  or
F(x + (m + 1)·Tx(m)) = a_m·F(x + m·Tx(m)) + b_m,  m = 1, 2, ..., M − 1.   (6)

But, nevertheless, solution (4) is valid in this case also, and as a result of the fitting one can express approximately the current measurement y_m(x) in terms of the function (4) that represents the chosen IM. From this IM one can obtain a fitting function for the description of reproducible measurements with the shortest memory (6). So, for each measurement one can easily derive from expression (4) the following fitting function

y_m(x) ≅ F_m(x) = B_m + E_m·exp(λ_m·x/Tx(m)) +
  + Σ_{k=1}^{K} [ Ac_k(m)·yc_k(x, m) + As_k(m)·ys_k(x, m) ],
yc_k(x, m) = exp(λ_m·x/Tx(m))·cos(2πk·x/Tx(m)),   (7)
ys_k(x, m) = exp(λ_m·x/Tx(m))·sin(2πk·x/Tx(m)).
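For fixed nonlinear parameters λ_m and Tx(m), all remaining coefficients of (7) enter linearly and are again found by the LLSM. A sketch of this step (our naming; the nonlinear parameters are assumed to be already known here):

```python
import numpy as np

def fit_function_7(x, y, Tx, lam, K):
    """LLSM fit of a measured curve y(x) with the function (7):
    B + E*exp(lam*x/Tx) + sum_k exp(lam*x/Tx)*(Ack*cos + Ask*sin).
    lam and Tx are the nonlinear parameters; the rest are linear amplitudes."""
    e = np.exp(lam * x / Tx)
    cols = [np.ones_like(x), e]               # constant B and pure exponent E
    for k in range(1, K + 1):
        w = 2 * np.pi * k * x / Tx
        cols.append(e * np.cos(w))            # yc_k(x)
        cols.append(e * np.sin(w))            # ys_k(x)
    B = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(B, y, rcond=None)
    return coeffs, B @ coeffs

# synthetic check: recover B = 2, E = 0.5 and one damped cosine mode
x = np.linspace(0.0, 4.0, 400)
Tx, lam = 4.0, -0.5
e = np.exp(lam * x / Tx)
y = 2.0 + 0.5 * e + 0.3 * e * np.cos(2 * np.pi * x / Tx)
c, fit = fit_function_7(x, y, Tx, lam, K=2)
```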

As a matter of fact, there is a period of time T that determines the temporal interval when one cycle of measurement is finished. But an experimentalist prefers not to work with the temporal variable; frequently he works with another variable x (wavelength, scattering angle, magnetic field, etc.) that is determined by the experimental conditions and the available equipment. In this case the connection between the "period" Tx defined above and the real period T is not known. But the desired nonlinear fitting parameter Tx that enters in (7) can be calculated from the fitting procedure. In order to find the optimal value T_opt of this parameter that provides the accurate fit, we notice that this value should be located approximately in the interval [T_max/2, 2·T_max], where the value of T_max, in turn, is defined as T_max = Δx·N; here Δx is the mean step of discretization, N is the number of data points, and L(x) = x_max − x_min ≅ Δx·N is the length of the interval associated with the current discrete variable x. This important observation helps us to find the optimal values of T_opt and K from the procedure of minimization of the relative error that always exists between the measured function y(x) and the fitting function (7)

min[RelErr] = min[ stdev{y(x) − F(x; T_opt, K)} / mean|y(x)| ] · 100%,
1% < min[RelErr(K)] < 10%,  T_opt ∈ [T_max/2, 2·T_max],  T_max = N·⟨x_j − x_{j−1}⟩.   (8)
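At a fixed K the minimization in (8) reduces to a one-dimensional scan of T over [T_max/2, 2·T_max]. A possible realization of this cross-section search (our sketch; the grid size is an arbitrary choice, and the Fourier segment (2) is used as the trial function):

```python
import numpy as np

def rel_err(y, fit):
    """Relative fitting error (8), in percent."""
    return 100.0 * np.std(y - fit) / np.mean(np.abs(y))

def fourier_fit(x, y, T, K):
    """Truncated Fourier fit (2) with trial period T, by the LLSM."""
    cols = [np.ones_like(x)]
    for k in range(1, K + 1):
        cols += [np.cos(2 * np.pi * k * x / T), np.sin(2 * np.pi * k * x / T)]
    B = np.column_stack(cols)
    c, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ c

def scan_topt(x, y, K, n_grid=400):
    """Cross-section of RelErr(T, K) at fixed K over [Tmax/2, 2*Tmax]."""
    Tmax = len(x) * np.mean(np.diff(x))       # Tmax = N * mean step, cf. (8)
    Ts = np.linspace(Tmax / 2, 2 * Tmax, n_grid)
    errs = [rel_err(y, fourier_fit(x, y, T, K)) for T in Ts]
    i = int(np.argmin(errs))
    return Ts[i], errs[i]

# synthetic check: a pure cosine of period 8 lies inside the scan window
x = np.linspace(0.0, 10.0, 501)
y = np.cos(2 * np.pi * x / 8.0)
T_opt, err_opt = scan_topt(x, y, K=3)
```

Note that T = 16 with the k = 2 mode fits the same signal as T = 8 with k = 1, so the error landscape can have several equivalent minima; any of them is acceptable for the fit.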

The direct calculations show that instead of minimizing the surface RelErr(T, K) with respect to the unknown variables T and K, one can minimize its cross-section at a fixed value of K. This initially chosen value of K should satisfy the condition given by the second row of expression (8). Obviously, this procedure should be realized for each successive measurement, and so we omit the index m (m = 1, 2, ..., M) in (8) in order not to overload this expression with additional parameters. Any experimentalist wants to realize conditions that are close to an "ideal" experiment with memory expressed by relationship (3). For this aim one can average the set of constants a_m and b_m together with the measured functions y_m in order to replace equation (6) by an approximate equation that is close to the ideal case (3)

Y(x + ⟨Tx⟩) ≅ ⟨a⟩·Y(x) + ⟨b⟩,
Y(x + ⟨Tx⟩) = (1/(M − 1)) Σ_{m=2}^{M} y_m(x),  Y(x) = (1/(M − 1)) Σ_{m=1}^{M−1} y_m(x).   (9)

The second row in (9) defines the averaged functions obtained from the given set of reproducible measurements. We define this functional equation as the reduced experiment to its mean values (REMV). We should note here that the constants a_m and b_m are calculated from (6) as neighboring slopes and intercepts

a_m = slope(y_{m+1}, y_m),  b_m = intercept(y_{m+1}, y_m),  m = 1, 2, ..., M − 1,
⟨a⟩ = (1/(M − 1)) Σ_{m=1}^{M−1} a_m,  ⟨b⟩ = (1/(M − 1)) Σ_{m=1}^{M−1} b_m,   (10)

and this set of numbers entering in (9) should equal M − 1. So, in order to save computational resources, one can initially reduce the data-treatment procedure to consideration of the functional equation (9) for the averaged functions only. The total set of measurements is necessary to justify the functional equation (6). The requirement of the shortest memory cannot be realized in every experiment. So, in the general case, instead of equation (6), which describes the simplest relation between neighboring measurements, it is necessary to consider an ideal situation when the memory covers L neighboring measurements. In this case we can write

F(x + L·Tx) = Σ_{l=0}^{L−1} a_l·F(x + l·Tx) + b.   (11)

In reality, as we will see below, one can easily calculate the set of parameters a_l, b by the LLSM if we suppose that L = M, where M coincides with the last measurement. But, up to now, we do not know how to calculate the true value of L. This value is governed by deep physical reasons, and the true nature of this memory merits a special research. The functional equation (11) describes mathematically a wide class of QP processes and can be interpreted as follows. The measurement process that takes place during the interval [(L − 1)·Tx, L·Tx] partly depends on the measurements that happened on the previous temporal intervals [l·Tx, (l + 1)·Tx] with l = 0, 1, ..., L − 2. The set of constants {a_l} (l = 0, 1, ..., L − 1) can be quantitatively interpreted as the influence of a memory (strong correlations) between the successive measurements. The solution of this generalized functional equation (11) was considered in paper [22] and can be presented in two different forms (A) and (B)

(A) Σ_{l=0}^{L−1} a_l ≠ 1:  F(x) = Σ_{l=1}^{L} (κ_l)^{x/Tx}·Pr_l(x) + c_0,  c_0 = b / (1 − Σ_{l=0}^{L−1} a_l);

(B) Σ_{l=0}^{L−1} a_l = 1:  F(x) = Σ_{l=1}^{L} (κ_l)^{x/Tx}·Pr_l(x) + c_1·(x/Tx),  c_1 = b / (L − Σ_{l=0}^{L−1} l·a_l).   (12)

Here the functions Pr_l(x) define a set of periodic functions (l = 1, 2, ..., L) of the form (2), and the values κ_l coincide with the roots of the characteristic polynomial

P(κ) = κ^L − Σ_{l=0}^{L−1} a_l·κ^l = 0.   (13)

In general, these roots can be positive, negative, g-fold degenerate (with the value of the degeneracy g) or complex-conjugated. We should also note that for case B in (12) one of the roots coincides with unity (κ_l = 1), which leads to the purely periodic solution. As before, the finite set of unknown periodic functions Pr_l(x, Tx) (l = 1, 2, ..., L) is determined by their decomposition coefficients Ac_k^(l), As_k^(l), l = 1, 2, ..., L; k = 1, 2, ..., K.

Pr_l(x, Tx) = A_0^(l) + Σ_{k=1}^{K} [ Ac_k^(l)·cos(2πk·x/Tx) + As_k^(l)·sin(2πk·x/Tx) ].   (14)
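Given the memory coefficients a_l of (11), the roots κ_l of the characteristic polynomial (13) are obtained by a standard root finder. A minimal sketch (our code; the numerical coefficients are invented for illustration and chosen so that Σa_l ≠ 1, i.e. case A of (12) applies):

```python
import numpy as np

def prony_roots(a):
    """Roots kappa_l of the characteristic polynomial (13):
    P(kappa) = kappa^L - a_{L-1}*kappa^{L-1} - ... - a_1*kappa - a_0 = 0,
    where a = [a_0, a_1, ..., a_{L-1}] are the memory coefficients of (11)."""
    a = np.asarray(a, dtype=float)
    # numpy expects the highest-degree coefficient first
    poly = np.concatenate(([1.0], -a[::-1]))
    return np.roots(poly)

# L = 2 example relevant for the EPR data below: kappa^2 - a1*kappa - a0 = 0
roots = prony_roots([-0.4, 1.3])   # a0 = -0.4, a1 = 1.3 -> kappa = 0.5, 0.8
```

These κ_l then enter the exponential factors (κ_l)^{x/Tx} of the generalized Prony spectrum (12).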

We want to stress here the following fact. The conventional Prony decomposition [23-25] did not have any specific meaning and was considered as an alternative decomposition alongside other transformations (Fourier, wavelet, Laplace, etc.) used in the signal processing area. But in paper [21] we found an additional meaning of this decomposition associated with successive measurements. Solution (12) has a general character, and other roots of the algebraic equation (13) can modify the conventional solution. All possible solutions of the general functional equation (11) for different types of roots were considered in [22]. As a matter of fact, a set of reproducible measurements having a memory associated with L neighboring measurements should satisfy the following functional equation

L-1

ym(x) = F(x + (L + m)Tx(m)) = V a(m)F(x + (l + m)Tx(m)) + bm,

i=o (15)

m = 1,2,..., M.

As before, one can initially realize the REMV procedure, which is close to the ideal case (11), with the help of the relationships

Y(x + L·⟨Tx⟩) = Σ_{l=0}^{L−1} ⟨a_l⟩·Y(x + l·⟨Tx⟩) + ⟨b⟩,
Y(x + l·⟨Tx⟩) = (1/M) Σ_{m=1}^{M} F(x + (m + l)·Tx(m)),  l = 1, 2, ..., L,  L < M.   (16)

The second row in (16) demonstrates a possible averaging procedure that can be applied for the calculation of the mean functions. From the mathematical point of view the functional equations (16) and (11) are similar to each other but have different meanings. The first one (11) is associated with the ideal experiment with memory, while the second one (16) describes the typical situation of a real experiment, when the random behavior of the initially measured functions is reduced to its successive mean values. The averaged coefficients ⟨a_l⟩ and ⟨b⟩ in (16) are found by the linear least square method (LLSM) from the first row of (16). In practice it is desirable to obtain the minimal value of L, because L considerably increases the number of fitting parameters needed for the final fit of the measured function. From the mathematical point of view the general solution of this equation is reproduced similarly to solution (12), and so it can be omitted. Equation (16) has a clear meaning and corresponds to the linear presentation of a possible memory that can exist between repeated measurements after the averaging procedure. These coefficients also reflect (to some extent) the influence of uncontrollable experimental factors coming from the impact of the measurement equipment. Earlier these factors were taken into account only statistically, but the new concept suggests a direct way for their evaluation. Here we want to demonstrate how to eliminate the apparatus function and reduce the set of real reproducible measurements to an ideal experiment containing a set of periodic functions only. Let us come back to equation (11). From (11) it follows that the functions F(x), F(x + Tx), ..., F(x + (L − 1)·Tx) are linearly independent and available from experimental measurements in the averaged sense (see expression (16)) or in another sense. So, we have the following system of linear equations

F(x) = Σ_{l=1}^{L} EP_l(x) + c_0,
F(x + Tx) = Σ_{l=1}^{L} κ_l·EP_l(x) + c_0,
. . .
F(x + (L − 1)·Tx) = Σ_{l=1}^{L} (κ_l)^{L−1}·EP_l(x) + c_0,   (17)
EP_l(x) = (κ_l)^{x/Tx}·Pr_l(x),  l = 1, 2, ..., L.

From this linear system one can find the unknown functions EP_l(x) and then restore the unknown periodic functions Pr_l(x). It means that it becomes possible to reduce a wide class of reproducible measurements, presented initially in the frame of the desired IM and corresponding to Prony's decomposition, to an ideal experiment. We note that the determinant of the L-th order system (17) coincides with the well-known Vandermonde determinant [26]. It is nonzero if all roots of equation (13) are different. So, finally, we have the ideal periodic function that corresponds to the reduction of the real set of measurements to an ideal (perfect) experiment

Pr_tot(x) = Σ_{l=1}^{L} Pr_l(x).   (18)
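Once the roots κ_l and the constant c_0 are known, the system (17) amounts to inverting a Vandermonde matrix, after which (18) gives the ideal periodic function. A numerical sketch (all names are ours; L = 2 with synthetic periodic parts, not the EPR data):

```python
import numpy as np

def solve_system_17(F_shifted, kappas, c0):
    """Solve the linear system (17) for the functions EP_l(x).

    F_shifted has shape (L, N) and holds F(x), F(x+Tx), ..., F(x+(L-1)Tx);
    the matrix V[i, l] = kappa_l**i is Vandermonde, hence invertible
    whenever all roots of (13) are distinct."""
    kappas = np.asarray(kappas, dtype=float)
    L = len(kappas)
    V = np.vander(kappas, L, increasing=True).T
    return np.linalg.solve(V, F_shifted - c0)   # EP[l] = kappa_l**(x/Tx) * Pr_l(x)

# synthetic L = 2 example
x = np.linspace(0.0, 3.0, 50)
Tx, k1, k2, c0 = 1.0, 0.8, 1.25, 0.4
Pr1 = np.cos(2 * np.pi * x / Tx)
Pr2 = 1.0 + 0.5 * np.sin(2 * np.pi * x / Tx)
EP_true = np.array([k1 ** (x / Tx) * Pr1, k2 ** (x / Tx) * Pr2])
# F(x + i*Tx) - c0 = sum_l kappa_l**i * EP_l(x), because the Pr_l are Tx-periodic
F_sh = np.array([EP_true[0] + EP_true[1],
                 k1 * EP_true[0] + k2 * EP_true[1]]) + c0
EP = solve_system_17(F_sh, [k1, k2], c0)
Pr_tot = EP[0] / k1 ** (x / Tx) + EP[1] / k2 ** (x / Tx)   # cf. eq. (18)
```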

To be exact, from our point of view, this function can serve as a "keystone" in the arch of the "bridge" between theory and experiment. In the simplest case (4) we have the obvious relationships

⟨a⟩ ≠ 1:  Pr(x) = (⟨a⟩)^(−x/⟨Tx⟩)·[ Y(x) − ⟨b⟩/(1 − ⟨a⟩) ],
⟨a⟩ = 1:  Pr(x) = Y(x) − ⟨b⟩·x/⟨Tx⟩.   (19)

So, these simple formulas give a solution for the elimination of the apparatus function based on the verified suppositions (9) and (16). In the same manner we can consider case B. The solution for this case is trivial and similar to the linear system (17), with the left-hand side replaced by the functions

Φ(x + l·Tx) = F(x + l·Tx) − c_1·(x/Tx + l),  l = 0, 1, ..., L − 1.   (20)

It is also instructive to give formulas for the case L = 2. These expressions will be used for the treatment of the EPR data considered below. After some simple algebraic manipulations one can obtain the following expressions corresponding to an ideal experiment for this case.

Pr_tot(x) = Pr_1(x) + Pr_2(x),
Pr_1(x) = [ (Y_1(x) − κ_2·Y_0(x)) / (κ_1 − κ_2) ]·exp(−(x/Tx)·ln κ_1),
Pr_2(x) = [ (κ_1·Y_0(x) − Y_1(x)) / (κ_1 − κ_2) ]·exp(−(x/Tx)·ln κ_2),   (21)
Y_0(x) = F(x) − c_0,  Y_1(x) = F(x + Tx) − c_0.
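For L = 2 the reduction (21) takes only a few lines of code. The following sketch (our implementation of (21); it assumes real positive roots κ_1 ≠ κ_2) recovers the periodic parts from F(x) and F(x + Tx):

```python
import numpy as np

def reduce_L2(F0, F1, k1, k2, x, Tx, c0=0.0):
    """Explicit L = 2 reduction (21): recover Pr_1, Pr_2 and Pr_tot
    from Y0(x) = F(x) - c0 and Y1(x) = F(x + Tx) - c0."""
    Y0, Y1 = F0 - c0, F1 - c0
    Pr1 = (Y1 - k2 * Y0) / (k1 - k2) * np.exp(-(x / Tx) * np.log(k1))
    Pr2 = (k1 * Y0 - Y1) / (k1 - k2) * np.exp(-(x / Tx) * np.log(k2))
    return Pr1, Pr2, Pr1 + Pr2

# synthetic check with known periodic parts
x = np.linspace(0.0, 3.0, 60)
Tx, k1, k2, c0 = 1.0, 0.7, 1.4, 0.2
Pr1_true = np.cos(2 * np.pi * x / Tx)
Pr2_true = 1.0 + 0.3 * np.sin(2 * np.pi * x / Tx)
# build F(x) and F(x + Tx) from the Prony form (12), case A
F0 = k1 ** (x / Tx) * Pr1_true + k2 ** (x / Tx) * Pr2_true + c0
F1 = k1 ** (x / Tx + 1) * Pr1_true + k2 ** (x / Tx + 1) * Pr2_true + c0
Pr1, Pr2, Pr_tot = reduce_L2(F0, F1, k1, k2, x, Tx, c0)
```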

An attentive analysis of many data sets suggests the efficiency of the following simplified algorithm.

S1. From the available set of data one can calculate the mean measurement (with the use of the conventional expression)

⟨y(x)⟩ = (1/M) Σ_{m=1}^{M} y_m(x),   (22)

and the distributions of the corresponding slopes and intercepts that reveal the "marginal" measurements having maximal deviations from the mean value (the center of the statistical cluster)

SL_m = slope(y_m(x), ⟨y(x)⟩),  Int_m = intercept(y_m(x), ⟨y(x)⟩).   (23)

S2. From these distributions one can find the measured functions having maximal deviations, which form two limits (maximal deviations on both sides with respect to the mean function):

⟨y(x)⟩ = a_1·y_up(x) + a_0·y_dn(x) + b.   (24)

Here we realize the case of the reduced memory, and the "marginal" functions y_up(x), y_dn(x) describe the limits of the statistical cluster on two opposite sides, correspondingly. The coefficients {a_0, a_1, b} are found from (24) by the LLSM.

S3. The desired roots κ_1, κ_2 are found from the quadratic equation

κ² − a_1·κ − a_0 = 0,   (25)

and the fit of the function ydn(x) to the Prony's decomposition allows finding the optimal value of Tx.

S4. These values (κ_1, κ_2 and Tx(opt)), in turn, allow us to find the complete periodic function Pr_tot(x) from (21) and thereby to realize the desired reduction of the measured data to an ideal experiment, where the final function should be expressed in terms of the Fourier decomposition only. In other words, this function obtained from reproducible data measurements can be compared with competitive hypotheses derived from the existing theory. If a proper theoretical hypothesis (the "best fit" model) is absent, then Pr_tot(x) is decomposed into the finite Fourier series, and the AFR of this function can be considered as an IM for the experiment considered.
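Steps S1-S3 can be summarized in a short routine. The sketch below is our own code (the synthetic measurements only illustrate the data layout, and the Prony fit of y_dn(x) for Tx(opt) is omitted): it computes the mean curve (22), selects the "marginal" curves via the slope distribution (23), finds the coefficients of (24) by the LLSM and solves the quadratic (25):

```python
import numpy as np

def slope_intercept(y, ref):
    """LLSM slope/intercept of y plotted against a reference curve, cf. (23)."""
    A = np.column_stack([ref, np.ones_like(ref)])
    (s, i), *_ = np.linalg.lstsq(A, y, rcond=None)
    return s, i

def simplified_algorithm(Y):
    """Steps S1-S3 for a set of measurements Y (shape M x N)."""
    y_mean = Y.mean(axis=0)                                   # S1, eq. (22)
    slopes = np.array([slope_intercept(y, y_mean)[0] for y in Y])
    y_up = Y[np.argmax(slopes)]                               # S2: marginal curves
    y_dn = Y[np.argmin(slopes)]
    A = np.column_stack([y_up, y_dn, np.ones_like(y_mean)])
    (a1, a0, b), *_ = np.linalg.lstsq(A, y_mean, rcond=None)  # eq. (24) by LLSM
    k1, k2 = np.roots([1.0, -a1, -a0])                        # S3, eq. (25)
    return y_mean, (a1, a0, b), (k1, k2)

# synthetic data layout: M = 5 curves with slowly drifting scale and offset
base = np.sin(np.linspace(0.0, 6.0, 100))
Y = np.array([(1.0 + 0.02 * m) * base + 0.1 * m for m in range(5)])
y_mean, (a1, a0, b), (k1, k2) = simplified_algorithm(Y)
```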

3. EPR experiment

3.1. The elimination of the AF in the presence of a sample

In this paper we consider only one illustrative example. For verification of the basic algorithm we prepared two types of data that are considered in the frame of the unified algorithm described above. The characteristics and conditions (parameters) of the EPR equipment used for this simple experiment are the following. The sample used: Ce0.8975Gd0.0025Y0.1O2-x (powder with grain size 5 ± 2 nm). Measurements were performed by means of the EPR spectrometer BRUKER ESP 300 [27] with a working frequency of 9.43 GHz. The spectra were collected at room temperature with a center magnetic field of 3.3 kG, sweep width 2 kG, modulation amplitude 2 G, and microwave power 10 mW. Twenty scans (each scan containing 1024 measured data points) were recorded.

The resonance line (integrated from the measured signal) for this sample is shown in Fig. 1(a). Three resonance curves corresponding to the mean curve and the limiting measurements practically merge with each other. The limiting curves, being plotted with respect to the mean resonance curve, form ideal straight lines, which attests to the high quality of the resonance equipment used. Figs. 2(a,b) show the distributions of the slopes and intercepts with respect to the mean measurement. This information is useful for the selection of the "marginal" measurements (having the maximal deviations from the mean value on two opposite (up and down) sides) and the formation of the approximate relationship (24) for the elimination of the AF (apparatus function). Figure 3 demonstrates the test of relationship (11) for L = M. The last measurement y20 represents a linear combination of all measurements (y1, y2, ..., y19) involved in this measurement process. The accuracy of this fit is very high, and the value of the relative error does not exceed 1%. For this case one can easily verify with the help of the LLSM the link between all successive measurements. The distribution of the coefficients [{a_m, b}, m = 1, 2, ..., M − 1] is shown in Fig. 4. Based on relationships (24) and (25) and the value of the period Tx ≅ T_max calculated from expression (8), we can eliminate the influence of the AF and obtain the function (21) corresponding to an ideal experiment. As a matter of fact, Fig. 5 represents a central result of this paper. The grey resonance curve is free from the influence of the measurement device and can be used by theoreticians for comparison with "best fit" models. If such a theory is absent, then this curve can be decomposed into the Fourier series (2), and the coefficients of this decomposition can be used as the quantitative parameters of the IM. The fit and the distribution of the F-coefficients are shown, respectively, in Figs. 6(a,b).

3.2. Do high-frequency fluctuations characterizing an empty resonator remember each other?

The second type of measurements was associated with random fluctuations measured from an empty resonator (a cell not containing a sample). We analyzed 20 successive measurements at room temperature. The main problem can be formulated as follows: in spite of the fact that high-frequency fluctuations destroy a memory, are there some correlations between measurements that still remember each other? We want to give a justified answer to this question. In Figs. 7(a,b) we demonstrate the obvious fact that the HF fluctuations destroy the memory between measurements. Fig. 7(b) also shows where the correlations disappear. The quite opposite situation is observed between the integrated curves. The integration destroys the HF fluctuations and restores a possible link between measurements. Figs. 8(a,b) confirm this observation. Figs. 9(a,b) show the distributions of the slopes and intercepts calculated with respect to the averaged integrated curve. It is interesting to note that these distributions are concentrated in the vicinity of one (for slopes) and zero (for intercepts), which corresponds to an ideal experiment. Figure 10 shows that the memory phenomenon between the integrated successive curves is conserved. The value of the fitting error is close to 10%, and this fit can be considered acceptable. The distribution of the memory coefficients together with the free constant b is given in Fig. 11. Based on the idea of reducing the memory phenomenon to three significant functions (see equation (24)), one can select the "marginal" functions y_up(x) and y_dn(x) describing two opposite limits of the statistical cluster. This helps to calculate the desired ideal curve (24) and use it as the fitting function in expression (2). This reduced ideal curve is shown in Fig. 12 (grey line); its fit and the distribution of the decomposition coefficients entering into the F-decomposition are shown in Figs. 13 and 14, accordingly.


Figure 1. (Color online) Here we demonstrate the EPR resonance line for the chosen sample. All measurements lie between the two limiting ("marginal") ones. The mean measurement ⟨y⟩ (blue line) serves as an envelope; it characterizes the accuracy of the equipment used. When plotted with respect to the mean curve, the corresponding slopes (slope(y2, ⟨y⟩) = 1.04648, slope(y21, ⟨y⟩) = 1.05429) are very close to unity.


Figure 2. (a) The distribution of the slopes of the different measurements with respect to the mean measurement ⟨y⟩ is shown here. The measurements having the maximal deviations are marked by squares; they will be used for elimination of the apparatus function with the help of relationship (24). (b) The knowledge of the distribution of intercepts is also useful. The "marginal" measurements (having maximal intercepts) are helpful for elimination of the apparatus function with the help of relationship (24).


Figure 3. Here we demonstrate the long-memory phenomenon between the measurements. The last measurement (black balls) represents a linear combination of all previous measurements. The fitting curve is shown by the solid grey line. The distribution of the coefficients is shown in Fig. 4.


Figure 4. Here the distribution of the linear coefficients providing the accurate fit (depicted in the previous Fig. 3) is shown. The value of the free constant b is shown inside the frame. It is interesting to note that some measurements give a negative contribution. The origin of this phenomenon is not clear, but it is easily tested by the LLSM.
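The memory test of Figs. 3 and 4 reduces to a linear least-squares problem: the last record is fitted by a linear combination of all previous ones plus a free constant b. A minimal sketch on synthetic data (the resonance-like contour and the noise level are our assumptions, not the measured spectra):

```python
import numpy as np

rng = np.random.default_rng(1)
T, M = 500, 20
x = np.linspace(0, 1, T)
# synthetic "measurements": a common resonance-like contour with small
# amplitude scatter between records plus additive noise
base = np.exp(-((x - 0.5) / 0.1) ** 2)
Y = base[None, :] * (1 + 0.05 * rng.normal(size=(M, 1))) \
    + 0.01 * rng.normal(size=(M, T))

# express the last measurement through the previous ones plus a constant b
A = np.column_stack([Y[:-1].T, np.ones(T)])   # design matrix [y1 ... y19, 1]
coef, *_ = np.linalg.lstsq(A, Y[-1], rcond=None)
a, b = coef[:-1], coef[-1]                    # memory coefficients and b

fit = A @ coef
rel_err = 100 * np.linalg.norm(Y[-1] - fit) / np.linalg.norm(Y[-1])
```

As in Fig. 4, nothing forbids some of the coefficients `a` from coming out negative; the least-squares solution only minimizes the residual of the linear combination.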

Figure 5. This figure represents the central result of the whole research. The grey curve represents the resonance contour obtained after elimination of the apparatus (instrumental) function from the black curve. Only this curve should be selected for comparison of the existing theory with the experiment obtained on this equipment. The standardization of all equipment, including the most accurate instruments, becomes an important and topical problem for the whole concept of measurements.

Figure 6. (a) The fit of the resonance contour corresponding to an ideal experiment by means of the F-transform. The coefficients of this transformation are found from the fitting procedure. (b) Here we demonstrate the distribution of the decomposition coefficients entering into the F-transform (2). In order to provide the accurate fit of the curve depicted in Fig. 6(a), only 30 modes are necessary. This relatively large number of fitting parameters follows from the general concept, since the "best fit" model is supposed to be unknown.
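The F-transform fit mentioned in the caption is itself a linear least-squares problem over a truncated set of Fourier modes. A sketch, assuming a model resonance contour and K = 30 modes (the contour, noise level, and grid are illustrative assumptions):

```python
import numpy as np

def fourier_design(x, T, K):
    """Design matrix with a constant column plus K cosine and K sine modes
    of period T, evaluated on the grid x."""
    k = np.arange(1, K + 1)
    ang = 2 * np.pi * np.outer(x, k) / T
    return np.hstack([np.ones((len(x), 1)), np.cos(ang), np.sin(ang)])

rng = np.random.default_rng(2)
x = np.linspace(0, 10, 400)
# model resonance contour (Gaussian) with weak additive noise
y = np.exp(-((x - 5) / 1.2) ** 2) + 0.01 * rng.normal(size=x.size)

K = 30
D = fourier_design(x, x.max() - x.min(), K)
coef, *_ = np.linalg.lstsq(D, y, rcond=None)   # F-decomposition coefficients
fit = D @ coef
rel_err = np.linalg.norm(y - fit) / np.linalg.norm(y)
```

Because the decomposition is linear in the coefficients, the LLSM delivers them in one step; the mode amplitudes `coef` play the role of the distribution shown in Fig. 6(b).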

Figure 7. (Color online) (a) A noise recorded from the empty resonator (cell). The high-frequency fluctuations destroy the memory between successive measurements. The discrete integration based on the summation of rectangles (which is equivalent to taking the conventional arithmetic mean) suppresses the HF fluctuations (magenta line). (b) This plot shows clearly that the HF fluctuations destroy the memory between successive measurements.


Figure 8. (Color online) (a) The situation changes after integration. Being integrated with respect to its mean value, the HF fluctuations are suppressed and the correlation links are partly restored. The fitting curve is marked by the pink solid line. The value of the fitting error is close to 10%. (b) When plotted with respect to each other, these integrated measurements form curves close to segments of straight lines. So the integration restores the "hidden" memory, and these integrated measurements can be treated in the same manner as the previously considered resonance curves.


Figure 9. (a) As before, we show the distribution of the slopes of the integrated measurements with respect to the averaged integrated curve. The "marginal" measurements are squared. They are distributed in the vicinity of a slope equal to one, which signifies their proximity to an ideal case. (b) We demonstrate here the distribution of the intercepts of the integrated measurements with respect to the averaged integrated curve. The "marginal" measurements are squared again. They are distributed in the vicinity of zero, which again signifies their proximity to an ideal case.


Figure 10. (Color online) Here we show that the memory between the integrated measurements is still conserved. The previous measurements (Jy1, Jy2, ..., Jy19) provide a satisfactory fit of the last curve Jy20. The value of the relative error does not exceed 10%.


Figure 11. Here we demonstrate the distribution of the linear coefficients providing the acceptable fit shown in Fig. 10. The value of the free constant b equals -5.88344. It is interesting to repeat that some measurements give a negative contribution. The origin of this phenomenon (the existence of memory between correlated sequences) is not clear at present, but it is easily tested with the help of the LLSM.

Figure 12. The application of expression (21), related to elimination of the AF, helps to calculate the curve (solid green line) corresponding to an ideal experiment. The influence of the AF on different types of measurements is different. The grey curve (corresponding to elimination of the instrumental function) can be fitted with the help of expression (2). This plot can also be considered an original and central result of this paper.

Figure 13. The fit (yellow curve) of the reduced integrated curve by means of the F-transform. The coefficients of this transformation are found by means of the LLSM.


Figure 14. Here we demonstrate the distribution of the decomposition coefficients entering into the F-transform (2). In order to provide the accurate fit of the curve depicted in Fig. 13, only 30 modes are necessary. This relatively large number of fitting parameters follows from the general concept, since the "best fit" model for this experiment is supposed to be unknown.

4. Results and Discussions

In this paper we develop a general theory that allows one to consider all reproducible data from a unified point of view based on the concept of the IM. This IM has a very general character and allows one to express quantitatively a wide set of reproducible data in terms of Prony's decompositions. We also show how to extract the reduced function corresponding to an ideal experiment; this function serves as a keystone connecting competitive (theoretical) hypotheses with the periodic function obtained from experimental observations. In order to obtain this important result, it is necessary to eliminate the influence of the so-called apparatus (instrumental) function [28-30], which always distorts the reproducible data, and to present the periodic function in its "pure" form. Only this function can reconcile two opposite points of view and resolve the constant debate between theory and experiment. In this last section we should stress the basic supposition used in the construction of this general theory: we suppose that during the measurement process the properties of the object and of the equipment do not change essentially in time. If such a temporal drift does take place, it is necessary to choose the corresponding range of the control variable x and to optimize the value Tx in order to suppress this undesirable temporal dependence.

We want to show here that the general solution (12) also solves another important problem: the prediction of the behavior of the measured function F(x) outside the interval of observation of the control variable x. Imagine that the measured data are fitted properly in the frame of the model (12) and a researcher wants to consider this fit outside the interval [0, x], adding some shift Δ to obtain the interval x + Δ. Is it possible to solve this problem in the frame of the general concept or not? From the mathematical point of view, it is necessary to express the function F(x ± Δ) with the help of the function F(x), reducing the new interval of observation to the previous one. The solution expressed in the form of Prony's decomposition (12) admits this separation. The general formula given below contains the positive answer to the question posed above.

$$
F(x) = \sum_{i=1}^{l} F_{pr_i}(x) + c_0, \qquad
F_{pr_i}(x) = (\kappa_i)^{x/T_x}\,\mathrm{Pr}_i(x),
$$
$$
F(x \pm \Delta) = \sum_{i=1}^{l} (\kappa_i)^{\pm\Delta/T_x}\,(\kappa_i)^{x/T_x}\,\mathrm{Pr}_i(x, \Delta), \tag{26}
$$

where

$$
\mathrm{Pr}_i(x, \Delta) = \sum_{k=1}^{K}
\left[ Ac_k^{(i)}(\pm\Delta)\cos\!\left(2\pi k \frac{x}{T_x}\right)
     + As_k^{(i)}(\pm\Delta)\sin\!\left(2\pi k \frac{x}{T_x}\right) \right],
$$
$$
\begin{pmatrix} Ac_k^{(i)}(\pm\Delta) \\ As_k^{(i)}(\pm\Delta) \end{pmatrix}
=
\begin{pmatrix}
\cos\!\left(2\pi k \frac{\Delta}{T_x}\right) & \pm\sin\!\left(2\pi k \frac{\Delta}{T_x}\right) \\
\mp\sin\!\left(2\pi k \frac{\Delta}{T_x}\right) & \cos\!\left(2\pi k \frac{\Delta}{T_x}\right)
\end{pmatrix}
\begin{pmatrix} Ac_k^{(i)} \\ As_k^{(i)} \end{pmatrix}.
$$

As one can see from expression (26), the variables x and Δ are separated, and the researcher receives the possibility of considering the shifted function F(x ± Δ) while staying in the initial observation interval of the control variable x. The decomposition coefficients Ac_k^(i), As_k^(i) of the previous periodic function Pr_i(x) are related to the new ones by means of a rotation matrix.
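The rotation of the decomposition coefficients in (26) can be verified numerically. The sketch below checks, for the '+' branch and an assumed set of random coefficients, that rotating each pair (Ac_k, As_k) by the angle 2πkΔ/T reproduces the directly shifted periodic function:

```python
import numpy as np

rng = np.random.default_rng(3)
T, K = 2.0, 5
Ac, As = rng.normal(size=K), rng.normal(size=K)   # assumed coefficients

def pr(x, Ac, As, T):
    """Periodic part of the Prony decomposition on the grid x."""
    k = np.arange(1, len(Ac) + 1)
    ang = 2 * np.pi * np.outer(np.atleast_1d(x), k) / T
    return np.cos(ang) @ Ac + np.sin(ang) @ As

delta = 0.37
k = np.arange(1, K + 1)
c = np.cos(2 * np.pi * k * delta / T)
s = np.sin(2 * np.pi * k * delta / T)
# rotation matrix of eq. (26) applied mode by mode ('+' branch)
Ac_new = Ac * c + As * s
As_new = -Ac * s + As * c

x = np.linspace(0, T, 200)
direct = pr(x + delta, Ac, As, T)      # Pr(x + Δ) evaluated directly
rotated = pr(x, Ac_new, As_new, T)     # Pr(x, Δ) with rotated coefficients
```

The two curves coincide to machine precision, which is exactly the separation of x and Δ that permits extrapolating the fit beyond the observation interval.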

Finishing this final section, it is necessary to mention a couple of problems that merit interest for further research:

1. The memory problem that appears between neighboring measurements is not solved. We proved only that the high-frequency fluctuations destroy the memory, but the deep physical reasons that lead to the functional equation (11) between correlated measurements are not known. In other words, in spite of the fact that long memory over the period of "time" Tx between measurements exists (L = M), the reasons for the appearance of a partial memory when L < M are not clear. In this paper we show only how to reduce this true memory with the help of the approximate procedure (24). The explanation of this phenomenon will be interesting for many researchers working in different branches of natural science.

2. We found the key point that reconciles theory and experiment. All competing hypotheses should be presented in the form of the F-transform and compared with the function (18) obtained from reproducible measurements. This specific check-point can sometimes be crucial for experimentalists and theoreticians trying to understand a natural phenomenon studied from two opposite sides. The justified "logic" of the paper suggests that the coincidence of the arguments from both sides should be focused on expression (18). The illustrative example taken from the conventional EPR experiment leads to the same conclusions. So, one can formulate the problem of creating a unified metrological concept that should be accepted by many experimentalists in order to supply reliable data to the theoreticians who want to understand the phenomenon studied from the opposite side.

Acknowledgments

One of us (RRN) dedicates this fundamental research to his scientific teacher, Prof. Boris I. Kochelaev, on the occasion of his 80th anniversary, which takes place in April 2014. The authors are grateful to D. Zverev for his help in performing the necessary EPR measurements.

References

1. Tukey J.W. Exploratory Data Analysis, Princeton University (1977)

2. Watson G.S. The Annals of Mathematical Statistics 42, 1138 (1971)

3. Johnson N.L., Leone F.C. Statistics and Experimental Design in Engineering and the Physical Sciences (2nd edition), John Wiley & Sons (1976)

4. Schenck H.(Jr). Theories of engineering experimentation (2nd edition), McGraw-Hill Book Company (1972)

5. Sharaf M.A., Illman D.L., Kowalski B.R. Chemometrics, John Wiley & Sons (1988)

6. Aivazyan S.A., Yenyukov I.S., Meshalkin L.D. Applied Statistics. Study of Relationships, Reference Edition, Moscow: Finansy i Statistika (1985) [in Russian]

7. Shumway R.H., Stoffer D.S. Time Series Analysis and Its Applications with Examples, Springer (2006)

8. Elsner J.B., Tsonis A.A. Singular Spectrum Analysis: A New Tool in Time Series Analysis, Springer (1996)

9. Hamilton J.D. Time Series Analysis, Princeton University Press (1994)

10. Brockwell P.J., Davis R.A. Time series: Theory and Methods, Springer (1991)

11. Hu Sheng, YangQuan Chen, TianShuang Qiu Fractal Processes and Fractional-Order Signal Processing: Techniques and Applications, Springer (2012)

12. Baleanu D., Guvench Z.B., Tenreiro Machado J.A. (Editors) New trends in Nanotechnology and Fractional Calculus Applications, Springer (2010)

13. Baleanu D., Tenreiro Machado J.A., Luo A.C.J. (Editors) Fractional Dynamics and Control, Springer (2012)

14. Luo A.C.J., Tenreiro Machado J.A., Baleanu D. (Editors) Dynamical Systems and Methods, Springer (2012)

15. Ciurea M.L., Lazanu S., Stavaracher I., Lepadatu A.M., Iancu V., Mitroi M.R., Nigmatullin R.R., Baleanu C.M. J. Appl. Phys. 109, 013717 (2011)

16. Nigmatullin R.R., Baleanu D., Dinch E., Ustundag Z., Solak A.O., Kargin R.V. J. Comput. Theor. Nanosci. 7, 1 (2010)

17. Nigmatullin R.R., Baleanu Ed.D., Guvench Z.B., Tenreiro Machado J.A. New Trends in Nanotechnology and Fractional Calculus Applications, pp. 43-56, Springer (2010)

18. Nigmatullin R.R. Commun. Nonlinear Sci. Numer. Simul. 15, 637 (2010)

19. Nigmatullin R.R. Signal Process. 86, 2529 (2006)

20. Nigmatullin R.R., Ionescu C., Baleanu D. Signal, Image and Video Processing, pp. 1-16, DOI:10.1007/s11760-012-0386-1 (2012)

21. Nigmatullin R.R., Khamzin A.A., Machado J.T. Phys. Scr. 89, 015201 (2014)

22. Nigmatullin R.R. Phys. Wave Phenom. 16, 119 (2008)

23. Osborne M.R., Smyth G.K. SIAM J. Sci. and Stat. Comput. 12, 362 (1991)

24. Kahn M., Mackisack M.S., Osborne M.R., Smyth G.K. J. Comput. Graph. Stat. 1, 329 (1992)

25. Osborne M.R., Smyth G.K. SIAM J. Sci. Comput. 16, 119 (1995)

26. Horn R.A., Johnson Ch.R. Topics in Matrix Analysis, Cambridge University Press (1991) [See Section 6.1]

27. The detailed description of this standard instrument can be found in the website: http://eqdb.nrf.ac.za/equipment/bruker-esp-300-esr-spectrometer

28. Weisstein E.W. Instrument Function. From MathWorld - A Wolfram Web Resource, http://mathworld.wolfram.com/InstrumentFunction.html

29. O'Connor D.V., Phillips D. Time-correlated single photon counting, Academic Press, London (1984)

30. Gorelic V.A., Yakovenko A.V. Tech. Phys. 42, 96 (1997) [Zh. Tekh. Fiz. 67, 110 (1997), in Russian]
