Figure 3. The reliability function from example 2

The corresponding density function is shown in Figure 4.

Figure 4. The density function from example 2

3.4. The Poisson process as a failure rate

Suppose that the random failure rate {λ(t): t ≥ 0} is the Poisson process with parameter λ > 0. Of course, the Poisson process is a Markov process with the counting state space S = {0, 1, 2, ...}. That process can be treated as the semi-Markov process defined on S by the initial distribution p = [1, 0, 0, ...] and the kernel

       | 0  G_0(t)  0       0       ... |
       | 0  0       G_1(t)  0       ... |
Q(t) = | 0  0       0       G_2(t)  ... |
       | .  .       .       .       ... |

where

G_i(t) = 1 - e^(-λt),  t ≥ 0,  i = 0, 1, 2, ...

Applying equation (19), Grabski [3] proved the following theorem: if the random failure rate {λ(t): t ≥ 0} is the Poisson process with parameter λ > 0, then the reliability function defined by (16) takes the form

R(t) = exp{-λ [t - 1 + exp(-t)]},  t ≥ 0.

The corresponding density function is given by the formula

f(t) = λ exp{-λ [t - 1 + exp(-t)]} [1 - exp(-t)],  t ≥ 0.

Those functions with parameter λ = 0.2 are shown in Figure 5 and Figure 6.

Figure 5. The reliability function for the Poisson process

Figure 6. The density function for the Poisson process

3.5. The Furry-Yule process as a failure rate

The Furry-Yule process is the semi-Markov process on the counting state space S = {0, 1, 2, ...} with the initial distribution p = [1, 0, 0, ...] and a kernel similar to that of the Poisson process,

       | 0  G_0(t)  0       0       ... |
       | 0  0       G_1(t)  0       ... |
Q(t) = | 0  0       0       G_2(t)  ... |
       | .  .       .       .       ... |

where

G_i(t) = 1 - e^(-(i+1)λt),  t ≥ 0,  i = 0, 1, 2, ...

The Furry-Yule process is also a Markov process. Assume that the random failure rate {λ(t): t ≥ 0} is the Furry-Yule process with parameter λ > 0. The following theorem was proved by Grabski [4]: if the random failure rate {λ(t): t ≥ 0} is the Furry-Yule process with parameter λ > 0, then the reliability function defined by (1) is given by

R(t) = (1 + λ) exp(-λt) / {1 + λ exp[-(1 + λ)t]},  t ≥ 0.

The corresponding density function is

f(t) = λ (1 + λ) exp(-λt) {1 - exp[-(1 + λ)t]} / {1 + λ exp[-(1 + λ)t]}^2,  t ≥ 0.

Those functions with parameter λ = 0.2 are shown in Figure 7 and Figure 8.

Figure 7. The reliability function for the Furry-Yule process

Figure 8. The density function for the Furry-Yule process
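These closed-form results can be checked by simulation. The sketch below is not from the paper: it assumes the failure-rate process is the Poisson counting process {N(t)} with intensity λ, so that R(t) = E[exp(-∫₀ᵗ N(s) ds)], and compares a Monte Carlo estimate of that expectation with exp{-λ[t - 1 + exp(-t)]}; all function and variable names are illustrative.

```python
import math
import random

def mc_reliability(lam, t, n_sim=200_000, seed=1):
    """Monte Carlo estimate of R(t) = E[exp(-integral_0^t N(s) ds)],
    where N is a Poisson process with intensity lam.  For arrival
    times S_1 < S_2 < ... <= t the integral equals sum_i (t - S_i)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sim):
        s, integral = 0.0, 0.0
        while True:
            s += rng.expovariate(lam)   # next Poisson arrival
            if s >= t:
                break
            integral += t - s           # this arrival contributes (t - S_i)
        total += math.exp(-integral)
    return total / n_sim

def closed_form(lam, t):
    # reliability function from the theorem of Grabski [3]
    return math.exp(-lam * (t - 1.0 + math.exp(-t)))

lam, t = 0.2, 5.0
est = mc_reliability(lam, t)
exact = closed_form(lam, t)
print(est, exact)  # the estimate should match the closed form to ~1e-2
```

With λ = 0.2 and t = 5 the two values agree to about two decimal places, which supports the stated formula.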
4. Conclusion
Frequently, because of randomly changing environmental conditions and tasks, the assumption that the failure rate of an object is a random process seems proper and natural. For different stochastic failure rate processes we obtain new, interesting classes of reliability functions.
References
[1] Cinlar, E. (1969). Markov renewal theory. Adv. Appl. Probab. 1, No 2, 123-187.
[2] Grabski, F. (2002). Semi-Markov models of reliability and operation. Warszawa: IBS PAN.
[3] Grabski, F. (2003). The reliability of the object with semi-Markov failure rate. Applied Mathematics and Computation, 135, 1-16. Elsevier.
[4] Grabski, F. (2006). Random failure rate process. Submitted for publication in Applied Mathematics and Computation.
[5] Kopocinska, I. & Kopocinski, B. (1980). On system reliability under random load of elements. Applicationes Mathematicae, XVI, 5-15.
[6] Kopocinska, I. (1984). The reliability of an element with alternating failure rate. Applicationes Mathematicae, XVIII, 187-194.
[7] Limnios, N. & Oprisan, G. (2001). Semi-Markov Processes and Reliability. Boston: Birkhäuser.
[8] Korolyuk, V. S. & Turbin, A. F. (1976). Semi-Markov Processes and their Applications (in Russian). Kiev: Naukova Dumka.
[9] Silvestrov, D. S. (1969). Semi-Markov Processes with a Discrete State Space (in Russian). Moscow: Sovetskoe Radio.
Grabski, Franciszek
Załęska-Fornal, Agata
Naval University, Gdynia, Poland
The model of non-renewal reliability systems with dependent time lengths of components
Keywords
reliability, dependent components, series systems, parallel systems

Abstract
The models of non-renewal reliability systems with dependent times to failure of components are presented. The dependence arises from some common environmental stresses and shocks. It is assumed that a failure occurs only because of two independent sources common to two neighbouring components. The reliability functions of series and parallel systems with components depending on common sources are computed. The reliability functions of the systems with dependent and with independent life lengths of components are compared.
1. Introduction
The problem of determining the reliability function of a system with dependent components is important but difficult to solve. Many papers are devoted to it, e.g. [1], [2], [3], [4]. In the book by Barlow & Proschan [1975], a multivariate exponential distribution is defined on the grounds of reliability theory as the distribution of a random vector whose coordinates are dependent random variables defining the life lengths of the components. Their dependence arises from some common environmental sources of shocks. Using that idea we are going to present some examples of systems with dependent components, giving up the assumption that the joint survival probability is exponential and accepting the assumption that a failure occurs only because of two independent sources common to two neighbouring components.
Assume that, from the reliability point of view, there are n ordered components

E = (e_1, e_2, ..., e_n).

Assume also that n + 1 independent sources of shocks are present in the environment,

Z = (z_1, z_2, ..., z_n, z_{n+1}),

and each component e_i can be destroyed only because of shocks from the two sources z_i and z_{i+1}. Let U_i be a non-negative random variable defining the time to failure of the component caused by a shock from the source z_i. Thus the life length of the object depends on the random vector

U = (U_1, U_2, ..., U_n, U_{n+1}).   (1)

Admit that the coordinates of the vector are independent random variables with distributions defined as follows:

G_i(u_i) = P(U_i ≤ u_i),  i = 1, 2, ..., n + 1.   (2)

The life length of the component e_i is a random variable satisfying

T_i = min(U_i, U_{i+1}),  i = 1, 2, ..., n.   (3)

Notice that two neighbouring components in the sequence (e_1, e_2, ..., e_n) have one common source of shocks, i.e. they depend on the same random variable. The random variables T_1, T_2, ..., T_n are therefore dependent. Their joint distribution is expressed by means of the multivariate reliability function and it can be easily determined:
R(t_1, t_2, ..., t_n) = P(T_1 > t_1, T_2 > t_2, ..., T_n > t_n)
  = P(min(U_1, U_2) > t_1, min(U_2, U_3) > t_2, ..., min(U_n, U_{n+1}) > t_n)
  = P(U_1 > t_1, U_2 > max(t_1, t_2), ..., U_n > max(t_{n-1}, t_n), U_{n+1} > t_n)
  = P(U_1 > t_1) P(U_2 > max(t_1, t_2)) ... P(U_n > max(t_{n-1}, t_n)) P(U_{n+1} > t_n).

Thus

R(t_1, t_2, ..., t_n) = Ḡ_1(t_1) Ḡ_2(max(t_1, t_2)) ... Ḡ_n(max(t_{n-1}, t_n)) Ḡ_{n+1}(t_n),   (4)

where

Ḡ_i(u_i) = P(U_i > u_i) = 1 - G_i(u_i),  i = 1, 2, ..., n + 1.
The reliability functions of the components can be obtained as marginal distributions, computing the limit of the function (4) when

t_1 → 0+, ..., t_{i-1} → 0+, t_{i+1} → 0+, ..., t_n → 0+:

R_i(t_i) = P(T_i > t_i) = Ḡ_i(t_i) Ḡ_{i+1}(t_i),  i = 1, 2, ..., n.   (5)
The bivariate reliability functions can be determined by computing the limit of (4) when all the arguments except t_i and t_j tend to 0+:

R_ij(t_i, t_j) = P(T_i > t_i, T_j > t_j)
  = Ḡ_i(t_i) Ḡ_{i+1}(t_i) Ḡ_j(t_j) Ḡ_{j+1}(t_j),
    i + 1 < j,  i, j = 1, 2, ..., n - 1,   (6)

R_ij(t_i, t_j) = Ḡ_i(t_i) Ḡ_{i+1}(max(t_i, t_{i+1})) Ḡ_{i+2}(t_{i+1}),
    i + 1 = j,  i, j = 1, 2, ..., n - 1.   (7)

It could be proved that

P(T_1 > t_1 | T_2 > t_2, ..., T_n > t_n) = P(T_1 > t_1 | T_2 > t_2)   (8)

and generally

P(T_i > t_i | T_{i+1} > t_{i+1}, ..., T_n > t_n) = P(T_i > t_i | T_{i+1} > t_{i+1}),
    i = 1, 2, ..., n - 1.   (9)
That property asserts that the life length of e_i depends only on the life length of the next component e_{i+1} and does not depend on the life lengths of the remaining components. That is a certain kind of Markov property.
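The product formula (4) can be verified by direct simulation. A minimal sketch (Python); the exponential source distributions Ḡ_i(u) = e^{-u} and the values n = 3, (t_1, t_2, t_3) = (0.5, 0.8, 0.3) are illustrative assumptions of ours, not taken from the text:

```python
import math
import random

n = 3                       # components e_1..e_3, sources z_1..z_4
t = (0.5, 0.8, 0.3)         # arguments t_1, t_2, t_3

def gbar(u):                # assumed survival function: Gbar_i(u) = e^{-u}
    return math.exp(-u)

# formula (4): R = Gbar_1(t_1) Gbar_2(max(t_1,t_2)) ... Gbar_{n+1}(t_n)
args = [t[0]] + [max(t[i - 1], t[i]) for i in range(1, n)] + [t[n - 1]]
r_formula = math.prod(gbar(a) for a in args)

# Monte Carlo with T_i = min(U_i, U_{i+1}), U_1..U_{n+1} independent
rng = random.Random(0)
n_sim, hits = 200_000, 0
for _ in range(n_sim):
    u = [rng.expovariate(1.0) for _ in range(n + 1)]
    if all(min(u[i], u[i + 1]) > t[i] for i in range(n)):
        hits += 1
r_mc = hits / n_sim
print(r_formula, r_mc)      # the two values should agree to ~1e-2
```

The empirical joint survival probability matches the factorised expression (4), which also illustrates that the T_i are dependent only through the shared sources.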
2. Reliability of the object with the series structure
If the object has a series reliability structure then its life length T is the random variable defined by the formula

T = min(T_1, T_2, ..., T_n).   (10)

Using (4) we can determine the reliability function:

R(t) = P(T > t) = P(T_1 > t, T_2 > t, ..., T_n > t)
  = R(t, t, ..., t)
  = Ḡ_1(t) Ḡ_2(t) ... Ḡ_n(t) Ḡ_{n+1}(t).   (11)

Let us compare this function with the reliability function of a series system in which the life lengths of the components T_1, T_2, ..., T_n are independent and their reliability functions are defined by (5). Let R̃(t), t ≥ 0 be the reliability function of that system. It satisfies

R̃(t) = P(T̃ > t) = P(T_1 > t) P(T_2 > t) ... P(T_n > t)
  = R_1(t) R_2(t) ... R_n(t)
  = Ḡ_1(t)Ḡ_2(t) Ḡ_2(t)Ḡ_3(t) ... Ḡ_n(t)Ḡ_{n+1}(t)
  = Ḡ_2(t) Ḡ_3(t) ... Ḡ_n(t) R(t).   (12)

Thus, for t ≥ 0,

R̃(t) ≤ R(t)

holds.
The inequality means that the reliability of a series system with dependent (in the considered sense) life lengths of components is greater than or equal to the reliability of that system with independent life lengths of components having the same distributions as the marginals of T_1, T_2, ..., T_n.

Accepting the assumption of independence of the life lengths of the components, even though the random variables describing them are dependent, we make an obvious mistake, but that error is "safe" because the real series system has a greater reliability. That estimation is, however, very conservative.
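The inequality R̃(t) ≤ R(t) is easy to confirm numerically. A small sketch (Python); the exponential source survival functions Ḡ_i(t) = e^{-λ_i t} and the intensity values are our own illustration, not the paper's:

```python
import math

lam = [0.1, 0.2, 0.15, 0.3]        # assumed intensities for sources z_1..z_4 (n = 3)

def gbar(i, t):                     # Gbar_i(t) = exp(-lam_i t), 0-based index
    return math.exp(-lam[i] * t)

def r_dependent(t, n=3):
    # formula (11): R(t) = Gbar_1(t) ... Gbar_{n+1}(t)
    return math.prod(gbar(i, t) for i in range(n + 1))

def r_independent(t, n=3):
    # independent case: product of the marginal reliabilities (5),
    # R_i(t) = Gbar_i(t) Gbar_{i+1}(t)
    return math.prod(gbar(i, t) * gbar(i + 1, t) for i in range(n))

for t in [0.5, 1.0, 2.0, 5.0]:
    assert r_independent(t) <= r_dependent(t)   # the inequality below (12)
print(r_dependent(1.0), r_independent(1.0))
```

Each inner survival factor Ḡ_2, ..., Ḡ_n appears twice in the independent-case product, which is exactly why the independent series system is never more reliable than the dependent one.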
Example 1.

Assume that the non-negative random variable U_i, describing the time to failure of the component caused by a shock from the source z_i, has a Weibull distribution with parameters α_i, λ_i, i = 1, 2, ..., n + 1, i.e. for u_i ≥ 0

Ḡ_i(u_i) = P(U_i > u_i) = exp(-λ_i u_i^{α_i}),  i = 1, 2, ..., n + 1.

The reliability function of a series system with dependent components satisfies

R(t) = P(T > t) = Ḡ_1(t) Ḡ_2(t) ... Ḡ_n(t) Ḡ_{n+1}(t)
  = exp[-(λ_1 t^{α_1} + ... + λ_{n+1} t^{α_{n+1}})].

For n = 3 and

α_1 = 1.2, λ_1 = 0.1,  α_2 = 2, λ_2 = 0.2,
α_3 = 2.2, λ_3 = 0.1,  α_4 = 3, λ_4 = 0.2

we get

R(t) = P(T > t) = exp[-(0.1 t^{1.2} + 0.2 t^2 + 0.1 t^{2.2} + 0.2 t^3)].

The graph of the function is presented in Figure 1.

Figure 1. The graph of the series reliability function with dependent components

The reliability function R̃(t), t ≥ 0 of the series system with independent life lengths of components and the same marginals satisfies

R̃(t) = P(T̃ > t) = exp[-(0.1 t^{1.2} + 0.4 t^2 + 0.2 t^{2.2} + 0.2 t^3)].

3. Reliability of the object with the parallel structure

The life length of the object with a parallel structure is a random variable defined by

T = max(T_1, T_2, ..., T_n).   (13)

Let us compute the reliability function of the object:

R(t) = P(T > t) = 1 - P(T ≤ t)
  = 1 - P(T_1 ≤ t, T_2 ≤ t, ..., T_n ≤ t)
  = P({T_1 > t} ∪ {T_2 > t} ∪ ... ∪ {T_n > t}).   (14)

Using the formula for the probability of a union of events we obtain

R(t) = P(T > t) = Σ_{i=1}^{n} P(T_i > t) - Σ_{i<j} P(T_i > t, T_j > t)
  + Σ_{i<j<k} P(T_i > t, T_j > t, T_k > t) - ...
  + (-1)^{n+1} P(T_1 > t, T_2 > t, ..., T_n > t).

Hence and from (4), (6), (7) we get
R(t) = Σ_{i=1}^{n} Ḡ_i(t) Ḡ_{i+1}(t) - Σ_{i+1<j} Ḡ_i(t) Ḡ_{i+1}(t) Ḡ_j(t) Ḡ_{j+1}(t)
  - Σ_{i=1}^{n-1} Ḡ_i(t) Ḡ_{i+1}(t) Ḡ_{i+2}(t) + ...   (15)
  + (-1)^{n+1} Ḡ_1(t) Ḡ_2(t) ... Ḡ_n(t) Ḡ_{n+1}(t).

In particular, for n = 3 we have

R(t) = Ḡ_1(t)Ḡ_2(t) + Ḡ_2(t)Ḡ_3(t) + Ḡ_3(t)Ḡ_4(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_3(t) - Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t) + Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)   (16)
  = Ḡ_1(t)Ḡ_2(t) + Ḡ_2(t)Ḡ_3(t) + Ḡ_3(t)Ḡ_4(t) - Ḡ_1(t)Ḡ_2(t)Ḡ_3(t) - Ḡ_2(t)Ḡ_3(t)Ḡ_4(t).
If T_1, T_2, ..., T_n are independent then

R̃(t) = 1 - P(T_1 ≤ t, T_2 ≤ t, ..., T_n ≤ t)
  = 1 - P(T_1 ≤ t) P(T_2 ≤ t) ... P(T_n ≤ t)
  = 1 - [1 - R_1(t)] [1 - R_2(t)] ... [1 - R_n(t)]
  = 1 - [1 - Ḡ_1(t)Ḡ_2(t)] [1 - Ḡ_2(t)Ḡ_3(t)] ... [1 - Ḡ_n(t)Ḡ_{n+1}(t)].

For n = 3

R̃(t) = 1 - [1 - Ḡ_1(t)Ḡ_2(t)] [1 - Ḡ_2(t)Ḡ_3(t)] [1 - Ḡ_3(t)Ḡ_4(t)].
After multiplication we get

R̃(t) = Ḡ_1(t)Ḡ_2(t) + Ḡ_2(t)Ḡ_3(t) + Ḡ_3(t)Ḡ_4(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_2(t)Ḡ_3(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)
  - Ḡ_2(t)Ḡ_3(t)Ḡ_3(t)Ḡ_4(t)
  + Ḡ_1(t)Ḡ_2(t)Ḡ_2(t)Ḡ_3(t)Ḡ_3(t)Ḡ_4(t).

Notice that
R̃(t) - R(t) = Ḡ_1(t)Ḡ_2(t)Ḡ_3(t) + Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_2(t)Ḡ_3(t)
  - Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)
  - Ḡ_2(t)Ḡ_3(t)Ḡ_3(t)Ḡ_4(t)
  + Ḡ_1(t)Ḡ_2(t)Ḡ_2(t)Ḡ_3(t)Ḡ_3(t)Ḡ_4(t)
  = Ḡ_2(t)Ḡ_3(t) [Ḡ_1(t) + Ḡ_4(t) - Ḡ_1(t)Ḡ_2(t) - Ḡ_1(t)Ḡ_4(t) - Ḡ_3(t)Ḡ_4(t) + Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)].
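The sign of this difference can also be confirmed numerically before completing the argument. A sketch (Python) evaluating R(t) from (16) and the independent-case R̃(t) for assumed exponential sources; the intensities below are illustrative, not from the text:

```python
import math

lam = [0.1, 0.2, 0.15, 0.3]              # assumed intensities, sources z_1..z_4

def gbar(i, t):                           # Gbar_i(t) = exp(-lam_i t), 0-based index
    return math.exp(-lam[i] * t)

def r_dep(t):
    # formula (16), dependent components, n = 3
    g = [gbar(i, t) for i in range(4)]
    return (g[0]*g[1] + g[1]*g[2] + g[2]*g[3]
            - g[0]*g[1]*g[2] - g[1]*g[2]*g[3])

def r_indep(t):
    # independent components: 1 - prod(1 - Gbar_i Gbar_{i+1})
    g = [gbar(i, t) for i in range(4)]
    p = 1.0
    for i in range(3):
        p *= 1.0 - g[i] * g[i + 1]
    return 1.0 - p

for t in [0.5, 1.0, 2.0, 5.0, 10.0]:
    assert r_dep(t) <= r_indep(t)        # dependence lowers parallel reliability
```

So for a parallel system the independence assumption errs on the optimistic side, in contrast with the series case.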
Let A_i, i = 1, 2, 3, 4 be independent events with probabilities defined by

P(A_i) = Ḡ_i(t),  i = 1, 2, 3, 4.

The expression

Ḡ_1(t) + Ḡ_4(t) - Ḡ_1(t)Ḡ_2(t) - Ḡ_1(t)Ḡ_4(t) - Ḡ_3(t)Ḡ_4(t) + Ḡ_1(t)Ḡ_2(t)Ḡ_3(t)Ḡ_4(t)

can be rewritten as

P(A_1) + P(A_4) - P(A_1)P(A_2) - P(A_1)P(A_4) - P(A_3)P(A_4) + P(A_1)P(A_2)P(A_3)P(A_4)
  = P(A_1 ∪ A_4) - P((A_1 ∩ A_2) ∪ (A_3 ∩ A_4)).

As