CAN THE RELIABILITY THEORY BECOME A SCIENCE?
Paolo Rocchi
IBM, via Shangai 53, 00144 Roma, Italy LUISS University, via Romania 32, 00197 Roma, Italy [email protected]
ABSTRACT
The amount of work in the reliability domain is huge, although this field of research has not yet evolved into a science. It is worth recalling that Gnedenko took the first step toward creating a reliability science. He adopted the deductive logic typical of exact sciences - such as mechanics, electronics and thermodynamics - and demonstrated the general form of the reliability function. However the hazard rate, which tunes up this function, has not been demonstrated so far. We believe that one should follow and complete Gnedenko's seminal work and demonstrate the various trends of the hazard rate using the deductive approach.
We have conducted a theoretical research using traditional mathematical methods and have also introduced a new tool named the Boltzmann-like entropy. The present paper summarizes various contributions published in the past decade and aims to show the deductive implications developed.
1. INTRODUCTION
Reliability theory is an abstract approach aimed at gaining theoretical insights into engineering and biology. The vast majority of researchers draw conclusions about populations based on information extracted from random samples. Authors usually adopt statistical methods, which furnish various algorithms and likelihood criteria for qualifying the outcome of an inductive reasoning process. The final statement never turns out to be absolutely true or false: by definition, the premises of inductive logic provide only some degree of support for the final statement of an argument.
The inductive approach rules over reliability theory, which has not yet evolved into a science. In fact exact sciences - such as mechanics, electronics, and thermodynamics - comply with deductive logic; that is to say, theorists derive results from principles and axioms using theorems. Each theoretical construction starts with precise assumptions and, using the rules of logical inference, draws conclusions from them. A sequence of logical implications lies at the base of the deductive approach:
A ⇒ B (1)
Each conclusion B is certain provided that the argument's assumption A is true. The sentence B is indisputable whether it expresses a deterministic or an indeterministic statement. Also probability functions, which authors usually adopt to describe reliable systems, can be established through deductive mathematical demonstrations, as Gnedenko taught us. A deductive argument, regardless of whether it is deterministic or indeterministic, lies far apart from statistical modeling, since the truth of the premises provides a guarantee of the truth of the conclusion.
Ancient philosophers popularized the term 'deduction' as reasoning from the general to the specific, and specified 'induction' as reasoning from the specific to the general. These definitions, which modern thinkers deem not perfectly accurate, have nonetheless the virtue of making more
evident the peculiarity of the deductive mode, which proceeds from the general principle A to the specific conclusion B.
Gnedenko was the first to adopt the deductive mode and laid the first stone of the construction of reliability science [Gnedenko et al. 1965]. He assumes that the system S is a Markov chain and from this assumption concludes that the probability of good functioning without failure has the general exponential form:
P(t) = exp( -∫₀ᵗ λ(u) du ) (2)
where λ(t) determines the reliability of the system at each instant:
λ(t) = - P'(t)/P(t) (3)
Gnedenko demonstrates that function (2) comes from the conditional probability typical of Markov chains. Equation (2) originates from the operations that a system executes one after the other, and the following formal statement summarizes Gnedenko's inference:
Chained Units ⇒ General Exponential Function (4)
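The deductive chain (2)-(4) can be explored numerically. The fragment below is a minimal sketch in Python: it builds P(t) from eqn (2) for a purely illustrative hazard function and then recovers that function through eqn (3); the linear hazard chosen here is our assumption and plays no role in Gnedenko's argument.

    import numpy as np

    t = np.linspace(0.0, 10.0, 1001)
    lam = 0.2 + 0.05 * t  # illustrative hazard rate lambda(t), an assumption

    # Eqn (2): P(t) = exp(-cumulative hazard), via trapezoidal integration
    cum = np.concatenate(([0.0], np.cumsum((lam[1:] + lam[:-1]) / 2 * np.diff(t))))
    P = np.exp(-cum)

    # Eqn (3): the hazard rate recovered as -P'(t)/P(t)
    lam_back = -np.gradient(P, t) / P
    print(np.allclose(lam_back[1:-1], lam[1:-1], rtol=1e-3))  # True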
The hazard (or mortality) function λ(t) is the key element, in that λ(t) tunes up the general exponential function (2); nevertheless there is dispute about the hazard function. Some authors believe that the curve begins with a decreasing trend, typical of the time period during which defective factory items are flushed out. The population thereafter reaches the useful life period, where the failure rate is minimized. The curve is completed by the third phase, where many essential parts of systems fail in increasingly larger numbers. However, significant evidence contradicts the tripartite form of λ(t), usually called the 'bathtub curve'. For instance, researchers show the irregular degeneracy of electronic circuits [Jensen 1996]. The hazard rate presents humps so evident that Wong [1989] labels it the "roller coaster distribution". In biology λ(t) shows very differing trends. A recent article published in Nature [Owen et al. 2014] examines the mortality of 11 mammals, 12 other vertebrates, 10 invertebrates, 12 vascular plants and a green alga. The authors of this ponderous experimental study show an evident discrepancy between the empirical data and the bathtub model. Ascher [1968] claims that "the bathtub curve is merely a statement of apparent plausibility which has never been validated". More recently Zairi [1991] and Klutke with others [2003] share skeptical judgments.
We observe, first, that the hazard rate has not yet been determined in a deductive and general manner. Second, it would be desirable that the reliability domain evolve into an exact science. In our opinion we should proceed with the method inaugurated by Gnedenko and complete inference (4). We started a project that pursues these objectives and published partial results during the last decade. The present paper brings together various results in order to provide a comprehensive view of this research and highlight its deductive pathway.
2. A LESSON FROM THERMODYNAMICS
The project has used some mathematical tools shared in literature and in addition has prepared a new mathematical tool that we briefly present in this section.
The second law of thermodynamics claims that the entropy of an isolated system increases as the system goes forward in time. This entails - in a way - that physical objects have an inherent tendency towards disorder and a general predisposition towards decay. Such a wide-spreading process of annihilation suggests to us an intriguing parallel with the decadence of biological and artificial systems. The issues of reliability theory are not far from some issues inquired into by thermodynamics, and this closeness suggests introducing the entropy function for the study of reliable/repairable systems.
First we detail the Markovian model introduced by Gnedenko and assume that the continuous stochastic system S has m mutually exclusive states:
S = (A1 OR A2 OR ... OR Am), m > 0. (5)
Each state is equipped with a set of sub-states or components or parts which work together toward the same purpose. Formally, the generic state Ai (i = 1, 2, ... m) has this form:
Ai = (Ai1 AND Ai2 AND ... AND Ain), n > 0. (6)
We consider that the states of the stochastic system S can be more or less reversible, and we calculate the reversibility property using the Boltzmann-like entropy Hi, where Pi is the probability of the generic state Ai:
Hi = H(Ai) = ln(Pi) (7)
The proof is in [Rocchi 2000]. We confine our attention to:
- The functioning state Af and the reliability entropy Hf;
- The recovery state Ar and the recovery entropy Hr.
The meanings of Hf and Hr can be described as follows. When the functioning state is irreversible, the system S works steadily. In particular, the more irreversible Af is, the higher Hf is and the more S is capable of working and reliable. On the other hand, when Hf is low, S often abandons Af in the physical reality: the system switches to the recovery state since S fails and is unreliable.
The recovery entropy calculates the irreversibility of the recovery state; this implies that the higher Hr is, the more stable Ar is: in practice S is hard to repair and/or cure in the real world. In summary, the reliability entropy Hf expresses the aptitude of S to work or to live without failures; the recovery entropy Hr illustrates the disposition of S toward reparation or restoration to health.
As an application, suppose a and b are two devices in series with probabilities of good functioning Pf(a) = 10^-200 and Pf(b) = 10^-150. We can calculate the probability of good functioning of the overall system S and then its capability of good working with the entropy:
Pf(S) = Pf(a) · Pf(b) = 10^-350 (8)
Hf(S) = ln[Pf(S)] = ln(10^-350) = -805.9 (9)
The Boltzmann-like entropy is additive, and one can proceed this way with the same result:
Hf(S) = Hf(a) + Hf(b) = ln[Pf(a)] + ln[Pf(b)] = ln[Pf(a) · Pf(b)] = ln(10^-200 · 10^-150) = ln(10^-350) = -805.9 (10)
As a second case, suppose a device degrades during the interval (t1, t2), and the probabilities of good functioning are the following: Pf(t1) = 10^-10, Pf(t2) = 10^-200. The entropies Hf(t1) = ln(10^-10) = -23.0 and Hf(t2) = -460.5 qualify the irreversibility of the device, and one obtains how much the capability of good functioning has declined:
ΔHf = Hf(t2) - Hf(t1) = -460.5 - (-23.0) = -437.5 (11)
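The figures in eqns (8)-(11) can be checked with a few lines of Python. Note that the product 10^-200 · 10^-150 underflows in double precision, so the additivity of eqn (10) is also the practical route to Hf(S):

    import math

    Pa, Pb = 1e-200, 1e-150
    # Eqns (8)-(10): Pa * Pb = 1e-350 underflows to zero in double
    # precision, hence Hf(S) is computed as the sum of the entropies
    Hf_S = math.log(Pa) + math.log(Pb)
    print(round(Hf_S, 1))         # -805.9

    # Eqn (11): decline of the capability of good functioning
    H_t1 = math.log(1e-10)        # -23.0
    H_t2 = math.log(1e-200)       # -460.5
    print(round(H_t2 - H_t1, 1))  # -437.5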
3. BASIC ASSUMPTION ON INTRINSIC FACTORS
Real events are manifold in the sense that mechanical, electrical, thermal, chemical and other intrinsic effects interfere with S. The generic component Afg (g = 1, 2, ... n) involves a series of collateral physical mechanisms that run in parallel with Afg and change it with time. These parallel interferences damage Afg as time passes and at last impede the correct functioning of Afg. Thus we can establish a general property for the system components:
The sub-state Afg degenerates as time goes by. (12)
For example, Carnot defines a model for the heat engine that includes two bodies at temperatures T1 and T2 (T1 ≠ T2), and the gas Afg that does the mechanical work via cycles of contractions and expansions. The mounting disorder of the molecules - qualified by the thermodynamic entropy - results in the decreasing performance of Afg. Several unwanted side effects - e.g. the attrition amongst the gears and the heat dispersion - impact the components and progressively harm the effectiveness of the heat engine.
4. SIMPLE DEGENERATION OF SYSTEMS DUE TO INTRINSIC FACTORS
Hypothesis (12) is written in words, hence it is necessary to translate (12) into mathematical terms. We establish the regular degeneration of Afg by means of the reliability entropy Hfg, which decreases linearly as time goes by:
Hfg(t) = -c t, c > 0. (13)
From hypothesis (13) one can prove that the probability of good functioning Pf - that is to say the probability that Af runs until the first failure - follows the exponential law with constant hazard rate:
Pf = Pf(t) = e^(-ct) = e^(-λt). (14)
where the hazard function is constant:
λ = c. (15)
The proof may be found in [Rocchi 2005].
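A quick numerical sanity check of the step from (13) to (14)-(15) follows; the value of c is a purely illustrative assumption:

    import numpy as np

    c = 0.3  # illustrative degeneration constant, an assumption
    t = np.linspace(0.0, 10.0, 1001)
    P = np.exp(-c * t)               # eqn (14): H(t) = ln P(t) = -c*t
    hazard = -np.gradient(P, t) / P  # eqn (3)
    print(np.allclose(hazard[1:-1], c, rtol=1e-4))  # True: lambda = c, eqn (15)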
5. EXTREME SHOCKS
Besides the side effects, which are intrinsic factors of failure, there are extreme shocks coming from outside that are able to stop S. Intrinsic effects work in such a manner as could not be otherwise; extreme shocks instead are not certain to happen. Several long-living systems never run
into this kind of external attack. In consequence of this irregular conduct, modern authors assume that extreme shocks have random magnitude and occur without any order [Gut & Husler 1999]; in particular they divide the system lifetime into regular intervals with the following constraints (a simulation sketch follows the list):
a) A shock can occur at most once per time interval,
b) A shock occurs with the same probability in each time interval,
c) The shocks occur independently of one another across the time intervals.
(16)
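The sketch below simulates constraints (16) with illustrative values of n (number of intervals) and p (per-interval shock probability): the shock count per lifetime is binomial(n, p) and, for many intervals with small p, it approaches the Poisson law derived in the next section.

    import math
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 1000, 0.002  # assumptions: n*p = 2 expected shocks per lifetime
    counts = rng.binomial(n, p, size=200_000)  # one draw = one system lifetime
    for k in range(4):
        empirical = float(np.mean(counts == k))
        poisson = math.exp(-n * p) * (n * p) ** k / math.factorial(k)
        print(k, round(empirical, 3), round(poisson, 3))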
6. SIMPLE DEGENERATION OF SYSTEMS DUE TO EXTRINSIC FACTORS
Contemporary authors demonstrate that under assumptions (16) extreme shocks can be modeled by the Poisson distribution. In particular, supposing that N(t) independent random shock loads occur in the time interval (0, t), they obtain that the probability of k shocks occurring in the interval (0, t) is:
P[N(t) = k] = ((λt)^k / k!) e^(-λt), k = 0, 1, 2, ...; λ > 0. (17)
where λ is the parameter typical of the distribution. From (17) one can derive the probability of good functioning until the first failure, which is an exponential of time:
Pf = Pf(t) = e^(-λt). (18)
where the hazard function λ is constant. The proof may be found in [O'Connor & Kleyner 2011].
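The step from (17) to (18) amounts to observing that Pf(t) = P[N(t) = 0] = e^(-λt). A short simulation makes the point: drawing per-interval Bernoulli shocks as in (16), with an illustrative rate λ = p/dt, the empirical survival probability approaches eqn (18).

    import numpy as np

    rng = np.random.default_rng(0)
    dt, p = 0.01, 0.005  # assumptions: interval width and shock probability
    lam = p / dt         # = 0.5
    # geometric number of intervals until the first shock -> failure time
    failure_time = rng.geometric(p, size=200_000) * dt
    for t in (1.0, 2.0, 4.0):
        print(t,
              round(float(np.mean(failure_time > t)), 3),  # empirical Pf(t)
              round(float(np.exp(-lam * t)), 3))           # eqn (18)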
Eqn (18) - well known in the current literature - perfectly matches (14) and (15): two results deriving from distinct and far different assumptions lead to the same rule. Physical phenomena that have intrinsic and extrinsic origins and present differing behaviors can be treated by identical mathematical tools.
7. COMPLEX DEGENERATION OF SYSTEMS
When assumption (13) comes true over a certain period of time, the components Af1, Af2, ..., Afn worsen to the extent that they set up a cascade effect [Rocchi 2006]. In a cascade effect the generic part Afg spoils one or more close components while the system proceeds to run. A cascade effect can be linear or otherwise compound. In the first stage we assume that the component Afg harms the close part Afk, which in turn damages the subsequent part, and so on:
The cascade effect is linear. (19)
The linear cascade effect occurs while principle (13) still holds of necessity, and one can prove that the probability of good functioning is the exponential-power function:
Pf = Pf(t) = b e^(-t^a), a, b > 1. (20)
The hazard function is a power of time:
λ(t) = a t^(a-1). (21)
In the second stage we suppose that the component Afg damages the components all around it:
The cascade effect is compound. (22)
This hypothesis - alternative to the linear cascade effect - yields that the probability of good functioning is the exponential-exponential function:
Pf = Pf(t) = g e^(-d e^t), g, d > 1. (23)
And the hazard rate is exponential of time:
λ(t) = d e^t. (24)
The proofs of equations (20) and (23) may be found in [Rocchi 2005, 2006].
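As a numerical cross-check of (20)-(21) and (23)-(24), the hazard rate can be recovered from each survival law through eqn (3); the parameter values below are illustrative assumptions:

    import numpy as np

    t = np.linspace(0.1, 3.0, 2901)

    a, b = 2.0, 1.5
    P_pow = b * np.exp(-t ** a)  # eqn (20): exponential-power law
    lam_pow = -np.gradient(P_pow, t) / P_pow
    print(np.allclose(lam_pow[1:-1], a * t[1:-1] ** (a - 1), rtol=1e-3))  # eqn (21)

    g, d = 1.5, 1.1
    P_exp = g * np.exp(-d * np.exp(t))  # eqn (23): exponential-exponential law
    lam_exp = -np.gradient(P_exp, t) / P_exp
    print(np.allclose(lam_exp[1:-1], d * np.exp(t[1:-1]), rtol=1e-3))     # eqn (24)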
8. CONCLUSIVE REMARKS
The closing comments are subdivided into four points.
(A) Gnedenko derives the general exponential function (2) under the assumption that S is a Markov chain [Gnedenko et al. 1965]. We specify the shape and behavior of this model using various narrow hypotheses, such as eqns. (5) and (6), which depict special Markov chains. The present paper develops the ensuing logical inferences that determine various trends of the hazard rate:
Regular degeneration of system's components ⇒ Exponential Function
Random extreme shocks ⇒ Exponential Function
Regular degeneration + linear cascade effect ⇒ Exponential-Power Function
Regular degeneration + compound cascade effect ⇒ Exponential-Exponential Function
(25)
It is worth underlining that the mathematical results (14), (18), (20) and (23) are special cases of (2). Gnedenko's seminal work and the present study are consistent despite the different mathematical techniques in use.
(B) The present approach adopts deductive logic, which relates eqns. (15), (21) and (24) to precise causes and not to precise periods of the system lifetime. In other words, the function λ(t) can be constant, or follow the power or the exponential trend, in any interval of the system life. Each result in (25) derives from a precise hypothesis that may come true during the system's juvenile period, maturity and senescence alike.
(C) Authors recognize that sometimes the organs of appliances and biological beings degenerate at a constant rate - in accordance with (15) - during middle age. Several machines have linear structures, and their probability of good functioning during ageing follows the Weibull distribution, which corresponds to equation (20). The bodies of animals and humans are rather intricate, and during ageing equation (23) comes true, in agreement with the Gompertz distribution. In conclusion, on the one hand the present frame does not hold that the bathtub curve is the standard form of λ(t), in accordance with the empirical evidence. On the other hand, the theoretical results obtained here do not exclude that a special system can follow the bathtub curve: the bathtub curve is a concept that may be used for describing a very special system.
(D) The Boltzmann entropy plays a fundamental role on the theoretical plane, as it clarifies why systems follow the second law of thermodynamics; nonetheless it is not so common in engineering calculations. The Boltzmann-like entropy has the same virtues and limits as the Boltzmann entropy. It helps us pass from studying "how" a system declines to studying "why" a system declines, though the Boltzmann-like entropy is not so manageable in applications. It is our opinion that the Boltzmann-like entropy sustains a promising approach for developing a deductive construction integrating mathematical methods with engineering notions and specific biological knowledge.
REFERENCES
Ascher, H. (1968) Evaluation of repairable system reliability using the "bad-as-old" concept. IEEE Trans. Reliab., R-17, 103-110.
Gnedenko, B.V.; Belyaev, Yu.K.; Solovyev, A.D. (1965) Mathematical Methods in Reliability Theory; (in Russian) Nauka: Moscow, Russia.
Gut, A.; Hüsler, J. (1999) Extreme shock models. Extremes, 2, 295-307.
Jensen, F. (1996) Electronic Component Reliability; John Wiley & Sons: New York, NY, USA.
Klutke, G.A.; Kiessler, P.C.; Wortman, M.A. (2003) A critical look at the bathtub curve. IEEE Trans. Reliab., 52(1), 125-129.
O'Connor, P.; Kleyner, A. (2011) Practical Reliability Engineering; John Wiley & Sons.
Owen, R.J.; Scheuerlein, A.; Salguero-Gómez, R.; Camarda, C.G.; Schaible, R.; Casper, B.B.; Dahlgren, J.P.; Ehrlén, J.; García, M.B.; Menges, E.S.; Quintana-Ascencio, P.F.; Caswell, H.; Baudisch, A.; Vaupel, J.W. (2014) Diversity of ageing across the tree of life. Nature, 505, 169-174.
Rocchi, P. (2000) System reliability and reparability. Livre des Actes de la Deuxième Conférence Internationale sur les Méthodes Mathématiques en Fiabilité, Bordeaux, France, 4-7 July 2000; Volume 2, pp. 900-903.
- (2002) Boltzmann-like entropy in reliability theory. Entropy, 4, 142-150.
- (2005) The notion of reversibility and irreversibility at the base of reliability theory. In Proceedings of the Intl. Symp. on Stochastic Models in Reliability, Safety, Security and Logistics, Beer Sheva, Israel, 15-17 February 2005; pp. 287-291.
- (2006) Calculations of system aging through the stochastic entropy. Entropy, 8, 134-142.
Wong, K.L. (1989) The roller-coaster curve is in. Qual. Reliab. Eng. Int., 5, 29-36.
Zairi, M. (1991) Total Quality Management for Engineers; Woodhead, Cambridge, UK.