
Rakowsky Uwe Kay

University of Wuppertal, Germany, and Vossloh Kiepe, Düsseldorf, Germany

Fundamentals of the Dempster-Shafer theory and its applications to system safety and reliability modelling

Keywords

Dempster-Shafer Theory, evidence, uncertainty, expert assessment, Event Tree Analysis, Fault Tree Analysis

Abstract

The Dempster-Shafer Theory is well known for its usefulness in expressing the uncertain judgements of experts. This contribution shows how to apply the calculus to safety and reliability modelling, especially to expert judgement; Failure Modes, Effects, and Criticality Analysis; Event Tree Analysis; Fault Tree Analysis; and Reliability Centred Maintenance. Including a tutorial introduction to the Dempster-Shafer Theory, the differences between the Probability Theory and the Dempster-Shafer Theory are discussed in detail.

1. Introduction

In the middle of the 1960s Arthur P. Dempster developed a theory [1], [2], [3] that includes a kind of "upper and lower probabilities". Later, it turned out that this approach is very useful to express uncertain judgments of experts.

About ten years later, the work of Dempster was extended, refined, recast, and published by Glenn Shafer [18] as a "Mathematical Theory of Evidence". Shafer, e.g., rebuilt the mathematical theory around the Dempster concept and introduced degrees of belief instead of lower probabilities. The Theory of Evidence is also denoted as the Dempster-Shafer Theory (DST) or the Dempster-Shafer Evidential Theory.

The more the Dempster-Shafer Theory was further developed, the more the evidence measures of DST departed from being probabilities, e.g. Klir & Folger [11] revised DST in that sense. As stated by [4] and [20], the advantage of DST is that it allows coping with absence of preference, due to limitations of the available information, which results in indeterminacy.

2. Fundamentals

The DST became known to the safety and reliability community in the early 1990s, refer e.g. Guth [7]. The reliability-oriented approach to DST as presented here is based on a scenario that contains the system with all hypotheses, pieces of evidence and data sources.

The hypotheses represent all the possible states (e.g. faults) of the system considered. It is required that all hypotheses are elements (singletons) of the frame of discernment, which is given by the finite universal set Ω. The set of all subsets of Ω is its power set 2^Ω. A subset of 2^Ω may consist of a single hypothesis or of a conjunction of hypotheses. Moreover, it is required that all hypotheses are unique, non-overlapping, and mutually exclusive.
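To make the set machinery concrete, the frame of discernment and its power set can be sketched in a few lines of Python; the three singleton hypotheses used here are purely illustrative placeholders, not taken from the scenario below:

```python
from itertools import chain, combinations

def power_set(frame):
    """All subsets of the frame of discernment, represented as frozensets."""
    items = sorted(frame)
    return [frozenset(c) for c in chain.from_iterable(
        combinations(items, r) for r in range(len(items) + 1))]

# Illustrative frame with three mutually exclusive hypotheses.
omega = frozenset({"h1", "h2", "h3"})
subsets = power_set(omega)
print(len(subsets))  # 8: the empty set, three singletons, three pairs, and omega
```

Representing subsets as frozensets makes the later subset, superset, and intersection tests one-liners.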

In this context pieces of evidence are symptoms or events (e.g. failures) that occurred or may occur within a system. One piece of evidence is related to a single hypothesis or a set of hypotheses. It is not allowed that different pieces of evidence lead to the same hypothesis or set of hypotheses.

The qualitative relation between a piece of evidence and a hypothesis corresponds to a cause-consequence chain: A piece of evidence implies a hypothesis or a set of hypotheses, respectively. The strength of an evidence-hypothesis assignment, and thereby the strength of this implication, is quantified by a statement of a data source.

Data sources are persons, organisations, or any other entities that provide information for a scenario. In safety and reliability engineering, data sources are usually the results of empirical studies or they are experts, who give subjective quantifiable statements. As required by O'Neill [13], data sources have to be representative (e.g. studies) or as free from bias as possible (e.g. experts).

Some misunderstandings in the interpretation of the plot of hypotheses have to be cleared up. From an objective point of view, which might, e.g., be located outside the system (e.g. an observer), exactly one single hypothesis is true; from the subjective point of view of a data source (e.g. an expert or operator), it might be uncertain which hypothesis fits reality best. Therefore, DST makes it possible to model several

• single pieces of evidence within single hypothesis relations or

• single pieces of evidence within multi hypotheses relations

as uncertain assessments of a system in which exactly one hypothesis is objectively true. Both points of view, the objective and the subjective, have to be distinguished clearly. The DST calculus describes the subjective viewpoint as an assessment for an unknown objective fact.

By means of a data source, a mapping

m: 2^Ω → [0, 1] (1)

assigns an evidential weight to a set A ⊆ Ω, which contains a single hypothesis or a set of hypotheses. This is the most significant difference to the Probability Theory: the DST mapping is defined on subsets, whereas probabilities are given by a mapping Ω → [0, 1]. Each A that holds m(A) > 0 is called a focal element. The function m is called a basic assignment and fulfils

Σ_{A ∈ 2^Ω} m(A) = 1 . (2)

This equation means that all statements of a single data source have to be normalised, to ensure that the evidence presented by each data source carries equal weight, i.e. no data source is more important than another. For the "sake of simplicity" (Klir & Folger [11]), it is assumed that

m(∅) = 0 ; (3)

however, this property requires an appropriate choice of the universal set Ω. That means the set Ω has to be complete and contain all possible hypotheses of the scenario considered.

In some publications m is called the basic probability assignment, refer Shafer [18], which misleadingly suggests that m(A) is a probability. Further denotations are the basic belief assignment [19], the belief structure [21], [4], or the mass assignment function [14].

A clear distinction has to be made between probabilities and basic assignments: probability distribution functions are defined on Ω, basic assignment functions on the power set 2^Ω. In addition, m has three further properties that distinguish it from a probability function, refer Klir & Folger [11]: It is not required

• that m(Ω) = 1,

• that m(A) ≤ m(B) if A ⊆ B, or

• that there is a relationship between m(A) and m(¬A).

Therefore, it seems useful to avoid the terms probability and belief (which is defined next) in the denotation of m.

By applying the basic assignment function, several evidential functions can be created. A belief measure is given by the function bel: 2^Ω → [0, 1]. There is

bel(A) = Σ_{B ⊆ A} m(B) . (4)

The counterpart of bel is the plausibility measure pl: 2^Ω → [0, 1] with

pl(A) = Σ_{B ∩ A ≠ ∅} m(B) . (5)

The measure pl(A) shall not be understood as the complement of bel(A). Only

{A ⊆ Ω | m(A) > 0} ≠ ∅ → bel(A) ≤ pl(A) (6)

has to be fulfilled. In addition to bel and pl, a third evidential function can be defined. Shafer [18] introduced the commonality measure cmn: 2^Ω → [0, 1] with

cmn(A) = Σ_{B ⊇ A} m(B) . (7)

Figure 1 shows a graphical representation of the above-defined measures belief and plausibility. The difference pl(A) - bel(A) describes the evidential interval range, which represents the uncertainty concerning the set A.

Figure 1. Measures of belief and plausibility and their complements for a given bel(A) ≤ pl(A): on a scale from 0 to 1, belief bel(A) is complemented by doubt 1 - bel(A), and plausibility pl(A) is complemented by disbelief 1 - pl(A). The evidential or uncertainty interval, respectively, between bel(A) and pl(A) is shaded grey.

The complements of the measures belief and plausibility are doubt and disbelief, respectively. Although Shafer ([18], page 43) defines doubt as a complement to the measure plausibility, it seems to make more sense to distinguish between doubt and disbelief in the way given above, because DST does not require a causal relationship between a hypothesis and its negation. As Flack [6] emphasises, lack of belief does not imply disbelief. The disbelief of a set A is the belief in its complement. There is

bel(¬A) = 1 - pl(A) , pl(¬A) = 1 - bel(A) (8)

with

bel(¬A) ≤ pl(¬A) . (9)

The difference pl(A) - bel(A) describes the uncertainty concerning the hypothesis A represented by the evidential interval, see Figure 1.
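Equations (4), (5), and (7) translate almost verbatim into code. The following Python sketch, with an invented two-hypothesis basic assignment, computes belief, plausibility, and commonality for subsets represented as frozensets:

```python
def bel(m, A):
    """Belief, eq. (4): mass of all non-empty subsets B of A."""
    return sum(v for B, v in m.items() if B and B <= A)

def pl(m, A):
    """Plausibility, eq. (5): mass of all B with a non-empty intersection with A."""
    return sum(v for B, v in m.items() if B & A)

def cmn(m, A):
    """Commonality, eq. (7): mass of all supersets B of A."""
    return sum(v for B, v in m.items() if B >= A)

# Invented basic assignment over the frame {h1, h2}; the masses sum to 1.
m = {frozenset({"h1"}): 0.5,
     frozenset({"h1", "h2"}): 0.5}

A = frozenset({"h1"})
print(bel(m, A), pl(m, A), cmn(m, A))  # 0.5 1.0 1.0
```

Note that pl(A) also counts the mass of {h1, h2}, which merely overlaps A, while bel(A) does not; the gap pl(A) - bel(A) is exactly the evidential interval of Figure 1.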

2.1. Interpretations on Evidence Measures

Some helpful and interesting interpretations of the evidence measures are given in the literature and cited here.

Basic Assignment

• The measure m(A) assigns an evidential weight to the set A, refer Flack [6].

• The measure m(A) is the degree of evidence that the element in question belongs exactly to the set A, refer Klir & Folger [11].

• The measure m(A) is the degree of evidence supporting the claim that a specific element of W belongs to the set A, but not to any special subset of A, refer Klir & Folger [11].

• The quantity m(A) is the degree of belief that the above specified claim is warranted, refer Klir & Folger [11].

Belief

• The measure bel(A) is the degree of evidence that the element in question belongs to the set A as well as to the various special subsets of A, refer Klir & Folger [11].

• The measure bel(A) can be interpreted as the total amount of justified support given to A, refer De-noeux [4].

Plausibility

• The quantity pl(A) is the degree of evidence that the element in question belongs to the set A or to any of its subsets [or to any set that overlaps with A], refer Klir & Folger [11].

• The quantity pl(A) can be interpreted as the maximum amount of specific support that could be given to A, if justified by additional information, refer Smets [19].

2.2. Bayesian Statistical Modelling versus Dempster-Shafer Theory

Flack [6] describes the differences between Bayesian statistical modelling and the Dempster-Shafer Theory as a difference in concepts. A Bayesian model describes a Boolean type of phenomenon, which either exists or does not exist. Little belief in the existence of a phenomenon implies a strong belief in its non-existence. This implication does not necessarily hold for DST. Here, no causal relationship is required between belief in existence and belief in non-existence. For example, a statement concerning the failure probability of an item also implies a statement about its counterpart, the reliability of the same item. DST does not require this sub-proposition, and that adds new aspects and possibilities to reliability modelling.

As stated by Ferson et al. [5], the Dempster-Shafer Theory has been widely studied in computer science and artificial intelligence, but has never achieved complete acceptance among probabilists and traditional statisticians. (By this, the question arises if any other theory than the Probability Theory would ever be accepted by probabilists or traditional statisticians.)

However, some disadvantages of the Probability Theory carry over to the DST, and these should also be stated here. There are three undesired main properties, as listed by [4]:

• Lack of introspection or assessment strategies: The main criticism of Bayesian statistical modelling is its unreasonable requirement for precision. However, the necessity in DST applications to assign precise numbers to each subset A ⊆ Ω by the basic assignment m is constraining in the same way. Precise degrees of the desired measures may exist, but it may be too difficult to determine them with the necessary precision.

• Instability: Underlying beliefs may be unstable. Estimated beliefs may be influenced by the conditions of their estimation.

• Ambiguity: Ambiguous or imprecise judgements cannot be expressed by the evidence measures.

Additional statements on disadvantages of the Dempster-Shafer Theory are:

• DST lists all the hypotheses in the frame of discernment Ω, which resembles the fault space in the Bayesian Theory. However, given k hypotheses, the power set can consist of up to 2^k elements, representing all possible subsets of Ω. This leads to a problem similar to the one encountered in the Bayesian Theory, except that it is worse, because human experts have to estimate a larger number of belief values than in the Bayesian theory [12].

• Another caveat concerning the applicability of DST is that it does not offer a procedure for the implementation of a diagnostic system [10].

2.3. Dempster-Shafer Rule of Combination

Dempster [2], [3], followed by Shafer [18], suggested a rule of combination which allows basic assignments to be combined. There is

m(Z) = ( Σ_{A ∩ B = Z} m(A) · m(B) ) / ( 1 - Σ_{A ∩ B = ∅} m(A) · m(B) ) , (10)

with A, B, Z ⊆ Ω. Verbally: the numerator represents the accumulated evidence from the sets A and B which supports the hypothesis Z, and the sum in the denominator quantifies the amount of conflict between the two sets. Depending on the application, the denominator of the equivalent form

m(Z) = ( Σ_{A ∩ B = Z} m(A) · m(B) ) / ( Σ_{A ∩ B ≠ ∅} m(A) · m(B) ) (11)

is easier to apply.

3. Illustration

The following illustration is strictly step-wise structured and gives an easy-to-understand introduction to the calculus of the Dempster-Shafer Theory.

The given scenario discusses a typical situation in a power plant. The operators at the control panel detect serious changes of the system properties. Some failures are detectable; however, their consequences or the system fault, respectively, can neither be determined exactly nor interpreted with certainty. (This situation is widely discussed, e.g., in the ATHEANA Report [14] and by Hollnagel [8].) To avoid an error-forcing context, pieces of evidence are collected, hypotheses are postulated, and conclusions are drawn on this basis. Therefore, a Dempster-Shafer approach is applied to support the operators in decision making.

Step 1 - Creating the Scenario

The scenario consists of a power plant (the system considered), two operators (data sources, denoted by the index l), the failures detected (pieces of evidence), and the system fault states (set of hypotheses). As described above, the pieces of evidence correspond to failures or causes and the hypotheses to faults or consequences.

The faults can be narrowed down to at most three precisely defined hypotheses represented by the set Ω with

Ω = {h1, h2, h3} . (12)

With that, the frame of discernment of this context is given. It should be noted that Ω is postulated by the operators based on their subjective points of view, assuming that Ω is complete. The corresponding power set of Ω is

2^Ω = {∅, {h1}, {h2}, {h3}, {h1, h2}, {h1, h3}, {h2, h3}, Ω} . (13)

The first operator mainly states that the faults h1 or h2 are the reason for the problems. For example, the failure ev3 might have occurred and resulted in the consequences h1 or h2. The assignments of the second operator are slightly different. Here, the focus is on the faults h1 or h3. Both operators give their statements on the four pieces of evidence found. The complete survey of the qualitative failure-fault(s) assignments is given in Table 1.

Table 1. Qualitative failure-fault(s) assignments given by the operators involved

Operator | Failure | Fault(s)
1st      | ev1     | h1
         | ev2     | h2
         | ev3     | h1, h2
         | ev4     | h1, h2, h3
2nd      | ev1     | h1
         | ev2     | h3
         | ev3     | h1, h3
         | ev4     | h1, h2, h3

Please note that, contrary to the Fault Tree Analysis (FTA), DST does not allow more than one failure to lead to the same fault (hypothesis). However, different failures may have different sets of consequences, which may contain the same hypotheses as elements; e.g., ev1, ev3, and ev4 in Table 1 lead to hypotheses which all contain h1 as a fault. Again, DST allows modelling several

• single-failure-single-fault relations and

• single-failure-multi-fault relations

as an uncertain assessment of a system which can take exactly one state at a time. Generally, DST places more emphasis on the hypotheses (faults) than on the pieces of evidence (failures), which are of minor interest in the next steps. As described in Section 2, the system is in exactly one state of Ω. In other words, exactly one hypothesis of {h1, h2, h3} is true for the given scenario and situation if the system were observed from an objective point of view. Subjectively, the operators are not sure in which state the system actually is.

Step 2 - Quantification of Statements

At this step, both operators quantify their statements as given in Table 2. The sets of hypotheses Ak are assigned to the first operator, Bk to the second operator. For example, the second operator claims that the consequences h1 or h3 may have occurred with a basic assignment of 0.4. (Formulating this sentence verbally, it is rather difficult to avoid mentioning "probability". Again, basic assignments are not probabilities, see equation (1).) Non-specified statements are assigned 0 and are not focal elements.

The subjective quantifications of the operators are based on their system experiences and mostly on their "engineering feelings". Certainly, these quantifications are imprecise.

Table 2. Quantitative statements given by the operators involved (outer columns). The inner column contains all subsets of the power set 2^Ω.

1st operator | 2^Ω            | 2nd operator
m(A1) = 0.2  | {h1}           | m(B1) = 0.2
m(A2) = 0.1  | {h2}           | m(B2) = 0
m(A3) = 0    | {h3}           | m(B3) = 0.2
m(A4) = 0.6  | {h1 ∪ h2}      | m(B4) = 0
m(A5) = 0    | {h1 ∪ h3}      | m(B5) = 0.4
m(A6) = 0    | {h2 ∪ h3}      | m(B6) = 0
m(A7) = 0.1  | {h1 ∪ h2 ∪ h3} | m(B7) = 0.2

Based on the basic assignments given by both operators, the belief and doubt, commonality, plausibility, and disbelief measures can be calculated. For example, the belief in the set of hypotheses {h1 ∪ h2} is the sum of its own basic assignment and those of all of its subsets,

{h1}, {h2}, {h1 ∪ h2} ⊆ {h1 ∪ h2} , (14)

see equation (4). For the fourth statement of the first operator there is

bel(A4) = m(A1) + m(A2) + m(A4) = 0.9 , (15)

with the corresponding doubt measure

1 - bel(A4) = 0.1 . (16)

The commonality takes every statement into account which includes the discussed statement completely. There is for A4

{h1 ∪ h2}, {h1 ∪ h2 ∪ h3} ⊇ {h1 ∪ h2} , (17)

cmn(A4) = m(A4) + m(A7) = 0.7 . (18)

The plausibility includes the basic assignments of all statements which have at least one hypothesis in common with the discussed statement. Concerning A4, there is

{h1}, {h2}, {h1 ∪ h2}, {h1 ∪ h3}, {h2 ∪ h3}, {h1 ∪ h2 ∪ h3} ∩ {h1 ∪ h2} ≠ ∅ , (19)

which results in the plausibility

pl(A4) = m(A1) + m(A2) + m(A4) + m(A5) + m(A6) + m(A7) = 1 (20)

and finally in no disbelief at all:

1 - pl(A4) = 0 . (21)

Table 3 shows the results for belief and plausibility of all statements (k = 1, ..., 7).

Table 3. This table corresponds to Table 2 and shows the values of the basic assignments, belief, and plausibility for each statement and operator (first operator on the left side, second on the right side).

m(Ak) | bel(Ak) | pl(Ak) | 2^Ω            | m(Bk) | bel(Bk) | pl(Bk)
0.2   | 0.2     | 0.9    | {h1}           | 0.2   | 0.2     | 0.8
0.1   | 0.1     | 0.8    | {h2}           | 0     | 0       | 0.2
0     | 0       | 0.1    | {h3}           | 0.2   | 0.2     | 0.8
0.6   | 0.9     | 1      | {h1 ∪ h2}      | 0     | 0.2     | 0.8
0     | 0.2     | 0.9    | {h1 ∪ h3}      | 0.4   | 0.8     | 1
0     | 0.1     | 0.8    | {h2 ∪ h3}      | 0     | 0.2     | 0.8
0.1   | 1       | 1      | Ω              | 0.2   | 1       | 1
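The bel and pl columns of Table 3 follow mechanically from the basic assignments of Table 2; a short Python sketch for the first operator's statements:

```python
def bel(m, A):
    # eq. (4): sum the masses of all non-empty subsets of A
    return sum(v for B, v in m.items() if B and B <= A)

def pl(m, A):
    # eq. (5): sum the masses of all sets intersecting A
    return sum(v for B, v in m.items() if B & A)

h1, h2, h3 = frozenset({"h1"}), frozenset({"h2"}), frozenset({"h3"})

# Basic assignments of the first operator, as listed in Table 2.
m_A = {h1: 0.2, h2: 0.1, h1 | h2: 0.6, h1 | h2 | h3: 0.1}

A4 = h1 | h2
print(round(bel(m_A, A4), 4))  # 0.9, equation (15)
print(round(pl(m_A, A4), 4))   # 1.0, equation (20)
```

Running the same two functions over every row reproduces the full left half of Table 3.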

Step 3 - Combining Hypotheses

The third step combines each hypothesis or set of hypotheses, respectively, from one data source (operator) with one from the other source and builds the cut set of both, see Table 4. Depending on the quantifications given at Step 2, some of the cut sets may not be focal elements. Actually, this step was included for illustrative purposes and can be combined with Step 5.

Table 4. The Combination Table contains the full plot of hypotheses cut sets of A and B

∩   | A1  | A2  | A3  | A4       | A5       | A6       | A7
B1  | h1  | ∅   | ∅   | h1       | h1       | ∅        | h1
B2  | ∅   | h2  | ∅   | h2       | ∅        | h2       | h2
B3  | ∅   | ∅   | h3  | ∅        | h3       | h3       | h3
B4  | h1  | h2  | ∅   | h1 ∪ h2  | h1       | h2       | h1 ∪ h2
B5  | h1  | ∅   | h3  | h1       | h1 ∪ h3  | h3       | h1 ∪ h3
B6  | ∅   | h2  | h3  | h2       | h3       | h2 ∪ h3  | h2 ∪ h3
B7  | h1  | h2  | h3  | h1 ∪ h2  | h1 ∪ h3  | h2 ∪ h3  | Ω

Step 4 - Reducing the Combination Table

To avoid unnecessary mathematical effort, those columns and rows of the Combination Table 4 are dropped which are related to non-focal elements (non-specified statements with m(Ak) = 0 or m(Bk) = 0). In this context, columns A3, A5, A6 and rows B2, B4, B6 are not applicable. Table 5 shows the reduced plot containing the combinations of focal elements exclusively.

Table 5. The reduced Combination Table

∩   | A1  | A2  | A4       | A7
B1  | h1  | ∅   | h1       | h1
B3  | ∅   | ∅   | ∅        | h3
B5  | h1  | ∅   | h1       | h1 ∪ h3
B7  | h1  | h2  | h1 ∪ h2  | Ω

Step 5 - Calculating Products and Sums of Combined Basic Assignments

At this step, the products of the related basic assignments are calculated for the non-empty cut sets. Products of basic assignments corresponding to the same cut set have to be added. For {h1} this yields

Z1 = A1 ∩ B1 = {h1} → m(Z1) = m(A1) · m(B1) = 0.04 , (22)

Z2 = A1 ∩ B5 = {h1} → m(Z2) = m(A1) · m(B5) = 0.08 , (23)

Z3 = A1 ∩ B7 = {h1} → m(Z3) = m(A1) · m(B7) = 0.04 , (24)

Z4 = A4 ∩ B1 = {h1} → m(Z4) = m(A4) · m(B1) = 0.12 , (25)

Z5 = A4 ∩ B5 = {h1} → m(Z5) = m(A4) · m(B5) = 0.24 , (26)

Z6 = A7 ∩ B1 = {h1} → m(Z6) = m(A7) · m(B1) = 0.02 , (27)

with the sum

Σ_{k=1}^{6} m(Zk) = 0.54 . (28)

Hypotheses h2 or h3 are supported by

Z7 = A2 ∩ B7 = {h2} → m(Z7) = m(A2) · m(B7) = 0.02 , (29)

Z8 = A7 ∩ B3 = {h3} → m(Z8) = m(A7) · m(B3) = 0.02 . (30)

There is for the sets {h1 ∪ h2}, {h1 ∪ h3}, and {h1 ∪ h2 ∪ h3}:

Z9 = A4 ∩ B7 = {h1 ∪ h2} → m(Z9) = m(A4) · m(B7) = 0.12 , (31)

Z10 = A7 ∩ B5 = {h1 ∪ h3} → m(Z10) = m(A7) · m(B5) = 0.04 , (32)

Z11 = A7 ∩ B7 = {h1 ∪ h2 ∪ h3} → m(Z11) = m(A7) · m(B7) = 0.02 . (33)

To illustrate this formal procedure, the results are given in Table 6.

Table 6. This table corresponds directly to Table 5 and represents the products (·) of basic assignments; × means no focal element

·   | A1   | A2   | A4   | A7
B1  | 0.04 | ×    | 0.12 | 0.02
B3  | ×    | ×    | ×    | 0.02
B5  | 0.08 | ×    | 0.24 | 0.04
B7  | 0.04 | 0.02 | 0.12 | 0.02

Step 6 - Combining Basic Assignments

The sum over all combinations calculated in Step 5,

Σ_{k=1}^{11} m(Zk) = 0.76 , (34)

is identical with the denominator of equation (11). With that, the basic assignment of every hypothesis can be calculated. There is

m({h1}) = ( Σ_{k=1}^{6} m(Zk) ) / ( Σ_{k=1}^{11} m(Zk) ) ≈ 0.7105 (35)

for the hypothesis h1, and

m({h2}) = m(Z7) / Σ_{k=1}^{11} m(Zk) ≈ 0.0263 , (36)

m({h3}) = m(Z8) / Σ_{k=1}^{11} m(Zk) ≈ 0.0263 , (37)

m({h1 ∪ h2}) = m(Z9) / Σ_{k=1}^{11} m(Zk) ≈ 0.1579 , (38)

m({h1 ∪ h3}) = m(Z10) / Σ_{k=1}^{11} m(Zk) ≈ 0.0526 , (39)

m({h1 ∪ h2 ∪ h3}) = m(Z11) / Σ_{k=1}^{11} m(Zk) ≈ 0.0263 (40)

for all relevant sets of hypotheses.

Step 7 - Evidence Measures of Combined Hypotheses

The evidence measures of the combined hypotheses are calculated analogously to Step 2, see Table 7. The set {h2 ∪ h3} does not occur because it vanished in Table 5.

Step 8 - Interpretation

Starting at the same low belief value, h3 takes roughly half the range of uncertainty that h2 takes. However, both hypotheses alone should not be considered further due to their low values of belief and plausibility. With about 0.24, the single hypothesis h1 is assigned a wide range of uncertainty. The combination of h1 and h3 covers a smaller range (about 0.18) than h1 alone, and it has a higher plausibility. Finally, the combination of h1 and h2 shows the smallest range of uncertainty (about 0.08) with the same (highest) plausibility as in the case of the combination of h1 and h3.

The conclusion is that a combination of h1 and h2 may be responsible for the serious changes of the system properties. Please note that a probabilistic approach would have blamed h1 alone for being responsible. (And please consider the consequences.) This result clearly shows the differences between both theories, based on their different mappings Ω → [0, 1] versus 2^Ω → [0, 1], see Section 2.

Table 7. The basic assignments and the resulting evidence measures belief, commonality, and plausibility; hypotheses are ranked by their belief measures, all values are rounded.

2^Ω        | m      | bel    | cmn    | pl
Ω          | 0.0263 | 1      | 0.0263 | 1
{h1 ∪ h2}  | 0.1579 | 0.8947 | 0.1842 | 0.9737
{h1 ∪ h3}  | 0.0526 | 0.7895 | 0.0789 | 0.9737
{h1}       | 0.7105 | 0.7105 | 0.9474 | 0.9474
{h2}       | 0.0263 | 0.0263 | 0.2105 | 0.2105
{h3}       | 0.0263 | 0.0263 | 0.1053 | 0.1053
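Steps 3 to 6 together amount to one application of Dempster's rule, equation (11). A compact Python sketch that reproduces the combined basic assignments of Table 7 from the two operators' statements in Table 2:

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule of combination, eq. (11): intersect the focal elements,
    multiply their masses, drop empty cuts, and renormalise by the focal sum."""
    raw = {}
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        Z = A & B
        if Z:
            raw[Z] = raw.get(Z, 0.0) + a * b
    s = sum(raw.values())  # the focal sum, 0.76 in this illustration, eq. (34)
    return {Z: v / s for Z, v in raw.items()}

h1, h2, h3 = frozenset({"h1"}), frozenset({"h2"}), frozenset({"h3"})

m_A = {h1: 0.2, h2: 0.1, h1 | h2: 0.6, h1 | h2 | h3: 0.1}  # 1st operator
m_B = {h1: 0.2, h3: 0.2, h1 | h3: 0.4, h1 | h2 | h3: 0.2}  # 2nd operator

m = combine(m_A, m_B)
print(round(m[h1], 4))       # 0.7105, eq. (35)
print(round(m[h1 | h2], 4))  # 0.1579, eq. (38)
```

Dividing by the focal sum renormalises the non-empty cut sets exactly as in equations (35) to (40).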

4. Applications to System Safety and Reliability Modelling

The introductory descriptions of the Failure Modes, Effects, and Criticality Analysis; the Event Tree Analysis; and the Fault Tree Analysis are taken from the "System Safety Analysis Handbook" written by Stephens & Talso [29] and published by the System Safety Society. The descriptions are shortened and slightly revised. The introductory description of the Reliability Centred Maintenance is taken from [23]. Additionally, the IEC standards are recommended for the application of all listed methods.

4.1. Failure Modes, Effects, and Criticality Analysis

As described by Stephens & Talso [29], the Failure Modes, Effects, and Criticality Analysis (FMECA) tabulates a list of items in a process along with all the possible failure modes for each item. The effect of each failure is evaluated and ranked according to a severity classification. An FMECA includes the following steps:

• Define the worksheet formats and ground rules.

• Give analysis assumptions.

• Identify the lowest indenture level of analysis.

• Code the system description.

• Give failure definitions and evaluations.

The usefulness of the FMECA as a design tool and in the decision-making process depends on the effectiveness with which problem information is communicated for early design attention. Probably the most severe criticism of the FMECA has been its limited use for the improvement of designs, as Stephens & Talso [29] claim. The main causes for this have been the untimeliness and the isolated performance of the FMECA without adequate inputs to the design process. While the objective of an FMECA is to identify all modes of failure within a system design, its first purpose is the early identification of all critical failure probabilities so that they can be eliminated or minimised through design correction at the earliest possible time.

The Dempster-Shafer calculus, as described by the illustration in Section 3, can easily be applied to the FMECA if more than one expert is involved in the quantitative assessments of the criticality and the occurrence of an item failure. The results are narrow or wide ranges of uncertainty, which require an interpretation similar to Step 8 in Section 3.

However, it is an interesting question whether institutions that conduct system homologations would accept these results.

4.2. Event Tree Analysis

As summarised by Stephens & Talso [29], the Event Tree Analysis (ETA) is an analytical tool that can be used to organise, characterise, and quantify potential failures in a methodical manner. An event tree models the sequence of events that results from a single initiating event. The ETA is a bottom-up analysis versus the top-down approach of the Fault Tree Analysis, see Section 4.3. Conducting an ETA starts with the selection of the initiating events, both the desired events and the undesired ones. Thereafter, their consequences are developed through consideration of component, module, and system failure-and-success alternatives, respectively. The identification of initiating events may be based on a review of the system design and operation, the results of another safety analysis, or personal operating experience acquired with a similar system. Then the success and failure of the mitigating systems are postulated and continued through all alternate paths, considering each consequence as a new initiating event. The basic steps for the construction of an event tree include the following:

• List all possible initiating events.

• Identify functional system responses.

• Identify support system responses.

• Group initiating events with all responses.

• Define failure sequences.

• Assign probabilities to each step in the event tree to arrive at total probability of occurrence for each failure sequence.

The method is universally applicable to all kinds of systems, with the limitation that all events must be anticipated to produce meaningful analytical results. Among the methods presented in the "System Safety Analysis Handbook" by Stephens & Talso [29], the Event Tree Analysis is definitely one of the most exhaustive, if it is applied properly. Accordingly, its use also consumes large quantities of resources. Its use, therefore, is well reserved for systems in which risks are regarded as high and well concealed.

As described by Stephens & Talso [29], probabilities are assigned to each step in the event tree. To apply evidence measures instead of probabilities, the following steps are conducted, which lead to the Dempster-Shafer Event Tree Analysis (DS-ETA).

It is assumed that the considered event tree consists of bifurcations only; i.e., any symbol within an event tree has one input and two outputs (the event "failure" or the event "no failure"), see Figure 2.

Figure 2. Event tree with three bifurcations resulting in four sequences r1, ..., r4; A1 denotes "failure" and A2 "no failure".

Every bifurcation i = 1, ..., n of an event tree is considered separately and independently from the n - 1 other bifurcations. The assigned set of hypotheses contains Ω_i = {A1, A2, A3} = {"failure", "no failure", "uncertain"}. Hence, any expert (data source) l has to give three values for the basic assignments m_{l,i}(Ak), k = 1, 2, 3, of any bifurcation i, representing his or her degree of belief that Ak may occur. Obviously, the uncertainty of an expert concerning the event A3 does not appear in the graphical representation. There is for the Expert Assessment Matrix

M_i = ( m_{1,i}(A1)  m_{1,i}(A2)  m_{1,i}(A3) )
      ( ...          ...          ...         )   (41)
      ( m_{L,i}(A1)  m_{L,i}(A2)  m_{L,i}(A3) )

with m_{l,i}(A1) + m_{l,i}(A2) + m_{l,i}(A3) = 1 for each expert l = 1, ..., L.

The Combination Matrix C, combines each hypothesis or set of hypotheses from two experts and builds the cut set of both. Depending on the quantifications given, some of the cut sets (here: the cut sets of "failure" and "no failure") may not be focal elements before or afterwards. To avoid mathematical effort, those columns and rows were dropped, which are related to empty cut sets or non-specified statements (non-focal elements with a 0% basic assignment). In

case of two experts and three possible answers, as presented here, the matrix

Ci =

mu(Ax) • m2j(Al) 0 mXi(A3) • m^(Al)

0 mu ( A2) • m2j ( A2) mu ( A3) • m2j ( A2)

mu ( Ai) • m2i ( A3) mu ( A2) • m2i ( A3) mi,- ( A3) • m2i ( A3)

(42)

represents the combinations of focal elements exclusively assigned to the i-th event within the tree. The focal sum c(i) is the sum of all matrix elements Ci. In this case, it is given by

s(i) = 1 - (m1,i(A1) • m2,i(A2) + m1,i(A2) • m2,i(A1)) ,                        (43)

considering the fact that "failure" (A1) and "no failure" (A2) are mutually exclusive. The combined basic assignments mi(Ak) for "failure", "no failure", and "uncertain" are

mi(A1) = (m1,i(A1) • m2,i(A1) + m1,i(A3) • m2,i(A1) + m1,i(A1) • m2,i(A3)) / s(i) ,    (44)

mi(A2) = (m1,i(A2) • m2,i(A2) + m1,i(A3) • m2,i(A2) + m1,i(A2) • m2,i(A3)) / s(i) ,    (45)

mi(A3) = (m1,i(A3) • m2,i(A3)) / s(i) .                                                (46)

With mi(Ak) as given above, the evidence measures for a "failure" and a "no failure" decision can be calculated by

beli(A1) = mi(A1) ,    pli(A1) = mi(A1) + mi(A3) ;                              (47)

beli(A2) = mi(A2) ,    pli(A2) = mi(A2) + mi(A3) .                              (48)
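The two-expert combination of equations (43) to (48) can be sketched in a few lines; the function names and the sample numbers are illustrative, not from the paper.

```python
def combine_bifurcation(m1, m2):
    """Combine two experts' basic assignments (m(A1), m(A2), m(A3)) for one
    bifurcation, eqs. (43)-(46); A1 ("failure") and A2 ("no failure") are
    mutually exclusive, so their cross products form the conflicting mass."""
    s = 1.0 - (m1[0] * m2[1] + m1[1] * m2[0])                   # focal sum, eq. (43)
    m_A1 = (m1[0] * m2[0] + m1[2] * m2[0] + m1[0] * m2[2]) / s  # eq. (44)
    m_A2 = (m1[1] * m2[1] + m1[2] * m2[1] + m1[1] * m2[2]) / s  # eq. (45)
    m_A3 = (m1[2] * m2[2]) / s                                  # eq. (46)
    return m_A1, m_A2, m_A3

def evidence(m):
    """Belief and plausibility intervals for A1 and A2, eqs. (47)-(48)."""
    return {"A1": (m[0], m[0] + m[2]), "A2": (m[1], m[1] + m[2])}

m_i = combine_bifurcation([0.6, 0.3, 0.1], [0.5, 0.4, 0.1])
assert abs(sum(m_i) - 1.0) < 1e-12  # the combined assignments stay normalised
```

Note that the normalisation by s(i) guarantees the combined triple again sums to one, which is what makes the recursive use in the event tree possible.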

The next step of modelling applies the basic operations of interval arithmetic to the given event tree and assigns the evidential measures as input variables. The addition and multiplication operations are commutative, associative, and sub-distributive. For i ≠ ii there is

[beli(Ak), pli(Ak)] + [belii(Ak), plii(Ak)]
    = [beli(Ak) + belii(Ak), pli(Ak) + plii(Ak)] ,                              (49)

[beli(Ak), pli(Ak)] • [belii(Ak), plii(Ak)]
    = [min(beli(Ak) • belii(Ak), beli(Ak) • plii(Ak), pli(Ak) • belii(Ak), pli(Ak) • plii(Ak)),
       max(beli(Ak) • belii(Ak), beli(Ak) • plii(Ak), pli(Ak) • belii(Ak), pli(Ak) • plii(Ak))] .   (50)

Instead of subtraction, the complements of the evidence measures are applied; this yields

beli(Āk) = 1 - pli(Ak) ,                                                        (51)

pli(Āk) = 1 - beli(Ak) .                                                        (52)

With the inputs and operations given, the evidence of a sequence rj, see Figure 2, can be calculated easily. As described above, the basic operations of interval arithmetic are applied within the DS-ETA. Fortunately, the structure avoids the trouble that the sub-distributivity of subtraction operations may cause, e.g. as known from the Fuzzy Fault Tree Analysis (f-FTA), see [17].
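Equations (49) to (52) amount to elementary interval arithmetic on [bel, pl] pairs; a minimal sketch (function names are my own, not the paper's) could look like this.

```python
def iv_add(a, b):
    """Interval addition of [bel, pl] pairs, eq. (49)."""
    return (a[0] + b[0], a[1] + b[1])

def iv_mul(a, b):
    """Interval multiplication, eq. (50); since all bounds lie in [0, 1],
    the min/max reduce to the products of the lower and of the upper bounds."""
    p = (a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1])
    return (min(p), max(p))

def iv_complement(a):
    """Complement used instead of subtraction, eqs. (51)-(52)."""
    return (1.0 - a[1], 1.0 - a[0])

# evidence of a two-event sequence: interval product of the event intervals
seq = iv_mul((0.6, 0.8), (0.5, 0.7))
assert abs(seq[0] - 0.30) < 1e-12 and abs(seq[1] - 0.56) < 1e-12
```

Avoiding subtraction via the complement is exactly what spares the DS-ETA the sub-distributivity trouble mentioned above.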

4.3. Fault Tree Analysis

The Fault Tree Analysis (FTA) can model the failure of a single event or multiple failures which lead to a single system failure denoted as the top event, refer to Stephens & Talso [29]. However, the FTA is a top-down analysis versus the bottom-up approach of the Event Tree Analysis; i.e., the method identifies an undesirable top event and the contributing elements (down to the so-called basic events) that would precipitate it. The contributors are interconnected with the top event, using network paths through Boolean logic gates. The following basic steps are used to conduct a fault tree analysis:

• Define the top event of interest.

• Define the physical and analytical boundaries.

• Define the tree-top structure.

• Develop the path of failures for each branch to the logical initiating failure, represented by the basic event.

Figure 3 shows a typical fault tree with meshed basic events 1, 4, and 7.

Figure 3. Typical fault tree with basic events, gates, and top event; the example represents one fault of a braking module as applied in railway vehicles

Once the fault tree has been developed to the desired degree of detail, the various paths can be evaluated to arrive at a probability of occurrence. Cut sets are combinations of component failures causing the top event. Minimal cut sets are the smallest combinations causing the top event. The method is universally applicable to systems of all kinds, with the following ground rules:

• The top events which are to be analysed, and their contributors, must be foreseen.

• Each of those top events must be analysed individually.

• The contributing factors have been adequately identified and explored in sufficient depth.
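The cut-set notions above can be made concrete with a toy sketch of a MOCUS-style bottom-up expansion; the gate structure and event names are invented for illustration and are not the braking-module tree of Figure 3.

```python
def basic(event):
    """A basic event contributes a single one-element cut set."""
    return [frozenset([event])]

def gate_or(*inputs):
    """OR gate: the union of the input cut-set collections."""
    return [cs for inp in inputs for cs in inp]

def gate_and(*inputs):
    """AND gate: cross-combine the input cut-set collections."""
    result = [frozenset()]
    for inp in inputs:
        result = [cs | extra for cs in result for extra in inp]
    return result

def minimal(cut_sets):
    """Minimal cut sets: drop every cut set that contains another one."""
    return sorted({cs for cs in cut_sets
                   if not any(other < cs for other in cut_sets)}, key=sorted)

# top = (e1 AND e2) OR (e1 AND e2 AND e3): only {e1, e2} is minimal
top = gate_or(gate_and(basic("e1"), basic("e2")),
              gate_and(basic("e1"), basic("e2"), basic("e3")))
assert minimal(top) == [frozenset({"e1", "e2"})]
```

Real FTA tools prune and deduplicate during the expansion; this sketch only shows what "minimal" means.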

The FTA has several strengths. The procedures are well defined and focus on failures. The top-down approach requires analysis completeness at each level before proceeding. It cannot guarantee identification of all failures, but the systematic approach enhances the likelihood of completeness. The FTA addresses effects of multiple failures by identifying interrelationships between components and identifying minimal failure combinations that cause the system to fail (minimal cut sets). The method addresses the effects of design, operation, and maintenance. The FTA can handle complex systems. It provides a graphical representation that helps to understand these complex operations and interrelationships between modules and components. Finally, FTA provides both qualitative and quantitative information.

The method is capable of producing numerical statements of the probability of occurrence of undesirable events, given probabilities of contributing factors. As Stephens & Talso ([29], page 136) claim, this capability leads to a common abuse: much effort can be expended in producing refined numerical statements of probability, based on contributing factors whose individual probabilities are hardly known and to which broad confidence limits should be attached. Applying the Dempster-Shafer Theory to FTA can help model uncertainties with less effort, as shown by Guth [7].

Guth discusses Ω = {h1, h2, h3} = {"event occurs", "uncertain", "event does not occur"}. In the following, two events A and B are considered as inputs and Z as output of an And or an Or gate, respectively. There is

m(A1) = bel(A) ,                                                                (53)

m(A2) = pl(A) - bel(A) ,                                                        (54)

m(A3) = 1 - pl(A) ,                                                             (55)

m(A1) + m(A2) + m(A3) = 1 ;                                                     (56)

the same holds for B. Please note that the Guth approach to the Dempster-Shafer Fault Tree Analysis (DS-FTA) considers events where the DS-ETA considers bifurcations. Table 8 shows the (underlying) combination of hypotheses with the different results given for the And and Or gate.

Table 8. Combination Table

  And |  A1  A2  A3        Or |  A1  A2  A3
  ----+------------        ---+------------
  B1  |  h1  h2  h3        B1 |  h1  h1  h1
  B2  |  h2  h2  h3        B2 |  h1  h2  h2
  B3  |  h3  h3  h3        B3 |  h1  h2  h3

Following the combination step in Section 0, the combined basic assignments are calculated. An And gate yields

m(Z1) = m(A1) m(B1) ,                                                           (57)

m(Z2) = m(A1) m(B2) + m(A2) m(B1) + m(A2) m(B2) ,                               (58)

m(Z3) = m(A1) m(B3) + m(A2) m(B3) + m(A3) m(B1) + m(A3) m(B2) + m(A3) m(B3)
      = m(A1) m(B3) + m(A2) m(B3) + m(A3) .                                     (59)

For an Or gate

m(Z1) = m(A1) m(B1) + m(A1) m(B2) + m(A1) m(B3) + m(A2) m(B1) + m(A3) m(B1)
      = m(A1) + m(A2) m(B1) + m(A3) m(B1) ,                                     (60)

m(Z2) = m(A2) m(B2) + m(A2) m(B3) + m(A3) m(B2) ,                               (61)

m(Z3) = m(A3) m(B3)                                                             (62)

holds similarly. With that, both evidence measures bel(Z) and pl(Z) can now be calculated recursively according to equations (53) to (55).
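Guth's gate combinations (53) to (62) can be sketched compactly, representing the triple (m(A1), m(A2), m(A3)) as a tuple; the function names and the sample evidence values are mine, not the paper's.

```python
def mass_from_evidence(bel, pl):
    """Eqs. (53)-(55): basic assignments from the evidence interval of an event."""
    return (bel, pl - bel, 1.0 - pl)

def and_gate(a, b):
    """AND gate combination, eqs. (57)-(59)."""
    return (a[0] * b[0],
            a[0] * b[1] + a[1] * b[0] + a[1] * b[1],
            a[0] * b[2] + a[1] * b[2] + a[2])

def or_gate(a, b):
    """OR gate combination, eqs. (60)-(62)."""
    return (a[0] + a[1] * b[0] + a[2] * b[0],
            a[1] * b[1] + a[1] * b[2] + a[2] * b[1],
            a[2] * b[2])

a = mass_from_evidence(0.7, 0.9)   # event A: bel = 0.7, pl = 0.9
b = mass_from_evidence(0.5, 0.8)   # event B: bel = 0.5, pl = 0.8
for z in (and_gate(a, b), or_gate(a, b)):
    assert abs(sum(z) - 1.0) < 1e-12   # gate outputs remain normalised
```

Because every product of the nine hypothesis pairs is assigned to exactly one of Z1, Z2, Z3 (Table 8), no normalisation step is needed at the gates; the result can be fed recursively into the next gate up to the top event.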

Cheng [22] claims that a calculus based on interval arithmetic is more concise and efficient in operation than the calculus proposed by Guth [7] and presented above. However, contrary to an event tree structure, a fault tree structure may cause trouble with the sub-distributivity property of subtraction operations as known from the Fuzzy Fault Tree Analysis, see [17]. This applies especially if events are meshed within a fault tree, see Figure 3.

Some authors apply an m: Ω×Ω → [0, 1] mapping instead of the well-known m: 2^Ω → [0, 1] mapping which mainly characterises the calculus of the Dempster-Shafer Theory. However, an Ω×Ω mapping rather represents operations in interval arithmetic, where the lower bound and the upper bound are just labelled as belief and plausibility, respectively.

4.4. Reliability Centred Maintenance

Reliability Centred Maintenance (RCM), which was first introduced in the aircraft industry, has been used with considerable success in many industrial branches over the last decades. As described in [23], the RCM analysis starts with establishing an expert group and initiating the collection of important component and system data based on the system documentation. Then the system functions are broken down to the desired component level. All relevant component information should be collected in the form of a modified FMECA. The main modification of the FMECA consists of the inclusion of information facilitating the choice of the optimum maintenance strategy. This is generally performed by an RCM decision diagram. Many different decision diagrams have been proposed for use in an RCM analysis. To illustrate the approach, a diagram as given in Figure 4 is discussed, see [28].

Note that the given diagram is not necessarily complete or applicable in every context given.

Figure 4. Example of an RCM decision diagram [23]. The diagram asks whether a failure has significant consequences, whether first-line maintenance is possible, and whether a failure is detectable or testable, and leads to strategies such as periodical tests, scheduled maintenance, condition-based maintenance, first-line maintenance, corrective maintenance, or finding a better design

Supported by the diagram and the integrated questions, a choice for the best fitting maintenance strategy should be made. Finally, the maintenance program should be implemented, and feedback from operation experience and new data should be used to improve the program regularly.

The choice of the best maintenance strategy is the objective of applying RCM; however, reasoning may be difficult due to questions without a definite answer. For example, the question whether or not a component is critical cannot easily be answered with either "yes" or "no". A Dempster-Shafer based alternative makes the avoidance of crisp "yes" or "no" decisions possible and leads to weighted recommendations on which maintenance strategy to choose.

The application of the DST to RCM (denoted as DS-RCM [28]) follows the procedure of the DS-ETA as described in Section 0. The RCM decision diagram corresponds to the event tree, where the decisions are the counterparts of the events. Consequently, the decisions "yes" and "no" correspond to the events "failure" and "no failure". The Expert Assessment Matrix Mi, the Combination Matrix Ci, and the focal sum s(i) are defined and applied analogously. Additionally, an RCM Decision Diagram Matrix D with

D = | m1(A1)  m1(A2)  m1(A3) |
    |   ...     ...     ...  |
    | mn(A1)  mn(A2)  mn(A3) |                                                  (63)

is defined, which collects the combined basic assignment values (results) of each decision within the RCM diagram. Finally, the weighted recommendations on all maintenance strategies are listed by the Recommendation Matrix

R = | bel(r1)  pl(r1) |
    |   ...      ...  |
    | bel(r6)  pl(r6) |                                                         (64)

which collects the values of the evidence measures belief and plausibility for every strategy.
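Filling R can be sketched as propagating the combined decision intervals along each path of the diagram; this is a hypothetical sketch with invented numbers, and only the interval product mirrors the DS-ETA procedure described above.

```python
def path_evidence(intervals):
    """Evidence [bel, pl] of one recommendation r_j: the interval product of
    the [bel, pl] pairs of the decisions along its path in the RCM diagram.
    All bounds lie in [0, 1], so multiplying the lower bounds and the upper
    bounds separately suffices."""
    bel, pl = 1.0, 1.0
    for b, p in intervals:
        bel, pl = bel * b, pl * p
    return bel, pl

# e.g. path "significant consequences: yes" -> "first-line maint possible: no"
r = path_evidence([(0.7, 0.9), (0.4, 0.6)])
assert abs(r[0] - 0.28) < 1e-12 and abs(r[1] - 0.54) < 1e-12
```

Collecting one such pair per strategy yields exactly the rows bel(r_j), pl(r_j) of the Recommendation Matrix R.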

The main advantages of the DS-RCM as against the qualitative RCM can be summarised as follows:

• Experts feel more comfortable giving degrees of belief instead of taking "yes" or "no" decisions. It might therefore be easier to obtain relevant data for the RCM analysis.

• The DS-RCM approach results in a profile of all possible maintenance strategies. Decision making based on this profile helps preventing "weak decisions" and may in any case be more comprehensive than relying on a single strategy.

• In some RCM studies it may be desirable to analyse modules and not separate components. In these cases DS-RCM is especially useful, since it does not force a single strategy.

• As shown in the discussion of the example case, the DS-RCM approach helps to reveal possible design problems and their causes.

A possible disadvantage of this approach is that some experts may find the evidential numbers more complicated than a simple "yes" or "no" decision. A short discussion about the nature of these numbers should therefore be given as an introduction to an RCM session.

5. Conclusions

The Dempster-Shafer Theory is well-known for its usefulness to express uncertain judgements of experts. This contribution has shown how to apply the calculus to safety and reliability modelling. Approaches to expert judgement; Failure Modes, Effects, and Criticality Analysis; Event Tree Analysis; Fault Tree Analysis; and Reliability Centred Maintenance are discussed.

Generally, the Dempster-Shafer Theory adds a new flavour to safety and reliability modelling compared to probabilistic approaches. The illustration (Section 0) clearly shows the differences between the Probability and the Dempster-Shafer Theory, based on their different mappings Ω → [0, 1] versus 2^Ω → [0, 1], see Section 0. Probability theory would identify a single hypothesis, and DST a combination of two, to be responsible for the serious changes of the system considered.

6. References

6.1. Dempster-Shafer Theory

[1] Dempster, A.P. (1966). New Methods for Reasoning towards Posterior Distributions based on Sample Data. The Annals of Mathematical Statistics 37: 355-374.

[2] Dempster, A.P. (1967). Upper and Lower Probabilities Induced by a Multivalued Mapping. The Annals of Mathematical Statistics 38: 325-339.

[3] Dempster, A.P. (1968). A Generalization of Bayesian Inference. Journal of the Royal Statistical Society, Series B (methodological) 30: 205-247.

[4] Denoeux, T. (1999). Reasoning with imprecise belief structures. International Journal of Approximate Reasoning 20 (1): 79-111.

[5] Ferson, S., Kreinovich, V., Ginzburg, L., Myers, D.S. & Sentz, K. (2003). Constructing Probability Boxes and Dempster-Shafer Structures. Sandia Report SAND2002-4015.

[6] Flack, J. (1996). On the Interpretation of Remotely Sensed Data Using Guided Techniques for Land Cover Analysis. Unpublished PhD thesis, Department of Geographic Information Science, Curtin University, Perth, Australia.

[7] Guth, M. A. S. (1991). A Probabilistic Foundation for Vagueness and Imprecision in Fault-Tree Analysis. IEEE Transactions on Reliability 40 (5), 563-571.

[8] Hollnagel, E. (1998). Cognitive Reliability and Error Analysis Method - CREAM. Amsterdam: Elsevier.

[9] International Electrotechnical Commission IEC, ed. (2002). International Electrotechnical Vocabulary - Chapter 191: Dependability and Quality of Service. IEC 60050-191: 2002-05, 2nd Edition.

[10] Keung-Chi, N. & Abramson, B. (1990). Uncertainty Management in Expert Systems. IEEE Expert 5 (2), 29-48.

[11] Klir, G. J. & Folger, T. A. (1988). Fuzzy Sets, Uncertainty and Information. Englewood Cliffs. Prentice-Hall.

[12] Leang, S. (1995). A Control and Diagnostic System for the Photolithography Process Sequence. Ph.D. Dissertation.

[13] O'Neil, A. (1999). The Dempster-Shafer Engine. http://www.quiver.freeserve.co.uk/Dse.htm

[14] Nuclear Regulatory Commission (1999). Technical Basis and Implementation Guidelines for a Technique for Human Event Analysis (ATHEANA). NUREG-1624.

[15] Parsons, S. (1994). Some Qualitative Approaches to Applying the Dempster-Shafer Theory. Information and Decision Technologies 19, 321-337.

[16] Rakowsky, U. K. (2005). Some Notes on Probabilities and Non-Probabilistic Reliability Measures. Proceedings of the ESREL 2005, Tri-City/Poland. Leiden: Balkema, 1645-1654.

[17] Rakowsky, U. K. (2002). System Reliability (in German), Hagen/ Germany: LiLoLe Publishing.

[18] Shafer, G. (1976). A Mathematical Theory of Evidence. Princeton: Princeton University Press.

[19] Smets, P. & Kennes R. (1994). The Transferable Belief Model. Artificial Intelligence 66, 191-243.

[20] Walley, P. (1991). Statistical Reasoning with Imprecise Probabilities. London: Chapman and Hall.

[21] Yager, R. R. (1982). Generalized Probabilities of Fuzzy Events from Fuzzy Belief Structures. Information Sciences 28: 45-62.

6.2. Safety and Reliability Methodology

[22] Cheng, Y.-L. (2000). Uncertainties in Fault Tree Analysis. Tamkang Journal of Science and Engineering 3 (1), 23-29.

[23] Eisinger, S. & Rakowsky, U. K. (2001). Modeling of Uncertainties in Reliability Centered Maintenance - A Probabilistic Approach. Reliability Engineering and System Safety 71 (2), 159-164.

[24] International Electrotechnical Commission IEC (2007). Analysis techniques for dependability - Event Tree Analysis. IEC 62502, working document.

[25] International Electrotechnical Commission IEC (2006-a). Analysis techniques for system reliability - Procedure for failure mode and effects analysis (FMEA). IEC 60812:2006-01, 2nd Edition.

[26] International Electrotechnical Commission IEC (2006-b). Fault tree analysis (FTA). IEC 61025:2006-12, 2nd Edition.

[27] International Electrotechnical Commission IEC (1999). Dependability Management - Part 3-11: Application guide - Reliability centred maintenance. IEC 60300-3-11:1999-03, 1st Edition.

[28] Rakowsky, U. K. & Gocht, U. (appears 2007). Modelling of uncertainties in Reliability Centred Maintenance - a Dempster-Shafer approach. Proceedings of the European Conference on Safety and Reliability - ESREL 2007, Stavanger/Norway. Approved as full paper.

[29] Stephens, R. A. & Talso, W. (1999). System Safety Analysis Handbook - A Source Book for Safety Practitioners. Unionville/Virginia, U.S.A.: System Safety Society, 2nd Edition.

7. Symbols

A, B    set of hypotheses
bel     belief measure
C       Combination Matrix
cmn     commonality measure
D       RCM Decision Diagram Matrix
h       single hypothesis
i, n    index, resp. number of an element
j       sequence index (ETA) or statement index (RCM)
k       set or statement index
l       data source index
M       Expert Assessment Matrix
m       basic assignment
pl      plausibility measure
R       Recommendation Matrix
rj      sequence evidence (ETA) or evaluation value of the maintenance strategies (RCM)
s(i)    focal sum
Z       combined set of hypotheses
