The ships impact in ground of port water area
Keywords: navigational risk / ship impact in the bottom / port water area

Abstract (author: Galor Wiesław)

Existing ports are expected to handle ships bigger than those for which they were designed. The main restriction in serving these ships is the depth of the port waters, which directly affects the safety of a manoeuvring ship. The under-keel clearance of a ship in the port water area should be such that the ship can move safely. Under some specific conditions, however, the ship may strike the sea bottom. Such an undesired impact against the ground can damage the ship's hull. The paper presents an algorithm for the ship's movement parameters during contact with the ground.



In terms of the number of failures, errors or observed outcomes, N_j, when sampling the j-th observation interval we have the hazard function or failure rate:

λ(s) = (1/(N − N_j)) (dN_j/ds)    (4)

where N is the total number of outcomes, dN_j/ds is the instantaneous outcome rate, IR, and the number of outcomes we have observed over all prior intervals is just the summation, n = Σ_j N_j.

3. The probability linked to the rate of errors

Given the outcome rate, now we need to determine the outcome (error) probability, or the chance of failure.

a. the hazard function is equivalent to the failure or outcome rate at any experience, λ(s), being the relative rate of change in the reliability with experience, −(1/R(s)) (dR(s)/ds);

b. the CDF or outcome fraction, F(s), is just the observed frequency of prior outcomes, the ratio n/N, where we have recorded n, out of a total possible of N outcomes;

c. the frequency of prior outcomes is identical to the observed cumulative prior probability, p(s), and hence is the CDF, so F(s) = p(s) = (n/N) = 1 - R(s);

d. here R(s) is the reliability, 1-n/N, a probability measure of how many outcomes or failures did not occur out of the total;

e. the future (or Posterior) probability, p(P) is proportional to the Prior probability, p(s) times the Likelihood, p(L), of future outcomes;

f. the chance of an outcome in any small observation interval, is the PDF f(s), which is just the rate of change of the failure or outcome fraction with experience, dp(s)/ds;

g. the Likelihood, p(L) is the ratio, f(s)/F(s), being the probability that an outcome will occur in some interval of experience, the PDF, to the total probability of occurrence, the CDF; and

h. we can write the PDF as related to the failure rate integrated between limits from the beginning with no experience up to any experience, s,

f(s) = dF/ds = λ(s) exp(−∫₀ˢ λ(s′) ds′)    (5)

So, the probability of the outcome or error occurring in, or taking less than, s is just the CDF, p(s) = n/N, conventionally written as F(s). Relating this to the failure rate, via (a) through (d) above, gives:

p(s) = F(s) = 1 − exp(−∫ λ ds)    (6)

where, of course from the MERE,

λ(s) = λ_m + (λ_0 − λ_m) exp(−k(s − s_0))    (7)

and λ(s_0) = λ_0 at the initial experience, s_0, accumulated up to or at the initial outcome(s). The corresponding PDF, f(s), is the probability that the error or outcome occurs in the interval ds, derived from the change in the CDF failure fraction with experience, or from (f), (h) and (g) above:

f(s) = dF(s)/ds = dp(s)/ds = λ exp(−∫ λ ds) = λ(s) × (1 − p(s))

= {λ_m + (λ_0 − λ_m) exp(−k(s − s_0))} × exp{(λ(s) − λ_0)/k − λ_m(s − s_0)}    (8)

The limits are clear: as experience becomes large, s → ∞, or the minimum rate is small, λ_m << λ_0, or the value of k varies, etc. We can also show that the uniform probability assumption for observing outcomes is consistent with the systematic variation of the outcome probability with experience due to learning.
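As a cross-check of equations (5)-(8), here is a minimal numerical sketch (Python with numpy and scipy, an assumption of this note; the parameter values are illustrative rather than fitted) showing that the closed-form exponent of equation (8) reproduces the PDF obtained by integrating the MERE rate of equation (7) directly:

```python
# A minimal numerical cross-check of equations (5)-(8), assuming the MERE rate
# of equation (7); the parameter values here are illustrative, not fitted.
import numpy as np
from scipy.integrate import quad

lam_m, lam_0, k, s0 = 5e-6, 5e-5, 3.0, 0.0

def rate(s):
    """MERE hazard rate, equation (7)."""
    return lam_m + (lam_0 - lam_m) * np.exp(-k * (s - s0))

def cdf(s):
    """Outcome probability, equation (6): p(s) = 1 - exp(-integral of lambda)."""
    integral, _ = quad(rate, s0, s)
    return 1.0 - np.exp(-integral)

def pdf(s):
    """PDF, equation (8): f(s) = lambda(s) * (1 - p(s))."""
    return rate(s) * (1.0 - cdf(s))

# The closed-form exponent of equation (8) agrees with the numerical integral:
for s in (0.1, 1.0, 10.0):
    closed = rate(s) * np.exp((rate(s) - lam_0) / k - lam_m * (s - s0))
    print(f"s={s:5.1f}  f(s) numeric={pdf(s):.6e}  closed-form={closed:.6e}")
```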

We can also determine the maximum and minimum risk likelihoods, which are useful to know, by differentiating the probability expression. The result shows how the risk rate systematically varies with experience and that the most likely trend is indeed given by the learning curve. In other words, we learn as we gain experience, and then reach a region of essentially no decrease, in rate or in probability, and hence in likelihood. It is easy to obtain the first decrease in rates or probabilities but harder to proceed any lower. This is exactly what is observed in transport, manufacturing, medical, industrial and other accident, death and injury data [2].

4. The initial failure rate and its variation with experience

Having established the learning trend, we need to determine the actual parameters and values using data and insight. Now, in reality, the initial rate, λ_0, is not a constant as assumed so far, since the outcomes are stochastic in experience "state space". Hence λ_0 = λ(s_0), and it is not known when exactly in our experience an error may first be observed (we might be lucky or not), so the value we ascribe to the initial rate is arbitrary.

To establish the initial rate, key data are available from commercial aircraft outcomes (fatal crashes) throughout the world. The major contributor is human error, not equipment failure, although the latter can also be ascribed to the root cause of human failings. Fatal crashes and accidents for the thirty years between 1970 and 2000 are known [1], for 114 major airlines with ~725 million hours (Mh) of total flying experience. For each airline with its own experience, s, the fatal accident rate per flying hour, λ(s), can be plotted as an open circular symbol in Figure 1 versus the accumulated experience in flying hours (adopting the FAA value of ~3⅓ hours as an average flight time).

Two rare-event data points are also plotted. These are:

a) the crash of the supersonic Concorde, with a rate, λ_0, of one in about 90,000 flights, shown as a lozenge symbol; and

b) the explosion and disintegration of the space shuttles, Challenger and Columbia, with a rate, λ_0, of two out of 113 total missions, plotted using the triangular symbol.

Figure 1. The initial rate based on world airline and US space shuttle accident data: rare-event initial rates for commercial airlines 1970-2000 and the space shuttle 1980-2003, versus accumulated experience in M hours (Airsafe airline data 1970-2000; shuttle, 2 in 113 missions; Concorde, 1 in 90,000 flights; constant-rate reference lines, CR = 15/MF at 0.1 MF and constant risk rate = 1/tau)

The typical "flight time" for the shuttle was taken as the 30-40 minutes for re-entry as reported by NASA [4] timelines, although this plot is quite insensitive to the actual value taken. For all these data and experience, there is a remarkable constancy of risk, as shown by the straight line of slope -1, which is given by the equation:

λs = constant, n    (9)

where the observed rate is strictly a function of whatever experience it happened to occur at, any value being possible. Thus, in the limit for rare events, the initial rate should be the purely Bayesian estimate from the prior experience, with n ≈ 1 and λ_0 ≈ 1/s. This rate varying as 1/s also corresponds exactly to the risk rate that is attainable on the basis of the minimum likelihood determined from the outcome probability. What the data are telling us is that the limiting initial rate is exactly what it is for the experience at which the first outcome occurs, no more and no less.
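As a quick worked check of equation (9), λ_0 ≈ n/s can be evaluated for the two rare events above. The flight times below are the rough values quoted in the text (~3⅓ h per airline flight, ~30-40 min per shuttle "flight"); applying the airline average to Concorde is an assumption, so these are order-of-magnitude estimates only:

```python
# A worked check of the rare-event estimate lambda_0 ~ n/s (equation 9); the
# per-flight hours are rough assumptions, so these are order-of-magnitude values.
concorde_hours = 90_000 * (10.0 / 3.0)   # 1 outcome; ~3 1/3 h assumed per flight
shuttle_hours = 113 * (35.0 / 60.0)      # 2 outcomes; ~30-40 min "flight" each

print(f"Concorde: lambda_0 ~ {1.0 / concorde_hours:.1e} per hour")
print(f"Shuttle:  lambda_0 ~ {2.0 / shuttle_hours:.1e} per hour")
```

Both points then lie on the constant-risk line λs = n of Figure 1, despite the vastly different accumulated experience.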

From the analysis of many millions of data points that include human error in the outcomes, we have been able to derive the key quantities that dominate current technological systems. These now include commercial air, road, ship and rail transport accidents; near-misses and events; chemical, nuclear and industrial injuries; mining injuries and manufacturing defects; general aviation events; medical misadministration and misdiagnoses; pressure vessel and piping component failures; and office paperwork and quality management systems [2].

From all these data, and many more, we have estimated the minimum failure rate or error interval, the typical initial error interval, and the learning rate constant for the ULC as follows:

a) minimum attainable rate, λ_m, at large experience, s, of about one per 100,000 to 200,000 hours (λ_m ≈ 5×10⁻⁶ per hour of experience);

b) initial rate, λ_0, of 1/s, at small experience (being about one per 20,000 to 30,000 hours, or λ_0 ≈ 5×10⁻⁵ per hour of experience);

c) learning rate constant, k ≈ 3, from the ULC fit of a mass of available data worldwide for accidents, injuries, events, near-misses and misadministration.

Therefore, the following numerical dynamic form for the MERE human error or outcome rate is our "best" available estimate [2]:

λ(s) = λ_m + (λ_0 − λ_m) e^(−ks)    (10)

which becomes, for λ_0 = n/s, with n = 1 for the initial outcome,

λ(s) = 5×10⁻⁶ + (1/s − 5×10⁻⁶) e^(−3s)    (11)

The rate, λ, can be evaluated numerically, as well as the probability, p(s), and the differential PDF, f(s). The result of these calculations is shown in Figure 2, where s = t units in order to represent the accumulated experience scale.

Figure 2. The best MERE values: the failure rate λ, probability p and PDF f versus experience in tau units (learning rate k = 3, initial rate 1/tau, minimum rate 5×10⁻⁶; comparison curves shown for k = 1)
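The bathtub trend of Figure 2 can be reproduced with a short sketch that evaluates equations (8) and (11) on a logarithmic grid of experience; taking s_0 ≈ 0 and the rare-event initial rate λ_0 = 1/s at each experience point follows the text, but the grid and print-out choices here are arbitrary:

```python
# Sketch reproducing the "human bathtub" of Figure 2 from equations (8) and
# (11): at each experience s (tau units) the initial rate is the rare-event
# estimate lambda_0 = 1/s; s0 is taken as ~0, an assumption of this sketch.
import numpy as np

lam_m, k = 5e-6, 3.0

s = np.logspace(-3, 6, 10)                       # experience, tau units
lam0 = 1.0 / s                                   # rare-event initial rate, equation (9)
lam = lam_m + (lam0 - lam_m) * np.exp(-k * s)    # equation (11)
p = 1.0 - np.exp((lam - lam0) / k - lam_m * s)   # CDF from equation (8)
f = lam * (1.0 - p)                              # PDF, equation (8)

for si, li, pi, fi in zip(s, lam, p, f):
    print(f"s={si:10.3g}  lambda={li:10.3e}  p={pi:8.2e}  f={fi:10.3e}")
```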

It is evident that for k > 0 the probability is a classic "bathtub" shape, being near unity at the start (Figure 2), and then falling with the lowering of error rates with increasing experience. After falling to a low of about a one-in-a-hundred "chance" due to learning, it rises when the experience is s > 1000 tau units, and becomes a near certainty again by a million tau units of experience as failures re-accumulate, since λ_m ≈ 5×10⁻⁶ per tau unit of experience. The importance of learning is evident, since for k < 0 forgetting causes a rapid increase to unity probability with no minimum. The solution for the maximum likelihood for the outcome rate is exponential, falling with increasing experience as given by: (Rate for Maximum Likelihood)

λ_max = λ_0 exp{−k(s − s_0)/(1 + ks_0)}    (12)

However, the expression that gives the minimum likelihood indicates that the minimum risk rate is bounded by: (Rate for Minimum Likelihood)

λ_min << λ_0 / {1 + k(s − s_0)}    (13)

The result follows common sense. Our maximum risk is dominated by our inexperience at first, and then by lack of learning, and decreasing our risk rate largely depends on attaining experience. Our most likely risk rate is extremely sensitive to our learning rate, or k value, for a given experience.
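For illustration only, the two bounding expressions can be tabulated directly from equations (12) and (13); the λ_0, k and s_0 values below are assumed, not the fitted ones:

```python
# Sketch of the bounding rates of equations (12) and (13); values illustrative.
import numpy as np

lam_0, k, s0 = 5e-5, 3.0, 0.0   # assumed initial rate, learning constant, start

s = np.array([0.1, 1.0, 10.0, 100.0])
lam_max = lam_0 * np.exp(-k * (s - s0) / (1.0 + k * s0))   # equation (12)
lam_min_bound = lam_0 / (1.0 + k * (s - s0))               # bound of equation (13)

for si, hi, lo in zip(s, lam_max, lam_min_bound):
    print(f"s={si:7.1f}  max-likelihood rate={hi:.3e}  min-rate bound={lo:.3e}")
```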

So, as might be logically expected, the maximum likelihood for outcomes occurs at or near the initial event rate when we are least experienced. This is also a common sense check on our results: we are most at risk at the very beginning. Therefore, as could have been expected, the most likely and the least risks are reduced only by attaining increasing experience and with increased learning rates.

This approach to reduce and manage risk should come as no surprise to those in the education community, and in executive and line management positions. A learning environment has the least risk.

5. Future event estimates: the past predicts the future

We expect the probability of human error, and its associated failure or error rate, to be unchanged unless dramatic technology shifts occur. We can also estimate the likelihood of another event, and check whether the MERE human error rate frequency gives sensible and consistent predictions. Using Bayesian reasoning, the posterior or future probability, p(P), of an error when we are at experience, s, is:

Posterior, p(P) ∝ {Prior, p(s)} × {Likelihood, p(L)}    (14)

where p(s) is the prior probability and, by definition, both the posterior and the likelihood refer to experience beyond s, our present accumulated experience. The likelihood, p(L), is also a statistical estimate, for which we must make an assumption based on our prior knowledge; it is often taken as a uniform distribution. We can show that the likelihood is formally related to the number of outcomes for a given variation of the mean.

Either:

a) the future likelihood is of the same form as experienced up to now; and/or

b) the future is an unknown statistical sample for the next increment of experience based on the differential probability, the PDF f(s).

In the first case (a), the future likelihood probability p(L) is the fraction or ratio of events remaining to occur out of the total possible number that is left. The second case (b) is a "conditional probability", where the probability of the next outcome depends on the prior ones occurring, which was Bayes' original premise.

The so-called generalized conditional probability or Likelihood, p(L), can be defined utilizing the CDF and PDF expressions. Described by [6] as the "generalized Bayes formula", the expression given is based on the prior outcome having already occurred with the prior probability p(s). This prior probability then gives the probability or Likelihood of the next outcome, p(L), in our present experience-based notation, as:

p(L) = f(s)/F(s) = λ(s) (1 − p(s))/p(s)    (15)
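A minimal sketch of the update in equations (14) and (15), using the "best" MERE values; the unnormalised product prior × likelihood stands in for the posterior, and s_0 ≈ 0 with λ_0 = 1/s are assumptions carried over from above:

```python
# Sketch of the Bayesian update of equations (14)-(15): prior p(s) from the
# MERE bathtub, likelihood p(L) = lambda(s)(1 - p(s))/p(s), and the posterior
# taken as their (unnormalised) product; s0 ~ 0 and lambda_0 = 1/s assumed.
import numpy as np

lam_m, k = 5e-6, 3.0

s = np.logspace(-1, 6, 8)                               # experience, tau units
lam = lam_m + (1.0 / s - lam_m) * np.exp(-k * s)        # equation (11)
prior = 1.0 - np.exp((lam - 1.0 / s) / k - lam_m * s)   # p(s), equation (8)
likelihood = lam * (1.0 - prior) / prior                # p(L), equation (15)
posterior = prior * likelihood                          # p(P), equation (14), up to a constant

for row in zip(s, prior, likelihood, posterior):
    print("s={:10.3g}  prior={:8.2e}  p(L)={:8.2e}  p(P)={:8.2e}".format(*row))
```

Note that the product p(s) × p(L) reduces algebraically to λ(s)(1 − p(s)) = f(s), which is why the posterior tracks the MERE failure rate, as Figure 3 shows.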

We can evaluate these Bayesian likelihood and posterior expressions using our "best" MERE values of a learning rate constant of k = 3 and a minimum failure rate of λ_m = 5×10⁻⁶, obtaining the results shown in Figure 3.

Figure 3. The estimate of the likelihood and posterior probabilities when learning: prior p(s), likelihood p(L) and posterior p(P) versus experience in tau units (MERE "best" values with k = 3)

It is clear from Figure 3 that the "human bathtub" prior probability, p(s), causes the likelihood to fluctuate up and down with increasing experience. The likelihood tracks the learning curve, then transitions via a bump or secondary peak to the lowest values as we approach certainty (p → 1) at large experience. However, the posterior probability, p(P), just mirrors and follows the MERE failure rate, as we predicted, decreasing to a minimum value of ~5×10⁻⁶, our ubiquitous minimum outcome rate, before finally falling away.

Hence the future probability estimate, the posterior p(P), is once again derivable from its (unchanged) prior value via f(s) = dp(s)/ds ≈ λ(s), derived from learning from experience; thus the past predicts the future.

For the special case of "perfect learning", when we learn from all the non-outcomes as well as the outcomes, the Poisson-type triple exponential form applies for low probabilities and small numbers of outcomes (n << N). Of course, the limit of "perfect learning" is when we have an outcome, so here p = 1/τ, which is the rare event case for n = 1. The perfect learning limit fails as soon as we have an event, as it should. But there is also a useful simple physical interpretation, which is that:

a) we learn from non-outcomes the same way we learn from outcomes, as we have assumed;


b) the perfect learning ends as soon as we have just a single (rare) outcome; and

c) the influence of the finite minimum rate is then lost.

6. Comparison to data: the probability of failure and human error

There are three data sets for catastrophic events with defined large human error contributions that are worth re-examining further:

a) the crash rate for global commercial airlines, noting that most crashes occur during manoeuvring and approach for take-off and landing but, as we have seen, can also occur in flight;

b) the loss of the space shuttles, Challenger and Columbia, also on take-off and during the approach for landing; and

c) the probability of non-detection by plant operators of so-called latent (hidden) faults in the control of nuclear system transients.

Apparently disparate, these three all share the common element of human involvement in the management, design, safety "culture", control and operation of a large technological system; all are subject to intense media and public interest; and the costs of failure are extremely expensive and unacceptable in many ways.

Figure 4. An outcome probability data comparison: the probability from MERE compared to commercial airline crashes 1977-2000, space shuttle losses 1987-2003, and nuclear latent errors 1997-1998, versus non-dimensional experience, N* (best values of learning rate k = 3 and λ(0) = 1/tau; data from Airsafe airline data 1970-2000, shuttle data 2003, and Baumont latent error data 1997-1998; MERE curves for minimum rates of 5×10⁻⁶ and 5×10⁻⁷)

The comparison of the data to theory is shown in Figure 4, where the lines are the MERE calculated probability, p(s), using the "best" values. The three lines use three bounding values for the minimum error rate to illustrate the sensitivity. Despite the scatter, a minimum rate of order ~5×10⁻⁶ is indeed an upper bound value, as we estimated before.

7. Implications for generalized risk prediction

The implications of using this new approach for estimating risk are profound.

This new probability estimate is based on the failure rate describing the ULC, which is derived from the Learning Hypothesis, and utilizes the validation from the outcome data of the world's homo-technological systems. Thus, we have seamlessly linked all the way from individual human actions to the observed outcomes in entire systems. We have unified the approach to managing risk and error reduction using the Learning Hypothesis with the same values everywhere for the learning rate constant, k, and the minimum error rate, λ_m.

For the first time, we are also able to make predictions of the probability of errors and outcomes for any assumed experience interval in any homo-technological system.

Typically the probabilities for error are ~10⁻², or one in a few hundred, for any act of volition beyond the first 10% or so of the risk interval; whereas in that first increment or initial phase, the risk is much higher. Interestingly, this reduction in probability in the initial interval echoes, parallels and is consistent with the maze study results of Fang [3], which showed a rapid (factor of five or so) reduction in the first 10% or so of the moves needed for success by the "treasure hunt" players. Clearly the same fundamental learning factors and success motivation are at work, and are reflected in the rapid decrease in errors down the learning curve.

Conversely, the MERE probability (the human bathtub) properly represents the data trends, such as they are, and hence can be used in PRA HEP estimation provided the correct measure is taken for experience.

In addition, the MERE results imply a finite lower-bound probability of p(s) > 10⁻³, based on the best calculations and all the available data.

8. Conclusions: the probable risk

Analysis of failure rates due to human error and the rate of learning allow a new determination of the risk due to dynamic human error in technological systems, consistent with and derived from the available world data. The basis for the analysis is the "learning hypothesis" that humans learn from experience, and consequently the accumulated experience defines the failure rate. The exponential failure rate solution of the Minimum Error Rate Equation defines the probability of human error as a function of experience. Comparisons with outcome error data from the world's commercial airlines, the two shuttle failures, and from nuclear plant operator transient control behaviour, show a reasonable level of accord. The results
