
Reliability Design in Gradual Failures: A Functional-Parametric Approach

O. V. Abramov, B. N. Dimitrov

Institute for Automation and Control Processes, Far Eastern Branch of the Russian Academy of Sciences, Vladivostok, Russia

E-mail: abramov@iacp.dvo.ru

Department of Mathematics, Kettering University, Flint, MI 48504
E-mail: bdimitro@kettering.edu

Abstract

This conceptual paper discusses the main provisions of the functional-parametric (FP) approach in reliability studies. The FP approach itself is a detailed frame algorithm suggested for use in the design of new and unique technical items. The article also presents the possibilities and perspectives of using the FP approach in problems of "building in" reliability for technical devices and systems. It is pointed out that for solving problems of analysis and ensuring of desired reliability it is appropriate to use parallel and distributed processing techniques. We discuss the idea of constructing efficient parallel algorithms for the multivariate statistical analysis necessary to calculate estimates of the probability of failure-free operation with different nominal values of the internal parameters. Further uses of parallel algorithms, including over continuous parameter ranges via discretization, are also discussed.

Keywords: computational methods, conceptual algorithms, gradual failure, parallel computing, parametric synthesis, projected reliability, scanning method.

I. Introduction

In modern reliability analysis there are several methodological trends, among which the dominant position belongs to the probabilistic-statistical trend. The methodology of this trend is based on the empirically established fact of statistical stability of failure rates. This enables the use of analytic probability theory and some elements of queuing studies.

Calculation of reliability within the probabilistic-statistical approach is based on the construction of block diagrams for the processes running in the studied system (the reliability model). When constructing a model of system reliability, each of its elements is usually allowed to have only two possible states: full functionality or complete failure. Consequently, the system can also be in only two states, full functionality or complete failure. Any possibility of partial functioning of the system or of its components is usually excluded. Thus, in the estimation of reliability the real system is replaced by a logical (Boolean) model. Its various modifications, such as models in the form of a fault tree, or even a Markov model with a finite space of states, do not change the fundamental nature of the reliability model. The main design characteristic of reliability in such models is the failure rate. Methods of this trend are quite simple, convenient for engineering calculations, and do not require (in the majority of applications) the use of modern computer equipment, since the solutions of the main tasks in this direction can be obtained in closed form.

At the same time, the two-state model used in the probabilistic-statistical approach does not reflect the reliability indices of an item performing multiple functions, i.e. the model cannot be functional and realistic. The process of developing technical objects with desired characteristics is associated mostly with current research. It is based on functional models (physical, mathematical, or combined). Functional reliability models are supposed to reflect the relationships between the specified functions (output parameters) of the system and the parameters of its elements. There are tolerance-type interactions between the given system functions and operational factors; between the functions of the elements and the physical-chemical processes that cause changes in their parameters during operation; and between different types of prior information about the processes of change in the element parameters and in the system as a whole.

Note also that the probabilistic-statistical approach does not give trustworthy results when solving reliability problems for unique objects and systems for critical applications, where failures are not massive and do not represent a statistically regular phenomenon.

The study of reliability problems for technical systems from the viewpoint of the theory of random walks in phase space is the most general and promising. Reliability models of this type were originally proposed in (Gnedenko, Beliaev and Soloviev 1969). This allows finding a deep connection between reliability modeling and the general theory of random functions, and allows formulating a detailed methodological approach which we call the functional-parametric (FP) approach.

The possibilities and feasibility of establishing the FP-approach do not arise only from the deficiencies of the classical approach to solving reliability problems. The FP-approach is predetermined by the successful development of methods of mathematical modeling of complex systems, automation, the mathematical description of the processes during their operation, as well as by research methods for the reliability of gradual failures (Becker and Jensen 1977). Nowadays we may also rely on much better computational equipment and digital technology, which make even the wildest dreams come true.

II. The main idea of the functional-parametric approach

The functional-parametric approach naturally follows from the generally accepted conventional definition of reliability as the property of an object to keep the values of its parameters within prescribed limits which characterize the object's ability to perform the required functions in specified modes. In accordance with this definition, any mathematical model for the determination of reliability should reflect the possible relation between the reliability rates and the functions of operating items under changing conditions and time. We use this main idea of the FP-approach to describe in detail how the problems of system reliability can be resolved by following the next basic principles:

1. The process of operation of the system and its technical condition at any time are determined by some finite set of variables, the parameters of the process and the system;

2. The accumulation of various impacts on the system leads to the evolution of its indicators (changes of the parameter values with time). Therefore, keeping track of the possibility of switching to a different qualitative state is an important action;

3. Failures are the results of deviations of parameter values from their originally prescribed values to the current ones. A failure is identified as the departure of a parameter outside the allowed tolerance range (region of acceptability);

4. When the process of parameter change is observable, predictable, or controllable, there is (in principle) a possibility to prevent failures;

5. In the FP-approach the problems of evaluating and ensuring the desired reliability during the system design, its manufacture and operation are interrelated. They can all be represented as a kind of management task: to model, simulate and analyze appropriate stochastic processes. The respective decisions should be based on the results of forecasting the intervals of possible parameter changes (the technical condition) and on the overall assessment of the reliability of the studied objects.

The forecasting and control methods need to take into account the specifics of the modelled random processes and the descriptions of the drifts in parameter changes. These may belong to the class of non-stationary and locally controlled processes. Sometimes such control may have the form of a pulse or shock correction.

Thus, when solving a problem related to the reliability of technical objects based on the FP-approach, it is necessary to take into account several things: (a) possible deviations of the true parameters from the calculated values; (b) the foreseeable consequences of these deviations; and (c) the development of a set of measures that provide sufficient information about the required characteristics of the object in terms of these deviations. It is understandable that the FP-approach to reliability evaluation is a natural extension of the conventional engineering approach at the design stage. Performance assessment in the framework of the FP-approach is based on the creation and optimal use of reserves of admissible variations of the system parameters. In addition, the possible control of the most important parameters, the prediction of parameter changes to prevent their exit out of the permissible limits, the availability of parameter adjustments, and the replacement of worn out components should be included in the considerations. The reliability evaluation problem can then be considered as an extended form of the optimal parametric synthesis problem (Abramov 2006).

III. Creation of reliability models for technical systems and the FP-approach

The reliability and quality of technical devices and systems are built in through numerous activities performed at the stages of development, production (manufacturing), testing and operation. The development stage is of particular importance, since at this moment the principles of quality assurance and future reliability are planned. A significant part of the activities aimed at the implementation of these principles is carried out subsequently, in particular in production and exploitation. However, their successful implementation is built into the items mostly during the development stage. It is essential to find out to what extent the processes expected to occur in the subsequent phases are taken into account.

The basis of the development phase is the technical design and specification, which settles the required parameter values (the requirements needed for operation in the field), the numerical data describing the ranges of possible variability of environmental parameters (conditions of operation), a qualitative description of the restrictions, and ergonomic requirements and conditions that are not directly measurable; each detail may affect the course of the development phase.

One main part of the technical specifications is the inclusion of requirements on the output parameters of the object of development (technical requirements). Certain relationships between the output parameters and the technical requirements will be called serviceability specifications. These will later be given to the item users as operating instructions. In the design process it is necessary to find solutions that are acceptable. First of all, from the point of view of the designer, it is important to ensure the achievement of the conditions of functionality. Together with that, the designer should build in the pre-determined quality indicators. This set of quality indicators determines the ability of an object to perform its functions and, therefore, characterizes the generalized states of the object.

The concept of quality should include a description of the object properties that determine the success of the problem solution in certain conditions. Such properties can be efficiency, productivity, accuracy, stability, reliability, survivability, safety, ergonomic solutions, etc. Indicators (criteria) of quality are necessary for the qualitative estimation of object features. They can also be used as quantitative measures of the degree of conformity of the object to its intended purpose. The review article (Dimitrov 1998) may serve to a great extent in this phase.

In the traditional sense, the problem of parametric synthesis is reduced to the selection of parameter values (for a given structure) under which the serviceability specifications are satisfied. Consideration of possible parameter deviations from the calculated values and the development of measures that ensure the operation of the object in the presence of such deviations are transferred to the subsequent stages of the design. Sometimes they appear at the stages of production and operation. This approach to the problem of parametric tolerance synthesis is the most common in practice.

The stage of parametric synthesis involves two procedures. The first one is the choice of the initial values of the internal parameters, which is usually carried out on the basis of simple calculations, or is based on the experience and intuition of the engineer. The second one consists in the appropriate correction of the initial values of the parameters. The vector of internal parameters found at this stage suggests only that the "tentative" project is operational.

Deviations of the parameters from the calculated nominal values can lead to a loss of efficiency, so the next step of parametric synthesis is to set the optimal values of the internal parameters. For example, those which provide the greatest performability margins or the maximum probability of fulfilling the specified performance deserve special attention.

Selection of the optimal parameter values does not always allow the creation of an object with the required consumer properties, i.e. one providing the functions of a given quality. Suppose we define the nominal parameter values which guarantee the maximum probability of non-failure work of the object within a certain period of time, $P_{\max}(T)$. Comparing the obtained value with the desired one, $P_d(T)$, the developer cannot consider the design process completed if $P_{\max}(T) < P_d(T)$. In this case, it is necessary to look for ways of further design improvement. The required reliability can be achieved by adjusting (tuning) some parameters. Thus, to ensure the required quality of operation of the item, it is necessary to select and implement some strategy for the control of its parameters.

The next step is to define parametric synthesis in a narrow sense, as the evaluation of the nominal values of the parameters of the item, and in a wide sense, as the result of a certain strategy for parameter control. The main content of the methodology of parametric synthesis in the wide sense is hidden in the answers to the following two related questions: which parameters to choose, if it is necessary to keep them under control, and which values these controlled parameters should take.

Parameter deviations are formed under the influence of factors in manufacture, storage and service (use), and may have a random character. Therefore, the internal parameters have to be considered as random functions of time. Consequently, the serviceability conditions can be met only with a certain probability,

$$P(T) = P\{X(t) \in D_x,\ \forall t \in [0, T]\}.$$

Here $X(t)$ is the random process of the internal parameter changes; $D_x$ is the set of acceptable changes of the internal parameters (the region of acceptability); $T$ is the projected operation time. The selected nominal values (ratings) of the parameters $x_r = (x_{1r}, \dots, x_{nr})$ can be considered as the components of the mean vector of the distribution of the random process $X(t)$ at time $t = 0$, i.e. $x_r^{(1)} = M[X(0)]$.

If the probability of correct fulfillment of the performance specifications within the given time, $P(T, x_r^{(1)})$, with the nominal values of the internal parameters selected at the previous step, is below the desired $P_d(T)$, it is necessary to switch to parametric synthesis of the second level. Here we mean a choice of the nominal values of the parameters with respect to the patterns of their manufacturing and operational variations.

Parametric synthesis of the second level consists in selecting such nominal values of the internal parameters which provide maximum efficiency (the maximum probability of non-failure operation during a given time, the maximum operation time between failures, etc.).

Thus, the parametric synthesis of the second level is an optimization problem with a stochastic criterion. The result of its solution is the vector of internal parameter values

$$x_r^{(2)} = \arg\max_{x_r \in D_c} P\{X(t) \in D_x,\ \forall t \in [0, T]\}.$$

Here $D_c \subset D_x$ is the space of acceptable controlled internal parameters.

When the probability of failure-free operation does not meet the requirements with the parameter values selected by the parametric synthesis of the second level, one proceeds to the parametric synthesis of the next level.

An increase of the probability of meeting the performance specifications can be achieved if some of the internal object and process parameters are tuned (adjusted). The synthesis of tuned objects will be called parametric synthesis of the third level. The task of selecting the set of parameters on which it is most appropriate to carry out the adjustment (the controlled parameters), and of changing their reasonable ranges, becomes of independent importance in the process of synthesis. After its solution arises the problem of choosing the optimal values of the tuning parameters (as shown in Abramov 2016).

The one-time adjustment problem can be solved in the process of parametric synthesis of the third level, but its solution does not guarantee the quality of the synthesized object.

The next level of parametric synthesis is the synthesis of items with multiple adjustable parameters. We call it parametric synthesis of the fourth level. Its aim is to give an answer to all the above questions: what parameters, when, and how should they be changed to ensure that the specified requirements for the object performance quality are achieved.

In the parametric synthesis of the fourth level the optimization of parameters is carried out to prevent possible losses of efficiency, and has the character of preventive corrections. For tuned items it is necessary to choose a set of tuning options (the control parameters) which allows determining the appropriate moments of preventive corrections (settings) of the parameters, and to give recommendations for choosing the optimal values of the tuned parameters. For non-tuned objects it is necessary to assign the timescales of preventive measures, to give recommendations for finding the components that need to be replaced, and to determine the parameters of the replacement components.

At this level of synthesis it is necessary to distinguish between item parameters which are not controlled during operation and item parameters under control. For the first group, the moments of change and the optimal values of the controlled parameters are defined by some prior information about the processes of operational changes in the parameters. The previous experience with a set of similar items (of a group nature) can be determined on the basis of prior data and of the results of monitoring of the particular item's parameters. The recommendations obtained in this way are valid only for a particular item, since strategies of parameter control are strictly individual. In each case the necessity of identifying specific schedules of control measurements will arise.

We now give a general frame algorithm formulation for solving reliability optimization problems, formulated as problems of parametric synthesis.

1. The problem of optimal selection of nominal parameter values. With determined characteristics of $X(t)$ and given $D_x$ and $T$, find such nonrandom initial nominal values $E_1, E_2, \dots, E_n$ for which

$$P\{(X_1(t)+E_1,\ X_2(t)+E_2,\ \dots,\ X_n(t)+E_n) \in D_x,\ \forall t \in [0, T]\} = \max.$$

2. The problem of optimal adjustment. With determined characteristics of the random process, with varying non-adjustable parameters $X_1(t), \dots, X_k(t)$ and adjustable parameters $X_{k+1}(t), \dots, X_n(t)$, find such values $E_{k+1}, \dots, E_n$ for which

$$P\{(X_1(t), \dots, X_k(t),\ X_{k+1}(t)+E_{k+1},\ \dots,\ X_n(t)+E_n) \in D_x,\ \forall t \in [0, T]\} = \max.$$

3. The problem of preventive maintenance:

a) Heuristic preventive maintenance. With determined characteristics of the a priori selected random process $X^{pr}(t)$, given the tolerance box $D_x$ and the desired time $T$, find a non-random function $E(t)$ for which

$$P\{(X_1^{pr}(t)+E_1(t),\ \dots,\ X_n^{pr}(t)+E_n(t)) \in D_x,\ \forall t \in [0, T]\} = \max.$$

In each case check whether $C(t) \le C_0$, where $C(t) = \int_0^t C(E(\tau))\, d\tau$ is the cost related to the parameter corrections (the needed maintenance), and $C_0$ is an acceptable level of maintenance costs;

b) Posterior (individual) preventive maintenance. With known characteristics of the posterior stochastic process $X^{ps}(t)$, derived from the prior data with respect to the control results, and with defined $D_x$ and $T$, find a function $E(t)$ for which

$$P\{(X_1^{ps}(t)+E_1(t),\ \dots,\ X_n^{ps}(t)+E_n(t)) \in D_x,\ \forall t \in [0, T]\} = \max.$$

In each case check whether $C_1(t) \le C_0$, where $C_1(t)$ is the cost related to the control and correction of the parameters.

Analyzing the given algorithm formulation, it is important to verify whether there are fundamental commonalities, and whether the problems at stages 1 and 2 are special cases of the problem at stage 3. Inherently, they all belong to the class of control problems for stochastic processes. Their solution should be based on the results of forecasting the parameter drift process (the technical condition) and the reliability of the optimized item.

The mathematical and computational complexity of the methods of optimal synthesis of technical systems stems from taking into account the laws of random variation of their parameters and the proposed reliability requirements. The difficulty of obtaining the necessary initial information about parametric perturbations raises a certain pessimism regarding the practical usefulness of the FP-approach. However, in recent years an active development of sufficiently radical ways of reducing the complexity of solving hard computational problems has been observed. It is based on the idea of parallelizing the search for the final results. Some experience has now been gained in the creation of algorithms and software tools for calculation and parametric optimization in the reliability design of technical devices and systems, based on the use of parallel and distributed computing technology.

To overcome the complexities associated with the deficiency or absence of information about the patterns of the stochastic processes of parameter changes in the studied systems, possible solutions are offered based on the ideas of robustness and minimax approaches. In other words, the necessary level of reliability is ensured either by the creation of systems that are insensitive to variations of their parameters (robustness), or as a result of giving them the required stamina. These approaches take into account the most unfavorable variations of the system parameters.

IV. The technology of parallel computing in parametric synthesis problems

The stochastic nature of the optimality criteria, the dimensionality of the search space, and the need to solve global optimization problems have forced researchers to seek ways of creating efficient numerical methods for solving problems of parametric synthesis (PS). One of the most radical solutions to the problems of high computational complexity is the parallelization of the solution search process.

There are several versions of PS strategies using the technology of parallel computing. The base of the first strategy is the idea of parallel methods for the calculation of the objective functions used by the optimization techniques.

The creation and implementation of a parallel analogue of the method of statistical simulation (Monte Carlo) does not cause essential difficulties. The use of parallel computation in this method is quite logical, as the idea of parallelism, i.e. the repetition of a certain process model with varying sets of data, is inherent in the very structure of the method. It is intuitively clear that the use of k independent processors and the distribution of the independent trials between them reduces the complexity of the statistical simulation almost k times. The cost of the final summation and averaging of the results is almost negligible (see Abramov 2010). A minimal sketch of such a parallel estimator is given below.
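To make the idea concrete, the following sketch estimates the probability of failure-free operation $P(T)$ by distributing independent Monte Carlo trials over k worker processes. It is only an illustration under assumed conditions: the one-dimensional linear-drift process, the box-shaped tolerance bounds, and all numerical constants are our own illustrative assumptions, not the authors' model.

```python
# Parallel Monte Carlo sketch: k workers run independent batches of trials;
# the master sums the counts.  Drift model and tolerances are assumptions.
import numpy as np
from multiprocessing import Pool

T, N_STEPS = 1.0, 100            # projected operation time and time-grid size
LOWER, UPPER = -1.0, 1.0         # assumed tolerance (acceptability) bounds
DRIFT, SIGMA = 0.3, 0.2          # assumed drift and noise level of X(t)

def trial_batch(args):
    """Run one batch of independent trials; count those staying inside D_x."""
    n_trials, seed = args
    rng = np.random.default_rng(seed)
    t = np.linspace(0.0, T, N_STEPS)
    ok = 0
    for _ in range(n_trials):
        # X(t): linear drift plus a random-walk disturbance (illustrative)
        noise = np.cumsum(rng.normal(size=N_STEPS)) / np.sqrt(N_STEPS)
        x = DRIFT * t + SIGMA * noise
        if np.all((x > LOWER) & (x < UPPER)):   # X(t) in D_x for all t in [0, T]
            ok += 1
    return ok

if __name__ == "__main__":
    k, n_per_worker = 4, 25_000                 # k slave processes
    batches = [(n_per_worker, seed) for seed in range(k)]
    with Pool(k) as pool:                       # distribute trials over k workers
        successes = sum(pool.map(trial_batch, batches))
    print("P(T) estimate:", successes / (k * n_per_worker))
```

The final summation and averaging done by the master is a single cheap reduction, which is why the speed-up is close to k.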

A further reduction of the time for solving the PS problem can be achieved by parallelization of the algorithm searching for the extreme values of the objective functions.

The simplest method for global optimization having the property of potential parallelism is the scanning (full enumeration) method. The essence of the method lies in the fact that the search area is divided into unit cells, and in each of them a point is chosen (by a particular algorithm): in the center of the cell, on the edges, or at the vertices. The values of the objective function are then viewed cell by cell, and finding the extreme values among them is a simple task. The accuracy of the method is naturally determined by how tightly the points in the search area are placed.

The main advantage of the scanning method is that, with a sufficiently dense placement of points, it is always guaranteed to find the global optimum. However, this method requires a significant amount of computation, which can be reduced by parallelization of the computational algorithm.

In PS tasks the sample set of nominal parameters is in most cases finite. This is due to the fact that the values of most parameters of radio components (resistors, capacitors, inductors, operational amplifiers, etc.) are regulated by technical specifications and standards. On the basis of experience and intuition, the developer can usually set the necessary options for the active elements used, so the nominal values of their parameters are known. In cases where it is possible to select the nominal values of parameters in a continuous range, the researcher can use a discretization procedure. Thus, in most cases it can be assumed that the sample set of nominal parameters for solving the PS problem is discrete.

In the simplest case, the search for the solution is reduced to exhaustive enumeration over the full set of possible nominal values of the internal parameters, $D_r^{in} = \{x_r^{in} \mid x_r \in D_x\}$, at each point $x_r^{in}$ of which the value of the objective function needs to be found. Taking into account the cyclical nature of the calculation procedure for the objective functions, it is easy to apply data parallelism.

Let the solution process be performed using $k$ processors (slaves). The set $D_r^{in}$ is partitioned into non-overlapping subsets, $D_r^{in} = \bigcup_{j=1}^{k} D_{r,j}^{in}$, and the $j$-th processor is assigned one subset of the original data. Thus, each $j$-th processor calculates the objective function for all elements of the set $D_{r,j}^{in}$ and finds the best vector of parameter values for the assigned subarea. The results are transmitted to the master processor, which selects the optimal vector from the values throughout the area $D_r^{in}$. Such partitioning of the entire search set into non-overlapping subsets constitutes the essence of the block scheduling of the parallel distributed process.

For a symmetric computing cluster consisting of $k$ equal computing nodes, the total number of selected points is divided into equal amounts for each of the subordinate processes. In the case of an asymmetric cluster it is necessary to perform a preliminary complexity-assessment procedure. As a typical procedure of the optimization method it takes a single simulation of the item's operation, verification of the performance specifications, and calculation of the optimality criterion. In this case the computational load is divided between the components of the complex in proportion to their productivity.

Upon completion of the program scheduling of the parallel computation process, each computing component of the complex receives the original data defining the borders of its subset $D_{r,j}^{in}$. At the end of the computation the main processor receives the results from the subordinates and generates the final result of the discrete optimization over the entire set $D_r^{in}$. A sketch of this master-slave scheme is given below.
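The following sketch illustrates this master-slave partitioning on a discrete set of nominal values. The objective function is a stand-in surrogate (in practice it would be a reliability estimate such as the Monte Carlo one above); the grid, the number of workers, and all constants are our illustrative assumptions.

```python
# Master-slave scanning sketch: the discrete set D_r^in is split into k
# non-overlapping blocks, each worker scans its block exhaustively, and the
# master selects the best of the k partial results.
import numpy as np
from multiprocessing import Pool

def objective(x):
    """Stand-in objective; in practice an estimate of failure-free probability."""
    return float(np.exp(-sum(v * v for v in x)))    # illustrative surrogate

def scan_block(block):
    """Slave: exhaustively scan one block and return its best point."""
    return max(block, key=objective)

if __name__ == "__main__":
    grid = np.linspace(-1.0, 1.0, 21)
    d_r_in = [(a, b) for a in grid for b in grid]   # discrete set of nominals
    k = 4                                           # number of slave processes
    blocks = [d_r_in[j::k] for j in range(k)]       # non-overlapping partition
    with Pool(k) as pool:                           # each slave scans its block
        candidates = pool.map(scan_block, blocks)
    best = max(candidates, key=objective)           # master picks the optimum
    print("optimal nominal vector:", best)
```

For an asymmetric cluster the same scheme applies, with block sizes chosen in proportion to the measured productivity of each node.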

Another possible strategy of PS is based on the design of the region of admissible values of the internal parameters (the region of acceptability) $D_x$. The attractiveness of this strategy is related to the possibility of decomposing the general PS task into two subtasks. The first one consists in the construction, analysis and approximation of the region $D_x$. This is the task of highest computational complexity, as it is connected with the necessity of multiple calculations of the values of the output parameters of the system (see Abramov, Katueva and Nazarov 2006). The second subtask involves the calculation of the objective function and finding the optimal values of the nominal parameters using the information about the region $D_x$. Obtaining the solution in this case does not require referring to the model of the investigated system, which significantly reduces the complexity of the parametric synthesis.

Thus, the PS strategy in this case consists of two stages, the first concerned with the design of the region of permissible parameter variations (the region of acceptability) $D_x$. Parallel algorithms for the design of the region of acceptability are given in (Abramov and Nazarov 2015).

The second stage is focused on the search for optimal solutions. With a known region of acceptability, the complexity of calculating the values of the objective function and searching for the extremum is significantly reduced. In this case it is not necessary to compute the values of the system's output parameters.

In addition, a significant reduction of computational costs and efforts can be achieved by using the region $D_x$ in parallel analogues of search optimization methods. Thus, when using the PS strategy based on the design of the region of acceptability, the solution of the problem is carried out in two phases, of which the first can be considered a pre-requisite (the construction of the region $D_x$), and the second the optimization itself. A sketch of this two-stage scheme follows.
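The sketch below illustrates the decomposition under assumed conditions: the toy two-parameter system model, the grid, and the Gaussian parameter-scatter model are our own illustrative assumptions. Stage 1 approximates $D_x$ on a grid by calling the expensive system model; stage 2 estimates the objective for candidate nominals using only the stored approximation, with no further model calls.

```python
# Two-stage PS sketch: stage 1 builds a grid approximation of the region of
# acceptability D_x; stage 2 scores nominal values against that grid only.
import numpy as np

def output_ok(a, b):
    """Illustrative system model: checks the serviceability specification."""
    return a * a + b * b <= 1.0          # assumed performance specification

# Stage 1 (computationally heavy): grid approximation of D_x via model calls.
grid = np.linspace(-1.5, 1.5, 61)
inside = np.array([[output_ok(a, b) for b in grid] for a in grid])

def p_estimate(nominal, sigma=0.1, n=2000, rng=np.random.default_rng(1)):
    """Stage 2: estimate P(X in D_x) for a nominal point from the grid alone."""
    pts = rng.normal(nominal, sigma, size=(n, 2))       # parameter scatter model
    i = np.clip(np.searchsorted(grid, pts[:, 0]), 0, len(grid) - 1)
    j = np.clip(np.searchsorted(grid, pts[:, 1]), 0, len(grid) - 1)
    return inside[i, j].mean()                          # no system-model calls

# Example: compare two candidate nominal vectors without re-running the model.
print(p_estimate((0.0, 0.0)), p_estimate((0.7, 0.7)))
```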

V. Other areas of use of the functional-parametric approach

The use of the FP-approach appears to be promising in solving the problems of ensuring, assessing and maintaining the reliable and safe operation of complex technical systems for critically important applications.

A. The problem of preventing failures and reducing technogenic risks is of particular relevance for technical objects whose failures are associated with large financial losses or with catastrophic consequences. Most of these complex systems are produced in small numbers, operated in variable conditions, and realize extreme technologies. Such systems are commonly called unique.

In the study of technogenic risks, where a risk event is considered as a loss of functionality (failure) of a technical object, the correct design is a key issue. The characteristics of the objects are the operating time (uptime) or the times to failure, and the probability characteristics determined by the methods of mathematical statistics and reliability theory. Unfortunately, in the study of complex systems it is not always possible to get fairly representative failure statistics. This is because such systems are manufactured in small numbers, as specific instances when needed, and are operated in varying conditions. Their failures are rare events. Moreover, the problem is to prevent failures rather than to accumulate their statistics.

B. In solving the problem of managing technogenic risks on the basis of the FP-approach, the constructors and designers should be able to estimate the current technical state of the system in order to predict its changes (the time of transition to a critical state). Moreover, they have to determine the respective total and operating costs related to the monitoring of the states, the carrying out of preventive measures, and the damages upon occurrence of a risk event.

C. It is essential to note that risk management related to the solution of the problem of individual planning of the operation is an important factor. The basis of the individual approach to the problems of operation control is the prediction of changes in the parameters of the technical state, based on the results of monitoring. Prediction of the technical state based on observations of each particular item can be carried out only if a sufficient amount of prior characteristics of the processes occurring in similar items is known (or models of the random process under different parameter variations are available). The classical methods of optimal estimation and extrapolation may serve as the theoretical basis of technical state prediction. The result of such predictions is the point estimate of the controlled parameter at some future point in time. The problem of risk management is associated with finding the moment of the first outage of the parameters outside the region of acceptability. The use of this approach is allowable if the probabilistic characteristics of the measurement errors and the accepted model of the parameter drift are known exactly. A simple sketch of such an extrapolation is given below.
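As a minimal illustration of this classical extrapolation step, the sketch below fits a linear drift to monitored values of a single parameter and extrapolates the moment of first exit from the tolerance range. The monitoring data, the linear drift model and the tolerance bound are all illustrative assumptions.

```python
# Individual prediction sketch: least-squares fit of a linear parameter drift
# and extrapolation of the first-outage moment beyond the tolerance bound.
import numpy as np

t_obs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])        # monitoring times (assumed)
x_obs = np.array([0.02, 0.11, 0.19, 0.33, 0.41])   # measured parameter values
upper = 1.0                                        # tolerance bound of D_x

slope, intercept = np.polyfit(t_obs, x_obs, 1)     # classical LSQ drift estimate
t_exit = (upper - intercept) / slope               # predicted first-outage time
print(f"estimated drift: {slope:.3f} per unit time; exit at t = {t_exit:.1f}")
```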

The main difficulties in solving the problem of forecasting the state and synthesizing the strategy of technogenic risk management are that the prediction has to be carried out for individual items. They are produced in small volumes, and the sources of information are small sample sets from control tests. In the presence of interference in the control errors, the statistical properties of the estimates are not known. In these circumstances, the classical methods of engineering statistics and the theory of random processes lose their attractive properties. Their use in prediction leads to significant errors and low prediction accuracy. Therefore, it is necessary to expand the initial information base by conducting comprehensive object surveys and subsequent monitoring of its states. The development of new methods of prediction, supplementing the already existing knowledge, is imminent. Some approaches to solving the problem of individual prediction of reliability issues in complex technical engineering systems, and of operations planning in conditions of deficient and incomplete initial information, were presented here. This approach allows us to obtain suitable and sufficiently reliable results in these circumstances, as reviewed in (Abramov and Nazarov 2016).

D. The basic ideas of the FP-approach are useful in solving problems of ensuring the survivability and safety of technical objects.

Survivability is usually described as the ability of a system to maintain its basic functions (albeit with some loss of quality of performance) under adverse effects of environmental factors beyond the designed operating conditions. It is worth noticing that adverse impacts may cause abnormal changes of the external parameters, which can lead to unacceptable deviations of the internal and output parameters. It seems obvious that the basic ideas of the FP-approach can be used in the solution of the problem of ensuring survivability. In this case a description of the possible anomalous effects and parametric changes is necessary. In this way the levels of allowable degradation of the system (changes of the quality indicators of functioning) can be specified. Ensuring survivability is in general also based on creating a certain (limit-exceeding) redundancy against the alleged anomalous deviations of the parameters, and an optimal strategy for using this reserve can be established. In this case, some of the reserve components should be used only when an abnormal situation occurs.

E. Safety is understood as a key property of dynamical systems which allows maintaining a safe condition at any stage of the life cycle. The safety theory is the science of foreseeing dangerous conditions (catastrophic, emergency, dangerous, etc.) threatening the destruction of systems or the environment, and of measures to prevent them. The safety theory cannot proceed from mass phenomena with dangerous consequences: a single emergency is sufficient for the destruction of a system. It is also important that the basis of safety management (the prevention of dangerous situations) is the need for monitoring, assessing and forecasting of the systems studied and used. Thus, an objective assessment of the safety of a system can be made by observing the process of changing of its states. For this purpose it is necessary to build a region of safe states (similar to the region of acceptability in reliability described above). Such monitoring can be achieved by highlighting all the modes leading to a breach of safety (e.g. a crash in the system). In order to prevent a breach of safety it is needed to create a safety reserve, to predict the changes (decrease) of this reserve, and to take decisions on timely reserve refreshment or termination of the system operation.

VI. Conclusions

In this paper we formulated a conceptual algorithmic approach which combines traditional theoretical and practical steps in assessing the reliability of technical systems subject to gradual, time-dependent failures. This approach is called functional-parametric (FP), since it treats possible parameter variability under tolerance-type restrictions. Under the FP approach, reliability assessment is based on the creation and optimal use of reserves (margins) of admissible variations of the system and process parameters. Monitoring of the governing parameters, prediction of changes in the parameters, and prevention of their exits out of the admissible range are essential components of the FP approach. Corrections of parameter values, adjustments, or replacements of worn out components are part of the system control. Reliability support can successfully be used in several enhanced forms of parametric synthesis tasks. The FP approach naturally arises with the numerous technical research tools and algorithms in reliability and risk studies that the authors discussed earlier.

It is outlined that the basic ideas of the FP approach and the tools of its practical implementation can be applied in solving a wide range of other problems in risk theory, safety, survivability and many other fields.

References

[1] Abramov, O., Katueva, Y. and Nazarov, D. (2006). The Definition of Acceptability Region for Parametric Synthesis Problem. Proceedings of the 6th Asian Control Conference (ASCC'2006), pp. 780-786.

[2] Abramov, O. (2006). Reliability-Directed Computer-Aided Design System. Reliability: Theory & Applications, No. 1, pp. 35-40.

[3] Abramov, O. (2010). Parallel Algorithms for Computing and Optimizing Reliability with Respect to Gradual Failures. Automation and Remote Control, Vol. 71, No. 7, pp. 1394-1402.

[4] Abramov, O. and Nazarov, D. (2015). Region of Acceptability using Reliability-oriented Design. Recent Developments on Reliability, Maintenance and Safety, WIT Transactions on Engineering Sciences, Vol. 108, pp. 376-387.

[5] Abramov, O. (2016). Choosing Optimal Values of Tuning Parameters for Technical Devices and Systems. Automation and Remote Control, Vol. 77, No. 4, pp. 594-603.

[6] Abramov, O. and Nazarov, D. (2016). Condition-Based Maintenance by Minimax Criteria. Applied Mathematics in Engineering and Reliability: Proceedings of the 1st International Conference on Applied Mathematics in Engineering and Reliability, pp. 91-94.

[7] Becker, P. and Jensen, F. (1977). Design of Systems and Circuits for Maximum Reliability or Maximum Production Yield. New York: McGraw-Hill Book Company.

[8] Dimitrov, B. (1998). Quality Evaluation Methods - A Review. Economics, Quality, Control (EQC Journal and Newsletter for Quality and Reliability), Vol. 13, No. 2, pp. 117-128.

[9] Gnedenko, B. V., Beliaev, Y. K. and Soloviev, A. D. (1969). Mathematical Methods of Reliability Theory. New York: Academic Press.
