
MODELS, METHODS AND MEANS FOR SOLVING THE CHALLENGES IN CO-DESIGN AND TESTING OF COMPUTER SYSTEMS AND THEIR COMPONENTS

Julia Drozd, Alexander Drozd

Odessa National Polytechnic University, Odessa, Ukraine
e-mail: drozd@ukr.net

ABSTRACT

This paper addresses the problem of developing the resources used to solve challenges in co-design and testing of computer systems and their components. Both target and natural resources are considered. Target resources include models, methods and means. Natural resources are examined as particularities of the target resources. The natural development of target resources, structured according to the particularities of the Universe, which is regarded as parallel and approximate, is analysed. The role of natural resources in removing contradictions between target resources is shown.

1 INTRODUCTION

The artificial Computer World created by humans occupies a special place in the development of the Universe and its components. The Computer World is the most dynamically developing area of knowledge and human creativity. In a short historical term it has travelled a huge path of development, whose analysis allows us to come closer to understanding the laws of this development and of the development of the World as a whole.

Is the creation of the Computer World accidental or natural? What role is allocated to the Computer World in modern development? What role is played in this development by the human, and to what degree is he free in the choice of the problems to be solved and of their solutions? These questions are considered in section 2, which operates with the concepts of models, methods, means and target resources as a whole for solving problems of synthesis and analysis, including the challenges of co-design and testing of computer systems and their components.

What are the basic directions of development of target resources in co-design and testing of computer systems? What are the natural resources? How can they be revealed and used? How is the development of resources stimulated? Answers to these questions are offered in section 3, using examples of the development of models, methods and means in information and computer technologies.

2 THE WORLD CREATED BY THE HUMAN

2.1 SOLVING THE CHALLENGES

The development of the Computer World can be analysed as a process of solving challenges, including problems of co-design and testing of computer systems and their components.

A problem can be solved when two conditions are fulfilled: a set of works is executed within a limited time, achieving a certain productivity, and reliable results are obtained. The solution of the problem also has a third condition: the investment of certain resources, which are further referred to as target resources.

2.2 TARGET RESOURCES

Target resources contain everything necessary for solving a problem: models, methods and means.

Models are our ideas of the Universe and its components. We think using models. Methods describe the transformations which are carried out with resources. We operate using methods. Means allow these transformations to be realized. Models and methods can be related to the information part of target resources, and means to the technological part, including materials and tools, which are made in their turn with the use of models, methods and tools.

2.3 STAGES OF INFORMATION ACCUMULATION

The human is a tool in solving the challenge of developing models, methods and means. Models are our knowledge, and methods are our skills. These information resources make up the two sides of human experience. In this respect the creation of the Computer World is not accidental. It is one of the stages in the accumulation of information in the form of human experience. The previous stages, writing and publishing, were noted in the keynote speech of academician A.P. Ershov (Ershov 1981). They were preceded by a stage of accumulation of human experience through its transfer "by word of mouth". The development of writing raised the accuracy of both stored knowledge and skills. Publishing considerably increased the level of replication, which promotes the distribution of information and raises its reliability in the process of accumulation.

With the development of information and computer technologies, the stage of formalization of human experience began. On the one hand, this is the creation of databases and knowledge bases connected by networks. Search systems provide a high degree of accessibility of the data, which becomes a necessary condition for using these data in view of the rapid growth of their amount. On the other hand, the formalization of human experience is represented in the development of software with ready-made solutions of problems. The program for evaluating the function sin x, once written, has forever moved this problem to the set of closed problems, as has the program for the docking of the Soyuz and Apollo spacecraft.

It is necessary to note that the process of information accumulation also took place, and still occurs, outside human cerebration, for example at the genetic level. Training, which begins with the fixing of conditioned reflexes in worms and proceeds up to the complex behaviour of mammals and birds (Akimushkin 1991), also constitutes stages of information accumulation. These stages began before the accumulation of human experience. Therefore it is possible to assume that the development of resources by humans belongs to intermediate stages of information accumulation, which has a more general nature.

3 DEVELOPMENT OF RESOURCES

3.1 A RESOURCE AS AN ELEMENT OF THE UNIVERSE

The development of resources can be considered from two positions: organizational and functional. In the first case the resource is represented as a system of elements. The second position shows a resource as an element of a system in interrelation with other elements.

All resources are elements of such a system as our Universe, which itself cannot be investigated as an element of a system. Studying the features of the Universe is possible only through research of its elements, since these elements inherit the features of the Universe, being structured according to its realities.

The development of the Computer World most clearly shows the particularities of the Universe, characterizing it as parallel and approximate. The growing level of parallelism and fuzziness in the solutions of challenges in co-design and testing of computer systems and their components testifies to this. The development of personal computers can serve as an example. Parallelism of structural decisions has passed a path of development at both the circuit and the system level: from the realization of sequential and series-parallel operations in the structure of arithmetic-logic units up to single-cycle iterative array circuits designed for the performance of each operation in the structure of a pipeline.

The processing of approximate data has developed from optional coprocessors up to several floating-point pipelines in the structure of the CPU, and up to many thousands of floating-point pipelines in the graphics processor used for performing parallel calculations with CUDA technology (Guk 1997, NVIDIA CUDA 2007).

The Universe is a generator of approximate data. Everything in the Universe exists within tolerances. Results of measurements are approximate data. Therefore the importance of computer processing of approximate data permanently grows.

Models and methods also develop from exact to approximate, changing our ideas about their adequacy with respect to the features of the Universe.

For example, the number has passed a path of development from the code word up to approximate representation in floating-point formats with two components: a significand and an exponent. The size of a code word determines a strong dependence between accuracy and range size for exact data. In a floating-point number, the sizes of the significand and the exponent determine accuracy and range size independently. This independence, or parallelism, follows from the features of the parallel and fuzzy Universe, which produces different requirements for the accuracy and the range of data.
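A minimal numerical sketch of this independence can be given in Python with standard IEEE 754 doubles (the particular format is an assumption of this illustration, not one discussed in the paper). The relative step between neighbouring representable numbers is fixed by the significand width and stays the same across the huge range fixed by the exponent:

    import math, sys

    # x = m * 2**e with 0.5 <= |m| < 1: the significand m sets accuracy, the exponent e sets range
    for x in (1.5e-300, 1.5, 1.5e300):
        m, e = math.frexp(x)
        relative_step = math.ulp(x) / x     # spacing to the next representable number
        print(f"x = {x:>9.3g}   m = {m}   e = {e:5d}   relative step = {relative_step:.3e}")

    # All three relative steps are about 2**-52: accuracy does not depend on the range,
    # whereas widening the range of a fixed-width integer code word coarsens its precision.
    print("largest double:", sys.float_info.max)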

The model of an arithmetic operation has passed a path from a complete exact operation to an approximate truncated one, executed in simultaneous units (parallel adders and shifters, iterative array multipliers and dividers) using floating-point formats with single accuracy (Kahan 1996).

The model of the calculated result is transformed from an exact representation of a number into an approximate one, which has exact most significant bits (MSB) and non-exact least significant bits (LSB). Such a transformation develops the correct result into a reliable one, which may contain errors caused by circuit faults in the LSB. These errors are inessential for the reliability of the result (Drozd 2003).
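The following Python sketch (an assumed illustration of a truncated multiplication, not a reproduction of the authors' circuits) drops the partial products that feed only the lower half of the product, as a truncated operation does when only a single-accuracy result is kept, and shows that the numerical error stays within a few units of the last kept position, i.e. in the LSB of the result:

    import random

    def full_msb(a, b, n):
        # upper n bits of the exact 2n-bit product
        return (a * b) >> n

    def truncated_msb(a, b, n):
        # skip partial products with i + j < n - 1, i.e. those feeding only the lower half
        acc = 0
        for i in range(n):
            if (a >> i) & 1:
                for j in range(n):
                    if (b >> j) & 1 and i + j >= n - 1:
                        acc += 1 << (i + j)
        return acc >> n

    n, worst = 8, 0
    for _ in range(20000):
        a, b = random.getrandbits(n), random.getrandbits(n)
        worst = max(worst, full_msb(a, b, n) - truncated_msb(a, b, n))
    print("worst error in the kept result:", worst)   # a few low-order units only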

The increase of the level of parallelism in solved problems makes exact methods inefficient, replacing them with approximate ones. For example, the testing of software products containing thousands of modules becomes approximate (Pomorova 2009).

Thus, the development of target resources occurs in a natural way, being structured according to the features of the Universe, first of all its parallelism and fuzziness.

3.2 NATURAL RESOURCES

Target resources are a cost-based part of the solution of a task. First of all this is obvious for technological resources. Payment for information resources can enter into the cost of the tools developed with the use of valuable models and methods. Information resources can also be distributed freely, as they are paid for by the work of previous generations of researchers.

The solution of a challenge can use not only paid target resources but also natural resources, which are free, since they are given to target resources as their particularities. The free character of natural resources makes them attractive, defining the problem of their study.

Two kinds of natural resources are the most known: natural information redundancy and natural time redundancy, used in on-line testing of digital components (Savchenko 1977, Romankevich 1979).

The definition of natural resources as particularities of the target resources essentially expands the set of their kinds. As a rule, a challenge is solved stage by stage. Particularities of the target resources involved at the previous stages can be used as natural resources at the following stages.

The use of both kinds of natural redundancy is completely described by the following model of natural resource activation: these kinds are laid down at the design stage of digital systems and their components, and are used in the solution of the subsequent problem of on-line testing.

It is necessary to note that resources grow together, being structured according to the same features of the Universe. This can be observed in their organization. For example, the expansion of the set of solved problems is carried out by increasing productivity and reliability, and also due to resource-saving. The basic approach to increasing productivity consists in replicating operational elements and in perfecting the functions that choose results from parallel branches of calculations (Guk 2003). The reliability of results is increased using fault-tolerant structures, which contain replicated operational elements and functions for the choice of reliable results (Ushakov 2003). One of the approaches to power-saving consists in lowering the operating frequency of the operational elements, which is compensated by duplicating operational elements and using choice functions (Chandracasan 1992).
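A generic Python sketch of this "replicate and choose" pattern (an illustration assumed here, not a structure taken from the cited works) is majority voting over replicated operational elements:

    from collections import Counter

    def choose_reliable(results):
        # the choice function: return the value produced by the majority of replicas
        value, _ = Counter(results).most_common(1)[0]
        return value

    # three replicas of the same operational element; the last one is assumed faulty
    replicas = (lambda x: x * x, lambda x: x * x, lambda x: x * x + 1)
    print(choose_reliable([f(12) for f in replicas]))   # 144 - the fault is masked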

It is important to note that the replication of operational elements is only the first step of parallelism development. At the following steps, parallelism transforms operational elements into versions with various kinds of diversity (Kharchenko 2008), reducing both the amount and the significance of the common parts of the versions, raising their degree of independence and developing their particularities.

The further development of structures, from replication up to diversity, is supported by the method of results preparation, which under various names occupies the space of solutions with a clear advantage (Drozd 2004).

Thus, resources in their development acquire a similar structure, working on all components of the solution of a problem: productivity, reliability and resource-saving. The natural convergence of target resources towards each other on a uniform basis, developed under the common features of the Universe, eliminates contradictions between them, demonstrating natural resources.

3.3 MOTIVATION OF RESOURCES DEVELOPMENT

The natural development of resources is stimulated by the carrot-and-stick method. The stick is natural selection, and the carrot is the gifts obtained by realizing natural resources. When moving against the stream of the Universe's development, an increase of both efforts and spent resources leads to a decline of the result.

This can be shown by considering the reliability of on-line testing methods, which is estimated for digital circuits using both the probability of an essential error and the error detection probability. The probability of an essential error is the basic characteristic of a computing circuit as an object of on-line testing. The main characteristic of an on-line testing method is the error detection probability. The reliability of on-line testing methods can be visually considered using both probabilities on the sides of the unit square shown in Figure 1 (Drozd 2006).


[Figure 1. Reliability of on-line testing methods: a - for traditional methods; b - for residue checking of the truncated operation]

The horizontal side of this square contains the sum of the probabilities PE and PN = 1 - PE that an occurred error is essential or inessential, respectively. The vertical side of the square contains the sum of the probabilities PD and PS = 1 - PD of error detection and error skipping.


The square is split into four parts that define the probabilities connected by the following formula:

PDE + PDN + PSE + PSN = 1, where PDE = PD PE and PDN = PD PN are the detection probabilities of an essential and an inessential error, respectively; PSE = PS PE and PSN = PS PN are the skipping probabilities of an essential and an inessential error, respectively.

An on-line testing method checks the result reliably when it detects essential errors and skips inessential ones. Its reliability therefore comprises the probabilities of the first and last parts of the square:

R = PDE + PSN = PD PE + (1 - PD) (1 - PE). (1)
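Formula (1) is easy to examine numerically. In the Python sketch below the values of PD and PE are assumed only for illustration; they are not taken from the paper:

    def reliability(p_d, p_e):
        # R = PD*PE + (1 - PD)*(1 - PE): detect essential errors, skip inessential ones
        return p_d * p_e + (1.0 - p_d) * (1.0 - p_e)

    # a method with high error detection probability applied to a complete operation,
    # where essential errors are rare, versus the same method on a truncated operation
    print(reliability(0.95, 0.05))   # ~0.10 - almost every detected error is inessential
    print(reliability(0.95, 0.70))   # ~0.68 - the same detection now mostly catches essential errors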

The features of approximate calculations significantly reduce the probability PE. The multiplication used in the representation of approximate data in floating-point formats halves the probability PE for complete arithmetic operations. This probability is additionally reduced by the operations of denormalization and normalization of the mantissas in the results of all previous and following operations, respectively. That is why the traditional on-line testing methods, such as parity prediction and residue checking (based on the self-checking circuit theory (Anderson 1973)), which check the complete arithmetic operations with a high error detection probability PD >> PS, have very low reliability. This fact is shown in Figure 1, a, where the high error detection probability is mostly spent on the detection of inessential errors. This leads to the rejection of erroneous but reliable results and reduces the reliability of the traditional on-line testing methods: increasing the error detection probability reduces their reliability.

According to formula (1), a high probability of error detection PD > 0.5 can provide a high reliability of on-line testing methods R > 0.5 only in the case of a high probability of an essential error PE > 0.5. This is achievable only in the performance of truncated operations. These operations become effective when two conditions are fulfilled. First, computing circuits are developed up to the parallelism level of simultaneous (single-cycle) circuits. The second condition consists in the development of the operation up to the level of calculation with single accuracy, when the size of the mantissa of the result is inherited from the mantissa of the operand, which is typical for floating-point formats (Goldberg 1991).

Under these conditions the truncated operation almost halves the complexity of the digital circuit and reduces the calculation time (Savelyev 1987, Rabinovich 1980). In addition, an increase of the reliability of on-line testing with a high error detection probability becomes possible, as shown in Figure 1, b.

Another way of increasing the reliability of on-line testing methods is realized for the most widespread case of a low probability PE < 0.5 under the condition of error detection with a low probability PD < 0.5. This way can be considered by the example of a known on-line testing method which checks an iterative array squarer using the forbidden values of the result residue modulo M. The iterative array squarer uses the operand A to calculate the result Q = A^2. The error detection circuit of this method is shown in Figure 2 (Drozd 2008).


[Figure 2. Error detection scheme of a squarer]

The circuit contains two blocks. Block B1 calculates the result residue modulo M. Block B2 forms the check code E, which identifies the forbidden values of the residue R.

Let us consider the checking for modulo M = 15. On the first half of the values of modulo 15, the number A accepts values from 0 up to 7, for which the squares and their residues modulo 15 accept the values 0, 1, 4, 9, 16, 25, 36, 49 and 0, 1, 4, 9, 1, 10, 6, 4, respectively. On the second half of the modulo values the residues accept the same values, only in reverse order, since A^2 mod M = (M - A)^2 mod M. The calculated residues make up the set of allowed values 0, 1, 4, 6, 9 and 10. The other values of the modulo are forbidden. Assuming an equal probability of occurrence of any value of the number A, the frequency of occurrence of the allowed residues is determined. The zero occurs only once in the set of the result residues, the residues 1 and 4 each occur twice in half of this set, and the residues 6, 9 and 10 once. Hence, the allowed values occur with frequencies 1, 4, 4, 2, 2 and 2, respectively.
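These values are easy to reproduce; the short Python script below simply enumerates the squares modulo 15 described above:

    M = 15
    residues = [(a * a) % M for a in range(M)]
    allowed = sorted(set(residues))
    frequency = {r: residues.count(r) for r in allowed}

    print(allowed)                                   # [0, 1, 4, 6, 9, 10]
    print(frequency)                                 # {0: 1, 1: 4, 4: 4, 6: 2, 9: 2, 10: 2}
    print(sorted(set(range(M)) - set(allowed)))      # the forbidden values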

Typical faults of the iterative array squarer distort the result by the weight of one bit, determining the kind of errors as ±2^W, where W is the number of the result bit. The amount N of such errors modulo M is finite. Errors modulo M = 15 accept the values ±1, ±2, ±4 and ±8. An error is detected if the sum of its value and an allowed residue value equals a forbidden value.

The frequency of error detection for each forbidden value is shown in Table 1.

Table 1. Frequency of error detection

 Z \ Y |  1 |  2 |  4 |  8 | -1 | -2 | -4 | -8 | Sum
-------+----+----+----+----+----+----+----+----+-----
    2  |  4 |  1 |    |  2 |    |  4 |  2 |  2 |  15
    3  |    |  4 |    |  2 |  4 |    |    |    |  10
    5  |  4 |    |  4 |    |  2 |    |  2 |    |  12
    7  |  2 |    |    |    |    |  2 |    |  1 |   5
    8  |    |  2 |  4 |  1 |  2 |  2 |    |  4 |  15
   11  |  2 |  2 |    |    |    |    |  1 |  4 |   9
   12  |    |  2 |    |  4 |    |    |  4 |    |  10
   13  |    |    |  2 |    |    |  1 |    |  2 |   5
   14  |    |    |  2 |  2 |  1 |  4 |    |    |   9

The rows and columns of the table correspond to the forbidden values Z and the values Y of the errors modulo 15, respectively. The last column contains the sum of the elements in each row.

The probability of error detection can be evaluated as PD = S / (M · N). The maximal value of the probability PD is obtained in the case of checking all the forbidden values, when S = 90 and PD = 0.75. The minimal value of the probability PD is calculated from the minimal sum of row values whose nonzero elements cover all the errors. The minimal probability PD = 0.15 is achieved when only the two forbidden values 11 and 14 are checked.
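Table 1 and both estimates of PD can be recomputed directly from the definitions given above; the Python script below also evaluates formula (1) for PE = 0.1, reproducing the figures discussed next:

    M = 15
    errors = (1, 2, 4, 8, -1, -2, -4, -8)                    # N = 8 error values
    residues = [(a * a) % M for a in range(M)]
    allowed = set(residues)
    freq = {r: residues.count(r) for r in allowed}
    forbidden = [z for z in range(M) if z not in allowed]

    # row sums of Table 1: error y is detected when an allowed residue r gives r + y = z (mod M)
    row = {z: sum(freq[(z - y) % M] for y in errors if (z - y) % M in allowed) for z in forbidden}

    S = sum(row.values())
    print(S, S / (M * len(errors)))                                      # 90  0.75 - all forbidden values checked
    print(row[11] + row[14], (row[11] + row[14]) / (M * len(errors)))    # 18  0.15 - only Z = 11 and Z = 14

    for p_d in (0.75, 0.15):                                 # reliability by formula (1) at PE = 0.1
        print(p_d, round(p_d * 0.1 + (1 - p_d) * 0.9, 2))    # 0.3 and 0.78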

In the case of checking exact data all errors are essential, PE = 1. This determines the reliability by formula (1) as R = PD = 0.75. For approximate calculations, in which errors are essential with probability PE = 0.1, the reliability is estimated for the cases of maximal and minimal probability PD as R = 0.30 and R = 0.78, respectively.

Thus, increasing the number of checked forbidden values from 2 up to 9 reduces the reliability of the on-line testing method from 0.78 down to 0.30, i.e. by a factor of 2.6.

An important role in the natural development of resources is played by the method of results preparation. This method allows the solution of a task to begin before all the initial data are obtained, simultaneously (in parallel) with their formation. This determines an approximate way of solving the task: first a set of possible results is obtained, and then one result is selected from this set upon receipt of the missing data.

The efficiency of the method is explained by its structuring into the particularities of the parallel and approximate Universe. Comparing its parallelism with iterative array and pipeline structures, it is necessary to note that two types of dependences interfere with the parallelization of calculations: on the data and on control. In the first case an operation is executed sequentially with the formation of its initial data as results of the previous operations. In the second case the branching of algorithms obstructs the parallelization of calculations. Matrix (iterative array) parallelism is realized in the absence of both types of dependences. Pipeline parallelism removes the dependence on data. The method of results preparation, and only it, removes the dependence on control. The method of results preparation not only reduces the calculation time, but simultaneously reduces hardware expenses, which are traditionally opposed to speed.
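One widely known circuit that fits this pattern, chosen here only as an illustration (the paper itself cites memory addressing and FPGA LUTs as its examples), is carry-select style addition: the upper half of the sum is prepared for both possible values of the not yet known carry, and the right version is chosen when the carry arrives. A Python sketch:

    def add_with_results_preparation(a, b, width=16):
        half = width // 2
        mask = (1 << half) - 1
        a_lo, a_hi = a & mask, a >> half
        b_lo, b_hi = b & mask, b >> half

        # prepare both possible upper results before the carry from the lower half is known
        hi_if_carry0 = a_hi + b_hi
        hi_if_carry1 = a_hi + b_hi + 1

        lo = a_lo + b_lo                    # the missing datum - the carry - appears here
        hi = hi_if_carry1 if (lo >> half) else hi_if_carry0   # choose one prepared result
        return (hi << half) | (lo & mask)

    assert add_with_results_preparation(0x1234, 0x0FCD) == 0x1234 + 0x0FCD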

The simplest and fastest realization of a memory (decoding on halves of the address), carried out in the 2.5D architecture (Ugryumov 2004), is also an example of the use of the method of results preparation. In memory blocks with the 2.5D structure, the hardware expenses for decoding a 16-bit address are reduced 85 times (Drozd 2012).

The efficiency of the method grows considerably when the choice is made among several prepared results. Libraries, where sets of results are continuously prepared and selected, are constructed in this way. All modern co-design of digital systems and components is based on the method of results preparation. For example, every FPGA chip is initially a preform for a set of projects, and the chip programmed for one project contains results prepared (for various input data) in the tables written into the LUT memory (Altera Corporation 2004).
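The LUT itself can be modelled in a few lines of Python (a behavioural sketch, not vendor code): all sixteen results of a four-input Boolean function are prepared when the table is written, and the inputs only select one of them:

    def make_lut4(func):
        # prepare the results for every input combination at "programming" time
        table = [func((i >> 3) & 1, (i >> 2) & 1, (i >> 1) & 1, i & 1) for i in range(16)]
        def lut(a, b, c, d):
            return table[(a << 3) | (b << 2) | (c << 1) | d]   # selection only, no recomputation
        return lut

    f = make_lut4(lambda a, b, c, d: (a ^ b) & (c | d))
    print(f(1, 0, 0, 1))   # 1 - read from the prepared table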

Due to structuring into the reality of the Universe's development, FPGA projects receive features that allow a whole set of characteristics to be provided at a high level: productivity of calculations and reliability of their results, universality, efficiency of designing, adaptability to manufacture, flexibility of decisions, and, most importantly, the combination of the achievable levels, which testifies to their mutual consistency.

Development of resources by way of elimination of contradictions can be taken into account for predicting such development in concrete applications. For example, at present there is a contradiction between checkability, which plays a special role in the digital components of safety-critical systems (Drozd 2011), and power-saving. To maintain checkability it is necessary to exercise the points of digital circuits by switching them, and such switching makes up the basic part of the dynamic component of power consumption. Therefore it is possible to assume that the accumulator principle will win in digital circuitry, as used, for example, in the movement of electric trains: during acceleration electric power is consumed, and in braking mode it is returned. Similarly, switching the voltage at a circuit point from a low level to a high one should consume energy, and the return switching should return it. Then the current consumption will be determined not by the sum of the amounts of direct and return switchings, but by their difference, which approaches zero.

4 CONCLUSIONS

The resources used for solving problems of both synthesis and analysis pass through a path of natural development, being structured according to the features of the Universe. Such development is shown most clearly in the Computer World, artificially created by the human. Models, methods and means, which are the target resources for solving problems of co-design and testing of computer systems, permanently raise the level of parallelism and fuzziness inherent in the Universe.

The creation of the Computer World is natural, since the formalization of human experience is an obligatory stage in the accumulation of information.

The development of resources in the natural way is motivated by the "carrot and stick" method, using natural selection on the one hand and gifts in the form of realized natural resources on the other hand.

Natural resources can be considered as particularities of the target resources used at the previous stages of the solution of a problem. The use of natural resources at the following stages can considerably simplify the solution of the problem and improve the parameters of the result.

The structuring of target resources according to the same features of the Universe brings them together, showing natural resources in the elimination of traditional contradictions between target resources. The traditional contradiction between speed and hardware expenses is eliminated by developing the arithmetic operation up to the level of the truncated operation, executed in single-cycle devices with single accuracy. Simplification of the device and reduction of the operating time are achieved simultaneously with an increase of the reliability of on-line testing methods using a high probability of error detection. The method of results preparation, which realizes a high level of parallelism and fuzziness, demonstrates the elimination of many contradictions.

5 REFERENCES

Akimushkin I. 1991. Fauna. Moscow: Idea publishing house.

Altera Corporation 2004. Netlist Optimizations and Physical Synthesis. Qii52007-2.0. Quartus II Handbook. Vol. 2. Altera Corporation.

Anderson D.A. & Metze G. 1973. Design of Totally Self-Checking Circuits for n-out-of-m Codes. IEEE Trans. on Computers, Vol. C-22: 263 - 269.

Chandracasan A.P. et al.1992. Low-Power CMOS Digital Design. IEEE Journal of solid-state circuits, V. 27, No 4: 473-484.

Drozd A. 2003. On-Line Testing of Computing Circuits at Approximate Data Processing. East-West Design & Test Conf. Yalta-Alushta, Ukraine: 148-158.

Drozd A. et al. 2004. Dedicated Architectures of Computers. Learning aid. Odessa: Science and technique.

Drozd A. et al. 2008. Increase in reliability of the on-line testing methods using features of approximate data processing. 1st International Conference on Waterside Security. Copenhagen, Denmark: 137-140.

Drozd A. et al. 2011. Checkability of the digital components in safety-critical systems: problems and solutions. IEEE East-West Design & Test Symposium. Sevastopol, Ukraine: 411-416.

Drozd A. & Kharchenko V. (eds) 2012. On-line testing of the safe instrumentation and control systems. Kharkiv: National Aerospace University named after N.E. Zhukovsky "KhAI".

Drozd A. et al. 2006. The problem of on-line testing methods in approximate data processing. 12th IEEE International On-Line Testing Symposium. Como, Italy: 251-256.

Goldberg D. 1991. What Every Computer Scientist Should Know About Floating-Point Arithmetic. ACM Computer Surveys, Vol. 23, No 1: 5 - 18.

Guk M. 1997. Intel Processors: from 8086 to Pentium II. St. Petersburg: Piter.

Guk M. 2003. Hardware of IBM PC: Encyclopaedia, 2nd Edition. St. Petersburg: Piter.

Ershov A.P. 1981. Available at: http://ershov.iis.nsk.su/ershov/english/index.html.

Kahan W. 1996. IEEE Standard 754 for Binary Floating-Point Arithmetic. Lecture Notes on the Status of IEEE 754. Berkeley: Elect. Eng. & Computer Science, University of California.

Kharchenko V.S. & Sklyar V.V. (eds) 2008. FPGA-based NPP I&C Systems: Development and Safety Assessment. RPC Radiy, National Aerospace University "KhAI", SSTC on Nuclear and Radiation Safety.

NVIDIA CUDA 2007. Compute Unified Device Architecture. Programming Guide / Version 1.0, NVIDIA Corporation.

Pomorova O.V. & Govorushchenko T.A. 2009. Analysis of Software System Quality Valuation Techniques and Means. Radioelectronic and Computer Systems, vol. 6: 113-116.

Rabinovich Z.L. & Ramanauskas V.A. 1980. Typical Operations in Computers. Kiev: Technika.

Romankevich A.M. et al. 1979. Structural Time Redundancy in Control Circuits. Kiev: High School, Head publishers.

Savchenko J. 1977. Digital Tolerant Devices. Moscow: Soviet Radio.

Savelyev A. 1987. Applied Theory of Digital Machines. Moscow: High School.

Ugryumov E.P. 2004. Digital Circuitry Engineering. Learning aid, 3rd Edition. St. Petersburg: BHV-Peterburg.

Ushakov A.A. et al. 2003. Fault Tolerant Embedded PLD-Systems: Structures, Simulation, Design Technologies. 10th Intern. Conf. MIXDES 2003. Lodz, Poland: 546-551.
